Category Archives: BigBang

Chocolate for the Soul and Mind (But Not Body)

Hot on the heels of the recent research finding that the Mediterranean diet improves heart health, comes news that choc-a-holics the world over have been anxiously awaiting — chocolate improves brain function.

Researchers have found that chocolate rich in compounds known as flavanols can improve cognitive function. Now, before you rush out the door to visit the local grocery store to purchase a mountain of Mars bars (perhaps not coincidentally, Mars, Inc., partly funded the research study), Godiva pralines, Cadbury flakes or a slab of Dove, take note that not all chocolate is created equal. Flavanols tend to be found in the highest concentrations in raw cocoa. In fact, during the process of making most chocolate, including the dark kind, most flavanols tend to be removed or destroyed. Perhaps the silver lining here is that to replicate the dose of flavanols found to have a positive effect on brain function, you would have to eat around 20 bars of chocolate per day for several months. This may be good news for your brain, but not your waistline!

[div class=attrib]From Scientific American:[end-div]

It’s news chocolate lovers have been craving: raw cocoa may be packed with brain-boosting compounds. Researchers at the University of L’Aquila in Italy, with scientists from Mars, Inc., and their colleagues published findings last September that suggest cognitive function in the elderly is improved by ingesting high levels of natural compounds found in cocoa called flavanols. The study included 90 individuals with mild cognitive impairment, a precursor to Alzheimer’s disease. Subjects who drank a cocoa beverage containing either moderate or high levels of flavanols daily for eight weeks demonstrated greater cognitive function than those who consumed low levels of flavanols, as measured by three separate tests of factors including verbal fluency, visual searching and attention.

Exactly how cocoa causes these changes is still unknown, but emerging research points to one flavanol in particular: (-)-epicatechin, pronounced “minus epicatechin.” Its name signifies its structure, differentiating it from other catechins, organic compounds highly abundant in cocoa and present in apples, wine and tea. The graph below shows how (-)-epicatechin fits into the world of brain-altering food molecules. Other studies suggest that the compound supports increased circulation and the growth of blood vessels, which could explain improvements in cognition, because better blood flow would bring the brain more oxygen and improve its function.

Animal research has already demonstrated how pure (-)-epicatechin enhances memory. Findings published last October in the Journal of Experimental Biology note that snails can remember a trained task—such as holding their breath in deoxygenated water—for more than a day when given (-)-epicatechin but for less than three hours without the flavanol. Salk Institute neuroscientist Fred Gage and his colleagues found previously that (-)-epicatechin improves spatial memory and increases vasculature in mice. “It’s amazing that a single dietary change could have such profound effects on behavior,” Gage says. If further research confirms the compound’s cognitive effects, flavanol supplements—or raw cocoa beans—could be just what the doctor ordered.

So, Can We Binge on Chocolate Now?

Nope, sorry. A food’s origin, processing, storage and preparation can each alter its chemical composition. As a result, it is nearly impossible to predict which flavanols—and how many—remain in your bonbon or cup of tea. Tragically for chocoholics, most methods of processing cocoa remove many of the flavanols found in the raw plant. Even dark chocolate, touted as the “healthy” option, can be treated such that the cocoa darkens while flavanols are stripped.

Researchers are only beginning to establish standards for measuring flavanol content in chocolate. A typical one and a half ounce chocolate bar might contain about 50 milligrams of flavanols, which means you would need to consume 10 to 20 bars daily to approach the flavanol levels used in the University of L’Aquila study. At that point, the sugars and fats in these sweet confections would probably outweigh any possible brain benefits. Mars Botanical nutritionist and toxicologist Catherine Kwik-Uribe, an author on the University of L’Aquila study, says, “There’s now even more reasons to enjoy tea, apples and chocolate. But diversity and variety in your diet remain key.”
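
A rough sanity check of that estimate, using only the figures quoted above: 10 bars per day at roughly 50 milligrams of flavanols per bar works out to about 500 mg, and 20 bars to about 1,000 mg. That suggests the daily doses used in the study were on the order of 500 to 1,000 mg of flavanols, versus the 50 mg or so in a single bar.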

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google Search.[end-div]

Your Tax Dollars at Work

Naysayers would say that government, and hence taxpayer dollars, should not be used to fund science initiatives. After all, academia and business seem to do a fairly good job of discovery and innovation without a helping hand pilfering from the public purse. And, money aside, government-funded projects undoubtedly raise a number of thorny questions: On what should our hard-earned income tax be spent? Who decides on the priorities? How is progress to be measured? Do taxpayers get any benefit in return? After all, many of us cringe at the thought of an unelected bureaucrat, or a committee of them, spending millions if not billions of our dollars. Why not just spend the money on fixing our national potholes?

But despite our many human flaws and foibles, we are at heart explorers. We seek to know more about ourselves, our world and our universe. Those who seek answers to fundamental questions of consciousness, aging, and life are pioneers in this quest to expand our domain of understanding and knowledge. These answers increasingly aid our daily lives through continuous improvements in medical science and innovations in materials science. And our collective lives are enriched as we learn more about the how and the why of our own and our universe’s existence.

So, some of our dollars have gone towards big science at the Large Hadron Collider (LHC) beneath Switzerland looking for the constituents of matter, the wild laser experiment at the National Ignition Facility designed to enable controlled fusion reactions, and the Curiosity rover exploring Mars. Yet more of our dollars have gone to research and development into enhanced radar, graphene for next-generation circuitry, online courseware, stress in coral reefs, sensors to aid the elderly, ultra-high speed internet for emergency response, erosion mitigation, self-cleaning surfaces, and flexible solar panels.

Now comes word that the U.S. government wants to spend $3 billion — over 10 years — on building a comprehensive map of the human brain. The media has dubbed this the “connectome,” following similar efforts to map our human DNA, the genome. While this is the type of big science that may yield tangible results and benefits only decades from now, it ignites the passion and curiosity of our children to continue to seek and to find answers. So, this is good news for science and the explorer who lurks within us all.

[div class=attrib]From ars technica:[end-div]

Over the weekend, The New York Times reported that the Obama administration is preparing to launch biology into its first big project post-genome: mapping the activity and processes that power the human brain. The initial report suggested that the project would get roughly $3 billion over 10 years to fund projects that would provide an unprecedented understanding of how the brain operates.

But the report was remarkably short on the scientific details of what the studies would actually accomplish or where the money would actually go. To get a better sense, we talked with Brown University’s John Donoghue, who is one of the academic researchers who has been helping to provide the rationale and direction for the project. Although he couldn’t speak for the administration’s plans, he did describe the outlines of what’s being proposed and why, and he provided a glimpse into what he sees as the project’s benefits.

What are we talking about doing?

We’ve already made great progress in understanding the behavior of individual neurons, and scientists have done some excellent work in studying small populations of them. On the other end of the spectrum, decades of anatomical studies have provided us with a good picture of how different regions of the brain are connected. “There’s a big gap in our knowledge because we don’t know the intermediate scale,” Donoghue told Ars. The goal, he said, “is not a wiring diagram—it’s a functional map, an understanding.”

This would involve a combination of things, including looking at how larger populations of neurons within a single structure coordinate their activity, as well as trying to get a better understanding of how different structures within the brain coordinate their activity. What scale of neuron will we need to study? Donoghue answered that question with one of his own: “At what point does the emergent property come out?” Things like memory and consciousness emerge from the actions of lots of neurons, and we need to capture enough of those to understand the processes that let them emerge. Right now, we don’t really know what that level is. It’s certainly “above 10,” according to Donoghue. “I don’t think we need to study every neuron,” he said. Beyond that, part of the project will focus on what Donoghue called “the big question”—what emerges in the brain at these various scales?

While he may have called emergence “the big question,” it quickly became clear he had a number of big questions in mind. Neural activity clearly encodes information, and we can record it, but we don’t always understand the code well enough to understand the meaning of our recordings. When I asked Donoghue about this, he said, “This is it! One of the big goals is cracking the code.”

Donoghue was enthused about the idea that the different aspects of the project would feed into each other. “They go hand in hand,” he said. “As we gain more functional information, it’ll inform the connectional map and vice versa.” In the same way, knowing more about neural coding will help us interpret the activity we see, while more detailed recordings of neural activity will make it easier to infer the code.

As we build on these feedbacks to understand more complex examples of the brain’s emergent behaviors, the big picture will emerge. Donoghue hoped that the work would ultimately provide “a way of understanding how you turn thought into action, how you perceive, the nature of the mind, cognition.”

How will we actually do this?

Perception and the nature of the mind have bothered scientists and philosophers for centuries—why should we think we can tackle them now? Donoghue cited three fields that had given him and his collaborators cause for optimism: nanotechnology, synthetic biology, and optical tracers. We’ve now reached the point where, thanks to advances in nanotechnology, we’re able to produce much larger arrays of electrodes with fine control over their shape, allowing us to monitor much larger populations of neurons at the same time. On a larger scale, chemical tracers can now register the activity of large populations of neurons through flashes of fluorescence, giving us a way of monitoring huge populations of cells. And Donoghue suggested that it might be possible to use synthetic biology to translate neural activity into a permanent record of a cell’s activity (perhaps stored in DNA itself) for later retrieval.

Right now, in Donoghue’s view, the problem is that the people developing these technologies and the neuroscience community aren’t talking enough. Biologists don’t know enough about the tools already out there, and the materials scientists aren’t getting feedback from them on ways to make their tools more useful.

Since the problem is understanding the activity of the brain at the level of large populations of neurons, the goal will be to develop the tools needed to do so and to make sure they are widely adopted by the bioscience community. Each of these approaches is limited in various ways, so it will be important to use all of them and to continue the technology development.

Assuming the information can be recorded, it will generate huge amounts of data, which will need to be shared in order to have the intended impact. And we’ll need to be able to perform pattern recognition across these vast datasets in order to identify correlations in activity among different populations of neurons. So there will be a heavy computational component as well.
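
As a concrete, if greatly simplified, example of what identifying correlations in population activity involves computationally: given simultaneous recordings from many neurons, one basic step is to measure how strongly each pair of activity traces co-varies. The Python sketch below uses made-up data and is not any particular project’s analysis pipeline.

import numpy as np

rng = np.random.default_rng(7)

# Fake recording: firing rates of 200 neurons sampled at 1,000 time points.
n_neurons, n_timepoints = 200, 1000
rates = rng.normal(size=(n_neurons, n_timepoints))

# Make two small populations internally correlated by adding shared signals.
rates[:50] += rng.normal(size=n_timepoints)      # population A
rates[50:100] += rng.normal(size=n_timepoints)   # population B

# Pairwise correlation matrix: entry (i, j) is the correlation between
# neuron i's and neuron j's activity over time.
corr = np.corrcoef(rates)

# Pull out pairs whose activity is strongly coupled, ignoring self-correlations
# and counting each unordered pair once (the matrix is symmetric).
strong = np.argwhere((np.abs(corr) > 0.3) & ~np.eye(n_neurons, dtype=bool))
print(len(strong) // 2, "strongly correlated neuron pairs found")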

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: White matter fiber architecture of the human brain. Courtesy of the Human Connectome Project.[end-div]

Yourself, The Illusion

A growing body of evidence suggests that our brains live in the future and construct explanations for the past, and that our notion of the present is an entirely fictitious concoction. On the surface, this makes our lives seem like nothing more than a construction taken right out of The Matrix movies. However, while we may not be pawns in an illusion constructed by malevolent aliens, our perception of “self” does appear to be illusory. As researchers delve deeper into the inner workings of the brain, it becomes clearer that our conscious selves are a beautifully derived narrative, built by the brain to make sense of the past and prepare for our future actions.

[div class=attrib]From the New Scientist:[end-div]

It seems obvious that we exist in the present. The past is gone and the future has not yet happened, so where else could we be? But perhaps we should not be so certain.

Sensory information reaches us at different speeds, yet appears unified as one moment. Nerve signals need time to be transmitted and time to be processed by the brain. And there are events – such as a light flashing, or someone snapping their fingers – that take less time to occur than our system needs to process them. By the time we become aware of the flash or the finger-snap, it is already history.

Our experience of the world resembles a television broadcast with a time lag; conscious perception is not “live”. This on its own might not be too much cause for concern, but in the same way the TV time lag makes last-minute censorship possible, our brain, rather than showing us what happened a moment ago, sometimes constructs a present that has never actually happened.

Evidence for this can be found in the “flash-lag” illusion. In one version, a screen displays a rotating disc with an arrow on it, pointing outwards (see “Now you see it…”). Next to the disc is a spot of light that is programmed to flash at the exact moment the spinning arrow passes it. Yet this is not what we perceive. Instead, the flash lags behind, apparently occurring after the arrow has passed.

One explanation is that our brain extrapolates into the future. Visual stimuli take time to process, so the brain compensates by predicting where the arrow will be. The static flash – which it can’t anticipate – seems to lag behind.

Neat as this explanation is, it cannot be right, as was shown by a variant of the illusion designed by David Eagleman of the Baylor College of Medicine in Houston, Texas, and Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, California.

If the brain were predicting the spinning arrow’s trajectory, people would see the lag even if the arrow stopped at the exact moment it was pointing at the spot. But in this case the lag does not occur. What’s more, if the arrow starts stationary and moves in either direction immediately after the flash, the movement is perceived before the flash. How can the brain predict the direction of movement if it doesn’t start until after the flash?

The explanation is that rather than extrapolating into the future, our brain is interpolating events in the past, assembling a story of what happened retrospectively (Science, vol 287, p 2036). The perception of what is happening at the moment of the flash is determined by what happens to the disc after it. This seems paradoxical, but other tests have confirmed that what is perceived to have occurred at a certain time can be influenced by what happens later.

All of this is slightly worrying if we hold on to the common-sense view that our selves are placed in the present. If the moment in time we are supposed to be inhabiting turns out to be a mere construction, the same is likely to be true of the self existing in that present.

[div class=attrib]Read the entire article after the jump.[end-div]

Intelligenetics

Intelligenetics isn’t recognized as a real word by Webster’s or the Oxford English Dictionary. We just coined a term that might best represent the growing field of research examining the genetic basis for human intelligence. Of course, it’s not a new subject and comes with many cautionary tales. Past research into the genetic foundations of intelligence has often been misused by one group seeking racial, ethnic or political power over another. However, with strong and appropriate safeguards in place, science does have a legitimate place in uncovering what makes some brains excel while others do not.

[div class=attrib]From the Wall Street Journal:[end-div]

At a former paper-printing factory in Hong Kong, a 20-year-old wunderkind named Zhao Bowen has embarked on a challenging and potentially controversial quest: uncovering the genetics of intelligence.

Mr. Zhao is a high-school dropout who has been described as China’s Bill Gates. He oversees the cognitive genomics lab at BGI, a private company that is partly funded by the Chinese government.

At the Hong Kong facility, more than 100 powerful gene-sequencing machines are deciphering about 2,200 DNA samples, reading off their 3.2 billion chemical base pairs one letter at a time. These are no ordinary DNA samples. Most come from some of America’s brightest people—extreme outliers in the intelligence sweepstakes.

The majority of the DNA samples come from people with IQs of 160 or higher. By comparison, average IQ in any population is set at 100. The average Nobel laureate registers at around 145. Only one in every 30,000 people is as smart as most of the participants in the Hong Kong project—and finding them was a quest of its own.
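
For a sense of how rare that is: assuming IQ scores follow the conventional normal distribution with a mean of 100 and a standard deviation of 15, an IQ of 160 sits four standard deviations above the mean. The probability of scoring at or above that level is 1 − Φ(4) ≈ 0.000032, or roughly 1 in 31,000, which squares with the article’s “one in every 30,000” figure. (Some tests use a 16-point standard deviation, which shifts the numbers slightly.)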

“People have chosen to ignore the genetics of intelligence for a long time,” said Mr. Zhao, who hopes to publish his team’s initial findings this summer. “People believe it’s a controversial topic, especially in the West. That’s not the case in China,” where IQ studies are regarded more as a scientific challenge and therefore are easier to fund.

The roots of intelligence are a mystery. Studies show that at least half of the variation in intelligence quotient, or IQ, is inherited. But while scientists have identified some genes that can significantly lower IQ—in people afflicted with mental retardation, for example—truly important genes that affect normal IQ variation have yet to be pinned down.

The Hong Kong researchers hope to crack the problem by comparing the genomes of super-high-IQ individuals with the genomes of people drawn from the general population. By studying the variation in the two groups, they hope to isolate some of the hereditary factors behind IQ.
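
The article doesn’t spell out the statistics, but the underlying case–control logic is easy to sketch. The snippet below is a minimal, hypothetical illustration in Python, not BGI’s actual pipeline: for each genetic variant, compare how often an allele turns up in the high-IQ group versus the general-population controls, and flag variants whose frequency difference is too large to be chance once you correct for testing many variants. All names and numbers here are made up for illustration.

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Hypothetical allele counts at one variant: rows = [high-IQ cases, controls],
# columns = [copies of allele A, copies of allele B].
def test_variant(case_a, case_b, ctrl_a, ctrl_b):
    table = np.array([[case_a, case_b], [ctrl_a, ctrl_b]])
    chi2, p_value, _, _ = chi2_contingency(table)
    return p_value

# Simulate 1,000 variants with no real effect, plus one with a sizable
# allele-frequency difference between cases and controls.
p_values = [test_variant(*rng.integers(80, 120, size=4)) for _ in range(1000)]
p_values.append(test_variant(150, 50, 100, 100))

# With many variants tested, a multiple-testing correction is essential.
threshold = 0.05 / len(p_values)   # Bonferroni correction
print(sum(p < threshold for p in p_values), "variant(s) pass the corrected threshold")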

Their conclusions could lay the groundwork for a genetic test to predict a person’s inherited cognitive ability. Such a tool could be useful, but it also might be divisive.

“If you can identify kids who are going to have trouble learning, you can intervene” early on in their lives, through special schooling or other programs, says Robert Plomin, a professor of behavioral genetics at King’s College, London, who is involved in the BGI project.

[div class=attrib]Read the entire article following the jump.[end-div]

Distance to Europa: $2 billion and 14 years

Europa is Jupiter’s gravitationally tortured moon. It has liquid oceans underneath an icy surface. This makes Europa a very interesting target for future missions to the solar system — missions looking for life beyond our planet. Unfortunately, NASA’s planned mission has yet to be funded. But should the agency (and taxpayers) come up with the estimated $2 billion to fund a spacecraft, we could well have a probe making repeated flybys of Europa by 2027.

[div class=attrib]From the Guardian:[end-div]

Nasa scientists have drawn up plans for a mission that could look for life on Europa, a moon of Jupiter that is covered in vast oceans of water under a thick layer of ice.

The Europa Clipper would be the first dedicated mission to the waterworld moon, if it gets approval for funding from Nasa. The project is set to cost $2bn.

“On Earth, everywhere where there’s liquid water, we find life,” said Robert Pappalardo, a senior research scientist at Nasa’s jet propulsion laboratory in California, who led the design of the Europa Clipper.

“The search for life in our solar system somewhat equates to the search for liquid water. When we ask the question where are the water worlds, we have to look to the outer solar system because there are oceans beneath the icy shells of the moons.”

Jupiter’s biggest moons such as Ganymede, Callisto and Europa are too far from the sun to gain much warmth from it, but have liquid oceans beneath their blankets of ice because the moons are squeezed and warmed up as they orbit the planet.

“We generally focus down on Europa as the most promising in terms of potential habitability because of its relatively thick ice shell, an ocean that is in contact with rock below, and that it’s probably geologically active today,” Pappalardo said at the annual meeting of the American Association for the Advancement of Science in Boston.

In addition, because Europa is bombarded by extreme levels of radiation, the moon is likely to be covered in oxidants at its surface. These molecules are created when water is ripped apart by energetic radiation and could be used by lifeforms as a type of fuel.

For several years scientists have been considering plans for a spacecraft that could orbit Europa, but this turned out to be too expensive for Nasa’s budgets. Over the past year Pappalardo has worked with colleagues at the applied physics lab at Johns Hopkins University to come up with the Europa Clipper.

The spacecraft would orbit Jupiter and make several flybys of Europa, in the same way that the successful Cassini probe did for Saturn’s moon Titan.

“That way we can get effectively global coverage of Europa – not quite as good as an orbiter but not bad for half the cost. We have a validated cost of $2bn over the lifetime of the mission, excluding the launch,” Pappalardo said.

A probe could be readied in time for launch around 2021 and would take between three and six years to arrive at Europa, depending on the rockets used.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Complex and beautiful patterns adorn the icy surface of Jupiter’s moon Europa, as seen in this color image intended to approximate how the satellite might appear to the human eye. Image Credit: NASA/JPL/Ted Stryk.[end-div]

Your Brain and Politics

New research out of the University of Exeter in Britain and the University of California, San Diego, shows that liberals and conservatives really do have different brains. In fact, activity in specific areas of the brain can be used to predict whether a person leans to the left or to the right with an accuracy of just under 83 percent. This means that a brain scan could predict your politics more accurately than your parents’ political persuasions can (around 70 percent of the time).

[div class=attrib]From Smithsonian:[end-div]

If you want to know people’s politics, tradition said to study their parents. In fact, the party affiliation of someone’s parents can predict the child’s political leanings around 70 percent of the time.

But new research, published yesterday in the journal PLOS ONE, suggests what mom and dad think isn’t the endgame when it comes to shaping a person’s political identity. Ideological differences between partisans may reflect distinct neural processes, and they can predict who’s right and who’s left of center with 82.9 percent accuracy, outperforming the “your parents pick your party” model. It also out-predicts another neural model based on differences in brain structure, which distinguishes liberals from conservatives with 71.6 percent accuracy.

The study matched publicly available party registration records with the names of 82 American participants whose risk-taking behavior during a gambling experiment was monitored by brain scans. The researchers found that liberals and conservatives don’t differ in the risks they do or don’t take, but their brain activity does vary while they’re making decisions.
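
The 82.9 percent figure is a classification accuracy: a model is trained to label each participant as Democrat or Republican from features of their brain activity, and its accuracy is estimated on participants it has never seen. The snippet below is a generic, hypothetical sketch of that procedure on synthetic data; the actual study used fMRI responses recorded during the gambling task, and its particular model and features are not reproduced here.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in data: 82 participants and a handful of activity features
# (in the real study, insula and amygdala responses would be among them).
n_subjects, n_features = 82, 6
party = rng.integers(0, 2, size=n_subjects)      # 0 = Democrat, 1 = Republican
activity = rng.normal(size=(n_subjects, n_features))
activity[:, 0] += 1.5 * party                    # build in one informative feature

# Leave-one-out cross-validation: train on 81 subjects, predict the held-out
# subject, repeat for every subject, and report the fraction predicted correctly.
model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, activity, party, cv=LeaveOneOut()).mean()
print(f"cross-validated accuracy: {accuracy:.1%}")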

The idea that the brains of Democrats and Republicans may be hard-wired to their beliefs is not new. Previous research has shown that during MRI scans, areas linked to broad social connectedness, which involves friends and the world at large, light up in Democrats’ brains. Republicans, on the other hand, show more neural activity in parts of the brain associated with tight social connectedness, which focuses on family and country.

Other scans have shown that brain regions associated with risk and uncertainty, such as the fear-processing amygdala, differ in structure in liberals and conservatives. And different architecture means different behavior. Liberals tend to seek out novelty and uncertainty, while conservatives exhibit strong changes in attitude to threatening situations. The former are more willing to accept risk, while the latter tend to have more intense physical reactions to threatening stimuli.

Building on this, the new research shows that Democrats exhibited significantly greater activity in the left insula, a region associated with social and self-awareness, during the task. Republicans, however, showed significantly greater activity in the right amygdala, a region involved in our fight-or-flight response system.

“If you went to Vegas, you won’t be able to tell who’s a Democrat or who’s a Republican, but the fact that being a Republican changes how your brain processes risk and gambling is really fascinating,” says lead researcher Darren Schreiber, a University of Exeter professor who’s currently teaching at Central European University in Budapest. “It suggests that politics alters our worldview and alters the way our brains process.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Sagittal brain MRI. Courtesy of Wikipedia.[end-div]

Pseudo-Science in Missouri and 2+2=5

Hot on the heels of recent successes by the Texas State Board of Education (SBOE) in revising history and science curricula, legislators in Missouri are planning to redefine commonly accepted scientific principles. Much like the situation in Texas, the Missouri House is considering a bill mandating that intelligent design be taught alongside evolution, in equal measure, in all the state’s schools. But, in a bid to take the lead in reversing thousands of years of scientific progress, Missouri plans to redefine the actual scientific framework. So, if you can’t make “intelligent design” fit the principles of accepted science, then just change the principles themselves — first up, change the meanings of the terms “scientific hypothesis” and “scientific theory”.

We suspect that a couple of years from now, in Missouri, 2+2 will be redefined to equal 5, and that logic, deductive reasoning and experimentation will be replaced with mushy green peas.

[div class=attrib]From ars technica:[end-div]

Each year, state legislatures play host to a variety of bills that would interfere with science education. Most of these are variations on a boilerplate intended to get supplementary materials criticizing evolution and climate change into classrooms (or to protect teachers who do). They generally don’t mention creationism, but the clear intent is to sneak religious content into the science classrooms, as evidenced by previous bills introduced by the same lawmakers. Most of them die in the legislature (although the opponents of evolution have seen two successes).

The efforts are common enough that we don’t generally report on them. But, every now and then, a bill comes along that veers off this script. And late last month, the Missouri House started considering one that deviates in staggering ways. Instead of being quiet about its intent, it redefines science, provides a clearer definition of intelligent design than any of the idea’s advocates ever have, and mandates equal treatment of the two. In the process, it mangles things so badly that teachers would be prohibited from discussing Mendel’s Laws.

Although even the Wikipedia entry for scientific theory includes definitions provided by the world’s most prestigious organizations of scientists, the bill’s sponsor Rick Brattin has seen fit to invent his own definition. And it’s a head-scratcher: “‘Scientific theory,’ an inferred explanation of incompletely understood phenomena about the physical universe based on limited knowledge, whose components are data, logic, and faith-based philosophy.” The faith or philosophy involved remains unspecified.

Brattin also mentions philosophy when he redefines hypothesis as, “a scientific theory reflecting a minority of scientific opinion which may lack acceptance because it is a new idea, contains faulty logic, lacks supporting data, has significant amounts of conflicting data, or is philosophically unpopular.” The reason for that becomes obvious when he turns to intelligent design, which he defines as a hypothesis. Presumably, he thinks it’s only a hypothesis because it’s philosophically unpopular, since his bill would ensure it ends up in the classrooms.

Intelligent design is roughly the concept that life is so complex that it requires a designer, but even its most prominent advocates have often been a bit wary about defining its arguments all that precisely. Not so with Brattin—he lists 11 concepts that are part of ID. Some of these are old-fashioned creationist claims, like the suggestion that mutations lead to “species degradation” and a lack of transitional fossils. But it also has some distinctive twists like the claim that common features, usually used to infer evolutionary relatedness, are actually a sign of parts re-use by a designer.

Eventually, the bill defines “standard science” as “knowledge disclosed in a truthful and objective manner and the physical universe without any preconceived philosophical demands concerning origin or destiny.” It then demands that all science taught in Missouri classrooms be standard science. But there are some problems with this that become apparent immediately. The bill demands anything taught as scientific law have “no known exceptions.” That would rule out teaching Mendel’s laws, which have a huge variety of exceptions, such as when two genes are linked together on the same chromosome.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Seal of Missouri. Courtesy of Wikipedia.[end-div]

Grow Your Own… Heart

A timely article for Valentine’s Day. Researchers continue to make astonishing progress in areas of cell biology and human genomics. So, it should come as no surprise that growing a customized replacement heart in a lab from reprogrammed cells will one day be on the horizon.

[div class=attrib]From the Guardian:[end-div]

Every two minutes someone in the UK has a heart attack. Every six minutes, someone dies from heart failure. During an attack, the heart remodels itself and dilates around the site of the injury to try to compensate, but these repairs are rarely effective. If the attack does not kill you, heart failure later frequently will.

“No matter what other clinical interventions are available, heart transplantation is the only genuine cure for this,” says Paul Riley, professor of regenerative medicine at Oxford University. “The problem is there is a dearth of heart donors.”

Transplants have their own problems – successful operations require patients to remain on toxic, immune-suppressing drugs for life and their subsequent life expectancies are not usually longer than 20 years.

The solution, emerging from the laboratories of several groups of scientists around the world, is to work out how to rebuild damaged hearts. Their weapons of choice are reprogrammed stem cells.

These researchers have rejected the more traditional path of cell therapy that you may have read about over the past decade of hope around stem cells – the idea that stem cells could be used to create batches of functioning tissue (heart or brain or whatever else) for transplant into the damaged part of the body. Instead, these scientists are trying to understand what the chemical and genetic switches are that turn something into a heart cell or muscle cell. Using that information, they hope to programme cells at will, and help the body make repairs.

It is an exciting time for a technology that no one thought possible a few years ago. In 2007, Shinya Yamanaka showed it was possible to turn adult skin cells into embryonic-like stem cells, called induced pluripotent stem cells (iPSCs), using just a few chemical factors. His technique radically advanced stem cell biology, sweeping aside years of blockages due to the ethical objections about using stem cells from embryos. He won the Nobel prize in physiology or medicine for his work in October. Researchers have taken this a step further – directly turning one mature cell type to another without going through a stem cell phase.

And politicians are taking notice. At the Royal Society in November, in his first major speech on the Treasury’s ambitions for science and technology, the chancellor, George Osborne, identified regenerative medicine as one of eight areas of technology in which he wanted the UK to become a world leader. Earlier last year, the Lords science and technology committee launched an inquiry into the potential of regenerative medicine in the UK – not only the science but what regulatory obstacles there might be to turning the knowledge into medical applications.

At Oxford, Riley has spent almost a year setting up a £2.5m lab, funded as part of the British Heart Foundation’s Mending Broken Hearts appeal, to work out how to get heart muscle to repair itself. The idea is to expand the scope of the work that got Riley into the headlines last year after a high-profile paper published in the journal Nature in which he showed a means of repairing cells damaged during a heart attack in mice. That work involved in effect turning the clock back in a layer of cells on the outside of the heart, called the epicardium, making adult cells think they were embryos again and thereby restarting their ability to repair.

During the development of the embryo, the epicardium turns into the many types of cells seen in the heart and surrounding blood vessels. After the baby is born this layer of cells loses its ability to transform. By infusing the epicardium with the protein thymosin β4 (Tβ4), Riley’s team found the once-dormant layer of cells was able to produce new, functioning heart cells. Overall, the treatment led to a 25% improvement in the mouse heart’s ability to pump blood after a month compared with mice that had not received the treatment.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google Search.[end-div]

Vaccinia – Prototype Viral Cancer Killer

The illustrious Vaccinia virus may well have an Act Two in its future.

For Act One, over the last 150 years or so, it was successfully used to vaccinate much of the world’s population against smallpox. This helped eliminate smallpox in the United States, where routine vaccination ended in the early 1970s, and eradicate the disease worldwide by 1980.

Now, researchers are using it to target cancer.

First, take the Vaccinia virus — a relative of the smallpox virus. Second, re-engineer the virus to inhibit its growth in normal cells. Third, add a gene to the virus that stimulates the immune system. Fourth, set it to work on tumor cells and watch. While such research has been going on for a couple of decades, this enhanced approach to attacking cancer cells with a viral immune-system stimulant shows early promise.

[div class=attrib]From ars technica:[end-div]

For roughly 20 years, scientists have been working to engineer a virus that will attack cancer. The basic idea is sound, and every few years there have been some promising-looking results, with tumors shrinking dramatically in response to an infection. But the viruses never seem to go beyond small trials, and the companies making them always seem to focus on different things.

Over the weekend, Nature Medicine described some further promising results, this time with a somewhat different approach to ensuring that the virus leads to the death of cancer cells: if the virus doesn’t kill the cells directly, it revs up the immune system to attack them. It’s not clear this result will make it to a clinic, but it provides a good opportunity to review the general approach of treating cancer with viruses.

The basic idea is to leverage decades of work on some common viruses. This research has identified a variety of mutations keeping viruses from growing in normal cells. It means that if you inject the virus into a healthy individual, it won’t be able to infect any of their cells.

But cancer cells are different, as they carry a series of mutations of their own. In some cases, these mutations compensate for the problems in the virus. To give one example, the p53 protein normally induces aberrant cells to undergo an orderly death called apoptosis. It also helps shut down the growth of viruses in a cell, which is why some viruses encode a protein that inhibits p53. Cancer cells tend to damage or eliminate their copies of p53 so that it doesn’t cause them to undergo apoptosis.

So imagine a virus with its p53 inhibitor deleted. It can’t grow in normal cells since they have p53 around, but it can grow in cancer cells, which have eliminated their p53. The net result should be a cancer-killing virus. (A great idea, but this is one of the viruses that got dropped after preliminary trials.)

In the new trial, the virus in question takes a similar approach. The virus, vaccinia (a relative of smallpox used for vaccines), carries a gene that is essential for it to make copies of itself. Researchers have engineered a version without that gene, ensuring it can’t grow in normal cells (which have their equivalent of the gene shut down). Cancer cells need to reactivate the gene, meaning they present a hospitable environment for the mutant virus.

But the researchers added another trick by inserting a gene for a molecule that helps recruit immune cells (the awkwardly named granulocyte-macrophage colony-stimulating factor, or GM-CSF). The immune system plays an important role in controlling cancer, but it doesn’t always generate a full-scale response to cancer. By adding GM-CSF, the virus should help bring immune cells to the site of the cancer and activate them, creating a more aggressive immune response to any cells that survive viral infection.

The study here was simply checking the tolerance for two different doses of the virus. In general, the virus was tolerated well. Most subjects reported a short bout of flu-like symptoms, but only one subject out of 30 had a more severe response.

However, the tumors did respond. Based on placebo-controlled trials, the average survival time of patients like the ones in the trial would have been expected to be about two to four months. Instead, the low-dose group had a survival time of nearly seven months; for the higher dose group, that number went up to over a year. Two of those treated were still alive after more than two years. Imaging of tumors showed lots of dead cells, and tests of the immune system indicate the virus had generated a robust response.

[div class=attrib]Read the entire article after the leap.[end-div]

[div class=attrib]Image: An electron micrograph of a Vaccinia virus. Courtesy of Wikipedia.[end-div]

The Death of Scientific Genius

There is a certain school of thought that asserts that scientific genius is a thing of the past. After all, we haven’t seen the recent emergence of pivotal talents such as Galileo, Newton, Darwin or Einstein. Could it be that fundamentally new ways of looking at our world, such as a new mathematics or a new physics, are no longer possible?

In a recent essay in Nature, Dean Keith Simonton, professor of psychology at UC Davis, argues that such fundamental and singular originality is a thing of the past.

[div class=attrib]From ars technica:[end-div]

Einstein, Darwin, Galileo, Mendeleev: the names of the great scientific minds throughout history inspire awe in those of us who love science. However, according to Dean Keith Simonton, a psychology professor at UC Davis, the era of the scientific genius may be over. In a comment paper published in Nature last week, he explains why.

The “scientific genius” Simonton refers to is a particular type of scientist; their contributions “are not just extensions of already-established, domain-specific expertise.” Instead, “the scientific genius conceives of a novel expertise.” Simonton uses words like “groundbreaking” and “overthrow” to illustrate the work of these individuals, explaining that they each contributed to science in one of two major ways: either by founding an entirely new field or by revolutionizing an already-existing discipline.

Today, according to Simonton, there just isn’t room to create new disciplines or overthrow the old ones. “It is difficult to imagine that scientists have overlooked some phenomenon worthy of its own discipline,” he writes. Furthermore, most scientific fields aren’t in the type of crisis that would enable paradigm shifts, according to Thomas Kuhn’s classic view of scientific revolutions. Simonton argues that instead of finding big new ideas, scientists currently work on the details in increasingly specialized and precise ways.

And to some extent, this argument is demonstrably correct. Science is becoming more and more specialized. The largest scientific fields are currently being split into smaller sub-disciplines: microbiology, astrophysics, neuroscience, and paleogeography, to name a few. Furthermore, researchers have more tools and the knowledge to home in on increasingly precise issues and questions than they did a century—or even a decade—ago.

But other aspects of Simonton’s argument are a matter of opinion. To me, separating scientists who “build on what’s already known” from those who “alter the foundations of knowledge” is a false dichotomy. Not only is it possible to do both, but it’s impossible to establish—or even make a novel contribution to—a scientific field without piggybacking on the work of others to some extent. After all, it’s really hard to solve the problems that require new solutions if other people haven’t done the work to identify them. Plate tectonics, for example, was built on observations that were already widely known.

And scientists aren’t done altering the foundations of knowledge, either. In science, as in many other walks of life, we don’t yet know everything we don’t know. Twenty years ago, exoplanets were hypothetical. Dark energy, as far as we knew, didn’t exist.

Simonton points out that “cutting-edge work these days tends to emerge from large, well-funded collaborative teams involving many contributors” rather than a single great mind. This is almost certainly true, especially in genomics and physics. However, it’s this collaboration and cooperation between scientists, and between fields, that has helped science progress past where we ever thought possible. While Simonton uses “hybrid” fields like astrophysics and biochemistry to illustrate his argument that there is no room for completely new scientific disciplines, I see these fields as having room for growth. Here, diverse sets of ideas and methodologies can mix and lead to innovation.

Simonton is quick to assert that the end of scientific genius doesn’t mean science is at a standstill or that scientists are no longer smart. In fact, he argues the opposite: scientists are probably more intelligent now, since they must master more theoretical work, more complicated methods, and more diverse disciplines. In fact, Simonton himself would like to be wrong; “I hope that my thesis is incorrect. I would hate to think that genius in science has become extinct,” he writes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Einstein 1921 by F. Schmutzer. Courtesy of Wikipedia.[end-div]

Printing Human Cells

The most fundamental innovation tends to happen at the intersection of disciplines. So, what do you get if you cross 3-D printing technology with embryonic stem cell research? Well, you get a device that can print lines of cells with similar functions, such as heart muscle or kidney cells. Welcome to the new world of biofabrication. The science-fiction future seems ever closer.

[div class=attrib]From Scientific American:[end-div]

Imagine if you could take living cells, load them into a printer, and squirt out a 3D tissue that could develop into a kidney or a heart. Scientists are one step closer to that reality, now that they have developed the first printer for embryonic human stem cells.

In a new study, researchers from the University of Edinburgh have created a cell printer that spits out living embryonic stem cells. The printer was capable of printing uniform-size droplets of cells gently enough to keep the cells alive and maintain their ability to develop into different cell types. The new printing method could be used to make 3D human tissues for testing new drugs, grow organs, or ultimately print cells directly inside the body.

Human embryonic stem cells (hESCs) are obtained from human embryos and can develop into any cell type in an adult person, from brain tissue to muscle to bone. This attribute makes them ideal for use in regenerative medicine — repairing, replacing and regenerating damaged cells, tissues or organs.

In a lab dish, hESCs can be placed in a solution that contains the biological cues that tell the cells to develop into specific tissue types, a process called differentiation. The process starts with the cells forming what are called “embryoid bodies.” Cell printers offer a means of producing embryoid bodies of a defined size and shape.

In the new study, the cell printer was made from a modified CNC machine (a computer-controlled machining tool) outfitted with two “bio-ink” dispensers: one containing stem cells in a nutrient-rich soup called cell medium and another containing just the medium. These embryonic stem cells were dispensed through computer-operated valves, while a microscope mounted to the printer provided a close-up view of what was being printed.

The two inks were dispensed in layers, one on top of the other to create cell droplets of varying concentration. The smallest droplets were only two nanoliters, containing roughly five cells.

The cells were printed onto a dish containing many small wells. The dish was then flipped over so that the droplets hung from the wells, allowing the stem cells to form clumps inside each one. (The printer lays down the cells in precisely sized droplets and in a certain pattern that is optimal for differentiation.)

Tests revealed that more than 95 percent of the cells were still alive 24 hours after being printed, suggesting they had not been killed by the printing process. More than 89 percent of the cells were still alive three days later, and also tested positive for a marker of their pluripotency — their potential to develop into different cell types.

Biomedical engineer Utkan Demirci, of Harvard University Medical School and Brigham and Women’s Hospital, has done pioneering work in printing cells, and thinks the new study is taking it in an exciting direction. “This technology could be really good for high-throughput drug testing,” Demirci told LiveScience. One can build mini-tissues from the bottom up, using a repeatable, reliable method, he said. Building whole organs is the long-term goal, Demirci said, though he cautioned that it “may be quite far from where we are today.”

[div class=attrib]Read the entire article after the leap.[end-div]

[div class=attrib]Image: 3D printing with embryonic stem cells. Courtesy of Alan Faulkner-Jones et al./Heriot-Watt University.[end-div]

Orphan Genes

DNA is a remarkable substance. It is the fundamental blueprint for biological systems. It is the basis for all complex life on our planet, and it enables parents to share characteristics, both good and bad, with their children. Yet the more geneticists learn about the functions of DNA, the more mysteries it presents. One such conundrum is posed by so-called junk DNA and orphan genes — seemingly useless sequences of DNA that perform no function. Or so researchers previously believed.

[div class=attrib]From New Scientist:[end-div]

NOT having any family is tough. Often unappreciated and uncomfortably different, orphans have to fight to fit in and battle against the odds to realise their potential. Those who succeed, from Aristotle to Steve Jobs, sometimes change the world.

Who would have thought that our DNA plays host to a similar cast of foundlings? When biologists began sequencing genomes, they discovered that up to a third of genes in each species seemed to have no parents or family of any kind. Nevertheless, some of these “orphan genes” are high achievers, and a few even seem to have played a part in the evolution of the human brain.

But where do they come from? With no obvious ancestry, it was as if these genes had appeared from nowhere, but that couldn’t be true. Everyone assumed that as we learned more, we would discover what had happened to their families. But we haven’t – quite the opposite, in fact.

Ever since we discovered genes, biologists have been pondering their origins. At the dawn of life, the very first genes must have been thrown up by chance. But life almost certainly began in an RNA world, so back then, genes weren’t just blueprints for making enzymes that guide chemical reactions – they themselves were the enzymes. If random processes threw up a piece of RNA that could help make more copies of itself, natural selection would have kicked in straight away.

As living cells evolved, though, things became much more complex. A gene became a piece of DNA coding for a protein. For a protein to be made, an RNA copy of the DNA has to be created. This cannot happen without “DNA switches”, which are actually just extra bits of DNA alongside the protein-coding bits saying “copy this DNA into RNA”. Next, the RNA has to get to the protein-making factories. In complex cells, this requires the presence of yet more extra sequences, which act as labels saying “export me” and “start making the protein from here”.

The upshot is that the chances of random mutations turning a bit of junk DNA into a new gene seem infinitesimally small. As the French biologist François Jacob famously wrote 35 years ago, “the probability that a functional protein would appear de novo by random association of amino acids is practically zero”.

Instead, back in the 1970s it was suggested that the accidental copying of genes can result in a single gene giving rise to a whole family of genes, rather like the way animals branch into families of related species over time. It’s common for entire genes to be inadvertently duplicated. Spare copies are usually lost, but sometimes the duplicates come to share the function of the original gene between them, or one can diverge and take on a new function.

Take the light-sensing pigments known as opsins. The various opsins in our eyes are not just related to each other, they are also related to the opsins found in all other animals, from jellyfish to insects. The thousands of different opsin genes found across the animal kingdom all evolved by duplication, starting with a single gene in a common ancestor living around 700 million years ago (see diagram).

Most genes belong to similar families, and their ancestry can be traced back many millions of years. But when the yeast genome was sequenced around 15 years ago, it was discovered that around a third of yeast genes appeared to have no family. The term orphans (sometimes spelt ORFans) was used to describe individual genes, or small groups of very similar genes, with no known relatives.

“If you see a gene and you can’t find a relative you get suspicious,” says Ken Weiss, who studies the evolution of complex traits at Penn State University. Some suggested orphans were the genetic equivalent of living fossils like the coelacanth, the last surviving members of an ancient family. Others thought they were nothing special, just normal genes whose family hadn’t been found yet. After all, the sequencing of entire genomes had only just begun.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: DNA structure. Courtesy of Wikipedia.[end-div]

Politics Driven by Science

Imagine a nation, or even a world, where political decisions and policy are driven by science rather than emotion. Well, small experiments are underway, so this may not be as far off as many would believe, or even dare to hope.

[div class=attrib]From the New Scientist:[end-div]

In your wildest dreams, could you imagine a government that builds its policies on carefully gathered scientific evidence? One that publishes the rationale behind its decisions, complete with data, analysis and supporting arguments? Well, dream no longer: that’s where the UK is heading.

It has been a long time coming, according to Chris Wormald, permanent secretary at the Department for Education. The civil service is not short of clever people, he points out, and there is no lack of desire to use evidence properly. More than 20 years as a serving politician has convinced him that they are as keen as anyone to create effective policies. “I’ve never met a minister who didn’t want to know what worked,” he says. What has changed now is that informed policy-making is at last becoming a practical possibility.

That is largely thanks to the abundance of accessible data and the ease with which new, relevant data can be created. This has supported a desire to move away from hunch-based politics.

Last week, for instance, Rebecca Endean, chief scientific advisor and director of analytical services at the Ministry of Justice, announced that the UK government is planning to open up its data for analysis by academics, accelerating the potential for use in policy planning.

At the same meeting, hosted by innovation-promoting charity NESTA, Wormald announced a plan to create teaching schools based on the model of teaching hospitals. In education, he said, the biggest single problem is a culture that often relies on anecdotal experience rather than systematically reported data from practitioners, as happens in medicine. “We want to move teacher training and research and practice much more onto the health model,” Wormald said.

Test, learn, adapt

In June last year the Cabinet Office published a paper called “Test, Learn, Adapt: Developing public policy with randomised controlled trials”. One of its authors, the doctor and campaigning health journalist Ben Goldacre, has also been working with the Department for Education to compile a comparison of education and health research practices, to be published in the BMJ.

In education, the evidence-based revolution has already begun. A charity called the Education Endowment Foundation is spending £1.4 million on a randomised controlled trial of reading programmes in 50 British schools.

There are reservations though. The Ministry of Justice is more circumspect about the role of such trials. Where it has carried out randomised controlled trials, they often failed to change policy, or even irked politicians with conclusions that were obvious. “It is not a panacea,” Endean says.

Power of prediction

The biggest need is perhaps foresight. Ministers often need instant answers, and sometimes the data are simply not available. Bang goes any hope of evidence-based policy.

“The timescales of policy-making and evidence-gathering don’t match,” says Paul Wiles, a criminologist at the University of Oxford and a former chief scientific adviser to the Home Office. Wiles believes that to get round this we need to predict the issues that the government is likely to face over the next decade. “We can probably come up with 90 per cent of them now,” he says.

Crucial to the process will be convincing the public about the value and use of data, so that everyone is on board. This is not going to be easy. When the government launched its Administrative Data Taskforce, which set out to look at data in all departments and open it up so that it could be used for evidence-based policy, it attracted minimal media interest.

The taskforce’s remit includes finding ways to increase trust in data security. Then there is the problem of whether different departments are legally allowed to exchange data. There are other practical issues: many departments format data in incompatible ways. “At the moment it’s incredibly difficult,” says Jonathan Breckon, manager of the Alliance for Useful Evidence, a collaboration between NESTA and the Economic and Social Research Council.

[div class=attrib]Read the entire article after the jump.[end-div]

Shedding Some Light On Dark Matter

Cosmologists theorized the need for dark matter to account for hidden mass in our universe. Yet, as the name implies, it is proving rather hard to find. Now astronomers believe they see hints of it in ancient galactic collisions.

[div class=attrib]From New Scientist:[end-div]

Colliding clusters of galaxies may hold clues to a mysterious dark force at work in the universe. This force would act only on invisible dark matter, the enigmatic stuff that makes up 86 per cent of the mass in the universe.

Dark matter famously refuses to interact with ordinary matter except via gravity, so theorists had assumed that its particles would be just as aloof with each other. But new observations suggest that dark matter interacts significantly with itself, while leaving regular matter out of the conversation.

“There could be a whole class of dark particles that don’t interact with normal matter but do interact with themselves,” says James Bullock of the University of California, Irvine. “Dark matter could be doing all sorts of interesting things, and we’d never know.”

Some of the best evidence for dark matter’s existence came from the Bullet cluster, a smash-up in which a small galaxy cluster plunged through a larger one about 100 million years ago. Separated by hundreds of light years, the individual galaxies sailed right past each other, and the two clusters parted ways. But intergalactic gas collided and pooled on the trailing ends of each cluster.

Mass maps of the Bullet cluster showed that dark matter stayed in line with the galaxies instead of pooling with the gas, proving that it can separate from ordinary matter. This also hinted that dark matter wasn’t interacting with itself, and was affected by gravity alone.

Musket shot

Last year William Dawson of the University of California, Davis, and colleagues found an older set of clusters seen about 700 million years after their collision. Nicknamed the Musket Ball cluster, this smash-up told a different tale. When Dawson’s team analysed the concentration of matter in the Musket Ball, they found that galaxies are separated from dark matter by about 19,000 light years.

“The galaxies outrun the dark matter. That’s what creates the offset,” Dawson said. “This is fitting that picture of self-interacting dark matter.” If dark matter particles do interact, perhaps via a dark force, they would slow down like the gas.

This new picture could solve some outstanding mysteries in cosmology, Dawson said this week during a meeting of the American Astronomical Society in Long Beach, California. Non-interacting dark matter should sink to the cores of star clusters and dwarf galaxies, but observations show that it is more evenly distributed. If it interacts with itself, it could puff up and spread outward like a gas.

So why doesn’t the Bullet cluster show the same separation between dark matter and galaxies? Dawson thinks it’s a question of age – dark matter in the younger Bullet simply hasn’t had time to separate.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: An overlay of an optical image of a cluster of galaxies with an x-ray image of hot gas lying within the cluster. Courtesy of NASA.[end-div]

Next Potential Apocalypse: 2036

Having missed the recent apocalypse said to have been predicted by the Mayans, we can now look forward to the next possible end of the world, set for 2036. This time it’s courtesy of the aptly named asteroid Apophis.

[div class=attrib]From the Guardian:[end-div]

In Egyptian myth, Apophis was the ancient spirit of evil and destruction, a demon that was determined to plunge the world into eternal darkness.

A fitting name, astronomers reasoned, for a menace now hurtling towards Earth from outer space. Scientists are monitoring the progress of a 390-metre wide asteroid discovered last year that is potentially on a collision course with the planet, and are imploring governments to decide on a strategy for dealing with it.

Nasa has estimated that an impact from Apophis, which has an outside chance of hitting the Earth in 2036, would release more than 100,000 times the energy released in the nuclear blast over Hiroshima. Thousands of square kilometres would be directly affected by the blast but the whole of the Earth would see the effects of the dust released into the atmosphere.

And, scientists insist, there is actually very little time left to decide. At a recent meeting of experts in near-Earth objects (NEOs) in London, scientists said it could take decades to design, test and build the required technology to deflect the asteroid. Monica Grady, an expert in meteorites at the Open University, said: “It’s a question of when, not if, a near Earth object collides with Earth. Many of the smaller objects break up when they reach the Earth’s atmosphere and have no impact. However, a NEO larger than 1km [wide] will collide with Earth every few hundred thousand years and a NEO larger than 6km, which could cause mass extinction, will collide with Earth every hundred million years. We are overdue for a big one.”

Apophis had been intermittently tracked since its discovery in June last year but, in December, it started causing serious concern. Projecting the orbit of the asteroid into the future, astronomers had calculated that the odds of it hitting the Earth in 2029 were alarming. As more observations came in, the odds got higher.

Having more than 20 years warning of potential impact might seem plenty of time. But, at last week’s meeting, Andrea Carusi, president of the Spaceguard Foundation, said that the time for governments to make decisions on what to do was now, to give scientists time to prepare mitigation missions. At the peak of concern, the asteroid Apophis was placed at four out of 10 on the Torino scale – a measure of the threat posed by an NEO where 10 is a certain collision which could cause a global catastrophe. This was the highest rating given to any asteroid in recorded history and it had a 1 in 37 chance of hitting the Earth. The threat of a collision in 2029 was eventually ruled out at the end of last year.
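
As an aside, the 100,000-Hiroshima figure is easy to sanity-check with a back-of-the-envelope calculation. The sketch below is mine, not NASA’s: the density (roughly 3 g/cm³ for a rocky body) and the impact speed (roughly 12.6 km/s) are assumptions, and the Hiroshima blast is taken as about 15 kilotons of TNT.

import math

# Back-of-the-envelope impact energy for a 390-metre asteroid.
# All inputs are illustrative assumptions, not measured values.
diameter_m = 390.0
density_kg_m3 = 3000.0            # assumed rocky density, about 3 g/cm^3
impact_speed_m_s = 12.6e3         # assumed impact speed, about 12.6 km/s

radius_m = diameter_m / 2.0
mass_kg = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
energy_j = 0.5 * mass_kg * impact_speed_m_s ** 2

hiroshima_j = 15_000 * 4.184e9    # ~15 kilotons of TNT, in joules
print(f"Impact energy: {energy_j:.2e} J, roughly {energy_j / hiroshima_j:,.0f} Hiroshimas")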

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Graphic: This graphic shows the orbit of the asteroid Apophis in relation to the paths of Earth and other planets in the inner solar system. Courtesy of MSNBC.[end-div]

What’s Next at the LHC: Parallel Universe?

The Large Hadron Collider (LHC) at CERN made headlines in 2012 with the announcement of a probable discovery of the Higgs boson. Scientists are collecting and analyzing more data before they declare an outright discovery in 2013. In the meantime, they plan to use the giant machine to examine even more interesting science — at very small and very large scales — in the new year.

[div class=attrib]From the Guardian:[end-div]

When it comes to shutting down the most powerful atom smasher ever built, it’s not simply a question of pressing the off switch.

In the French-Swiss countryside on the far side of Geneva, staff at the Cern particle physics laboratory are taking steps to wind down the Large Hadron Collider. After the latest run of experiments ends next month, the huge superconducting magnets that line the LHC’s 27km-long tunnel must be warmed up, slowly and gently, from -271 Celsius to room temperature. Only then can engineers descend into the tunnel to begin their work.

The machine that last year helped scientists snare the elusive Higgs boson – or a convincing subatomic impostor – faces a two-year shutdown while engineers perform repairs that are needed for the collider to ramp up to its maximum energy in 2015 and beyond. The work will beef up electrical connections in the machine that were identified as weak spots after an incident four years ago that knocked the collider out for more than a year.

The accident happened days after the LHC was first switched on in September 2008, when a short circuit blew a hole in the machine and sprayed six tonnes of helium into the tunnel that houses the collider. Soot was scattered over 700 metres. Since then, the machine has been forced to run at near half its design energy to avoid another disaster.

The particle accelerator, which reveals new physics at work by crashing together the innards of atoms at close to the speed of light, fills a circular, subterranean tunnel a staggering eight kilometres in diameter. Physicists will not sit around idle while the collider is down. There is far more to know about the new Higgs-like particle, and clues to its identity are probably hidden in the piles of raw data the scientists have already gathered, but have had too little time to analyse.

But the LHC was always more than a Higgs hunting machine. There are other mysteries of the universe that it may shed light on. What is the dark matter that clumps invisibly around galaxies? Why are we made of matter, and not antimatter? And why is gravity such a weak force in nature? “We’re only a tiny way into the LHC programme,” says Pippa Wells, a physicist who works on the LHC’s 7,000-tonne Atlas detector. “There’s a long way to go yet.”

The hunt for the Higgs boson, which helps explain the masses of other particles, dominated the publicity around the LHC for the simple reason that it was almost certainly there to be found. The lab fast-tracked the search for the particle, but cannot say for sure whether it has found it, or some more exotic entity.

“The headline discovery was just the start,” says Wells. “We need to make more precise measurements, to refine the particle’s mass and understand better how it is produced, and the ways it decays into other particles.” Scientists at Cern expect to have a more complete identikit of the new particle by March, when repair work on the LHC begins in earnest.

By its very nature, dark matter will be tough to find, even when the LHC switches back on at higher energy. The label “dark” refers to the fact that the substance neither emits nor reflects light. The only way dark matter has revealed itself so far is through the pull it exerts on galaxies.

Studies of spinning galaxies show they rotate with such speed that they would tear themselves apart were there not some invisible form of matter holding them together through gravity. There is so much dark matter, it outweighs by five times the normal matter in the observable universe.

The search for dark matter on Earth has failed to reveal what it is made of, but the LHC may be able to make the substance. If the particles that constitute it are light enough, they could be thrown out from the collisions inside the LHC. While they would zip through the collider’s detectors unseen, they would carry energy and momentum with them. Scientists could then infer their creation by totting up the energy and momentum of all the particles produced in a collision, and looking for signs of the missing energy and momentum.
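
The bookkeeping behind that inference is simple in principle, even if the real analyses are anything but. The toy sketch below is not ATLAS or CMS code; the particle momenta are made-up numbers, and it only illustrates the idea that an imbalance in the visible transverse momentum points to something unseen carrying energy away.

import math

# Toy illustration of "missing transverse momentum": in the plane perpendicular
# to the beams, the visible particles' momenta should sum to zero. A large
# imbalance hints at an invisible particle escaping the detector.
# The momentum components below (in GeV) are made up for illustration.
visible_particles = [
    {"px": 45.2, "py": -12.1},
    {"px": -20.7, "py": 30.5},
    {"px": -5.3, "py": -8.9},
]

sum_px = sum(p["px"] for p in visible_particles)
sum_py = sum(p["py"] for p in visible_particles)

# Whatever balances the visible sum is attributed to unseen particles.
missing_pt = math.hypot(sum_px, sum_py)
print(f"Missing transverse momentum: {missing_pt:.1f} GeV")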

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: The eight toroidal magnets can be seen on the huge ATLAS detector with the calorimeter before it is moved into the middle of the detector. This calorimeter will measure the energies of particles produced when protons collide in the centre of the detector. ATLAS will work alongside the CMS experiment to search for new physics at the 14 TeV level. Courtesy of CERN.[end-div]

You Are Different From Yourself

The next time your spouse tells you that you’re “just not the same person anymore” there may be some truth to it. After all, we are not who we thought we would become, nor are we likely to become what we think. That’s the overall result of a recent study of human personality changes in around 20,000 people over time.

[div class=attrib]From the Independent:[end-div]

When we remember our past selves, they seem quite different. We know how much our personalities and tastes have changed over the years. But when we look ahead, somehow we expect ourselves to stay the same, a team of psychologists said Thursday, describing research they conducted of people’s self-perceptions.

They called this phenomenon the “end of history illusion,” in which people tend to “underestimate how much they will change in the future.” According to their research, which involved more than 19,000 people ages 18 to 68, the illusion persists from teenage years into retirement.

“Middle-aged people — like me — often look back on our teenage selves with some mixture of amusement and chagrin,” said one of the authors, Daniel T. Gilbert, a psychologist at Harvard. “What we never seem to realize is that our future selves will look back and think the very same thing about us. At every age we think we’re having the last laugh, and at every age we’re wrong.”

Other psychologists said they were intrigued by the findings, published Thursday in the journal Science, and were impressed with the amount of supporting evidence. Participants were asked about their personality traits and preferences — their favorite foods, vacations, hobbies and bands — in years past and present, and then asked to make predictions for the future. Not surprisingly, the younger people in the study reported more change in the previous decade than did the older respondents.

But when asked to predict what their personalities and tastes would be like in 10 years, people of all ages consistently played down the potential changes ahead.

Thus, the typical 20-year-old woman’s predictions for her next decade were not nearly as radical as the typical 30-year-old woman’s recollection of how much she had changed in her 20s. This sort of discrepancy persisted among respondents all the way into their 60s.

And the discrepancy did not seem to be because of faulty memories, because the personality changes recalled by people jibed quite well with independent research charting how personality traits shift with age. People seemed to be much better at recalling their former selves than at imagining how much they would change in the future.

Why? Dr. Gilbert and his collaborators, Jordi Quoidbach of Harvard and Timothy D. Wilson of the University of Virginia, had a few theories, starting with the well-documented tendency of people to overestimate their own wonderfulness.

“Believing that we just reached the peak of our personal evolution makes us feel good,” Dr. Quoidbach said. “The ‘I wish that I knew then what I know now’ experience might give us a sense of satisfaction and meaning, whereas realizing how transient our preferences and values are might lead us to doubt every decision and generate anxiety.”

Or maybe the explanation has more to do with mental energy: predicting the future requires more work than simply recalling the past. “People may confuse the difficulty of imagining personal change with the unlikelihood of change itself,” the authors wrote in Science.

The phenomenon does have its downsides, the authors said. For instance, people make decisions in their youth — about getting a tattoo, say, or a choice of spouse — that they sometimes come to regret.

And that illusion of stability could lead to dubious financial expectations, as the researchers showed in an experiment asking people how much they would pay to see their favorite bands.

When asked about their favorite band from a decade ago, respondents were typically willing to shell out $80 to attend a concert of the band today. But when they were asked about their current favorite band and how much they would be willing to spend to see the band’s concert in 10 years, the price went up to $129. Even though they realized that favorites from a decade ago like Creed or the Dixie Chicks have lost some of their luster, they apparently expect Coldplay and Rihanna to blaze on forever.

“The end-of-history effect may represent a failure in personal imagination,” said Dan P. McAdams, a psychologist at Northwestern who has done separate research into the stories people construct about their past and future lives. He has often heard people tell complex, dynamic stories about the past but then make vague, prosaic projections of a future in which things stay pretty much the same.

[div class=attrib]Read the entire article after the jump.[end-div]

Planets From Stardust

Stunning images captured by the Atacama Large Millimetre/submillimetre Array (ALMA) radio telescope in Chile show the early stages of planets forming from stardust around a star located 450 light-years from Earth. This is the first time that astronomers have snapped such a clear picture of the process, confirming long-held theories of planetary formation.

[div class=attrib]From the Independent:[end-div]

The world’s highest radio telescope, built on a Chilean plateau in the Andes 5,000 metres above sea level, has captured the first image of a new planet being formed as it gobbles up the cosmic dust and gas surrounding a distant star.

Astronomers have long predicted that giant “gas” planets similar to Jupiter would form by collecting the dust and debris that forms around a young star. Now they have the first visual evidence to support the phenomenon, scientists said.

The image taken by the Atacama Large Millimetre/submillimetre Array (ALMA) in Chile shows two streams of gas connecting the inner and outer disks of cosmic material surrounding the star HD 142527, which is about 450 light-years from Earth.

Astronomers believe the gas streamers are the result of two giant planets – too small to be visible in this image – exerting a gravitational pull on the cloud of surrounding dust and gas, causing the material to flow from the outer to inner stellar disks, said Simon Casassus of the University of Chile in Santiago.

“The most natural interpretation for the flows seen by ALMA is that the putative proto-planets are pulling streams of gas inward towards them that are channelled by their gravity. Much of the gas then overshoots the planets and continues inward to the portion of the disk close to the star, where it can eventually fall onto the star itself,” Dr Casassus said.

“Astronomers have been predicting that these streams exist, but this is the first time we’ve been able to see them directly. Thanks to the new ALMA telescope, we’ve been able to get direct observations to illuminate current theories of how planets are formed,” he said.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Observations (left) made with the ALMA telescope of the young star HD 142527. The dust in the outer disc is shown in red. Dense gas in the streams flowing across the gap, as well as in the outer disc, is shown in green. Diffuse gas in the central gap is shown in blue. The gas filaments can be seen at the three o’clock and ten o’clock positions, flowing from the outer disc towards the centre. And (right) an artist’s impression. Courtesy of Independent.[end-div]

Curiosity’s 10K Hike

Scientists and engineers at JPL have Mount Sharp in their sights. It’s no ordinary mountain: it sits on Mars. The 5,000-meter-high peak exposes layers of promising sedimentary rock that hold clues to Mars’ geologic, and perhaps biological, history. Unfortunately, Mount Sharp is about 10 kilometers from the Curiosity rover’s current position. So, at a top speed of around 100 meters per day, it will take Curiosity until the fall of 2013 to reach its destination.
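
The drive-time arithmetic is simple enough to jot down. In the sketch below, the distance and daily pace are the round figures quoted here and in the article; the padding factor for science stops along the way is purely my own guess.

# Rough traverse-time estimate for Curiosity's drive to Mount Sharp.
# Distance and pace are round figures from the article; the stop factor is a guess.
distance_m = 10_000            # ~10 km from Glenelg to the base of Mount Sharp
top_speed_m_per_day = 100      # ~100 meters per driving day at full speed
stop_factor = 2.5              # assumed padding for pauses at interesting landmarks

driving_days = distance_m / top_speed_m_per_day
total_days = driving_days * stop_factor

print(f"Pure driving: ~{driving_days:.0f} days; with stops: ~{total_days:.0f} days")
# ~100 days of driving, ~250 days in all -- consistent with a six-to-nine-month journey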

[div class=attrib]From the New Scientist:[end-div]

NASA’S Curiosity rover is about to have its cake and eat it too. Around September, the rover should get its first taste of layered sediments at Aeolis Mons, a mountain over 5 kilometres tall that may hold preserved signs of life on Mars.

Previous rovers uncovered ample evidence of ancient water, a key ingredient for life as we know it. With its sophisticated on-board chemistry lab, Curiosity is hunting for more robust signs of habitability, including organic compounds – the carbon-based building blocks of life as we know it.

Observations from orbit show that the layers in Aeolis Mons – also called Mount Sharp – contain minerals thought to have formed in the presence of water. That fits with theories that the rover’s landing site, Gale crater, was once a large lake. Even better, the layers were probably laid down quickly enough that the rocks could have held on to traces of microorganisms, if they existed there.

If the search for organics turns up empty, Aeolis Mons may hold other clues to habitability, says project scientist John Grotzinger of the California Institute of Technology in Pasadena. The layers will reveal which minerals and chemical processes were present in Mars’s past. “We’re going to find all kinds of good stuff down there, I’m sure,” he says.

Curiosity will explore a region called Glenelg until early February, and then hit the gas. The base of the mountain is 10 kilometres away, and the rover can drive at about 100 metres a day at full speed. The journey should take between six and nine months, but will include stops to check out any interesting landmarks. After all, some of the most exciting discoveries from Mars rovers were a result of serendipity.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Base of Mount Sharp, Mars. Courtesy of NASA/JPL-Caltech/MSSS.[end-div]

Evolution and Autocatalysis

A clever idea about the process of emergence from mathematicians at the University of Vermont has some evolutionary biologists thinking.

[div class=attrib]From MIT Technology Review:[end-div]

One of the most puzzling questions about the origin of life is how the rich chemical landscape that makes life possible came into existence.

This landscape would have consisted among other things of amino acids, proteins and complex RNA molecules. What’s more, these molecules must have been part of a rich network of interrelated chemical reactions which generated them in a reliable way.

Clearly, all that must have happened before life itself emerged. But how?

One idea is that groups of molecules can form autocatalytic sets. These are self-sustaining chemical factories, in which the product of one reaction is the feedstock or catalyst for another. The result is a virtuous, self-contained cycle of chemical creation.

Today, Stuart Kauffman at the University of Vermont in Burlington and a couple of pals take a look at the broader mathematical properties of autocatalytic sets. In examining this bigger picture, they come to an astonishing conclusion that could have remarkable consequences for our understanding of complexity, evolution and the phenomenon of emergence.

They begin by deriving some general mathematical properties of autocatalytic sets, showing that such a set can be made up of many autocatalytic subsets of different types, some of which can overlap.

In other words, autocatalytic sets can have a rich complex structure of their own.

They go on to show how evolution can work on a single autocatalytic set, producing new subsets within it that are mutually dependent. This process sets up an environment in which newer subsets can evolve.

“In other words, self-sustaining, functionally closed structures can arise at a higher level (an autocatalytic set of autocatalytic sets), i.e., true emergence,” they say.

That’s an interesting view of emergence and certainly seems a sensible approach to the problem of the origin of life. It’s not hard to imagine groups of molecules operating together like this. And indeed, biochemists have recently discovered simple autocatalytic sets that behave in exactly this way.

But what makes the approach so powerful is that the mathematics does not depend on the nature of chemistry–it is substrate independent. So the building blocks in an autocatalytic set need not be molecules at all but any units that can manipulate other units in the required way.

These units can be complex entities in themselves. “Perhaps it is not too far-fetched to think, for example, of the collection of bacterial species in your gut (several hundreds of them) as one big autocatalytic set,” say Kauffman and co.

And they go even further. They point out that the economy is essentially the process of transforming raw materials into products such as hammers and spades that themselves facilitate further transformation of raw materials and so on. “Perhaps we can also view the economy as an (emergent) autocatalytic set, exhibiting some sort of functional closure,” they speculate.
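
The bookkeeping behind an autocatalytic set is easy to play with. The toy sketch below is mine, loosely in the spirit of the RAF-set algorithms used in this line of work rather than the authors’ actual method; the molecules, reactions and catalysts are all hypothetical. It asks whether, starting from a “food set”, the network can build everything it needs, including its own catalysts.

# Toy check for a collectively autocatalytic ("self-sustaining") reaction set.
# Hypothetical molecules and reactions; loosely inspired by RAF-set ideas.
food = {"a", "b"}
reactions = [
    # (reactants, products, catalyst)
    ({"a", "b"}, {"ab"}, "abb"),
    ({"ab", "b"}, {"abb"}, "ab"),
]

def closure(food_set, active_reactions):
    """Return every molecule reachable from the food set via the active reactions."""
    produced = set(food_set)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in active_reactions:
            if reactants <= produced and not products <= produced:
                produced |= products
                changed = True
    return produced

# Repeatedly discard reactions whose reactants or catalyst the set cannot supply.
active = list(reactions)
while True:
    reachable = closure(food, active)
    kept = [r for r in active if r[0] <= reachable and r[2] in reachable]
    if kept == active:
        break
    active = kept

print("Self-sustaining set found" if active else "No self-sustaining set in this network")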

[div class=attrib]Read the entire article after the jump.[end-div]

Best Science Stories of 2012

As the year comes to a close it’s fascinating to look back at some of the most breathtaking science of 2012.

The image above is of Saturn’s moon Enceladus. Evidence from the Cassini spacecraft, which took this remarkable image, suggests a deep salty ocean beneath the frozen surface that periodically spews icy particles into space. Many scientists believe that Enceladus is the best place to look for signs of life beyond Earth within our Solar System.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of Cassini Imaging Team/SSI/JPL/ESA/NASA.[end-div]

The Missing Linc

LincRNA that is. Recent discoveries hint at the potentially crucial role of this new class of genetic material in embryonic development, cell and tissue differentiation and even speciation and evolution.

[div class=attrib]From the Economist:[end-div]

THE old saying that where there’s muck, there’s brass has never proved more true than in genetics. Once, and not so long ago, received wisdom was that most of the human genome—perhaps as much as 99% of it—was “junk”. If this junk had a role, it was just to space out the remaining 1%, the genes in which instructions about how to make proteins are encoded, in a useful way in the cell nucleus.

That, it now seems, was about as far from the truth as it is possible to be. The decade or so since the completion of the Human Genome Project has shown that lots of the junk must indeed have a function. The culmination of that demonstration was the publication, in September, of the results of the ENCODE project. This suggested that almost two-thirds of human DNA, rather than just 1% of it, is being copied into molecules of RNA, the chemical that carries protein-making instructions to the sub-cellular factories which turn those proteins out, and that as a consequence, rather than there being just 23,000 genes (namely, the bits of DNA that encode proteins), there may be millions of them.

The task now is to work out what all these extra genes are up to. And a study just published in Genome Biology, by David Kelley and John Rinn of Harvard University, helps do that for one new genetic class, a type known as lincRNAs. In doing so, moreover, Dr Kelley and Dr Rinn show just how complicated the modern science of genetics has become, and hint also at how animal species split from one another.

Lincs in the chain

Molecules of lincRNA are similar to the messenger-RNA molecules which carry protein blueprints. They do not, however, encode proteins. More than 9,000 sorts are known, and most of those whose job has been tracked down are involved in the regulation of other genes, for example by attaching themselves to the DNA switches that control those genes.

LincRNA is rather odd, though. It often contains members of a second class of weird genetic object. These are called transposable elements (or, colloquially, “jumping genes”, because their DNA can hop from one place to another within the genome). Transposable elements come in several varieties, but one group of particular interest are known as endogenous retroviruses. These are the descendants of ancient infections that have managed to hide away in the genome and get themselves passed from generation to generation along with the rest of the genes.

Dr Kelley and Dr Rinn realised that the movement within the genome of transposable elements is a sort of mutation, and wondered if it has evolutionary consequences. Their conclusion is that it does, for when they looked at the relation between such elements and lincRNA genes, they found some intriguing patterns.

In the first place, lincRNAs are much more likely to contain transposable elements than protein-coding genes are. More than 83% do so, in contrast to only 6% of protein-coding genes.

Second, those transposable elements are particularly likely to be endogenous retroviruses, rather than any of the other sorts of element.

Third, the interlopers are usually found in the bit of the gene where the process of copying RNA from the DNA template begins, suggesting they are involved in switching genes on or off.

And fourth, lincRNAs containing one particular type of endogenous retrovirus are especially active in pluripotent stem cells, the embryonic cells that are the precursors of all other cell types. That indicates these lincRNAs have a role in the early development of the embryo.

Previous work suggests lincRNAs are also involved in creating the differences between various sorts of tissue, since many lincRNA genes are active in only one or a few cell types. Given that their principal job is regulating the activities of other genes, this makes sense.

Even more intriguingly, studies of lincRNA genes from species as diverse as people, fruit flies and nematode worms, have found they differ far more from one species to another than do protein-coding genes. They are, in other words, more species specific. And that suggests they may be more important than protein-coding genes in determining the differences between those species.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Darwin’s finches or Galapagos finches. Darwin, 1845. Courtesy of Wikipedia.[end-div]

Rivers of Methane

The image shows what looks like a satellite picture of a river delta, complete with tributaries. It could be the Nile or the Amazon river systems as seen from space.

However, the image is not of an earthbound river at all. It’s a recently discovered river on Titan, Saturn’s largest moon. And, the river’s contents are not even water, but probably a mixture of liquid ethane and methane.

[div class=attrib]From NASA:[end-div]

This image from NASA’s Cassini spacecraft shows a vast river system on Saturn’s moon Titan. It is the first time images from space have revealed a river system so vast and in such high resolution anywhere other than Earth. The image was acquired on Sept. 26, 2012, on Cassini’s 87th close flyby of Titan. The river valley crosses Titan’s north polar region and runs into Ligeia Mare, one of the three great seas in the high northern latitudes of Saturn’s moon Titan. It stretches more than 200 miles (400 kilometers).

Scientists deduce that the river is filled with liquid because it appears dark along its entire extent in the high-resolution radar image, indicating a smooth surface. That liquid is presumably ethane mixed with methane, the former having been positively identified in 2008 by Cassini’s visual and infrared mapping spectrometer at the lake known as Ontario Lacus in Titan’s southern hemisphere. Though there are some short, local meanders, the relative straightness of the river valley suggests it follows the trace of at least one fault, similar to other large rivers running into the southern margin of Ligeia Mare (see PIA10008). Such faults may lead to the opening of basins and perhaps to the formation of the giant seas themselves.

North is toward the top of this image.

The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and ASI, the Italian Space Agency. NASA’s Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA’s Science Mission Directorate, Washington. The Cassini orbiter was designed, developed and assembled at JPL. The RADAR instrument was built by JPL and the Italian Space Agency, working with team members from the US and several European countries. JPL is a division of the California Institute of Technology in Pasadena.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of NASA/JPL-Caltech/ASI.[end-div]

The Habitable Exoplanets Catalog

The Habitable Exoplanets Catalog is a fascinating resource for those who dream of starting a new life on a distant world. Barely a year old, the catalog now lists seven planets outside our solar system, all within our own Milky Way galaxy, that could become future homes for adventurous humans (complaints from existing inhabitants notwithstanding). The closest of them at the moment, Gliese 581g, lies just over 20 light years away, yet would take around 200,000 years to reach using current technology.
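
To put that figure in perspective, here is the back-of-the-envelope version in Python. The 30 km/s cruise speed is my own assumption, an optimistic round number for current propulsion; plug in other values to taste.

# Rough travel-time estimate to Gliese 581g (illustrative assumptions only).
LIGHT_YEAR_KM = 9.4607e12          # kilometers in one light year
SECONDS_PER_YEAR = 3.156e7         # seconds in one year

distance_ly = 20.3                 # approximate distance to the Gliese 581 system
cruise_speed_km_s = 30.0           # assumed probe speed; optimistic for current technology

distance_km = distance_ly * LIGHT_YEAR_KM
travel_time_years = distance_km / cruise_speed_km_s / SECONDS_PER_YEAR

print(f"Travel time: {travel_time_years:,.0f} years")   # on the order of 200,000 years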

[div class=attrib]From the Independent:[end-div]

An ambitious project to catalogue every habitable planet has discovered seven worlds inside the Milky Way that could possibly harbour life.

Marking its first anniversary, the Habitable Exoplanets Catalog said it had far exceeded its expectation of adding one or two new planets this year in its search for a new earth.

In recent years scientists from the Puerto Rico-based Planetary Habitability Laboratory that runs the catalogue have sharpened their techniques for finding new planets outside our solar system.

Chile’s High Accuracy Radial Velocity Planet Searcher and the orbiting Kepler Space Telescope are two of the many tools that have increased the pace of discoveries.

The Planetary Habitability Laboratory launched the Habitable Exoplanets Catalog last year to measure the suitability for life of these emerging worlds and as a way to organise them for the public.

It has found nearly 80 confirmed exoplanets with a similar size to Earth but only a few of those have the right distance from their star to support liquid surface water – the presence of which is considered essential to sustain life.

Seven potentially habitable exoplanets are now listed by the Habitable Exoplanets Catalog, including the disputed Gliese 581g, plus some 27 more from NASA Kepler candidates waiting for confirmation.

Although all these exoplanets are superterrans and are considered potentially habitable, scientists have not yet found a true Earth analogue.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Current Potential Habitable Exoplanets. Courtesy of PHL @ UPR Arecibo.[end-div]

A Star is Born, and its Solar System

A diminutive stellar blob some 450 light years away seems to be a young star giving birth to a planetary system much like our very own Solar System. The developing protostar and its surrounding gas cloud are being tracked by astronomers at the National Radio Astronomy Observatory in Charlottesville, Virginia. Stellar and planetary evolution in action.

[div class=attrib]From New Scientist:[end-div]

Swaddled in a cloud of dust and gas, the baby star shows a lot of potential. It is quietly sucking in matter from the cloud, which holds enough cosmic nourishment for the infant to grow as big and bright as our sun. What’s more, the star is surrounded by enough raw material to build at least seven planetary playmates.

Dubbed L1527, the star is still in the earliest stages of development, so it offers one of the best peeks yet at what our solar system may have looked like as it was taking shape.

The young star is currently one-fifth of the mass of the sun, but it is growing. If it has been bulking up at the same rate all its life, the star should be just 300,000 years old – a mere tyke compared to our 4.6-billion-year-old sun. But the newfound star may be even younger, because some theories say stars initially grow at a faster rate.

Diminutive sun

The cloud feeding the protostar contains at least as much material as our sun, says John Tobin of the National Radio Astronomy Observatory in Charlottesville, Virginia.

“The key factor in determining a star’s characteristics is the mass, so L1527 could potentially grow to become similar to the sun,” says Tobin.

Material from the cloud is being funnelled to the star through a swirling disc that contains roughly 0.5 per cent the mass of the sun. That might not sound like a lot, but that’s enough mass to make up at least seven Jupiter-sized planets.

Previous observations of L1527 had hinted that a disk encircled the star, but it was not clear that the disk was rotating, which is an essential ingredient for planet formation. So Tobin and his colleagues took a closer look.

Good rotations

The team used radio observations to detect the presence of carbon monoxide around the star and watched how the material swirled around in the disc to trace its overall motion. They found that matter nearest to the star is rotating faster than material near the edge of the disc – a pattern that mirrors the way planets orbit a star.

“The dust and gas are orbiting the protostar much like how planets orbit the sun,” says Tobin. “Unfortunately there is no telling how many planets might form or how large they will be.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Protostar L1527. Courtesy of NASA / JPL, via tumblr.[end-div]

Voyager: A Gift that Keeps on Giving

The little space probe that could — Voyager 1 — is close to leaving our solar system and entering the relative void of interstellar space. As it does so, from a distance of around 18.4 billion kilometers (today), it continues to send back signals of what it finds. And the surprises continue.

[div class=attrib]From ars technica:[end-div]

Several years ago the Voyager spacecraft neared the edge of the Solar System, where the solar wind and magnetic field started to be influenced by the pressure from the interstellar medium that surrounds them. But the expected breakthrough to interstellar space appeared to be indefinitely put on hold; instead, the particles and magnetic field lines in the area seemed to be sending mixed signals about the Voyagers’ escape. At today’s meeting of the American Geophysical Union, scientists offered an explanation: the durable spacecraft ran into a region that nobody predicted.

The Voyager probes were sent on a grand tour of the outer planets over 35 years ago. After a series of staggeringly successful visits to the planets, the probes shot out beyond the most distant of them toward the edges of the Solar System. Scientists expected that as they neared the edge, we’d see the charged particles of the solar wind changing direction as the interstellar medium alters the direction of the Sun’s magnetic field. But while some aspects of the Voyager’s environment have changed, we’ve not seen any clear indication that it has left the Solar System. The solar wind actually seems to be grinding to a halt.

Today’s announcement clarifies that the confusion was caused by the fact that nature didn’t think much of physicists’ expectations. Instead, there’s an additional region near our Solar System’s boundary that hadn’t been predicted.

Within the Solar System, the environment is dominated by the solar magnetic field and a flow of charged particles sent out by the Sun (called the solar wind). Interstellar space has its own flow of particles in the form of low-energy cosmic rays, which the Sun’s magnetic field deflects away from us. There’s also an interstellar magnetic field with field lines oriented in different directions to our Sun’s.

Researchers expected the Voyagers would reach a relatively clear boundary between the Solar System and interstellar space. The Sun’s magnetic field would first shift directions, then be left behind and the interstellar one would be detected. At the same time, we’d see the loss of the solar wind and start seeing the first low-energy cosmic rays.

As expected, a few years back, the Voyagers reached a region where the interstellar medium forced the Sun’s magnetic field lines to curve north. But the solar wind refused to follow suit. Instead of flowing north, the solar wind slowed to a halt while the cosmic rays were missing in action.

Over the summer, as Voyager 1 approached 122 astronomical units from the Sun, that started to change. Arik Posner of the Voyager team said that, starting in late July, Voyager 1 detected a sudden drop in the presence of particles from the solar wind, which went down by half. At the same time, the first low-energy cosmic rays filtered in. A few days later things returned to normal. A second drop occurred on August 15 and then, on August 28, things underwent a permanent shift. According to Tom Krimigis, particles originating from the Sun dropped by about 1,000-fold. Low-energy cosmic rays rose and stayed elevated.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Voyager II. Courtesy of NASA / JPL.[end-div]

The Immortal Jellyfish

In 1988 a marine-biology student named Christian Sommer made a stunning discovery, though it was little publicized at the time. Snorkeling in the waters off Rapallo, on the Italian Riviera, he found a small creature that resembled a jellyfish. It showed a very odd attribute: it refused to die. The true importance of this discovery did not become fully apparent until 1996, when a group of researchers found that this invertebrate, now classified as a hydrozoan and known by its scientific name Turritopsis dohrnii, could at any point during its lifecycle revert to an earlier stage and then begin its development all over again. It was, to all intents and purposes, immortal.

For scientists seeking to unravel the mechanisms that underlie the aging process, Turritopsis dohrnii — the immortal jellyfish — represents a truly significant finding. Might our progress in slowing or even halting aging in humans come from a lowly jellyfish? Time will tell.

[div class=attrib]From the New York Times:[end-div]

After more than 4,000 years — almost since the dawn of recorded time, when Utnapishtim told Gilgamesh that the secret to immortality lay in a coral found on the ocean floor — man finally discovered eternal life in 1988. He found it, in fact, on the ocean floor. The discovery was made unwittingly by Christian Sommer, a German marine-biology student in his early 20s. He was spending the summer in Rapallo, a small city on the Italian Riviera, where exactly one century earlier Friedrich Nietzsche conceived “Thus Spoke Zarathustra”: “Everything goes, everything comes back; eternally rolls the wheel of being. Everything dies, everything blossoms again. . . .”

Sommer was conducting research on hydrozoans, small invertebrates that, depending on their stage in the life cycle, resemble either a jellyfish or a soft coral. Every morning, Sommer went snorkeling in the turquoise water off the cliffs of Portofino. He scanned the ocean floor for hydrozoans, gathering them with plankton nets. Among the hundreds of organisms he collected was a tiny, relatively obscure species known to biologists as Turritopsis dohrnii. Today it is more commonly known as the immortal jellyfish.

Sommer kept his hydrozoans in petri dishes and observed their reproduction habits. After several days he noticed that his Turritopsis dohrnii was behaving in a very peculiar manner, for which he could hypothesize no earthly explanation. Plainly speaking, it refused to die. It appeared to age in reverse, growing younger and younger until it reached its earliest stage of development, at which point it began its life cycle anew.

Sommer was baffled by this development but didn’t immediately grasp its significance. (It was nearly a decade before the word “immortal” was first used to describe the species.) But several biologists in Genoa, fascinated by Sommer’s finding, continued to study the species, and in 1996 they published a paper called “Reversing the Life Cycle.” The scientists described how the species — at any stage of its development — could transform itself back to a polyp, the organism’s earliest stage of life, “thus escaping death and achieving potential immortality.” This finding appeared to debunk the most fundamental law of the natural world — you are born, and then you die.

One of the paper’s authors, Ferdinando Boero, likened the Turritopsis to a butterfly that, instead of dying, turns back into a caterpillar. Another metaphor is a chicken that transforms into an egg, which gives birth to another chicken. The anthropomorphic analogy is that of an old man who grows younger and younger until he is again a fetus. For this reason Turritopsis dohrnii is often referred to as the Benjamin Button jellyfish.

Yet the publication of “Reversing the Life Cycle” barely registered outside the academic world. You might expect that, having learned of the existence of immortal life, man would dedicate colossal resources to learning how the immortal jellyfish performs its trick. You might expect that biotech multinationals would vie to copyright its genome; that a vast coalition of research scientists would seek to determine the mechanisms by which its cells aged in reverse; that pharmaceutical firms would try to appropriate its lessons for the purposes of human medicine; that governments would broker international accords to govern the future use of rejuvenating technology. But none of this happened.

Some progress has been made, however, in the quarter-century since Christian Sommer’s discovery. We now know, for instance, that the rejuvenation of Turritopsis dohrnii and some other members of the genus is caused by environmental stress or physical assault. We know that, during rejuvenation, it undergoes cellular transdifferentiation, an unusual process by which one type of cell is converted into another — a skin cell into a nerve cell, for instance. (The same process occurs in human stem cells.) We also know that, in recent decades, the immortal jellyfish has rapidly spread throughout the world’s oceans in what Maria Pia Miglietta, a biology professor at Notre Dame, calls “a silent invasion.” The jellyfish has been “hitchhiking” on cargo ships that use seawater for ballast. Turritopsis has now been observed not only in the Mediterranean but also off the coasts of Panama, Spain, Florida and Japan. The jellyfish seems able to survive, and proliferate, in every ocean in the world. It is possible to imagine a distant future in which most other species of life are extinct but the ocean will consist overwhelmingly of immortal jellyfish, a great gelatin consciousness everlasting.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image of Turritopsis dohrnii, courtesy of Discovery News.[end-div]

Sleep Myths

Chronobiologist Till Roenneberg debunks five commonly held beliefs about sleep. He is the author of “Internal Time: Chronotypes, Social Jet Lag, and Why You’re So Tired.”

[div class=attrib]From the Washington Post:[end-div]

If shopping on Black Friday leaves you exhausted, or if your holiday guests keep you up until the wee hours, a long Thanksgiving weekend should offer an opportunity for some serious shut-eye. We spend between a quarter and a third of our lives asleep, but that doesn’t make us experts on how much is too much, how little is too little, or how many hours of rest the kids need to be sharp in school. Let’s tackle some popular myths about Mr. Sandman.

1. You need eight hours of sleep per night.

That’s the cliche. Napoleon, for one, didn’t believe it. His prescription went something like this: “Six hours for a man, seven for a woman and eight for a fool.”

But Napoleon’s formula wasn’t right, either. The ideal amount of sleep is different for everyone and depends on many factors, including age and genetic makeup.

In the past 10 years, my research team has surveyed sleep behavior in more than 150,000 people. About 11 percent slept six hours or less, while only 27 percent clocked eight hours or more. The majority fell in between. Women tended to sleep longer than men, but only by 14 minutes.

Bigger differences are seen when comparing various age groups. Ten-year-olds needed about nine hours of sleep, while adults older than 30, including senior citizens, averaged about seven hours. We recently identified the first gene associated with sleep duration — if you have one variant of this gene, you need more sleep than if you have another.

2. Early to bed and early to rise makes a man healthy, wealthy and wise.

Benjamin Franklin’s proverbial praise of early risers made sense in the second half of the 18th century, when his peers were exposed to much more daylight and to very dark nights. Their body clocks were tightly synchronized to this day-night cycle. This changed as work gradually moved indoors, performed under the far weaker intensity of artificial light during the day and, if desired, all night long.

The timing of sleep — earlier or later — is controlled by our internal clocks, which determine what researchers call our optimal “sleep window.” With the widespread use of electric light, our body clocks have shifted later while the workday has essentially remained the same. We fall asleep according to our (late) body clock, and are awakened early for work by the alarm clock. We therefore suffer from chronic sleep deprivation, and then we try to compensate by sleeping in on free days. Many of us sleep more than an hour longer on weekends than we do on workdays.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]

The Science (and Benefit) of Fasting

For thousands of years people have fasted to cleanse the body and the spirit. And, of course, many fast to lose (some) weight. Recently, a growing body of scientific research seems to suggest that fasting may slow the aging process.

[div class=attrib]From the New Scientist:[end-div]

THERE’S a fuzz in my brain and an ache in my gut. My legs are leaden and my eyesight is blurry. But I have only myself to blame. Besides, I have been assured that these symptoms will pass. Between 10 days and three weeks from now, my body will adjust to the new regime, which entails fasting for two days each week. In the meantime, I just need to keep my eyes on the prize. Forget breakfast and second breakfast, ignore the call of multiple afternoon snacks, because the pay offs of doing without could be enormous.

Fasting is most commonly associated with religious observation. It is the fourth of the Five Pillars of Islam. Buddhists consider it a means to practise self-control and advocate abstaining from food after the noon meal. For some Christians, temporary fasts are seen as a way of getting closer to God. But the benefits I am hoping for are more corporeal.

The idea that fasting might be good for your health has a long, if questionable, history. Back in 1908, “Dr” Linda Hazzard, an American with some training as a nurse, published a book called Fasting for the Cure of Disease, which claimed that minimal food was the route to recovery from a variety of illnesses including cancer. Hazzard was jailed after one of her patients died of starvation. But what if she was, at least partly, right?

A new surge of interest in fasting suggests that it might indeed help people with cancer. It could also reduce the risk of developing cancer, guard against diabetes and heart disease, help control asthma and even stave off Parkinson’s disease and dementia. Many of the scientists who study fasting practise what they research, and they tell me that at my age (39) it could be vital that I start now. “We know from animal models,” says Mark Mattson at the US National Institute on Aging, “that if we start an intermittent fasting diet at what would be the equivalent of middle age in people, we can delay the onset of Alzheimer’s and Parkinson’s.” Surely it’s worth a try?

Until recently, most studies linking diet with health and longevity focused on calorie restriction. They have had some impressive results, with the lifespan of various lab animals lengthened by up to 50 per cent after their daily calorie intake was cut in half. But these effects do not seem to extend to primates. A 23-year-long study of macaques found that although calorie restriction delayed the onset of age-related diseases, it had no impact on lifespan. So other factors such as genetics may be more important for human longevity too (Nature, vol 489, p 318).

That’s bad news for anyone who has gone hungry for decades in the hope of living longer, but the finding has not deterred fasting researchers. They point out that although fasting obviously involves cutting calories – at least on the fast days – it brings about biochemical and physiological changes that daily dieting does not. Besides, calorie restriction may leave people susceptible to infections and biological stress, whereas fasting, done properly, should not. Some even argue that we are evolutionarily adapted to going without food intermittently. “The evidence is pretty strong that our ancestors did not eat three meals a day plus snacks,” says Mattson. “Our genes are geared to being able to cope with periods of no food.”

What’s in a fast?

As I sit here, hungry, it certainly doesn’t feel like that. But researchers do agree that fasting will leave you feeling crummy in the short term because it takes time for your body to break psychological and biological habits. Less reassuring is their lack of agreement on what fasting entails. I have opted for the “5:2” diet, which allows me 600 calories in a single meal on each of two weekly “fast” days. The normal recommended intake is about 2000 calories for a woman and 2500 for a man, and I am allowed to eat whatever I want on the five non-fast days, underlining the fact that fasting is not necessarily about losing weight. A more draconian regimen has similar restricted-calorie “fasts” every other day. Then there’s total fasting, in which participants go without food for anything from one to five days – longer than about a week is considered potentially dangerous. Fasting might be a one-off, or repeated weekly or monthly.

Different regimens have different effects on the body. A fast is considered to start about 10 to 12 hours after a meal, when you have used up all the available glucose in your blood and start converting glycogen stored in liver and muscle cells into glucose to use for energy. If the fast continues, there is a gradual move towards breaking down stored body fat, and the liver produces “ketone bodies” – short molecules that are by-products of the breakdown of fatty acids. These can be used by the brain as fuel. This process is in full swing three to four days into a fast. Various hormones are also affected. For example, production of insulin-like growth factor 1 (IGF-1) drops early and reaches very low levels by day three or four. It is similar in structure to insulin, which also becomes scarcer with fasting, and high levels of both have been linked to cancer.

[div class=attrib]Read the entire article following the jump.[end-div]

Telomere Test: A Date With Death

In 1977 the molecular biologists Elizabeth Blackburn and Joseph Gall worked out the structure of telomeres, the protective caps on the ends of chromosomes. In 2009, Blackburn and her colleagues Carol Greider and Jack Szostak shared the Nobel Prize in Physiology or Medicine for their work on telomeres and telomerase, the enzyme responsible for replenishing telomeres.

It turns out that telomeres are rather important. Studies show that telomeres regulate cell division and, as a consequence, directly influence aging and life span. Each time a cell divides, its chromosomal telomeres shorten. Once a telomere is depleted, its chromosome can no longer be replicated accurately and the cell stops dividing, hastening cell death.
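
A crude way to picture the mechanism is a counter that ticks down with every division. The figures in the sketch below (starting length, loss per division, critical length) are illustrative ballpark numbers of my own choosing, not measurements; they are picked so the counter runs out after roughly the 50 to 60 divisions of the classic Hayflick limit.

# Toy model of telomere shortening (illustrative figures, not measurements).
# Each division trims the telomere; below a critical length the cell stops dividing.
telomere_bp = 10_000          # assumed starting telomere length, in base pairs
loss_per_division_bp = 100    # assumed loss per cell division
critical_bp = 4_000           # assumed length at which the cell becomes senescent

divisions = 0
while telomere_bp > critical_bp:
    telomere_bp -= loss_per_division_bp
    divisions += 1

print(f"Cell senesces after {divisions} divisions, "
      f"with {telomere_bp} bp of telomere remaining")
# Prints 60 divisions -- in the neighborhood of the classic Hayflick limit.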

[div class=attrib]From the Independent:[end-div]

A blood test to determine how fast someone is ageing has been shown to work on a population of wild birds, the first time the ageing test has been used successfully on animals living outside a laboratory setting.

The test measures the average length of tiny structures on the tips of chromosomes called telomeres which are known to get shorter each time a cell divides during an organism’s lifetime.

Telomeres are believed to act like internal clocks by providing a more accurate estimate of a person’s true biological age rather than their actual chronological age.

This has led some experts to suggest that telomere tests could be used to estimate not only how fast someone is ageing, but possibly how long they have left to live if they die of natural causes.

Telomere tests have been widely used on experimental animals and at least one company is offering a £400 blood test in the UK for people interested in seeing how fast they are ageing based on their average telomere length.

Now scientists have performed telomere tests on an isolated population of songbirds living on an island in the Seychelles and found that the test does indeed accurately predict an animal’s likely lifespan.

“We saw that telomere length is a better indicator of life expectancy than chronological age. So by measuring telomere length we have a way of estimating the biological age of an individual – how much of its life it has used up,” said David Richardson of the University of East Anglia.

The researchers tested the average telomere lengths of a population of 320 Seychelles Warblers living on the remote Cousin Island, which ornithologists have studied for 20 years, documenting the life history of each bird.

“Our results provide the first clear and unambiguous evidence of a relationship between telomere length and mortality in the wild, and substantiate the prediction that telomere length and shortening rate can act as an indicator of biological age further to chronological age,” says the study published in the journal Molecular Ecology.

Studying an island population of wild birds was important because there were no natural predators and little migration, meaning that the scientists could accurately study the link between telomere length and a bird’s natural lifespan.

“We wanted to understand what happens over an entire lifetime, so the Seychelles warbler is an ideal research subject. They are naturally confined to an isolated tropical island, without any predators, so we can follow individuals throughout their lives, right into old age,” Dr Richardson said.

“We investigated whether, at any given age, their telomere lengths could predict imminent death. We found that short and rapidly shortening telomeres were a good indication that the bird would die within a year,” he said.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Infographic courtesy of Independent.[end-div]