Category Archives: BigBang

Lead a Congressional Committee on Science: No Grasp of Science Required

[div class=attrib]From ars technica:[end-div]

[div class=attrib]Image: The House Committee on Science, Space, and Technology hears testimony on climate change in March 2011.[end-div]

If you had the chance to ask questions of one of the world’s leading climatologists, would you select a set of topics that would be at home in the heated discussions that take place in the Ars forums? If you watch the video below, you’d find that’s precisely what Dana Rohrabacher (R-CA) chose to do when Penn State’s Richard Alley (a fellow Republican) was called before the House Science Committee, which has already had issues with its grasp of science. Rohrabacher took Alley on a tour of some of the least convincing arguments about climate change, all trying to convince him that changes in the Sun were to blame for a changing climate. (Alley, for his part, noted that we have actually measured the Sun, and we’ve seen no such changes.)

Now, if he has his way, Rohrabacher will be chairing the committee once the next Congress is seated. Even if he doesn’t get the job, the alternatives aren’t much better.

There has been some good news for the Science Committee to come out of the last election. Representative Todd Akin (R-MO), whose lack of understanding of biology was made clear by his comments on “legitimate rape,” had to give up his seat to run for the Senate, a race he lost. Meanwhile, Paul Broun (R-GA), who said that evolution and cosmology are “lies straight from the pit of Hell,” won reelection, but he received a bit of a warning in the process: dead English naturalist Charles Darwin, who is ineligible to serve in Congress, managed to draw thousands of write-in votes. And, thanks to limits on chairmanships, Ralph Hall (R-TX), who accused climate scientists of being in it for the money (if so, they’re doing it wrong), will have to step down.

In addition to Rohrabacher, the other Representatives who are vying to lead the Committee are Wisconsin’s James Sensenbrenner and Texas’ Lamar Smith. They all suggest that they will focus on topics like NASA’s budget and the Department of Energy’s plans for future energy tech. But all of them have been embroiled in the controversy over climate change in the past.

In an interview with Science Insider about his candidacy, Rohrabacher engaged in a bit of triumphalism and suggested that his beliefs were winning out. “There were a lot of scientists who were just going along with the flow on the idea that mankind was causing a change in the world’s climate,” he said. “I think that after 10 years of debate, we can show that there are hundreds if not thousands of scientists who have come over to being skeptics, and I don’t know anyone [who was a skeptic] who became a believer in global warming.”

[div class=attrib]Read the entire article following the jump.[end-div]

Us: Perhaps It’s All Due to Gene miR-941

Geneticists have discovered a gene that helps explain how humans and apes diverged from their common ancestor around 6 million years ago.

[div class=attrib]From the Guardian:[end-div]

Researchers have discovered a new gene they say helps explain how humans evolved from chimpanzees.

The gene, called miR-941, appears to have played a crucial role in human brain development and could shed light on how we learned to use tools and language, according to scientists.

A team at the University of Edinburgh compared the human genome to that of 11 other species of mammals, including chimpanzees, gorillas, mice and rats.

The results, published in Nature Communications, showed that the gene is unique to humans.

The team believe it emerged between six and one million years ago, after humans evolved from apes.

Researchers said it is the first time a new gene carried by humans and not by apes has been shown to have a specific function in the human body.

Martin Taylor, who led the study at the Institute of Genetics and Molecular Medicine at the University of Edinburgh, said: “As a species, humans are wonderfully inventive – we are socially and technologically evolving all the time.

“But this research shows that we are innovating at a genetic level too.

“This new molecule sprang from nowhere at a time when our species was undergoing dramatic changes: living longer, walking upright, learning how to use tools and how to communicate.

“We’re now hopeful that we will find more new genes that help show what makes us human.”

The gene is highly active in two areas of the brain, controlling decision-making and language abilities, with the study suggesting it could have a role in the advanced brain functions that make us human.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of ABCNews.[end-div]

Hearing and Listening

Auditory neuroscientist Seth Horowitz guides us through the science of hearing and listening in his new book, “The Universal Sense: How Hearing Shapes the Mind.” He clarifies the important distinction between attentive listening with the mind and the more passive act of hearing, and laments the many modern distractions that threaten our ability to listen effectively.

[div class=attrib]From the New York Times:[end-div]

HERE’S a trick question. What do you hear right now?

If your home is like mine, you hear the humming sound of a printer, the low throbbing of traffic from the nearby highway and the clatter of plastic followed by the muffled impact of paws landing on linoleum — meaning that the cat has once again tried to open the catnip container atop the fridge and succeeded only in knocking it to the kitchen floor.

The slight trick in the question is that, by asking you what you were hearing, I prompted your brain to take control of the sensory experience — and made you listen rather than just hear. That, in effect, is what happens when an event jumps out of the background enough to be perceived consciously rather than just being part of your auditory surroundings. The difference between the sense of hearing and the skill of listening is attention.

Hearing is a vastly underrated sense. We tend to think of the world as a place that we see, interacting with things and people based on how they look. Studies have shown that conscious thought takes place at about the same rate as visual recognition, requiring a significant fraction of a second per event. But hearing is a quantitatively faster sense. While it might take you a full second to notice something out of the corner of your eye, turn your head toward it, recognize it and respond to it, the same reaction to a new or sudden sound happens at least 10 times as fast.

This is because hearing has evolved as our alarm system — it operates out of line of sight and works even while you are asleep. And because there is no place in the universe that is totally silent, your auditory system has evolved a complex and automatic “volume control,” fine-tuned by development and experience, to keep most sounds off your cognitive radar unless they might be of use as a signal that something dangerous or wonderful is somewhere within the kilometer or so that your ears can detect.

This is where attention kicks in.

Attention is not some monolithic brain process. There are different types of attention, and they use different parts of the brain. The sudden loud noise that makes you jump activates the simplest type: the startle. A chain of five neurons from your ears to your spine takes that noise and converts it into a defensive response in a mere tenth of a second — elevating your heart rate, hunching your shoulders and making you cast around to see if whatever you heard is going to pounce and eat you. This simplest form of attention requires almost no brains at all and has been observed in every studied vertebrate.

More complex attention kicks in when you hear your name called from across a room or hear an unexpected birdcall from inside a subway station. This stimulus-directed attention is controlled by pathways through the temporoparietal and inferior frontal cortex regions, mostly in the right hemisphere — areas that process the raw, sensory input, but don’t concern themselves with what you should make of that sound. (Neuroscientists call this a “bottom-up” response.)

But when you actually pay attention to something you’re listening to, whether it is your favorite song or the cat meowing at dinnertime, a separate “top-down” pathway comes into play. Here, the signals are conveyed through a dorsal pathway in your cortex, part of the brain that does more computation, which lets you actively focus on what you’re hearing and tune out sights and sounds that aren’t as immediately important.

In this case, your brain works like a set of noise-suppressing headphones, with the bottom-up pathways acting as a switch to interrupt if something more urgent — say, an airplane engine dropping through your bathroom ceiling — grabs your attention.

Hearing, in short, is easy. You and every other vertebrate that hasn’t suffered some genetic, developmental or environmental accident have been doing it for hundreds of millions of years. It’s your life line, your alarm system, your way to escape danger and pass on your genes. But listening, really listening, is hard when potential distractions are leaping into your ears every fifty-thousandth of a second — and pathways in your brain are just waiting to interrupt your focus to warn you of any potential dangers.

Listening is a skill that we’re in danger of losing in a world of digital distraction and information overload.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: The Listener (TV series). Courtesy of Shaftsbury Films, CTV / Wikipedia.[end-div]

Big Data Versus Talking Heads

With the election in the United States now decided, the dissection of the result is well underway. And perhaps the biggest winner of all is the science of big data. Yes, mathematical analysis of vast quantities of demographic and polling data won out over the voodoo proclamations and gut-felt predictions of the punditocracy. Now, that’s a result truly worth celebrating.

[div class=attrib]From ReadWriteWeb:[end-div]

Political pundits, mostly Republican, went into a frenzy when Nate Silver, a New York Times pollster and stats blogger, predicted that Barack Obama would win reelection.

But Silver was right and the pundits were wrong – and the impact of this goes way beyond politics.

Silver won because, um, science. As ReadWrite’s own Dan Rowinski noted,  Silver’s methodology is all based on data. He “takes deep data sets and applies logical analytical methods” to them. It’s all just numbers.

Silver runs a blog called FiveThirtyEight, which is licensed by the Times. In 2008 he called the presidential election with incredible accuracy, getting 49 out of 50 states right. But this year he rolled a perfect score, 50 out of 50, even nailing the margins in many cases. His uncanny accuracy on this year’s election represents what Rowinski calls a victory of “logic over punditry.”

In fact it’s bigger than that. Bear in mind that before turning his attention to politics in 2007 and 2008, Silver was using computer models to make predictions about baseball. What does it mean when some punk kid baseball nerd can just wade into politics and start kicking butt on all these long-time “experts” who have spent their entire lives covering politics?

It means something big is happening.

Man Versus Machine

This is about the triumph of machines and software over gut instinct.

The age of voodoo is over. The era of talking about something as a “dark art” is done. In a world with big computers and big data, there are no dark arts.

And thank God for that. One by one, computers and the people who know how to use them are knocking off these crazy notions about gut instinct and intuition that humans like to cling to. For far too long we’ve applied this kind of fuzzy thinking to everything, from silly stuff like sports to important stuff like medicine.

Someday, and I hope it’s soon, we will enter the age of intelligent machines, when true artificial intelligence becomes a reality, and when we look back on the late 20th and early 21st century it will seem medieval in its simplicity and reliance on superstition.

What most amazes me is the backlash and freak-out that occurs every time some “dark art” gets knocked over in a particular domain. Watch Moneyball (or read the book) and you’ll see the old guard (in that case, baseball scouts) grow furious as they realize that computers can do their job better than they can. (Of course it’s not computers; it’s people who know how to use computers.)

We saw the same thing when IBM’s Deep Blue defeated Garry Kasparov in 1997. We saw it when Watson beat humans at Jeopardy.

It’s happening in advertising, which used to be a dark art but is increasingly a computer-driven numbers game. It’s also happening in my business, the news media, prompting the same kind of furor as happened with the baseball scouts in Moneyball.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Political pundits, Left to right: Mark Halperin, David Brooks, Jon Stewart, Tim Russert, Matt Drudge, John Harris & Jim VandeHei, Rush Limbaugh, Sean Hannity, Chris Matthews, Karl Rove. Courtesy of Telegraph.[end-div]

How We Die (In Britain)

The handy infographic below draws on data compiled by the Office for National Statistics in the United Kingdom. So, if you live in the British Isles this will give you an inkling of your likely cause of death. Interestingly, if you live in the United States you are more likely to die of a gunshot wound than a Brit is to die from falling from a building.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Infographic courtesy of the Guardian.[end-div]

The Benefits and Beauty of Blue

[div class=attrib]From the New York Times:[end-div]

For the French Fauvist painter and color gourmand Raoul Dufy, blue was the only color with enough strength of character to remain blue “in all its tones.” Darkened red looks brown and whitened red turns pink, Dufy said, while yellow blackens with shading and fades away in the light. But blue can be brightened or dimmed, the artist said, and “it will always stay blue.”

Scientists, too, have lately been bullish on blue, captivated by its optical purity, complexity and metaphorical fluency. They’re exploring the physics and chemistry of blueness in nature, the evolution of blue ornaments and blue come-ons, and the sheer brazenness of being blue when most earthly life forms opt for earthy raiments of beige, ruddy or taupe.

One research team recently reported the structural analysis of a small, dazzlingly blue fruit from the African Pollia condensata plant that may well be the brightest terrestrial object in nature. Another group working in the central Congo basin announced the discovery of a new species of monkey, a rare event in mammalogy. Rarer still is the noteworthiest trait of the monkey, called the lesula: a patch of brilliant blue skin on the male’s buttocks and scrotal area that stands out from the surrounding fur like neon underpants.

Still other researchers are tracing the history of blue pigments in human culture, and the role those pigments have played in shaping our notions of virtue, authority, divinity and social class. “Blue pigments played an outstanding role in human development,” said Heinz Berke, an emeritus professor of chemistry at the University of Zurich. For some cultures, he said, they were as valuable as gold.

As a raft of surveys has shown, blue love is a global affair. Ask people their favorite color, and in most parts of the world roughly half will say blue, a figure three to four times the support accorded common second-place finishers like purple or green. Just one in six Americans is blue-eyed, but nearly one in two consider blue the prettiest eye color, which could be why some 50 percent of tinted contact lenses sold are the kind that make your brown eyes blue.

Sick children like their caretakers in blue: A recent study at the Cleveland Clinic found that young patients preferred nurses wearing blue uniforms to those in white or yellow. And am I the only person in the United States who doesn’t own a single pair of those permanently popular pants formerly known as dungarees?

“For Americans, bluejeans have a special connotation because of their association with the Old West and rugged individualism,” said Steven Bleicher, author of “Contemporary Color: Theory and Use.” The jeans take their John Wayne reputation seriously. “Because the indigo dye fades during washing, everyone’s blue becomes uniquely different,” said Dr. Bleicher, a professor of visual arts at Coastal Carolina University. “They’re your bluejeans.”

According to psychologists who explore the complex interplay of color, mood and behavior, blue’s basic emotional valence is calmness and open-endedness, in contrast to the aggressive specificity associated with red. Blue is sea and sky, a pocket-size vacation.

In a study that appeared in the journal Perceptual & Motor Skills, researchers at Aichi University in Japan found that subjects who performed a lengthy video game exercise while sitting next to a blue partition reported feeling less fatigued and claustrophobic, and displayed a more regular heart beat pattern, than did people who sat by red or yellow partitions.

In the journal Science, researchers at the University of British Columbia described their study of how computer screen color affected participants’ ability to solve either creative problems — for example, determining the word that best unifies the terms “shelf,” “read” and “end” (answer: book) — or detail-oriented tasks like copy editing. The researchers found that blue screens were superior to red or white backgrounds at enhancing creativity, while red screens worked best for accuracy tasks. Interestingly, when participants were asked to predict which screen color would improve performance on the two categories of problems, big majorities deemed blue the ideal desktop setting for both.

But skies have their limits, and blue can also imply coldness, sorrow and death. On learning of a good friend’s suicide in 1901, Pablo Picasso fell into a severe depression, and he began painting images of beggars, drunks, the poor and the halt, all famously rendered in a palette of blue.

The provenance of using “the blues” to mean sadness isn’t clear, but L. Elizabeth Crawford, a professor of psychology at the University of Richmond in Virginia, suggested that the association arose from the look of the body when it’s in a low energy, low oxygen state. “The lips turn blue, there’s a blue pallor to the complexion,” she said. “It’s the opposite of the warm flushing of the skin that we associate with love, kindness and affection.”

Blue is also known to suppress the appetite, possibly as an adaptation against eating rotten meat, which can have a bluish tinge. “If you’re on a diet, my advice is, take the white bulb out of the refrigerator and put in a blue one instead,” Dr. Bleicher said. “A blue glow makes food look very unappetizing.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Morpho didius, dorsal view of male butterfly. Courtesy of Wikipedia.[end-div]

Teenagers and Time

Parents have long known that the sleep-wake cycles of their adolescent offspring are rather different to those of anyone else in the household.

Several new and detailed studies of teenagers tell us why teens are impossible to awaken at 7 am, suddenly awake at 10 pm, and often able to sleep anywhere for stretches of 16 hours.

[div class=attrib]From the Wall Street Journal:[end-div]

Many parents know the scene: The groggy, sleep-deprived teenager stumbles through breakfast and falls asleep over afternoon homework, only to spring to life, wide-eyed and alert, at 10 p.m.—just as Mom and Dad are nodding off.

Fortunately for parents, science has gotten more sophisticated at explaining why, starting at puberty, a teen’s internal sleep-wake clock seems to go off the rails. Researchers are also connecting the dots between the resulting sleep loss and behavior long chalked up to just “being a teenager.” This includes more risk-taking, less self-control, a drop in school performance and a rise in the incidence of depression.

One 2010 study from the University of British Columbia, for example, found that sleep loss can hamper neuron growth in the brain during adolescence, a critical period for cognitive development.

Findings linking sleep loss to adolescent turbulence are “really revelatory,” says Michael Terman, a professor of clinical psychology and psychiatry at Columbia University Medical Center and co-author of “Chronotherapy,” a forthcoming book on resetting the body clock. “These are reactions to a basic change in the way teens’ physiology and behavior is organized.”

Despite such revelations, there are still no clear solutions for the teen-zombie syndrome. Should a parent try to enforce strict wake-up and bedtimes, even though they conflict with the teen’s body clock? Or try to create a workable sleep schedule around that natural cycle? Coupled with a trend toward predawn school start times and peer pressure to socialize online into the wee hours, the result can upset kids’ health, school performance—and family peace.

Jeremy Kern, 16 years old, of San Diego, gets up at 6:30 a.m. for school and tries to fall asleep by 10 p.m. But a heavy load of homework and extracurricular activities, including playing saxophone in his school marching band and in a theater orchestra, often keep him up later.

“I need 10 hours of sleep to not feel tired, and every single day I have to deal with being exhausted,” Jeremy says. He stays awake during early-afternoon classes “by sheer force of will.” And as research shows, sleep loss makes him more emotionally volatile, Jeremy says, like when he recently broke up with his girlfriend: “You are more irrational when you’re sleep deprived. Your emotions are much harder to control.”

Only 7.6% of teens get the recommended 9 to 10 hours of sleep, 23.5% get eight hours and 38.7% are seriously sleep-deprived at six or fewer hours a night, says a 2011 study by the Centers for Disease Control and Prevention.

It’s a biological 1-2-3 punch. First, the onset of puberty brings a median 1.5-hour delay in the body’s release of the sleep-inducing hormone melatonin, says Mary Carskadon, a professor of psychiatry and human behavior at the Brown University medical school and a leading sleep researcher.

Second, “sleep pressure,” or the buildup of the need to sleep as the day wears on, slows during adolescence. That is, kids don’t become sleepy as early. This sleep delay isn’t just a passing impulse: It continues to increase through adolescence, peaking at age 19.5 in girls and age 20.9 in boys, Dr. Carskadon’s research shows.

Finally, teens lose some of their sensitivity to morning light, the kind that spurs awakening and alertness. And they become more reactive to nighttime light, sparking activity later into the evening.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of the Guardian / Alamy.[end-div]

The Promise of Quantum Computation

Advances in quantum physics and in the associated realm of quantum information promise to revolutionize computing. Imagine a computer many trillions of times faster than present-day supercomputers — well, that’s where we are heading.

[div class=attrib]From the New York Times:[end-div]

THIS summer, physicists celebrated a triumph that many consider fundamental to our understanding of the physical world: the discovery, after a multibillion-dollar effort, of the Higgs boson.

Given its importance, many of us in the physics community expected the event to earn this year’s Nobel Prize in Physics. Instead, the award went to achievements in a field far less well known and vastly less expensive: quantum information.

It may not catch as many headlines as the hunt for elusive particles, but the field of quantum information may soon answer questions even more fundamental — and upsetting — than the ones that drove the search for the Higgs. It could well usher in a radical new era of technology, one that makes today’s fastest computers look like hand-cranked adding machines.

The basis for both the work behind the Higgs search and quantum information theory is quantum physics, the most accurate and powerful theory in all of science. With it we created remarkable technologies like the transistor and the laser, which, in time, were transformed into devices — computers and iPhones — that reshaped human culture.

But the very usefulness of quantum physics masked a disturbing dissonance at its core. There are mysteries — summed up neatly in Werner Heisenberg’s famous adage “atoms are not things” — lurking at the heart of quantum physics suggesting that our everyday assumptions about reality are no more than illusions.

Take the “principle of superposition,” which holds that things at the subatomic level can be literally two places at once. Worse, it means they can be two things at once. This superposition animates the famous parable of Schrödinger’s cat, whereby a wee kitty is left both living and dead at the same time because its fate depends on a superposed quantum particle.

For decades such mysteries were debated but never pushed toward resolution, in part because no resolution seemed possible and, in part, because useful work could go on without resolving them (an attitude sometimes called “shut up and calculate”). Scientists could attract money and press with ever larger supercolliders while ignoring such pesky questions.

But as this year’s Nobel recognizes, that’s starting to change. Increasingly clever experiments are exploiting advances in cheap, high-precision lasers and atomic-scale transistors. Quantum information studies often require nothing more than some equipment on a table and a few graduate students. In this way, quantum information’s progress has come not by bludgeoning nature into submission but by subtly tricking it to step into the light.

Take the superposition debate. One camp claims that a deeper level of reality lies hidden beneath all the quantum weirdness. Once the so-called hidden variables controlling reality are exposed, they say, the strangeness of superposition will evaporate.

Another camp claims that superposition shows us that potential realities matter just as much as the single, fully manifested one we experience. But what collapses the potential electrons in their two locations into the one electron we actually see? According to this interpretation, it is the very act of looking; the measurement process collapses an ethereal world of potentials into the one real world we experience.

And a third major camp argues that particles can be two places at once only because the universe itself splits into parallel realities at the moment of measurement, one universe for each particle location — and thus an infinite number of ever splitting parallel versions of the universe (and us) are all evolving alongside one another.

These fundamental questions might have lived forever at the intersection of physics and philosophy. Then, in the 1980s, a steady advance of low-cost, high-precision lasers and other “quantum optical” technologies began to appear. With these new devices, researchers, including this year’s Nobel laureates, David J. Wineland and Serge Haroche, could trap and subtly manipulate individual atoms or light particles. Such exquisite control of the nano-world allowed them to design subtle experiments probing the meaning of quantum weirdness.

Soon at least one interpretation, the most common sense version of hidden variables, was completely ruled out.

At the same time new and even more exciting possibilities opened up as scientists began thinking of quantum physics in terms of information, rather than just matter — in other words, asking if physics fundamentally tells us more about our interaction with the world (i.e., our information) than the nature of the world by itself (i.e., matter). And so the field of quantum information theory was born, with very real new possibilities in the very real world of technology.

What does this all mean in practice? Take one area where quantum information theory holds promise, that of quantum computing.

Classical computers use “bits” of information that can be either 0 or 1. But quantum-information technologies let scientists consider “qubits,” quantum bits of information that are both 0 and 1 at the same time. Logic circuits, made of qubits directly harnessing the weirdness of superpositions, allow a quantum computer to calculate vastly faster than anything existing today. A quantum machine using no more than 300 qubits would be a million, trillion, trillion, trillion times faster than the most modern supercomputer.
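To get a feel for the scale of that claim, a little arithmetic helps: describing a register of n qubits in full generality takes 2^n complex amplitudes, so each added qubit doubles the bookkeeping. A minimal sketch (ours, not the Times’), in Python:

```python
# Number of classical amplitudes needed to describe n qubits in superposition.
# Each added qubit doubles the size of the state space, hence 2**n.
def amplitudes_needed(n_qubits: int) -> int:
    return 2 ** n_qubits

for n in (1, 10, 50, 300):
    print(f"{n:>3} qubits -> about {amplitudes_needed(n):.2e} amplitudes")
```

Already at 300 qubits the count, roughly 2 x 10^90, dwarfs the number of atoms in the observable universe, which gives a sense of where figures like “a million, trillion, trillion, trillion” come from.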

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bloch sphere representation of a qubit, the fundamental building block of quantum computers. Courtesy of Wikipedia.[end-div]

The Half-Life of Facts

There is no doubting the ever-expanding reach of science and the acceleration of scientific discovery. Yet the accumulation, and for that matter the acceleration in the accumulation, of ever more knowledge does come with a price: many historical facts that we learned as kids are no longer true. This is especially important in areas such as medical research, where new discoveries are constantly making obsolete our previous notions of disease and treatment.

Author Samuel Arbesman tells us why facts should have an expiration date in his new book, The Half-Life of Facts. A review follows.

[div class=attrib]From Reason:[end-div]

Dinosaurs were cold-blooded. Vast increases in the money supply produce inflation. Increased K-12 spending and lower pupil/teacher ratios boost public school student outcomes. Most of the DNA in the human genome is junk. Saccharin causes cancer and a high fiber diet prevents it. Stars cannot be bigger than 150 solar masses. And by the way, what are the ten most populous cities in the United States?

In the past half century, all of the foregoing facts have turned out to be wrong (except perhaps the one about inflation rates). We’ll revisit the ten biggest cities question below. In the modern world facts change all of the time, according to Samuel Arbesman, author of The Half-Life of Facts: Why Everything We Know Has an Expiration Date.

Arbesman, a senior scholar at the Kauffman Foundation and an expert in scientometrics, looks at how facts are made and remade in the modern world. And since fact-making is speeding up, he worries that most of us don’t keep up to date and base our decisions on facts we dimly remember from school and university classes that turn out to be wrong.

The field of scientometrics – the science of measuring and analyzing science – took off in 1947 when mathematician Derek J. de Solla Price was asked to store a complete set of the Philosophical Transactions of the Royal Society temporarily in his house. He stacked them in order and he noticed that the height of the stacks fit an exponential curve. Price started to analyze all sorts of other kinds of scientific data and concluded in 1960 that scientific knowledge had been growing steadily at a rate of 4.7 percent annually since the 17th century. The upshot was that scientific data was doubling every 15 years.

In 1965, Price exuberantly observed, “All crude measures, however arrived at, show to a first approximation that science increases exponentially, at a compound interest of about 7 percent  per annum, thus doubling in size every 10–15 years, growing by a factor of 10 every half century, and by something like a factor of a million in the 300 years which separate us from the seventeenth-century invention of the scientific paper when the process began.” A 2010 study in the journal Scientometrics looked at data between 1907 and 2007 and concluded that so far the “overall growth rate for science still has been at least 4.7 percent per year.”

Since scientific knowledge is still growing by a factor of ten every 50 years, it should not be surprising that lots of facts people learned in school and universities have been overturned and are now out of date.  But at what rate do former facts disappear? Arbesman applies the concept of half-life, the time required for half the atoms of a given amount of a radioactive substance to disintegrate, to the dissolution of facts. For example, the half-life of the radioactive isotope strontium-90 is just over 29 years. Applying the concept of half-life to facts, Arbesman cites research that looked into the decay in the truth of clinical knowledge about cirrhosis and hepatitis. “The half-life of truth was 45 years,” reported the researchers.

In other words, half of what physicians thought they knew about liver diseases was wrong or obsolete 45 years later. As interesting and persuasive as this example is, Arbesman’s book would have been strengthened by more instances drawn from the scientific literature.
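The two quantities Arbesman leans on, an exponential growth rate and a half-life, are easy to sanity-check with a few lines of arithmetic. A quick back-of-envelope sketch (ours, not Arbesman’s):

```python
import math

# Back-of-envelope checks on the figures quoted above.
def doubling_time(annual_growth: float) -> float:
    """Years for a quantity growing at the given annual rate to double."""
    return math.log(2) / math.log(1 + annual_growth)

def fraction_still_valid(years: float, half_life_years: float) -> float:
    """Fraction of 'facts' still standing after a given time, for a given half-life."""
    return 0.5 ** (years / half_life_years)

print(f"Doubling time at 4.7% annual growth: {doubling_time(0.047):.1f} years")         # ~15
print(f"Clinical knowledge intact after 45 years: {fraction_still_valid(45, 45):.0%}")  # 50%
print(f"Clinical knowledge intact after 90 years: {fraction_still_valid(90, 45):.0%}")  # 25%
```

Growth of 4.7 percent a year does indeed double the literature roughly every 15 years, and a 45-year half-life means a physician trained 90 years ago would find three quarters of what he or she learned about liver disease overturned or obsolete.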

Facts are being manufactured all of the time, and, as Arbesman shows, many of them turn out to be wrong. Checking each by each is how the scientific process is supposed to work, i.e., experimental results need to be replicated by other researchers. How many of the findings in 845,175 articles published in 2009 and recorded in PubMed, the free online medical database, were actually replicated? Not all that many. In 2011, a disheartening study in Nature reported that a team of researchers over ten years was able to reproduce the results of only six out of 53 landmark papers in preclinical cancer research.

[div class=attrib]Read the entire article after the jump.[end-div]

Remembering the Future

Memory is a very useful cognitive tool. After all, where would we be if we had no recall of our family, friends, foods, words, tasks and dangers?

But, it turns out that memory may also help us imagine the future — another very important human trait.

[div class=attrib]From the New Scientist:[end-div]

WHEN thinking about the workings of the mind, it is easy to imagine memory as a kind of mental autobiography – the private book of you. To relive the trepidation of your first day at school, say, you simply dust off the cover and turn to the relevant pages. But there is a problem with this idea. Why are the contents of that book so unreliable? It is not simply our tendency to forget key details. We are also prone to “remember” events that never actually took place, almost as if a chapter from another book has somehow slipped into our autobiography. Such flaws are puzzling if you believe that the purpose of memory is to record your past – but they begin to make sense if it is for something else entirely.

That is exactly what memory researchers are now starting to realise. They believe that human memory didn’t evolve so that we could remember but to allow us to imagine what might be. This idea began with the work of Endel Tulving, now at the Rotman Research Institute in Toronto, Canada, who discovered a person with amnesia who could remember facts but not episodic memories relating to past events in his life. Crucially, whenever Tulving asked him about his plans for that evening, the next day or the summer, his mind went blank – leading Tulving to suspect that foresight was the flipside of episodic memory.

Subsequent brain scans supported the idea, suggesting that every time we think about a possible future, we tear up the pages of our autobiographies and stitch together the fragments into a montage that represents the new scenario. This process is the key to foresight and ingenuity, but it comes at the cost of accuracy, as our recollections become frayed and shuffled along the way. “It’s not surprising that we confuse memories and imagination, considering that they share so many processes,” says Daniel Schacter, a psychologist at Harvard University.

Over the next 10 pages, we will show how this theory has brought about a revolution in our understanding of memory. Given the many survival benefits of being able to imagine the future, for instance, it is not surprising that other creatures show a rudimentary ability to think in this way (“Do animals ever forget?”). Memory’s role in planning and problem solving, meanwhile, suggests that problems accessing the past may lie behind mental illnesses like depression and post-traumatic stress disorder, offering a new approach to treating these conditions (“Boosting your mental fortress”). Equally, a growing understanding of our sense of self can explain why we are so selective in the events that we weave into our life story – again showing definite parallels with the way we imagine the future (“How the brain spins your life story”). The work might even suggest some dieting tips (“Lost in the here and now”).

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Persistence of Memory, 1931. Salvador Dalí. Courtesy of Salvador Dalí, Gala-Salvador Dalí Foundation/Artists Rights Society.[end-div]

Integrated Space Plan

The Integrated Space Plan is a 100-year vision of space exploration as envisioned over 20 years ago. It is a beautiful and intricate timeline covering the period 1983 to 2100. The timeline was developed in 1989 by Ronald M. Jones at Rockwell International, using long-range planning data from NASA, the National Space Policy Directive and other Western space agencies.

While optimistic, the plan nonetheless outlined unmanned rover exploration on Mars (done), a comet sample return mission (done), and an orbiter around Mercury (done). Over the longer term the plan foresaw “human expansion into the inner solar system” by 2018, with “triplanetary, earth-moon-mars infrastructure” in place by 2023, “small martian settlements” following in 2060, and “Venus terraforming operations” in 2080. The plan concludes with “human interstellar travel” sometime after the year 2100. So, perhaps there is hope for humans beyond this Pale Blue Dot after all.

More below from Sean Ragan over at Make Magazine on this fascinating diagram and how it was re-discovered. A detailed, large download of the plan follows: Integrated Space Plan.

[div class=attrib]From Make:[end-div]

I first encountered this amazing infographic hanging on a professor’s office wall when I was visiting law schools back in 1999. I’ve been trying, off and on, to run down my own copy ever since. It’s been one of those back-burner projects that I’ll poke at when it comes to mind, every now and again, but until quite recently all my leads had come up dry. All I really knew about the poster was that it had been created in the 80s by analysts at Rockwell International and that it was called the “Integrated Space Plan.”

About a month ago, all the little threads I’d been pulling on suddenly unraveled, and I was able to connect with a generous donor willing to entrust an original copy of the poster to me long enough to have it scanned at high resolution. It’s a large document, at 28 x 45 inches, but fortunately it’s monochrome, and reproduces well using 1-bit color at 600 dpi, so even uncompressed bitmaps come in at under 5MB.

[div class=attrib]Read the entire article following the jump.[end-div]

Mr. Tesla, Meet Mr. Blaine

A contemporary showman puts the inventions of another to the test, with electrifying results.

[tube]irAYUU_6VSc[/tube]

[div class=attrib]From the New York Times:[end-div]

David Blaine, the magician and endurance artist, is ready for more pain. With the help of the Liberty Science Center, a chain-mail suit and an enormous array of Tesla electrical coils, he plans to stand atop a 20-foot-high pillar for 72 straight hours, without sleep or food, while being subjected to a million volts of electricity.

When Mr. Blaine performs “Electrified” on a pier in Hudson River Park, the audience there as well as viewers in London, Beijing, Tokyo and Sydney, Australia, will take turns controlling which of the seven coils are turned on, and at what intensity. They will also be able to play music by producing different notes from the coils. The whole performance, on Pier 54 near West 13th Street, will be shown live at www.youtube.com/electrified.

[div class=attrib]Read more after the jump. Read more about Nikola Tesla here.[end-div]

Engage the Warp Engines

According to Star Trek’s fictional history, warp engines were invented in 2063. That gives us just over 50 years. While very unlikely given our current technological prowess and general lack of understanding of the cosmos, warp engines are perhaps edging just a little closer to being realized. But, please, no photon torpedoes!

[div class=attrib]From Wired:[end-div]

NASA scientists now think that the famous warp drive concept is a realistic possibility, and that in the far future humans could regularly travel faster than the speed of light.

A warp drive would work by “warping” spacetime around any spaceship, which physicist Miguel Alcubierre showed was theoretically possible in 1994, albeit well beyond the current technical capabilities of humanity. However, any such Alcubierre drive was assumed to require more energy — equivalent to the mass-energy of the whole planet of Jupiter – than could ever possibly be supplied, rendering it impossible to build.

But now scientists believe that those requirements might not be so vast, making warp travel a tangible possibility. Harold White, from NASA’s Johnson Space Centre, revealed the news on Sept. 14 at the 100 Year Starship Symposium, a gathering to discuss the possibilities and challenges of interstellar space travel. Space.com reports that White and his team have calculated that the amount of energy required to create an Alcubierre drive may be smaller than first thought.

The drive works by using a wave to compress the spacetime in front of the spaceship while expanding the spacetime behind it. The ship itself would float in a “bubble” of normal spacetime that would float along the wave of compressed spacetime, like the way a surfer rides a break. The ship, inside the warp bubble, would be going faster than the speed of light relative to objects outside the bubble.
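For reference, the geometry Alcubierre wrote down in 1994 is usually quoted in the following form (standard notation, not reproduced from the Wired piece):

```latex
ds^2 = -c^2\,dt^2 + \bigl(dx - v_s(t)\,f(r_s)\,dt\bigr)^2 + dy^2 + dz^2
```

Here x_s(t) is the trajectory of the bubble’s centre, v_s = dx_s/dt its speed, and f(r_s) is a shaping function equal to 1 inside the bubble and falling to 0 far outside it; the metric contracts spacetime ahead of the bubble and expands it behind, which is exactly the surfing picture described above.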

By changing the shape of the warp bubble from a sphere to more of a rounded doughnut, White claims that the energy requirements will be far, far smaller for any faster-than-light ship — merely equivalent to the mass-energy of an object the size of Voyager 1.
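How big a reduction that is can be estimated with E = mc². A rough sketch (ours; the ~720 kg figure for a Voyager-class probe is an assumption for illustration, not a number from the article):

```python
# Rough comparison of the old and new energy estimates for an Alcubierre drive,
# using E = m * c**2. The probe mass below is an assumed, illustrative figure.
C = 299_792_458.0            # speed of light, m/s

jupiter_mass_kg = 1.898e27   # mass of Jupiter
probe_mass_kg = 720.0        # assumed mass of a Voyager-class probe

old_estimate_joules = jupiter_mass_kg * C**2   # ~1.7e44 J
new_estimate_joules = probe_mass_kg * C**2     # ~6.5e19 J

print(f"Jupiter-scale requirement: {old_estimate_joules:.1e} J")
print(f"Probe-scale requirement:   {new_estimate_joules:.1e} J")
print(f"Reduction factor:          {old_estimate_joules / new_estimate_joules:.1e}")
```

Even the smaller figure is colossal by earthly standards, which is why the article goes on to temper expectations.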

Alas, before you start plotting which stars you want to visit first, don’t expect one appearing within our lifetimes. Any warp drive big enough to transport a ship would still require vast amounts of energy by today’s standards, which would probably necessitate exploiting dark energy — but we don’t know yet what, exactly, dark energy is, nor whether it’s something a spaceship could easily harness. There’s also the issue that we have no idea how to create or maintain a warp bubble, let alone what it would be made out of. It could even potentially, if not constructed properly, create unintended black holes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: U.S.S Enterprise D. Courtesy of Startrek.com.[end-div]

Uncertainty Strikes the Uncertainty Principle

Some recent experiments out of the University of Toronto show, for the first time, measurements that deviate from what Werner Heisenberg’s fundamental law of quantum mechanics, the Uncertainty Principle, predicts.

[div class=attrib]From io9:[end-div]

Heisenberg’s uncertainty principle is an integral component of quantum physics. At the quantum scale, standard physics starts to fall apart, replaced by a fuzzy, nebulous set of phenomena. Among all the weirdness observed at this microscopic scale, Heisenberg famously observed that the position and momentum of a particle cannot be simultaneously measured, with any meaningful degree of precision. This led him to posit the uncertainty principle, the declaration that there’s only so much we can know about a quantum system, namely a particle’s momentum and position.

Now, by definition, the uncertainty principle describes a two-pronged process. First, there’s the precision of a measurement that needs to be considered, and second, the degree of uncertainty, or disturbance, that it must create. It’s this second aspect that quantum physicists refer to as the “measurement-disturbance relationship,” and it’s an area that scientists have not sufficiently explored or proven.
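In the standard notation (our gloss, not the article’s), the two prongs are usually written as separate inequalities: one about how sharply a state can be prepared, the other about how much a measurement of position must disturb momentum.

```latex
% Preparation uncertainty (the uncontested statement):
\sigma(x)\,\sigma(p) \ge \frac{\hbar}{2}

% Heisenberg's measurement-disturbance heuristic (the relation under test):
\epsilon(x)\,\eta(p) \ge \frac{\hbar}{2}
```

Here σ denotes the spread of the prepared state, ε(x) the error of a position measurement, and η(p) the disturbance that measurement imparts to the momentum. It is the second relation, not the first, that the Toronto experiment described below puts under strain.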

Up until this point, quantum physicists have been fairly confident in their ability to both predict and measure the degree of disturbances caused by a measurement. Conventional thinking is that a measurement will always cause a predictable and consistent disturbance — but as the study from Toronto suggests, this is not always the case. Not all measurements, it would seem, will cause the effect predicted by Heisenberg and the tidy equations that have followed his theory. Moreover, the resultant ambiguity is not always caused by the measurement itself.

The researchers, a team led by Lee Rozema and Aephraim Steinberg, experimentally observed a clear-cut violation of Heisenberg’s measurement-disturbance relationship. They did this by applying what they called a “weak measurement” to define a quantum system before and after it interacted with their measurement tools — not enough to disturb it, but enough to get a basic sense of a photon’s orientation.

Then, by establishing measurement deltas, and then applying stronger, more disruptive measurements, the team was able to determine that they were not disturbing the quantum system to the degree that the uncertainty principle predicted. And in fact, the disturbances were half of what would normally be expected.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Heisenberg, Werner Karl Prof. 1901-1976; Physicist, Nobel Prize for Physics 1933, Germany. Courtesy of Wikipedia.[end-div]

Fusion and the Z Machine

The quest to tap fusion as an energy source here on Earth continues to inch forward, with some promising new developments. Of course, we mean nuclear fusion — the type that makes our companion star shine, not the now-debunked “cold fusion” supposedly demonstrated in a test tube in the late 1980s.

[div class=attrib]From Wired:[end-div]

In the high-stakes race to realize fusion energy, a smaller lab may be putting the squeeze on the big boys. Worldwide efforts to harness fusion—the power source of the sun and stars—for energy on Earth currently focus on two multibillion dollar facilities: the ITER fusion reactor in France and the National Ignition Facility (NIF) in California. But other, cheaper approaches exist—and one of them may have a chance to be the first to reach “break-even,” a key milestone in which a process produces more energy than needed to trigger the fusion reaction.

Researchers at the Sandia National Laboratory in Albuquerque, New Mexico, will announce in a Physical Review Letters (PRL) paper accepted for publication that their process, known as magnetized liner inertial fusion (MagLIF) and first proposed 2 years ago, has passed the first of three tests, putting it on track for an attempt at the coveted break-even. Tests of the remaining components of the process will continue next year, and the team expects to take its first shot at fusion before the end of 2013.

Fusion reactors heat and squeeze a plasma—an ionized gas—composed of the hydrogen isotopes deuterium and tritium, compressing the isotopes until their nuclei overcome their mutual repulsion and fuse together. Out of this pressure-cooker emerge helium nuclei, neutrons, and a lot of energy. The temperature required for fusion is more than 100 million°C—so you have to put a lot of energy in before you start to get anything out. ITER and NIF are planning to attack this problem in different ways. ITER, which will be finished in 2019 or 2020, will attempt fusion by containing a plasma with enormous magnetic fields and heating it with particle beams and radio waves. NIF, in contrast, takes a tiny capsule filled with hydrogen fuel and crushes it with a powerful laser pulse. NIF has been operating for a few years but has yet to achieve break-even.

Sandia’s MagLIF technique is similar to NIF’s in that it rapidly crushes its fuel—a process known as inertial confinement fusion. But to do it, MagLIF uses a magnetic pulse rather than lasers. The target in MagLIF is a tiny cylinder about 7 millimeters in diameter; it’s made of beryllium and filled with deuterium and tritium. The cylinder, known as a liner, is connected to Sandia’s vast electrical pulse generator (called the Z machine), which can deliver 26 million amps in a pulse lasting milliseconds or less. That much current passing down the walls of the cylinder creates a magnetic field that exerts an inward force on the liner’s walls, instantly crushing it—and compressing and heating the fusion fuel.
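The quoted numbers already hint at why the crush is so violent. A rough back-of-envelope sketch (ours, not Sandia’s), using the textbook formula for the field around a straight current:

```python
import math

# Field at the surface of a cylinder of radius r carrying current I:
# B = mu0 * I / (2 * pi * r); the associated magnetic pressure is B**2 / (2 * mu0).
MU0 = 4 * math.pi * 1e-7        # vacuum permeability, T*m/A

current_amps = 26e6             # Z machine pulse quoted in the article
liner_radius_m = 3.5e-3         # half of the 7 mm liner diameter

field_tesla = MU0 * current_amps / (2 * math.pi * liner_radius_m)
pressure_pa = field_tesla**2 / (2 * MU0)

print(f"Surface field:     ~{field_tesla:,.0f} T")          # roughly 1,500 T
print(f"Magnetic pressure: ~{pressure_pa / 1e9:,.0f} GPa")   # hundreds of GPa
```

A field on the order of a thousand tesla, hundreds of times stronger than a hospital MRI magnet, and a pressure of hundreds of gigapascals is comfortably enough to make a beryllium cylinder implode.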

Researchers have known about this technique of crushing a liner to heat the fusion fuel for some time. But the MagLIF-Z machine setup on its own didn’t produce quite enough heat; something extra was needed to make the process capable of reaching break-even. Sandia researcher Steve Slutz led a team that investigated various enhancements through computer simulations of the process. In a paper published in Physics of Plasmas in 2010, the team predicted that break-even could be reached with three enhancements.

First, they needed to apply the current pulse much more quickly, in just 100 nanoseconds, to increase the implosion velocity. They would also preheat the hydrogen fuel inside the liner with a laser pulse just before the Z machine kicks in. And finally, they would position two electrical coils around the liner, one at each end. These coils produce a magnetic field that links the two coils, wrapping the liner in a magnetic blanket. The magnetic blanket prevents charged particles, such as electrons and helium nuclei, from escaping and cooling the plasma—so the temperature stays hot.

Sandia plasma physicist Ryan McBride is leading the effort to see if the simulations are correct. The first item on the list is testing the rapid compression of the liner. One critical parameter is the thickness of the liner wall: The thinner the wall, the faster it will be accelerated by the magnetic pulse. But the wall material also starts to evaporate away during the pulse, and if it breaks up too early, it will spoil the compression. On the other hand, if the wall is too thick, it won’t reach a high enough velocity. “There’s a sweet spot in the middle where it stays intact and you still get a pretty good implosion velocity,” McBride says.

To test the predicted sweet spot, McBride and his team set up an elaborate imaging system that involved blasting a sample of manganese with a high-powered laser (actually a NIF prototype moved to Sandia) to produce x-rays. By shining the x-rays through the liner at various stages in its implosion, the researchers could image what was going on. They found that at the sweet-spot thickness, the liner held its shape right through the implosion. “It performed as predicted,” McBride says. The team aims to test the other two enhancements—the laser preheating and the magnetic blanket—in the coming year, and then put it all together to take a shot at break-even before the end of 2013.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Z Pulsed Power Facility produces tremendous energy when it fires. Courtesy of Sandia National Laboratory.[end-div]

As Simple as abc; As Difficult as ABC

As children we all learn our abc’s; as adults very few ponder the ABC Conjecture in mathematics. The first is often a simple task of rote memorization; the second is a troublesome mathematical problem with a fiendishly complex solution (maybe).

[div class=attrib]From the New Scientist:[end-div]

Whole numbers, addition and multiplication are among the first things schoolchildren learn, but a new mathematical proof shows that even the world’s best minds have plenty more to learn about these seemingly simple concepts.

Shinichi Mochizuki of Kyoto University in Japan has torn up these most basic of mathematical concepts and reconstructed them as never before. The result is a fiendishly complicated proof for the decades-old “ABC conjecture” – and an alternative mathematical universe that should prise open many other outstanding enigmas.

To boot, Mochizuki’s proof also offers an alternative explanation for Fermat’s last theorem, one of the most famous results in the history of mathematics but not proven until the mid-1990s (see “Fermat’s last theorem made easy”, below).

The ABC conjecture starts with the most basic equation in algebra, adding two whole numbers, or integers, to get another: a + b = c. First posed in 1985 by Joseph Oesterlé and David Masser, it places constraints on the interactions of the prime factors of these numbers, primes being the indivisible building blocks that can be multiplied together to produce all integers.

Dense logic

Take 81 + 64 = 145, which breaks down into the prime building blocks 3 × 3 × 3 × 3 + 2 × 2 × 2 × 2 × 2 × 2 = 5 × 29. Simplified, the conjecture says that the large amount of smaller primes on the equation’s left-hand side is always balanced by a small amount of larger primes on the right – the addition restricts the multiplication, and vice versa.
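The balancing act is usually made precise through the “radical” of a number, the product of its distinct prime factors, and a “quality” score for each triple. A small sketch (our illustration of the standard formulation, not code from Mochizuki’s papers):

```python
from math import gcd, log

def radical(n: int) -> int:
    """Product of the distinct prime factors of n (the 'radical' of n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    return result * (n if n > 1 else 1)

def quality(a: int, b: int) -> float:
    """q = log(c) / log(rad(abc)) for coprime a + b = c. The ABC conjecture says
    that for any eps > 0, only finitely many triples have q > 1 + eps."""
    assert gcd(a, b) == 1, "a and b must be coprime"
    c = a + b
    return log(c) / log(radical(a * b * c))

print(radical(81 * 64 * 145))   # 2 * 3 * 5 * 29 = 870
print(quality(81, 64))          # ~0.73: an unexceptional triple
print(quality(1, 8))            # 1 + 8 = 9, rad(72) = 6, q ~ 1.23: a rare high-quality triple
```

Triples like 1 + 8 = 9, where the quality creeps above 1, do exist, but the conjecture says they thin out to a finite handful once you demand any fixed margin above 1.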

“The ABC conjecture in some sense exposes the relationship between addition and multiplication,” says Jordan Ellenberg of the University of Wisconsin-Madison. “To learn something really new about them at this late date is quite startling.”

Though rumours of Mochizuki’s proof started spreading on mathematics blogs earlier this year, it was only last week that he posted a series of papers on his website detailing what he calls “inter-universal geometry”, one of which claims to prove the ABC conjecture. Only now are mathematicians attempting to decipher its dense logic, which spreads over 500 pages.

So far the responses are cautious, but positive. “It will be fabulously exciting if it pans out, experience suggests that that’s quite a big ‘if’,” wrote University of Cambridge mathematician Timothy Gowers on Google+.

Alien reasoning

“It is going to be a while before people have a clear idea of what Mochizuki has done,” Ellenberg told New Scientist. “Looking at it, you feel a bit like you might be reading a paper from the future, or from outer space,” he added on his blog.

Mochizuki’s reasoning is alien even to other mathematicians because it probes deep philosophical questions about the foundations of mathematics, such as what we really mean by a number, says Minhyong Kim at the University of Oxford. The early 20th century saw a crisis emerge as mathematicians realised they actually had no formal way to define a number – we can talk about “three apples” or “three squares”, but what exactly is the mathematical object we call “three”? No one could say.

Eventually numbers were redefined in terms of sets, rigorously specified collections of objects, and mathematicians now know that the true essence of the number zero is a set which contains no objects – the empty set – while the number one is a set which contains one empty set. From there, it is possible to derive the rest of the integers.
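That construction is concrete enough to play with. A toy sketch (ours, purely illustrative), using Python’s frozensets to stand in for sets:

```python
# Set-theoretic construction of the first few numbers: zero is the empty set,
# and each number's successor is the set of all the numbers built so far.
zero = frozenset()                    # {}           -> 0
one = frozenset({zero})               # { {} }       -> 1
two = frozenset({zero, one})          # { 0, 1 }     -> 2
three = frozenset({zero, one, two})   # { 0, 1, 2 }  -> 3

def successor(n: frozenset) -> frozenset:
    """n + 1 contains everything in n, plus n itself."""
    return n | {n}

assert successor(zero) == one
assert successor(two) == three
assert len(three) == 3   # each number contains exactly that many earlier numbers
```

From the empty set upward the whole ladder of integers can be rebuilt, and Mochizuki’s move, as Kim describes it, is to ask whether sets are really the only universe in which to do so.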

But this was not the end of the story, says Kim. “People are aware that many natural mathematical constructions might not really fall into the universe of sets.”

Terrible deformation

Rather than using sets, Mochizuki has figured out how to translate fundamental mathematical ideas into objects that only exist in new, conceptual universes. This allowed him to “deform” basic whole numbers and push their innate relationships – such as multiplication and addition – to the limit. “He is literally taking apart conventional objects in terrible ways and reconstructing them in new universes,” says Kim.

These new insights led him to a proof of the ABC conjecture. “How he manages to come back to the usual universe in a way that yields concrete consequences for number theory, I really have no idea as yet,” says Kim.

Because of its fundamental nature, a verified proof of ABC would set off a chain reaction, in one swoop proving many other open problems and deepening our understanding of the relationships between integers, fractions, decimals, primes and more.

Ellenberg compares proving the conjecture to the discovery of the Higgs boson, which particle physicists hope will reveal a path to new physics. But while the Higgs emerged from the particle detritus of a machine specifically designed to find it, Mochizuki’s methods are completely unexpected, providing new tools for mathematical exploration.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Clare College Cambridge.[end-div]

Sign First; Lie Less

A recent paper published in the Proceedings of the National Academy of Sciences (PNAS) shows that we are more likely to be honest if we sign a form before, rather than after, completing it. So, over the coming years look out for Uncle Sam to revise the ubiquitous IRS 1040 form by adding a signature line at the top rather than at the bottom of the last page.

[div class=attrib]From Ars Technica:[end-div]

What’s the purpose of signing a form? On the simplest level, a signature is simply a way to make someone legally responsible for the content of the form. But in addition to the legal aspect, the signature is an appeal to personal integrity, forcing people to consider whether they’re comfortable attaching their identity to something that may not be completely true.

Based on some figures in a new PNAS paper, the signatures on most forms are miserable failures, at least from the latter perspective. The IRS estimates that it misses out on about $175 billion because people misrepresent their income or deductions. And the insurance industry calculates that it loses about $80 billion annually due to fraudulent claims. But the same paper suggests a fix that is as simple as tweaking the form. Forcing people to sign before they complete the form greatly increases their honesty.

It shouldn’t be a surprise that signing at the end of a form does not promote accurate reporting, given what we know about human psychology. “Immediately after lying,” the paper’s authors write, “individuals quickly engage in various mental justifications, reinterpretations, and other ‘tricks’ such as suppressing thoughts about their moral standards that allow them to maintain a positive self-image despite having lied.” By the time they get to the actual request for a signature, they’ve already made their peace with lying: “When signing comes after reporting, the morality train has already left the station.”

The problem isn’t with the signature itself. Lots of studies have shown that focusing the attention on one’s self, which a signature does successfully, can cause people to behave more ethically. The problem comes from its placement after the lying has already happened. So, the authors posited a quick fix: stick the signature at the start. Their hypothesis was that “signing one’s name before reporting information (rather than at the end) makes morality accessible right before it is most needed, which will consequently promote honest reporting.”

To test this proposal, they designed a series of forms that required self-reporting of personal information, involving either performance on a math quiz where higher scores meant higher rewards, or the reimbursable travel expenses involved in getting to the study’s location. The only difference among the forms? Some did not ask for a signature, some put the signature on top, and some placed it in its traditional location, at the end.

In the case of the math quiz, the researchers actually tracked how well the participants had performed. With the signature at the end, a full 79 percent of the participants cheated. Somewhat fewer cheated when no signature was required, though the difference was not statistically significant. But when the signature was required on top, only 37 percent cheated—less than half the rate seen in the signature-at-bottom group. A similar pattern was seen when the authors analyzed the extent of the cheating involved.

Although they didn’t have complete information on travel expenses, the same pattern prevailed: people who were given the signature-on-top form reported fewer expenses than either of the other two groups.

The authors then repeated this experiment, but added a word completion task, where participants were given a series of blanks, some filled in with letters, and asked to complete the word. These completion tasks were set up so that they could be answered with neutral words or with those associated with personal ethics, like “virtue.” They got the same results as in the earlier tests of cheating, and the word completion task showed that the people who had signed on top were more likely to fill in the blanks to form ethics-focused words. This supported the contention that the early signature put people in an ethical state of mind prior to completion of the form.

But the really impressive part of the study came from its real-world demonstration of this effect. The authors got an unnamed auto insurance company to send out two versions of its annual renewal forms to over 13,000 policy holders, identical except for the location of the signature. One part of this form included a request for odometer readings, which the insurance companies use to calculate typical miles travelled, which are proportional to accident risk. These are used to calculate insurance cost—the more you drive, the more expensive it is.

Those who signed at the top reported nearly 2,500 miles more than the ones who signed at the end.

[div class=attrib]Read the entire article after the jump, or follow the article at PNAS, here.[end-div]

[div class=attrib]Image courtesy of University of Illinois at Urbana-Champaign.[end-div]

Scandinavian Killer on Ice

The title could be mistaken for a dark and violent crime novel from the likes of (Stieg) Larsson, Nesbø, Sjöwall-Wahlöö, or Henning Mankell. But, this story is somewhat more mundane, though much more consequential. It’s a story about a Swedish cancer killer.

[div class=attrib]From the Telegraph:[end-div]

On the snow-clotted plains of central Sweden where Wotan and Thor, the clamorous gods of magic and death, once held sway, a young, self-deprecating gene therapist has invented a virus that eliminates the type of cancer that killed Steve Jobs.

‘Not “eliminates”! Not “invented”, no!’ interrupts Professor Magnus Essand, panicked, when I Skype him to ask about this explosive achievement.

‘Our results are only in the lab so far, not in humans, and many treatments that work in the lab can turn out to be not so effective in humans. However, adenovirus serotype 5 is a common virus in which we have achieved transcriptional targeting by replacing an endogenous viral promoter sequence by…’

It sounds too kindly of the gods to be true: a virus that eats cancer.

‘I sometimes use the phrase “an assassin who kills all the bad guys”,’ Prof Essand agrees contentedly.

Cheap to produce, the virus is exquisitely precise, with only mild, flu-like side-effects in humans. Photographs in research reports show tumours in test mice melting away.

‘It is amazing,’ Prof Essand gleams in wonder. ‘It’s better than anything else. Tumour cell lines that are resistant to every other drug, it kills them in these animals.’

Yet as things stand, Ad5[CgA-E1A-miR122]PTD – to give it the full gush of its most up-to-date scientific name – is never going to be tested to see if it might also save humans. Since 2010 it has been kept in a bedsit-sized mini freezer in a busy lobby outside Prof Essand’s office, gathering frost. (‘Would you like to see?’ He raises his laptop computer and turns, so its camera picks out a table-top Electrolux next to the lab’s main corridor.)

Two hundred metres away is the Uppsala University Hospital, a European Centre of Excellence in Neuroendocrine Tumours. Patients fly in from all over the world to be seen here, especially from America, where treatment for certain types of cancer lags five years behind Europe. Yet even when these sufferers have nothing else to hope for, have only months left to live, wave platinum credit cards and are prepared to sign papers agreeing to try anything, to hell with the side-effects, the oncologists are not permitted – would find themselves behind bars if they tried – to race down the corridors and snatch the solution out of Prof Essand’s freezer.

I found out about Prof Magnus Essand by stalking him. Two and a half years ago the friend who edits all my work – the biographer and genius transformer of rotten sentences and misdirected ideas, Dido Davies – was diagnosed with neuroendocrine tumours, the exact type of cancer that Steve Jobs had. Every three weeks she would emerge from the hospital after eight hours of chemotherapy infusion, as pale as ice but nevertheless chortling and optimistic, whereas I (having spent the day battling Dido’s brutal edits to my work, among drip tubes) would stumble back home, crack open whisky and cigarettes, and slump by the computer. Although chemotherapy shrank the tumour, it did not cure it. There had to be something better.

It was on one of those evenings that I came across a blog about a quack in Mexico who had an idea about using sub-molecular particles – nanotechnology. Quacks provide a very useful service to medical tyros such as myself, because they read all the best journals the day they appear and by the end of the week have turned the results into potions and tinctures. It’s like Tommy Lee Jones in Men in Black reading the National Enquirer to find out what aliens are up to, because that’s the only paper trashy enough to print the truth. Keep an eye on what the quacks are saying, and you have an idea of what might be promising at the Wild West frontier of medicine. This particular quack was in prison awaiting trial for the manslaughter (by quackery) of one of his patients, but his nanotechnology website led, via a chain of links, to a YouTube lecture about an astounding new therapy for neuroendocrine cancer based on pig microbes, which is currently being put through a variety of clinical trials in America.

I stopped the video and took a snapshot of the poster behind the lecturer’s podium listing useful research company addresses; on the website of one of these organisations was a reference to a scholarly article that, when I checked through the footnotes, led, via a doctoral thesis, to a Skype address – which I dialled.

‘Hey! Hey!’ Prof Magnus Essand answered.

To geneticists, the science makes perfect sense. It is a fact of human biology that healthy cells are programmed to die when they become infected by a virus, because this prevents the virus spreading to other parts of the body. But a cancerous cell is immortal; through its mutations it has somehow managed to turn off the bits of its genetic programme that enforce cell suicide. This means that, if a suitable virus infects a cancer cell, it could continue to replicate inside it uncontrollably, causing the cell to ‘lyse’ – or, in non-technical language, tear apart. The progeny viruses then spread to cancer cells nearby and repeat the process. A virus becomes, in effect, a cancer of cancer. In Prof Essand’s laboratory studies his virus surges through the bloodstreams of test animals, rupturing cancerous cells with Viking rapacity.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]The Snowman by Jo Nesbø. Image courtesy of Barnes and Noble.[end-div]

Living Organism as Software

For the first time scientists have built a computer software model of an entire organism from its molecular building blocks. This allows the model to predict previously unobserved cellular biological processes and behaviors. While the organism in question is a simple bacterium, this represents another huge advance in computational biology.

[div class=attrib]From the New York Times:[end-div]

Scientists at Stanford University and the J. Craig Venter Institute have developed the first software simulation of an entire organism, a humble single-cell bacterium that lives in the human genital and respiratory tracts.

The scientists and other experts said the work was a giant step toward developing computerized laboratories that could carry out complete experiments without the need for traditional instruments.

For medical researchers and drug designers, cellular models will be able to supplant experiments during the early stages of screening for new compounds. And for molecular biologists, models that are of sufficient accuracy will yield new understanding of basic biological principles.

The simulation of the complete life cycle of the pathogen, Mycoplasma genitalium, was presented on Friday in the journal Cell. The scientists called it a “first draft” but added that the effort was the first time an entire organism had been modeled in such detail — in this case, all of its 525 genes.

“Where I think our work is different is that we explicitly include all of the genes and every known gene function,” the team’s leader, Markus W. Covert, an assistant professor of bioengineering at Stanford, wrote in an e-mail. “There’s no one else out there who has been able to include more than a handful of functions or more than, say, one-third of the genes.”

The simulation, which runs on a cluster of 128 computers, models the complete life span of the cell at the molecular level, charting the interactions of 28 categories of molecules — including DNA, RNA, proteins and small molecules known as metabolites that are generated by cell processes.

“The model presented by the authors is the first truly integrated effort to simulate the workings of a free-living microbe, and it should be commended for its audacity alone,” wrote the Columbia scientists Peter L. Freddolino and Saeed Tavazoie in a commentary that accompanied the article. “This is a tremendous task, involving the interpretation and integration of a massive amount of data.”

They called the simulation an important advance in the new field of computational biology, which has recently yielded such achievements as the creation of a synthetic life form — an entire bacterial genome created by a team led by the genome pioneer J. Craig Venter. The scientists used it to take over an existing cell.

For their computer simulation, the researchers had the advantage of extensive scientific literature on the bacterium. They were able to use data taken from more than 900 scientific papers to validate the accuracy of their software model.

Still, they said that the model of the simplest biological system was pushing the limits of their computers.

“Right now, running a simulation for a single cell to divide only one time takes around 10 hours and generates half a gigabyte of data,” Dr. Covert wrote. “I find this fact completely fascinating, because I don’t know that anyone has ever asked how much data a living thing truly holds. We often think of the DNA as the storage medium, but clearly there is more to it than that.”

In designing their model, the scientists chose an approach that parallels the design of modern software systems, known as object-oriented programming. Software designers organize their programs in modules, which communicate with one another by passing data and instructions back and forth.

Similarly, the simulated bacterium is a series of modules that mimic the different functions of the cell.

“The major modeling insight we had a few years ago was to break up the functionality of the cell into subgroups which we could model individually, each with its own mathematics, and then to integrate these sub-models together into a whole,” Dr. Covert said. “It turned out to be a very exciting idea.”
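
To make the software analogy concrete, here is a purely illustrative sketch of that modular pattern. It is not the Stanford group’s code; the sub-models, species names and rate constants are invented. Each module has its own simple mathematics and advances a shared cell state one time step at a time.

class SubModel:
    """One functional slice of the cell, with its own internal mathematics."""
    def step(self, state, dt):
        raise NotImplementedError

class Metabolism(SubModel):
    def step(self, state, dt):
        # Toy kinetics: convert nutrient into ATP at a fixed maximum rate.
        made = min(state["nutrient"], 10.0 * dt)
        state["nutrient"] -= made
        state["atp"] += made

class Transcription(SubModel):
    def step(self, state, dt):
        # Toy rule: spend ATP to produce RNA.
        spent = min(state["atp"], 2.0 * dt)
        state["atp"] -= spent
        state["rna"] += spent / 2.0

class Cell:
    """Integrates the independent sub-models over a shared molecular state."""
    def __init__(self, submodels, state):
        self.submodels = submodels
        self.state = state

    def run(self, t_end, dt=1.0):
        t = 0.0
        while t < t_end:
            for module in self.submodels:  # every module reads and writes the same state
                module.step(self.state, dt)
            t += dt
        return self.state

cell = Cell([Metabolism(), Transcription()],
            {"nutrient": 100.0, "atp": 0.0, "rna": 0.0})
print(cell.run(t_end=5.0))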

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A Whole-Cell Computational Model Predicts Phenotype from Genotype. Courtesy of Cell / Elsevier Inc.[end-div]

Curiosity in Flight

NASA pulled off another tremendous and daring feat of engineering when it successfully landed the Mars Science Laboratory (MSL) on the surface of Mars on August 5, 2012, at 10:32 PM Pacific Time.

The centerpiece of the MSL mission is the Curiosity rover, a 2,000-pound, car-size robot. Not only did NASA land Curiosity a mere 1 second behind schedule following a journey of over 576 million kilometers (358 million miles) lasting around 8 months, it went one better. NASA had one of its Mars orbiters — Mars Reconnaissance Orbiter — snap an image of MSL from around 300 miles away as it descended through the Martian atmosphere, with its supersonic parachute unfurled.

Another historic day for science, engineering and exploration.

[div class=attrib]From NASA / JPL:[end-div]

NASA’s Curiosity rover and its parachute were spotted by NASA’s Mars Reconnaissance Orbiter as Curiosity descended to the surface on Aug. 5 PDT (Aug. 6 EDT). The High-Resolution Imaging Science Experiment (HiRISE) camera captured this image of Curiosity while the orbiter was listening to transmissions from the rover. Curiosity and its parachute are in the center of the white box; the inset image is a cutout of the rover stretched to avoid saturation. The rover is descending toward the etched plains just north of the sand dunes that fringe “Mt. Sharp.” From the perspective of the orbiter, the parachute and Curiosity are flying at an angle relative to the surface, so the landing site does not appear directly below the rover.

The parachute appears fully inflated and performing perfectly. Details in the parachute, such as the band gap at the edges and the central hole, are clearly seen. The cords connecting the parachute to the back shell cannot be seen, although they were seen in the image of NASA’s Phoenix lander descending, perhaps due to the difference in lighting angles. The bright spot on the back shell containing Curiosity might be a specular reflection off of a shiny area. Curiosity was released from the back shell sometime after this image was acquired.

This view is one product from an observation made by HiRISE targeted to the expected location of Curiosity about one minute prior to landing. It was captured in HiRISE CCD RED1, near the eastern edge of the swath width (there is a RED0 at the very edge). This means that the rover was a bit further east or downrange than predicted.

[div class=attrib]Follow the mission after the jump.[end-div]

[div class=attrib]Image courtesy of NASA/JPL-Caltech/Univ. of Arizona.[end-div]

The Radium Girls and the Polonium Assassin

Deborah Blum’s story begins with Marie Curie’s analysis of a “strange energy” released from uranium ore, and ends with the assassination of the Russian dissident Alexander Litvinenko in 2006.

[div class=attrib]From Wired:[end-div]

In the late 19th century, a then-unknown chemistry student named Marie Curie was searching for a thesis subject. With encouragement from her husband, Pierre, she decided to study the strange energy released by uranium ores, a sizzle of power far greater than uranium alone could explain.

The results of that study are today among the most famous in the history of science. The Curies discovered not one but two new radioactive elements in their slurry of material (and Marie invented the word radioactivity to help explain them.) One was the glowing element radium. The other, which burned brighter and briefer, she named after her home country of Poland — Polonium (from the Latin root, polonia). In honor of that discovery, the Curies shared the 1903 Nobel Prize in Physics with their French colleague Henri Becquerel for his work with uranium.

Radium was always Marie Curie’s first love – “radium, my beautiful radium”, she used to call it. Her continued focus gained her a second Nobel Prize in chemistry in 1911. (Her Nobel lecture was titled Radium and New Concepts in Chemistry.)  It was also the higher-profile radium — embraced in a host of medical, industrial, and military uses — that first called attention to the health risks of radioactive elements. I’ve told some of that story here before in a look at the deaths and illnesses suffered by the “Radium Girls,” young women who in the 1920s painted watch-dial faces with radium-based luminous paint.

Polonium remained the unstable, mostly ignored step-child element of the story, less famous, less interesting, less useful than Curie’s beautiful radium. Until the last few years, that is. Until the reported 2006 assassination by polonium-210 of the Russian spy turned dissident Alexander Litvinenko. And until the news this week, first reported by Al Jazeera, that surprisingly high levels of polonium-210 were detected by a Swiss laboratory in the clothes and other effects of the late Palestinian leader Yasser Arafat.

Arafat, 75, had been held for almost two years under an Israeli form of house arrest when he died in 2004 of a sudden wasting illness. His rapid deterioration led to a welter of conspiracy theories that he’d been poisoned, some accusing his political rivals and many more accusing Israel, which has steadfastly denied any such plot.

Recently (and for undisclosed reasons) his widow agreed to the forensic analysis of articles including clothes, a toothbrush, bed sheets, and his favorite kaffiyeh. Al Jazeera arranged for the analysis and took the materials to Europe for further study. After the University of Lausanne’s Institute of Radiation Physics released the findings, Suha Arafat asked that her husband’s body be exhumed and tested for polonium. Palestinian authorities have indicated that they may do so within the week.

And at this point, as we anticipate those results, it’s worth asking some questions about the use of a material like polonium as an assassination poison. Why, for instance, pick a poison that leaves such a durable trail of evidence behind? In the case of the Radium Girls I mentioned earlier, scientists found that their bones were still hissing with radiation years after their deaths. In the case of Litvinenko, public health investigators found that he’d literally left a trail of radioactive residues across London where he was living at the time of his death.

In what we might imagine as the clever world of covert killings, why would a messy element like polonium even be on the assassination list? To answer that, it helps to begin by stepping back to some of the details provided in the Curies’ seminal work. Both radium and polonium are links in a chain of radioactive decay (element changes due to particle emission) that begins with uranium. Polonium, which eventually decays to an isotope of lead, is one of the more unstable points in this chain, unstable enough that there are some 33 known variants (isotopes) of the element.

Of these, the best known and most abundant is the energetic isotope polonium-210, with its half-life of 138 days. Half-life refers to the time it takes for a radioactive element to burn through its energy supply, essentially the time it takes for activity to decrease by half. For comparison, the half-life of the uranium isotope U-235, which often features in weapon design, is 700 million years. In other words, polonium is a little blast furnace of radioactive energy. The speed of its decay means that eight years after Arafat’s death, it would probably be identified by its breakdown products. And it’s on that note – its life as a radioactive element – that it becomes interesting as an assassin’s weapon.
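
A quick back-of-the-envelope check (mine, not the article’s) shows why: at a 138-day half-life, essentially none of the original polonium-210 survives eight years, so any remaining signal has to come from the things it decayed into.

half_life_days = 138.0
elapsed_days = 8 * 365.25                    # roughly eight years

half_lives = elapsed_days / half_life_days   # about 21 half-lives
fraction_left = 0.5 ** half_lives            # roughly 4e-7 of the original sample

print(round(half_lives, 1), "half-lives ->", format(fraction_left, ".1e"), "of the polonium-210 remains")
# For comparison, uranium-235 (half-life about 700 million years) would barely
# change at all over the same eight years.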

Like radium, polonium’s radiation is primarily in the form of alpha rays — the emission of alpha particles. Compared to other subatomic particles, alpha particles tend to be high energy and high mass. Their relatively larger mass means that they don’t penetrate as well as other forms of radiation; in fact, alpha particles barely penetrate the skin. And they can be stopped from even that by a piece of paper or protective clothing.

That may make them sound safe. It shouldn’t. It should just alert us that these are only really dangerous when they are inside the body. If a material emitting alpha radiation is swallowed or inhaled, there’s nothing benign about it. Scientists realized, for instance, that the reason the Radium Girls died of radiation poisoning was that they were lip-pointing their paintbrushes and swallowing radium-laced paint. The radioactive material was deposited in their bones — which literally crumbled. Radium, by the way, has a half-life of about 1,600 years. Which means that it’s not in polonium’s league as an alpha emitter. How bad is this? By mass, polonium-210 is considered to be about 250,000 times more poisonous than hydrogen cyanide. Toxicologists estimate that an amount the size of a grain of salt could be fatal to the average adult.

In other words, a victim would never taste a lethal dose in food or drink. In the case of Litvinenko, investigators believed that he received his dose of polonium-210 in a cup of tea, dosed during a meeting with two Russian agents. (Just as an aside, alpha particles tend not to set off radiation detectors, so the element is relatively easy to smuggle from country to country.) Another assassin advantage is that illness comes on gradually, making it hard to pinpoint the event. Yet another advantage is that polonium poisoning is so rare that it’s not part of a standard toxics screen. In Litvinenko’s case, the poison wasn’t identified until shortly after his death. In Arafat’s case — if polonium-210 killed him, and that has not been established — obviously it wasn’t considered at the time. And finally, it gets the job done. “Once absorbed,” notes the U.S. Nuclear Regulatory Commission, “the alpha radiation can rapidly destroy major organs, DNA and the immune system.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Pierre and Marie Curie in the laboratory, Paris c1906. Courtesy of Wikipedia.[end-div]

Curiosity: August 5, 2012, 10:31 PM Pacific Time

This is the moment when NASA’s latest foray into space reaches its zenith — the landing of the Curiosity rover on Mars. At that point, NASA’s Mars Science Laboratory (MSL) mission plans to deliver the nearly 2,000-pound, car-size rover to the surface of Mars. Curiosity will then embark on two years of exploration of the Red Planet.

For mission scientists and science buffs alike, Curiosity’s descent and landing will be a major event. And, for the first time, NASA will have a visual feed beamed back directly from the spacecraft (though only available after the event). The highly complex and fully automated landing has been dubbed “the Seven Minutes of Terror” by NASA engineers, after the roughly seven minutes the entry, descent and landing sequence takes. Radio signals need about twice that long to cross the immense distance from Mars to Earth, so mission scientists (and the rest of us) will not learn whether Curiosity descended and landed successfully until well after its fate has been decided.
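
The timing is simple arithmetic. The Earth-Mars distance used below is an assumed round figure for illustration, in the right ballpark for early August 2012, not a number taken from the mission.

distance_km = 250e6                  # assumed Earth-Mars distance at landing (illustrative only)
speed_of_light_km_s = 299_792.458

delay_min = distance_km / speed_of_light_km_s / 60
print("one-way signal delay: about", round(delay_min, 1), "minutes")
# Entry, descent and landing takes roughly 7 minutes, so the rover is already
# down (or not) well before the first landing signals reach Earth.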

For more on Curiosity and this special event visit NASA’s Jet Propulsion Laboratory MSL site, here.

[div class=attrib]Image: This artist’s concept features NASA’s Mars Science Laboratory Curiosity rover, a mobile robot for investigating Mars’ past or present ability to sustain microbial life. Courtesy: NASA/JPL-Caltech.[end-div]

Solar Tornadoes

No, solar tornadoes are not another manifestation of our slowly warming planet. Rather, these phenomena are believed to explain why the outer reaches of the solar atmosphere are so much hotter than its surface.

[div class=attrib]From ars technica:[end-div]

One of the abiding mysteries surrounding our Sun is understanding how the corona gets so hot. The Sun’s surface, which emits almost all the visible light, is about 5800 Kelvins. The surrounding corona rises to over a million K, but the heating process has not been identified. Most solar physicists suspect the process is magnetic, since the strong magnetic fields at the Sun’s surface drive much of the solar weather (including sunspots, coronal loops, prominences, and mass ejections). However, the diffuse solar atmosphere is magnetically too quiet on the large scales. The recent discovery of atmospheric “tornadoes”—swirls of gas over a thousand kilometers in diameter above the Sun’s surface—may provide a possible answer.

As described in Nature, these vortices occur in the chromosphere (the layer of the Sun’s atmosphere below the corona) and they are common. There are about 10 thousand swirls in evidence at any given time. Sven Wedemeyer-Böhm and colleagues identified the vortices using NASA’s Solar Dynamics Observatory (SDO) spacecraft and the Swedish Solar Telescope (SST). They measured the shape of the swirls as a function of height in the atmosphere, determining they grow wider at higher elevations, with the whole structure aligned above a concentration of the magnetic field on the Sun’s surface. Comparing these observations to computer simulations, the authors determined the vortices could be produced by a magnetic vortex exerting pressure on the gas in the atmosphere, accelerating it along a spiral trajectory up into the corona. Such acceleration could bring about the incredibly high temperatures observed in the Sun’s outer atmosphere.

The Sun’s atmosphere is divided into three major regions: the photosphere, the chromosphere, and the corona. The photosphere is the visible bit of the Sun, what we typically think of as the “surface.” It exhibits the behavior of rising gas and photons from the solar interior, as well as magnetic phenomena such as sunspots. The chromosphere is far less dense but hotter; the corona (“crown”) is still hotter and less dense, making an amorphous cloud around the sphere of the Sun. The chromosphere and corona are not seen without special equipment (except during total solar eclipses), but they can be studied with dedicated solar observatories.

To crack the problem of the super-hot corona, the researchers focused their attention on the chromosphere. Using data from SDO and SST, they measured the motion of various elements in the Sun’s atmosphere (iron, calcium, and helium) via the Doppler effect. These different gases all exhibited vortex behavior, aligned with the same spot on the photosphere. The authors identified 14 vortices during a single 55-minute observing run; each vortex lasted for an average of about 13 minutes. Based on these statistics, they determined the Sun should have at least 11,000 vortices on its surface at any given time, at least during periods of low sunspot activity.
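
The jump from 14 detected swirls to roughly 11,000 across the whole Sun is a scaling argument. The sketch below reconstructs that logic; the field-of-view size is an assumption of mine (the excerpt does not give it), so the exact output is illustrative only.

import math

n_detected = 14            # swirls seen during the observing run
run_minutes = 55.0         # length of the run
mean_lifetime_min = 13.0   # average swirl lifetime

# Average number of swirls present in the field of view at any one instant.
n_simultaneous = n_detected * mean_lifetime_min / run_minutes

fov_km = 40_000.0          # assumed width of the observed patch on the Sun
solar_radius_km = 6.96e5

scale = 4 * math.pi * solar_radius_km ** 2 / fov_km ** 2  # whole surface / observed patch
print("swirls on the Sun at any given time: roughly", int(n_simultaneous * scale))
# Order 10,000, consistent with the paper's "at least 11,000".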

Due to the different wavelengths of light the observers used, they were able to map the shape and speed of the vortices as a function of height in the chromosphere. They found the familiar tornado shape: tapered at the base, widening at the top, reaching diameters of 1500 km. Each vortex was aligned along a single axis over a bright spot in the photosphere, which is the sign of a concentration of magnetic field lines.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A giant solar tornado from last fall, large enough to swallow up five planet Earths, is the first of its kind caught on film, March 6, 2012. Courtesy of Slate / NASA / Solar Dynamics Observatory (SDO).[end-div]

100 Million Year Old Galactic Echo

Cosmologists have found what they believe to be the echoes of a galactic collision some 100 million years ago with our own Milky Way galaxy.

[div class=attrib]From Symmetry Magazine:[end-div]

Our galaxy, the Milky Way, is a large spiral galaxy surrounded by dozens of smaller satellite galaxies. Scientists have long theorized that occasionally these satellites will pass through the disk of the Milky Way, perturbing both the satellite and the disk. A team of astronomers from Canada and the United States have discovered what may well be the smoking gun of such an encounter, one that occurred close to our position in the galaxy and relatively recently, at least in the cosmological sense.

“We have found evidence that our Milky Way had an encounter with a small galaxy or massive dark matter structure perhaps as recently as 100 million years ago,” said Larry Widrow, professor at Queen’s University in Canada. “We clearly observe unexpected differences in the Milky Way’s stellar distribution above and below the Galaxy’s midplane that have the appearance of a vertical wave — something that nobody has seen before.”

The discovery is based on observations of some 300,000 nearby Milky Way stars by the Sloan Digital Sky Survey. Stars in the disk of the Milky Way move up and down at a speed of about 20-30 kilometers per second while orbiting the center of the galaxy at a brisk 220 kilometers per second. Widrow and his four collaborators from the University of Kentucky, the University of Chicago and Fermi National Accelerator Laboratory have found that the positions and motions of these nearby stars weren’t quite as regular as previously thought.

“Our part of the Milky Way is ringing like a bell,” said Brian Yanny, of the Department of Energy’s Fermilab. “But we have not been able to identify the celestial object that passed through the Milky Way. It could have been one of the small satellite galaxies that move around the center of our galaxy, or an invisible structure such as a dark matter halo.”

Adds Susan Gardner, professor of physics at the University of Kentucky: “The perturbation need not have been a single isolated event in the past, and it may even be ongoing. Additional observations may well clarify its origin.”

When the collaboration started analyzing the SDSS data on the Milky Way, they noticed a small but statistically significant difference in the distribution of stars north and south of the Milky Way’s midplane. For more than a year, the team members explored various explanations of this north-south asymmetry, such as the effect of interstellar dust on distance determinations and the way the stars surveyed were selected. When those attempts failed, they began to explore the alternative explanation that the data was telling them something about recent events in the history of the Galaxy.

The scientists used computer simulations to explore what would happen if a satellite galaxy or dark matter structure passed through the disk of the Milky Way. The simulations indicate that over the next 100 million years or so, our galaxy will “stop ringing:” the north-south asymmetry will disappear and the vertical motions of stars in the solar neighborhood will revert back to their equilibrium orbits — unless we get hit again.

[div class=attrib]Read the entire article after the jump.[end-div]

Persecution of Scientists: Old and New

The debate over the theory of evolution continues into the 21st century, particularly in societies with a religious bent, including the United States of America. Yet, while the theory and its supporting evidence come under continuous attack, mostly from religious apologists, we generally do not see scientists themselves persecuted over their stance on evolution.

This cannot be said for climate scientists in Western countries, who, while not physically abused, tortured or imprisoned, do continue to be targets of verbal abuse and threats from corporate interests or dogmatic politicians and their followers. But, as we know, the persecution of scientists for embodying new, and thus threatening, ideas has been with us since the dawn of the scientific age. In fact, this behavior probably has been with us since our tribal ancestors moved out of Africa.

So, it is useful to remind ourselves how far we have come and of the distance we still have to travel.

[div class=attrib]From Wired:[end-div]

Turing was famously chemically castrated after admitting to homosexual acts in the 1950s. He is one of a long line of scientists who have been persecuted for their beliefs or practices.

After admitting to “homosexual acts” in early 1952, Alan Turing was prosecuted and had to make the choice between a custodial sentence and chemical castration through hormone injections. Injections of oestrogen were intended to deal with “abnormal and uncontrollable” sexual urges, according to literature at the time.

He chose the latter so that he could stay out of jail and continue his research, although his security clearance was revoked, meaning he could not continue with his cryptographic work. Turing experienced some disturbing side effects, including impotence, from the hormone treatment. Other known side effects include breast swelling, mood changes and an overall “feminization”. Turing completed his year of treatment without major incident. His medication was discontinued in April 1953 and the University of Manchester created a five-year readership position just for him, so it came as a shock when he committed suicide on 7 June, 1954.

Turing isn’t the only scientist to have been persecuted for his personal or professional beliefs or lifestyle. Here’s a a list of other prominent scientific luminaries who have been punished throughout history.

Rhazes (865-925)
Muhammad ibn Zakariyā Rāzī or Rhazes was a medical pioneer from Baghdad who lived between 860 and 932 AD. He was responsible for introducing western teachings, rational thought and the works of Hippocrates and Galen to the Arabic world. One of his books, Continens Liber, was a compendium of everything known about medicine. The book made him famous, but offended a Muslim priest who ordered the doctor to be beaten over the head with his own manuscript, which caused him to go blind, preventing him from future practice.

Michael Servetus (1511-1553)
Servetus was a Spanish physician credited with discovering pulmonary circulation. He wrote a book, which outlined his discovery along with his ideas about reforming Christianity — it was deemed to be heretical. He escaped from Spain and the Catholic Inquisition but came up against the Protestant Inquisition in Switzerland, which held him in equal disregard. Under orders from John Calvin, Servetus was arrested, tortured and burned at the stake on the shores of Lake Geneva – copies of his book were burned along with him for good measure.

Galileo Galilei (1564-1642)
The Italian astronomer and physicist Galileo Galilei was tried and convicted in 1633 for publishing his evidence that supported the Copernican theory that the Earth revolves around the Sun. His research was instantly criticized by the Catholic Church for going against the established scripture that places Earth and not the Sun at the center of the universe. Galileo was found “vehemently suspect of heresy” for his heliocentric views and was required to “abjure, curse and detest” his opinions. He was sentenced to house arrest, where he remained for the rest of his life, and his offending texts were banned.

Henry Oldenburg (1619-1677)
Oldenburg was a founding member, and the first secretary, of the Royal Society of London, which received its royal charter in 1662. He sought high-quality scientific papers to publish. In order to do this he had to correspond with many foreigners across Europe, including in the Netherlands and Italy. The sheer volume of his correspondence caught the attention of the authorities, who arrested him as a spy. He was held in the Tower of London for several months.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Engraving of Galileo Galilei offering his telescope to three women (possibly Urania and attendants) seated on a throne; he is pointing toward the sky where some of his astronomical discoveries are depicted, 1655. Courtesy of Library of Congress.[end-div]

Higgs?

A week ago, on July 4, 2012, researchers at CERN told the world that they had found evidence of a new fundamental particle — the so-called Higgs boson, or something closely similar. If further particle collisions at CERN’s Large Hadron Collider uphold this finding over the coming years, it will rank as a discovery as significant as that of the proton or the electromagnetic force. While practical application of this discovery, in our lifetimes at least, is likely to be scant, it undeniably furthers our quest to understand the underlying mechanism of our existence.

So where might this discovery lead next?

[div class=attrib]From the New Scientist:[end-div]

“As a layman, I would say, I think we have it,” said Rolf-Dieter Heuer, director general of CERN at Wednesday’s seminar announcing the results of the search for the Higgs boson. But when pressed by journalists afterwards on what exactly “it” was, things got more complicated. “We have discovered a boson – now we have to find out what boson it is,” he said cryptically. Eh? What kind of particle could it be if it isn’t the Higgs boson? And why would it show up right where scientists were looking for the Higgs? We asked scientists at CERN to explain.

If we don’t know the new particle is a Higgs, what do we know about it?
We know it is some kind of boson, says Vivek Sharma of CMS, one of the two Large Hadron Collider experiments that presented results on Wednesday. There are only two types of elementary particle in the standard model: fermions, which include electrons, quarks and neutrinos, and bosons, which include photons and the W and Z bosons. The Higgs is a boson – and we know the new particle is too because one of the things it decays into is a pair of high-energy photons, or gamma rays. According to the rules of mathematical symmetry, only a boson could decay into exactly two photons.

Anything else?
Another thing we can say about the new particle is that nothing yet suggests it isn’t a Higgs. The standard model, our leading explanation for the known particles and the forces that act on them, predicts the rate at which a Higgs of a given mass should decay into various particles. The rates of decay reported for the new particle yesterday are not exactly what would be predicted for its mass of about 125 gigaelectronvolts (GeV) – leaving the door open to more exotic stuff. “If there is such a thing as a 125 GeV Higgs, we know what its rate of decay should be,” says Sharma. But the decay rates are close enough for the differences to be statistical anomalies that will disappear once more data is taken. “There are no serious inconsistencies,” says Joe Incandela, head of CMS, who reported the results on Wednesday.

In that case, are the CERN scientists just being too cautious? What would be enough evidence to call it a Higgs boson?
As there could be many different kinds of Higgs bosons, there’s no straight answer. An easier question to answer is: what would make the new particle neatly fulfil the Higgs boson’s duty in the standard model? Number one is to give other particles mass via the Higgs field – an omnipresent entity that “slows” some particles down more than others, resulting in mass. Any particle that makes up this field must be “scalar”. The opposite of a vector, this means that, unlike a magnetic field or gravity, it doesn’t have any directionality. “Only a scalar boson fixes the problem,” says Oliver Buchmueller, also of CMS.

When will we know whether it’s a scalar boson?
By the end of the year, reckons Buchmueller, when at least one outstanding property of the new particle – its spin – should be determined. Scalars’ lack of directionality means they have spin 0. As the particle is a boson, we already know its spin is a whole number, and as it decays into two photons, mathematical symmetry again dictates that the spin can’t be 1. Buchmueller says LHC researchers will be able to determine whether it has a spin of 0 or 2 by examining whether the Higgs’ decay particles shoot into the detector in all directions or with a preferred direction – the former would suggest spin 0. “Most people think it is a scalar, but it still needs to be proven,” says Buchmueller. Sharma is pretty sure it’s a scalar boson – that’s because it is more difficult to make a boson with spin 2. He adds that, although it is expected, confirmation that this is a scalar boson is still very exciting: “The beautiful thing is, if this turns out to be a scalar particle, we are seeing a new kind of particle. We have never seen a fundamental particle that is a scalar.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A typical candidate event including two high-energy photons whose energy (depicted by dashed yellow lines and red towers) is measured in the CMS electromagnetic calorimeter. The yellow lines are the measured tracks of other particles produced in the collision.[end-div]

Empathy and Touch

[div class=attrib]From Scientific American:[end-div]

When a friend hits her thumb with a hammer, you don’t have to put much effort into imagining how this feels. You know it immediately. You will probably tense up, your “Ouch!” may arise even quicker than your friend’s, and chances are that you will feel a little pain yourself. Of course, you will then thoughtfully offer consolation and bandages, but your initial reaction seems just about automatic. Why?

Neuroscience now offers you an answer: A recent line of research has demonstrated that seeing other people being touched activates primary sensory areas of your brain, much like experiencing the same touch yourself would do. What these findings suggest is beautiful in its simplicity—that you literally “feel with” others.

There is no denying that the exceptional interpersonal understanding we humans show is by and large a product of our emotional responsiveness. We are automatically affected by other people’s feelings, even without explicit communication. Our involvement is sometimes so powerful that we have to flee it, turning our heads away when we see someone get hurt in a movie. Researchers hold that this capacity emerged long before humans evolved. However, only quite recently has it been given a name: A mere hundred years ago, the word “Empathy”—a combination of the Greek “in” (em-) and “feeling” (pathos)—was coined by the British psychologist E. B. Titchener during his endeavor to translate the German Einfühlungsvermögen (“the ability to feel into”).

Despite the lack of a universally agreed-upon definition of empathy, the mechanisms of sharing and understanding another’s experience have always been of scientific and public interest—and particularly so since the introduction of “mirror neurons.” This important discovery was made two decades ago by  Giacomo Rizzolatti and his co-workers at the University of Parma, who were studying motor neuron properties in macaque monkeys. To compensate for the tedious electrophysiological recordings required, the monkey was occasionally given food rewards. During these incidental actions something unexpected happened: When the monkey, remaining perfectly still, saw the food being grasped by an experimenter in a specific way, some of its motor neurons discharged. Remarkably, these neurons normally fired when the monkey itself grasped the food in this way. It was as if the monkey’s brain was directly mirroring the actions it observed. This “neural resonance,” which was later also demonstrated in humans, suggested the existence of a special type of “mirror” neurons that help us understand other people’s actions.

Do you find yourself wondering, now, whether a similar mirror mechanism could have caused your pungent empathic reaction to your friend maltreating herself with a hammer? A group of scientists led by Christian Keysers believed so. The researchers had their participants watch short movie clips of people being touched, while using functional magnetic resonance imaging (fMRI) to record their brain activity. The brain scans revealed that the somatosensory cortex, a complex of brain regions processing touch information, was highly active during the movie presentations—although participants were not being touched at all. As was later confirmed by other studies, this activity strongly resembled the somatosensory response participants showed when they were actually touched in the same way. A recent study by Esther Kuehn and colleagues even found that, during the observation of a human hand being touched, parts of the somatosensory cortex were particularly active when (judging by perspective) the hand clearly belonged to another person.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Science Daily.[end-div]

The Inevitability of Life: A Tale of Protons and Mitochondria

A fascinating article by Nick Lane, a leading researcher into the origins of life. Lane is a Research Fellow at University College London.

He suggests that it would be surprising if simple, bacteria-like life were not common throughout the universe. However, the acquisition of one cell by another, an event that led to all higher organisms on planet Earth, is an altogether rarer occurrence. So are we alone in the universe?

[div class=attrib]From the New Scientist:[end-div]

Under the intense stare of the Kepler space telescope, more and more planets similar to our own are revealing themselves to us. We haven’t found one exactly like Earth yet, but so many are being discovered that it appears the galaxy must be teeming with habitable planets.

These discoveries are bringing an old paradox back into focus. As physicist Enrico Fermi asked in 1950, if there are many suitable homes for life out there and alien life forms are common, where are they all? More than half a century of searching for extraterrestrial intelligence has so far come up empty-handed.

Of course, the universe is a very big place. Even Frank Drake’s famously optimistic “equation” for life’s probability suggests that we will be lucky to stumble across intelligent aliens: they may be out there, but we’ll never know it. That answer satisfies no one, however.

There are deeper explanations. Perhaps alien civilisations appear and disappear in a galactic blink of an eye, destroying themselves long before they become capable of colonising new planets. Or maybe life very rarely gets started even when conditions are perfect.

If we cannot answer these kinds of questions by looking out, might it be possible to get some clues by looking in? Life arose only once on Earth, and if a sample of one were all we had to go on, no grand conclusions could be drawn. But there is more than that. Looking at a vital ingredient for life – energy – suggests that simple life is common throughout the universe, but it does not inevitably evolve into more complex forms such as animals. I might be wrong, but if I’m right, the immense delay between life first appearing on Earth and the emergence of complex life points to another, very different explanation for why we have yet to discover aliens.

Living things consume an extraordinary amount of energy, just to go on living. The food we eat gets turned into the fuel that powers all living cells, called ATP. This fuel is continually recycled: over the course of a day, humans each churn through 70 to 100 kilograms of the stuff. This huge quantity of fuel is made by enzymes, biological catalysts fine-tuned over aeons to extract every last joule of usable energy from reactions.

The enzymes that powered the first life cannot have been as efficient, and the first cells must have needed a lot more energy to grow and divide – probably thousands or millions of times as much energy as modern cells. The same must be true throughout the universe.

This phenomenal energy requirement is often left out of considerations of life’s origin. What could the primordial energy source have been here on Earth? Old ideas of lightning or ultraviolet radiation just don’t pass muster. Aside from the fact that no living cells obtain their energy this way, there is nothing to focus the energy in one place. The first life could not go looking for energy, so it must have arisen where energy was plentiful.

Today, most life ultimately gets its energy from the sun, but photosynthesis is complex and probably didn’t power the first life. So what did? Reconstructing the history of life by comparing the genomes of simple cells is fraught with problems. Nevertheless, such studies all point in the same direction. The earliest cells seem to have gained their energy and carbon from the gases hydrogen and carbon dioxide. The reaction of H2 with CO2 produces organic molecules directly, and releases energy. That is important, because it is not enough to form simple molecules: it takes buckets of energy to join them up into the long chains that are the building blocks of life.

A second clue to how the first life got its energy comes from the energy-harvesting mechanism found in all known life forms. This mechanism was so unexpected that there were two decades of heated altercations after it was proposed by British biochemist Peter Mitchell in 1961.

Universal force field

Mitchell suggested that cells are powered not by chemical reactions, but by a kind of electricity, specifically by a difference in the concentration of protons (the charged nuclei of hydrogen atoms) across a membrane. Because protons have a positive charge, the concentration difference produces an electrical potential difference between the two sides of the membrane of about 150 millivolts. It might not sound like much, but because it operates over only 5 millionths of a millimetre, the field strength over that tiny distance is enormous, around 30 million volts per metre. That’s equivalent to a bolt of lightning.
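
The field-strength figure is just the quoted numbers divided: 150 millivolts across five millionths of a millimetre (5 nanometres) of membrane.

voltage_v = 0.150               # 150 millivolts across the membrane
membrane_thickness_m = 5e-9     # 5 millionths of a millimetre

field_v_per_m = voltage_v / membrane_thickness_m
print(format(field_v_per_m, ".0e"), "V/m")  # 3e+07 V/m, about 30 million volts per metre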

Mitchell called this electrical driving force the proton-motive force. It sounds like a term from Star Wars, and that’s not inappropriate. Essentially, all cells are powered by a force field as universal to life on Earth as the genetic code. This tremendous electrical potential can be tapped directly, to drive the motion of flagella, for instance, or harnessed to make the energy-rich fuel ATP.

However, the way in which this force field is generated and tapped is extremely complex. The enzyme that makes ATP is a rotating motor powered by the inward flow of protons. Another protein that helps to generate the membrane potential, NADH dehydrogenase, is like a steam engine, with a moving piston for pumping out protons. These amazing nanoscopic machines must be the product of prolonged natural selection. They could not have powered life from the beginning, which leaves us with a paradox.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Transmission electron microscope image of a thin section cut through an area of mammalian lung tissue. The high magnification image shows a mitochondria. Courtesy of Wikipedia.[end-div]

CDM: Cosmic Discovery Machine

We think CDM sounds much more fun than LHC, a rather dry acronym for Large Hadron Collider.

Researchers at the LHC are set to announce the latest findings in early July from the record-breaking particle smasher buried beneath the French-Swiss border. Rumors point towards the discovery of the so-called Higgs boson, the particle theorized to give mass to all the other fundamental building blocks of matter. So, while this would be another exciting discovery from CERN and yet another confirmation of the fundamental and elegant Standard Model of particle physics, perhaps there is yet more to uncover, such as the exotically named “inflaton”.

[div class=attrib]From Scientific American:[end-div]

Within a sliver of a second after it was born, our universe expanded staggeringly in size, by a factor of at least 10^26. That’s what most cosmologists maintain, although it remains a mystery as to what might have begun and ended this wild expansion. Now scientists are increasingly wondering if the most powerful particle collider in history, the Large Hadron Collider (LHC) in Europe, could shed light on this mysterious growth, called inflation, by catching a glimpse of the particle behind it. It could be that the main target of the collider’s current experiments, the Higgs boson, which is thought to endow all matter with mass, could also be this inflationary agent.

During inflation, spacetime is thought to have swelled in volume at an accelerating rate, from about a quadrillionth the size of an atom to the size of a dime. This rapid expansion would help explain why the cosmos today is as extraordinarily uniform as it is, with only very tiny variations in the distribution of matter and energy. The expansion would also help explain why the universe on a large scale appears geometrically flat, meaning that the fabric of space is not curved in a way that bends the paths of light beams and objects traveling within it.

The particle or field behind inflation, referred to as the “inflaton,” is thought to possess a very unusual property: it generates a repulsive gravitational field. To cause space to inflate as profoundly and temporarily as it did, the field’s energy throughout space must have varied in strength over time, from very high to very low, with inflation ending once the energy sunk low enough, according to theoretical physicists.

Much remains unknown about inflation, and some prominent critics of the idea wonder if it happened at all. Scientists have looked at the cosmic microwave background radiation—the afterglow of the big bang—to rule out some inflationary scenarios. “But it cannot tell us much about the nature of the inflaton itself,” says particle cosmologist Anupam Mazumdar at Lancaster University in England, such as its mass or the specific ways it might interact with other particles.

A number of research teams have suggested competing ideas about how the LHC might discover the inflaton. Skeptics think it highly unlikely that any earthly particle collider could shed light on inflation, because the uppermost energy densities one could imagine with inflation would be about 10^50 times above the LHC’s capabilities. However, because inflation varied with strength over time, scientists have argued the LHC may have at least enough energy to re-create inflation’s final stages.

It could be that the Higgs boson, the principal particle that ongoing collider runs aim to detect, also underlies inflation.

“The idea of the Higgs driving inflation can only take place if the Higgs’s mass lies within a particular interval, the kind which the LHC can see,” says theoretical physicist Mikhail Shaposhnikov at the École Polytechnique Fédérale de Lausanne in Switzerland. Indeed, evidence of the Higgs boson was reported at the LHC in December at a mass of about 125 billion electron volts, roughly the mass of 125 hydrogen atoms.
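
To put 125 billion electron volts in more familiar terms, here is a quick conversion (a sketch using standard constants; a hydrogen atom weighs a little under 1 GeV/c², so the count comes out slightly above the article’s round figure):

```python
# Convert the reported Higgs mass into kilograms and hydrogen-atom masses.
GEV_TO_KG = 1.78266e-27          # 1 GeV/c^2 in kilograms
HYDROGEN_ATOM_KG = 1.6735e-27    # mass of one hydrogen atom in kilograms

higgs_mass_gev = 125
higgs_mass_kg = higgs_mass_gev * GEV_TO_KG

print(f"Higgs mass ~ {higgs_mass_kg:.2e} kg")                      # ~ 2.23e-25 kg
print(f"~ {higgs_mass_kg / HYDROGEN_ATOM_KG:.0f} hydrogen atoms")  # ~ 133
```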

Also intriguing: both the Higgs field and the inflaton are thought to have varied in strength over time. In fact, the inventor of inflation theory, cosmologist Alan Guth at the Massachusetts Institute of Technology, originally assumed inflation was driven by the Higgs field of a conjectured grand unified theory.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Physics World.[end-div]

Communicating with the Comatose

[div class=attrib]From Scientific American:[end-div]

Adrian Owen still gets animated when he talks about patient 23. The patient was only 24 years old when his life was devastated by a car accident. Alive but unresponsive, he had been languishing in what neurologists refer to as a vegetative state for five years, when Owen, a neuroscientist then at the University of Cambridge, UK, and his colleagues at the University of Liège in Belgium, put him into a functional magnetic resonance imaging (fMRI) machine and started asking him questions.

Incredibly, he provided answers. A change in blood flow to certain parts of the man’s injured brain convinced Owen that patient 23 was conscious and able to communicate. It was the first time that anyone had exchanged information with someone in a vegetative state.
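
The excerpt does not spell out the protocol, but in the widely reported 2010 study the patient signalled “yes” by imagining playing tennis and “no” by imagining walking through his home, with the answer read from which brain region showed the stronger blood-flow response. The toy sketch below illustrates only that readout logic; the region names, signal values and threshold are hypothetical, not the researchers’ actual analysis pipeline.

```python
import numpy as np

def decode_answer(motor_roi_bold, spatial_roi_bold, margin=0.5):
    """Toy yes/no readout: compare mean BOLD signal change in two regions
    of interest (motor imagery ~ 'yes', spatial imagery ~ 'no'). Returns
    'uncertain' when the difference is too small to call."""
    motor = np.mean(motor_roi_bold)      # e.g. supplementary motor area
    spatial = np.mean(spatial_roi_bold)  # e.g. parahippocampal gyrus
    if motor - spatial > margin:
        return "yes"
    if spatial - motor > margin:
        return "no"
    return "uncertain"

# Hypothetical percent-signal-change values for one question:
print(decode_answer([2.1, 1.8, 2.4], [0.3, 0.5, 0.2]))  # -> yes
```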

Patients in these states have emerged from a coma and seem awake. Some parts of their brains function, and they may be able to grind their teeth, grimace or make random eye movements. They also have sleep–wake cycles. But they show no awareness of their surroundings, and doctors have assumed that the parts of the brain needed for cognition, perception, memory and intention are fundamentally damaged. They are usually written off as lost.

Owen’s discovery, reported in 2010, caused a media furore. Medical ethicist Joseph Fins and neurologist Nicholas Schiff, both at Weill Cornell Medical College in New York, called it a “potential game changer for clinical practice”. The University of Western Ontario in London, Canada, soon lured Owen away from Cambridge with Can$20 million (US$19.5 million) in funding to make the techniques more reliable, cheaper, more accurate and more portable — all of which Owen considers essential if he is to help some of the hundreds of thousands of people worldwide in vegetative states. “It’s hard to open up a channel of communication with a patient and then not be able to follow up immediately with a tool for them and their families to be able to do this routinely,” he says.

Many researchers disagree with Owen’s contention that these individuals are conscious. But Owen takes a practical approach to applying the technology, hoping that it will identify patients who might respond to rehabilitation, direct the dosing of analgesics and even explore some patients’ feelings and desires. “Eventually we will be able to provide something that will be beneficial to patients and their families,” he says.

Still, he shies away from asking patients the toughest question of all — whether they wish life support to be ended — saying that it is too early to think about such applications. “The consequences of asking are very complicated, and we need to be absolutely sure that we know what to do with the answers before we go down this road,” he warns.

Lost and found
With short, reddish hair and beard, Owen is a polished speaker who is not afraid of publicity. His home page is a billboard of links to his television and radio appearances. He lectures to scientific and lay audiences with confidence and a touch of defensiveness.

Owen traces the roots of his experiments to the late 1990s, when he was asked to write a review of clinical applications for technologies such as fMRI. He says that he had a “weird crisis of confidence”. Neuroimaging had confirmed a lot of what was known from brain mapping studies, he says, but it was not doing anything new. “We would just tweak a psych test and see what happens,” says Owen. As for real clinical applications: “I realized there weren’t any. We all realized that.”

Owen wanted to find one. He and his colleagues got their chance in 1997, with a 26-year-old patient named Kate Bainbridge. A viral infection had put her in a coma — a condition that generally persists for two to four weeks, after which patients die, recover fully or, in rare cases, slip into a vegetative or a minimally conscious state — a more recently defined category characterized by intermittent hints of conscious activity.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]fMRI axial brain image. Image courtesy of Wikipedia.[end-div]