Category Archives: BigBang

Non-Spooky Action at a Distance

Albert Einstein famously called quantum entanglement “spooky action at a distance”. It refers to the notion that measuring the state of one of two entangled particles makes the state of the second particle known instantaneously, regardless of the distance separating the two particles. Entanglement seems to link these particles and make them behave as one system. This peculiar characteristic has been a core element of the counterintuitive world of quantum theory. Yet while experiments have verified this spookiness, some theorists maintain that both theory and experiment are flawed, and that a different interpretation is required. However, one such competing theory — the many-worlds interpretation — makes equally spooky predictions.

From ars technica:

Quantum nonlocality, perhaps one of the most mysterious features of quantum mechanics, may not be a real phenomenon. Or at least that’s what a new paper in the journal PNAS asserts. Its author claims that nonlocality is nothing more than an artifact of the Copenhagen interpretation, the most widely accepted interpretation of quantum mechanics.

Nonlocality is a feature of quantum mechanics where particles are able to influence each other instantaneously regardless of the distance between them, an impossibility in classical physics. Counterintuitive as it may be, nonlocality is currently an accepted feature of the quantum world, apparently verified by many experiments. It’s achieved such wide acceptance that even if our understandings of quantum physics turn out to be completely wrong, physicists think some form of nonlocality would be a feature of whatever replaced it.

The term “nonlocality” comes from the fact that this “spooky action at a distance,” as Einstein famously called it, seems to put an end to our intuitive ideas about location. Nothing can travel faster than the speed of light, so if two quantum particles can influence each other faster than light could travel between the two, then on some level, they act as a single system—there must be no real distance between them.

The concept of location is a bit strange in quantum mechanics anyway. Each particle is described by a mathematical quantity known as the “wave function.” The wave function describes a probability distribution for the particle’s location, but not a definite location. These probable locations are not just scientists’ guesses at the particle’s whereabouts; they’re actual, physical presences. That is to say, the particles exist in a swarm of locations at the same time, with some locations more probable than others.
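
The Born rule behind this paragraph is easy to demonstrate in code. Below is a minimal sketch (ours, not from the article): a Gaussian wave packet psi(x), with |psi(x)|^2 treated as the probability distribution from which measured positions are drawn. All parameters are illustrative.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)            # candidate positions
dx = x[1] - x[0]
x0, sigma, k0 = 1.5, 1.0, 3.0                 # center, width, momentum (illustrative)

# A normalized Gaussian wave packet (units with hbar = 1)
psi = (2 * np.pi * sigma**2) ** -0.25 \
    * np.exp(-(x - x0) ** 2 / (4 * sigma**2)) \
    * np.exp(1j * k0 * x)

prob = np.abs(psi) ** 2                       # Born rule: probability density
print("normalization:", round(np.sum(prob) * dx, 6))   # ~1.0

# "Measuring" the particle five times: each run picks one definite position,
# drawn from the swarm of possibilities with the stated weights.
rng = np.random.default_rng(0)
samples = rng.choice(x, size=5, p=prob / prob.sum())
print("measured positions:", np.round(samples, 2))
```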

A measurement collapses the wave function so that the particle is no longer spread out over a variety of locations. It begins to act just like objects we’re familiar with—existing in one specific location.

The experiments that would measure nonlocality, however, usually involve two particles that are entangled, which means that both are described by a shared wave function. The wave function doesn’t just deal with the particle’s location, but with other aspects of its state as well, such as the direction of the particle’s spin. So if scientists can measure the spin of one of the two entangled particles, the shared wave function collapses and the spins of both particles become certain. This happens regardless of the distance between the particles.
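
The quantum prediction for such paired spin measurements has a compact form: for the singlet state, measuring along directions a and b yields outcomes whose product averages to -cos(a - b), independent of separation. Here is a hedged sketch (standard textbook probabilities, not the code of any actual experiment) that samples outcome pairs and recovers that correlation.

```python
import numpy as np

def singlet_correlation(a, b, n, rng):
    """Estimate E(a, b) for a spin singlet by sampling outcome pairs.
    Joint probabilities: P(++) = P(--) = sin^2((a-b)/2) / 2,
                         P(+-) = P(-+) = cos^2((a-b)/2) / 2."""
    half = (a - b) / 2
    p = np.array([np.sin(half)**2, np.sin(half)**2,
                  np.cos(half)**2, np.cos(half)**2]) / 2
    products = np.array([+1, +1, -1, -1])      # product of the two outcomes
    idx = rng.choice(4, size=n, p=p / p.sum())
    return products[idx].mean()

rng = np.random.default_rng(1)
for deg in (0, 45, 90, 180):
    a, b = 0.0, np.radians(deg)
    est = singlet_correlation(a, b, 100_000, rng)
    print(f"angle {deg:3d}: sampled {est:+.3f}, predicted {-np.cos(a - b):+.3f}")
```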

The new paper calls all this into question.

The paper’s sole author, Frank Tipler, argues that the reason previous studies apparently confirmed quantum nonlocality is that they were relying on an oversimplified understanding of quantum physics in which the quantum world and the macroscopic world we’re familiar with are treated as distinct from one another. Even large structures obey the laws of quantum physics, Tipler points out, so the scientists making the measurements must be considered part of the system being studied.

It is intuitively easy to separate the quantum world from our everyday world, as they appear to behave so differently. However, the equations of quantum mechanics can be applied to large objects like human beings, and they essentially predict that you’ll behave just as classical physics—and as observation—says you will. (Physics students who have tried calculating their own wave functions can attest to this). The laws of quantum physics do govern the entire Universe, even if distinctly quantum effects are hard to notice at a macroscopic level.

When this is taken into account, according to Tipler, the results of familiar nonlocality experiments are altered. Typically, such experiments are thought to involve only two measurements: one on each of two entangled particles. But Tipler argues that in such experiments, there’s really a third measurement taking place when the scientists compare the results of the two.

This third measurement is crucial, Tipler argues, as without it, the first two measurements are essentially meaningless. Without comparing the first two, there’s no way to know that one particle’s behavior is actually linked to the other’s. And crucially, in order for the first two measurements to be compared, information must be exchanged between the particles, via the scientists, at a speed less than that of light. In other words, when the third measurement is taken into account, the two particles are not communicating faster than light. There is no “spooky action at a distance.”

Tipler has harsh criticism for the reasoning that led to nonlocality. “The standard argument that quantum phenomena are nonlocal goes like this,” he says in the paper. “(i) Let us add an unmotivated, inconsistent, unobservable, nonlocal process (collapse) to local quantum mechanics; (ii) note that the resulting theory is nonlocal; and (iii) conclude that quantum mechanics is [nonlocal].”

He’s essentially saying that scientists are arbitrarily adding nonlocality, which they can’t observe, and then claiming they have discovered nonlocality. Quite an accusation, especially for the science world. (The “collapse” he mentions is the collapse of the particle’s wave function, which he asserts is not a real phenomenon.) Instead, he claims that the experiments thought to confirm nonlocality are in fact confirming an alternative to the Copenhagen interpretation called the many-worlds interpretation (MWI). As its name implies, the MWI predicts the existence of other universes.

The Copenhagen interpretation has been summarized as “shut up and calculate.” Even though the consequences of a wave function-based world don’t make much intuitive sense, it works. The MWI tries to keep particles concrete at the cost of making our world a bit fuzzy. It posits that rather than becoming a wave function, particles remain distinct objects but enter one of a number of alternative universes, which recombine into a single one when the particle is measured.

Scientists who thought they were measuring nonlocality, Tipler claims, were in fact observing the effects of alternate universe versions of themselves, also measuring the same particles.

Part of the significance of Tipler’s claim is that he’s able to mathematically derive the same experimental results from the MWI without use of nonlocality. But this does not necessarily make for evidence that the MWI is correct; either interpretation remains consistent with the data. Until the two can be distinguished experimentally, it all comes down to whether you personally like or dislike nonlocality.

Read the entire article here.

You Are a Neural Computation

Since the days of Aristotle, and later Descartes, thinkers have sought to explain consciousness and free will. More than two thousand years on, we are still pondering the notion; science has made great strides, and yet fundamentally we still have little idea.

Many neuroscientists, now armed with new and very precise research tools, are aiming to change this. Yet, increasingly, it seems that free will may indeed be a cognitive illusion. Evidence suggests that our subconscious decides and initiates action for us long before we are aware of making a conscious decision. There seems to be no god or ghost in the machine.

From Technology Review:

It was an expedition seeking something never caught before: a single human neuron lighting up to create an urge, albeit for the minor task of moving an index finger, before the subject was even aware of feeling anything. Four years ago, Itzhak Fried, a neurosurgeon at the University of California, Los Angeles, slipped several probes, each with eight hairlike electrodes able to record from single neurons, into the brains of epilepsy patients. (The patients were undergoing surgery to diagnose the source of severe seizures and had agreed to participate in experiments during the process.) Probes in place, the patients—who were conscious—were given instructions to press a button at any time of their choosing, but also to report when they’d first felt the urge to do so.

Later, Gabriel Kreiman, a neuroscientist at Harvard Medical School and Children’s Hospital in Boston, captured the quarry. Poring over data after surgeries in 12 patients, he found telltale flashes of individual neurons in the pre-supplementary motor area (associated with movement) and the anterior cingulate (associated with motivation and attention), preceding the reported urges by anywhere from hundreds of milliseconds to several seconds. It was a direct neural measurement of the unconscious brain at work—caught in the act of formulating a volitional, or freely willed, decision. Now Kreiman and his colleagues are planning to repeat the feat, but this time they aim to detect pre-urge signatures in real time and stop the subject from performing the action—or see if that’s even possible.

A variety of imaging studies in humans have revealed that brain activity related to decision-making tends to precede conscious action. Implants in macaques and other animals have examined brain circuits involved in perception and action. But Kreiman broke ground by directly measuring a preconscious decision in humans at the level of single neurons. To be sure, the readouts came from an average of just 20 neurons in each patient. (The human brain has about 86 billion of them, each with thousands of connections.) And ultimately, those neurons fired only in response to a chain of even earlier events. But as more such experiments peer deeper into the labyrinth of neural activity behind decisions—whether they involve moving a finger or opting to buy, eat, or kill something—science could eventually tease out the full circuitry of decision-making and perhaps point to behavioral therapies or treatments. “We need to understand the neuronal basis of voluntary decision-making—or ‘freely willed’ decision-making—and its pathological counterparts if we want to help people such as drug, sex, food, and gambling addicts, or patients with obsessive-compulsive disorder,” says Christof Koch, chief scientist at the Allen Institute for Brain Science in Seattle (see “Cracking the Brain’s Codes”). “Many of these people perfectly well know that what they are doing is dysfunctional but feel powerless to prevent themselves from engaging in these behaviors.”

Kreiman, 42, believes his work challenges important Western philosophical ideas about free will. The Argentine-born neuroscientist, an associate professor at Harvard Medical School, specializes in visual object recognition and memory formation, which draw partly on unconscious processes. He has a thick mop of black hair and a tendency to pause and think a long moment before reframing a question and replying to it expansively. At the wheel of his Jeep as we drove down Broadway in Cambridge, Massachusetts, Kreiman leaned over to adjust the MP3 player—toggling between Vivaldi, Lady Gaga, and Bach. As he did so, his left hand, the one on the steering wheel, slipped to let the Jeep drift a bit over the double yellow lines. Kreiman’s view is that his neurons made him do it, and they also made him correct his small error an instant later; in short, all actions are the result of neural computations and nothing more. “I am interested in a basic age-old question,” he says. “Are decisions really free? I have a somewhat extreme view of this—that there is nothing really free about free will. Ultimately, there are neurons that obey the laws of physics and mathematics. It’s fine if you say ‘I decided’—that’s the language we use. But there is no god in the machine—only neurons that are firing.”

Our philosophical ideas about free will date back to Aristotle and were systematized by René Descartes, who argued that humans possess a God-given “mind,” separate from our material bodies, that endows us with the capacity to freely choose one thing rather than another. Kreiman takes this as his departure point. But he’s not arguing that we lack any control over ourselves. He doesn’t say that our decisions aren’t influenced by evolution, experiences, societal norms, sensations, and perceived consequences. “All of these external influences are fundamental to the way we decide what we do,” he says. “We do have experiences, we do learn, we can change our behavior.”

But the firing of a neuron that guides us one way or another is ultimately like the toss of a coin, Kreiman insists. “The rules that govern our decisions are similar to the rules that govern whether a coin will land one way or the other. Ultimately there is physics; it is chaotic in both cases, but at the end of the day, nobody will argue the coin ‘wanted’ to land heads or tails. There is no real volition to the coin.”

Testing Free Will

It’s only in the past three to four decades that imaging tools and probes have been able to measure what actually happens in the brain. A key research milestone was reached in the early 1980s, when Benjamin Libet, a researcher in the physiology department at the University of California, San Francisco, carried out a remarkable study that tested the idea of conscious free will with actual data.

Libet fitted subjects with EEGs—gadgets that measure aggregate electrical brain activity through the scalp—and had them look at a clock dial that spun around every 2.8 seconds. The subjects were asked to press a button whenever they chose to do so—but told they should also take note of where the time hand was when they first felt the “wish or urge.” It turns out that the actual brain activity involved in the action began 300 milliseconds, on average, before the subject was conscious of wanting to press the button. While some scientists criticized the methods—questioning, among other things, the accuracy of the subjects’ self-reporting—the study set others thinking about how to investigate the same questions. Since then, functional magnetic resonance imaging (fMRI) has been used to map brain activity by measuring blood flow, and other studies have also measured brain activity processes that take place before decisions are made. But while fMRI transformed brain science, it was still only an indirect tool, providing very low spatial resolution and averaging data from millions of neurons. Kreiman’s own study design was the same as Libet’s, with the important addition of the direct single-neuron measurement.

When Libet was in his prime, Kreiman was a boy. As a student of physical chemistry at the University of Buenos Aires, he was interested in neurons and brains. When he went for his PhD at Caltech, his passion solidified under his advisor, Koch. Koch was deep in collaboration with Francis Crick, co-discoverer of DNA’s structure, to look for evidence of how consciousness was represented by neurons. For the star-struck kid from Argentina, “it was really life-changing,” he recalls. “Several decades ago, people said this was not a question serious scientists should be thinking about; they either had to be smoking something or have a Nobel Prize”—and Crick, of course, was a Nobelist. Crick hypothesized that studying how the brain processed visual information was one way to study consciousness (we tap unconscious processes to quickly decipher scenes and objects), and he collaborated with Koch on a number of important studies. Kreiman was inspired by the work. “I was very excited about the possibility of asking what seems to be the most fundamental aspect of cognition, consciousness, and free will in a reductionist way—in terms of neurons and circuits of neurons,” he says.

One thing was in short supply: humans willing to have scientists cut open their skulls and poke at their brains. One day in the late 1990s, Kreiman attended a journal club—a kind of book club for scientists reviewing the latest literature—and came across a paper by Fried on how to do brain science in people getting electrodes implanted in their brains to identify the source of severe epileptic seizures. Before he’d heard of Fried, “I thought examining the activity of neurons was the domain of monkeys and rats and cats, not humans,” Kreiman says. Crick introduced Koch to Fried, and soon Koch, Fried, and Kreiman were collaborating on studies that investigated human neural activity, including the experiment that made the direct neural measurement of the urge to move a finger. “This was the opening shot in a new phase of the investigation of questions of voluntary action and free will,” Koch says.

Read the entire article here.

Questioning Quantum Orthodoxy

Physics works very well in explaining our world, yet it is also broken — it cannot, at the moment, reconcile our views of the very small (quantum theory) with those of the very large (relativity theory).

So although the probabilistic underpinnings of quantum theory have done wonders in allowing physicists to construct the Standard Model, gaps remain.

Back in the mid-1920s, the probabilistic worldview proposed by Niels Bohr and others gained favor and took hold. A competing theory, known as the pilot wave theory, proposed by a young Louis de Broglie, was given short shrift. Yet some theorists have maintained that it may do a better job of closing this core gap in our understanding — so it is time to revisit and breathe fresh life into pilot wave theory.

From Wired / Quanta:

For nearly a century, “reality” has been a murky concept. The laws of quantum physics seem to suggest that particles spend much of their time in a ghostly state, lacking even basic properties such as a definite location and instead existing everywhere and nowhere at once. Only when a particle is measured does it suddenly materialize, appearing to pick its position as if by a roll of the dice.

This idea that nature is inherently probabilistic — that particles have no hard properties, only likelihoods, until they are observed — is directly implied by the standard equations of quantum mechanics. But now a set of surprising experiments with fluids has revived old skepticism about that worldview. The bizarre results are fueling interest in an almost forgotten version of quantum mechanics, one that never gave up the idea of a single, concrete reality.

The experiments involve an oil droplet that bounces along the surface of a liquid. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet’s interaction with its own ripples, which form what’s known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles — including behaviors seen as evidence that these particles are spread through space like waves, without any specific location, until they are measured.

Particles at the quantum scale seem to do things that human-scale objects do not do. They can tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels. This new body of research reveals that oil droplets, when guided by pilot waves, also exhibit these quantum-like features.

To some researchers, the experiments suggest that quantum objects are as definite as droplets, and that they too are guided by pilot waves — in this case, fluid-like undulations in space and time. These arguments have injected new life into a deterministic (as opposed to probabilistic) theory of the microscopic world first proposed, and rejected, at the birth of quantum mechanics.

“This is a classical system that exhibits behavior that people previously thought was exclusive to the quantum realm, and we can say why,” said John Bush, a professor of applied mathematics at the Massachusetts Institute of Technology who has led several recent bouncing-droplet experiments. “The more things we understand and can provide a physical rationale for, the more difficult it will be to defend the ‘quantum mechanics is magic’ perspective.”

Magical Measurements

The orthodox view of quantum mechanics, known as the “Copenhagen interpretation” after the home city of Danish physicist Niels Bohr, one of its architects, holds that particles play out all possible realities simultaneously. Each particle is represented by a “probability wave” weighting these various possibilities, and the wave collapses to a definite state only when the particle is measured. The equations of quantum mechanics do not address how a particle’s properties solidify at the moment of measurement, or how, at such moments, reality picks which form to take. But the calculations work. As Seth Lloyd, a quantum physicist at MIT, put it, “Quantum mechanics is just counterintuitive and we just have to suck it up.”

A classic experiment in quantum mechanics that seems to demonstrate the probabilistic nature of reality involves a beam of particles (such as electrons) propelled one by one toward a pair of slits in a screen. When no one keeps track of each electron’s trajectory, it seems to pass through both slits simultaneously. In time, the electron beam creates a wavelike interference pattern of bright and dark stripes on the other side of the screen. But when a detector is placed in front of one of the slits, its measurement causes the particles to lose their wavelike omnipresence, collapse into definite states, and travel through one slit or the other. The interference pattern vanishes. The great 20th-century physicist Richard Feynman said that this double-slit experiment “has in it the heart of quantum mechanics,” and “is impossible, absolutely impossible, to explain in any classical way.”
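
The fringe pattern follows from simple wave arithmetic: add the two slits' amplitudes, then square. The sketch below (illustrative numbers, standard small-angle approximation; not from the article) contrasts that with the flat distribution expected if each electron simply took one slit or the other with no wavelike behavior.

```python
import numpy as np

wavelength = 50e-12   # an illustrative electron de Broglie wavelength (50 pm)
d = 1e-6              # slit separation (1 micrometre)
L = 1.0               # slits-to-screen distance (m)
y = np.linspace(-1e-4, 1e-4, 9)   # positions on the screen (m)

# The phase difference between the two paths is 2*pi*d*y/(wavelength*L);
# adding amplitudes and squaring gives cos^2 fringes.
interference = np.cos(np.pi * d * y / (wavelength * L)) ** 2

# "One slit or the other": probabilities add with no cross term, and the
# single-slit envelopes are essentially flat over this small region.
classical_mix = np.full_like(y, 0.5)

for yi, q, c in zip(y, interference, classical_mix):
    print(f"y = {yi:+.2e} m   with wave: {q:.2f}   without: {c:.2f}")
```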

Some physicists now disagree. “Quantum mechanics is very successful; nobody’s claiming that it’s wrong,” said Paul Milewski, a professor of mathematics at the University of Bath in England who has devised computer models of bouncing-droplet dynamics. “What we believe is that there may be, in fact, some more fundamental reason why [quantum mechanics] looks the way it does.”

Riding Waves

The idea that pilot waves might explain the peculiarities of particles dates back to the early days of quantum mechanics. The French physicist Louis de Broglie presented the earliest version of pilot-wave theory at the 1927 Solvay Conference in Brussels, a famous gathering of the founders of the field. As de Broglie explained that day to Bohr, Albert Einstein, Erwin Schrödinger, Werner Heisenberg and two dozen other celebrated physicists, pilot-wave theory made all the same predictions as the probabilistic formulation of quantum mechanics (which wouldn’t be referred to as the “Copenhagen” interpretation until the 1950s), but without the ghostliness or mysterious collapse.

The probabilistic version, championed by Bohr, involves a single equation that represents likely and unlikely locations of particles as peaks and troughs of a wave. Bohr interpreted this probability-wave equation as a complete definition of the particle. But de Broglie urged his colleagues to use two equations: one describing a real, physical wave, and another tying the trajectory of an actual, concrete particle to the variables in that wave equation, as if the particle interacts with and is propelled by the wave rather than being defined by it.

For example, consider the double-slit experiment. In de Broglie’s pilot-wave picture, each electron passes through just one of the two slits, but is influenced by a pilot wave that splits and travels through both slits. Like flotsam in a current, the particle is drawn to the places where the two wavefronts cooperate, and does not go where they cancel out.

De Broglie could not predict the exact place where an individual particle would end up — just like Bohr’s version of events, pilot-wave theory predicts only the statistical distribution of outcomes, or the bright and dark stripes — but the two men interpreted this shortcoming differently. Bohr claimed that particles don’t have definite trajectories; de Broglie argued that they do, but that we can’t measure each particle’s initial position well enough to deduce its exact path.

In principle, however, the pilot-wave theory is deterministic: The future evolves dynamically from the past, so that, if the exact state of all the particles in the universe were known at a given instant, their states at all future times could be calculated.

At the Solvay conference, Einstein objected to a probabilistic universe, quipping, “God does not play dice,” but he seemed ambivalent about de Broglie’s alternative. Bohr told Einstein to “stop telling God what to do,” and (for reasons that remain in dispute) he won the day. By 1932, when the Hungarian-American mathematician John von Neumann claimed to have proven that the probabilistic wave equation in quantum mechanics could have no “hidden variables” (that is, missing components, such as de Broglie’s particle with its well-defined trajectory), pilot-wave theory was so poorly regarded that most physicists believed von Neumann’s proof without even reading a translation.

More than 30 years would pass before von Neumann’s proof was shown to be false, but by then the damage was done. The physicist David Bohm resurrected pilot-wave theory in a modified form in 1952, with Einstein’s encouragement, and made clear that it did work, but it never caught on. (The theory is also known as de Broglie-Bohm theory, or Bohmian mechanics.)

Later, the Northern Irish physicist John Stewart Bell went on to prove a seminal theorem that many physicists today misinterpret as rendering hidden variables impossible. But Bell supported pilot-wave theory. He was the one who pointed out the flaws in von Neumann’s original proof. And in 1986 he wrote that pilot-wave theory “seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored.”

The neglect continues. A century down the line, the standard, probabilistic formulation of quantum mechanics has been combined with Einstein’s theory of special relativity and developed into the Standard Model, an elaborate and precise description of most of the particles and forces in the universe. Acclimating to the weirdness of quantum mechanics has become a physicists’ rite of passage. The old, deterministic alternative is not mentioned in most textbooks; most people in the field haven’t heard of it. Sheldon Goldstein, a professor of mathematics, physics and philosophy at Rutgers University and a supporter of pilot-wave theory, blames the “preposterous” neglect of the theory on “decades of indoctrination.” At this stage, Goldstein and several others noted, researchers risk their careers by questioning quantum orthodoxy.

A Quantum Drop

Now at last, pilot-wave theory may be experiencing a minor comeback — at least, among fluid dynamicists. “I wish that the people who were developing quantum mechanics at the beginning of last century had access to these experiments,” Milewski said. “Because then the whole history of quantum mechanics might be different.”

The experiments began a decade ago, when Yves Couder and colleagues at Paris Diderot University discovered that vibrating a silicone oil bath up and down at a particular frequency can induce a droplet to bounce along the surface. The droplet’s path, they found, was guided by the slanted contours of the liquid’s surface generated from the droplet’s own bounces — a mutual particle-wave interaction analogous to de Broglie’s pilot-wave concept.

Read the entire article here.

Image: Louis de Broglie. Courtesy of Wikipedia.

Defying Enemy Number One

Enemy number one in this case is not your favorite team’s arch-rival or your political nemesis or your neighbor’s nocturnal barking dog. It is not sugar, nor is it trans-fat. Enemy number one is not North Korea (close), nor is it the latest group of murderous terrorists (closer).

The real enemy is gravity. Not the movie, that is, but the natural phenomenon.

Gravity is constricting: it anchors us to our measly home planet, making extra-terrestrial exploration rather difficult. Gravity is painful: it drags us down, it makes us fall — and when we’re down, it helps other things fall on top of us. Gravity is an enigma.

But help may not be too distant; enter the Gravity Research Foundation. While the foundation’s mission may no longer be to counteract gravity, it still aims to help us better understand it.

From the NYT:

Not long after the bombings of Hiroshima and Nagasaki, while the world was reckoning with the specter of nuclear energy, a businessman named Roger Babson was worrying about another of nature’s forces: gravity.

It had been 55 years since his sister Edith drowned in the Annisquam River, in Gloucester, Mass., when gravity, as Babson later described it, “came up and seized her like a dragon and brought her to the bottom.” Later on, the dragon took his grandson, too, as he tried to save a friend during a boating mishap.

Something had to be done.

“It seems as if there must be discovered some partial insulator of gravity which could be used to save millions of lives and prevent accidents,” Babson wrote in a manifesto, “Gravity — Our Enemy Number One.” In 1949, drawing on his considerable wealth, he started the Gravity Research Foundation and began awarding annual cash prizes for the best new ideas for furthering his cause.

It turned out to be a hopeless one. By the time the 2014 awards were announced last month, the foundation was no longer hoping to counteract gravity — it forms the very architecture of space-time — but to better understand it. What began as a crank endeavor has become mainstream. Over the years, winners of the prizes have included the likes of Stephen Hawking, Freeman Dyson, Roger Penrose and Martin Rees.

With his theory of general relativity, Einstein described gravity with an elegance that has not been surpassed. A mass like the sun makes the universe bend, causing smaller masses like planets to move toward it.

The problem is that nature’s other three forces are described in an entirely different way, by quantum mechanics. In this system forces are conveyed by particles. Photons, the most familiar example, are the carriers of light. For many scientists, the ultimate prize would be proof that gravity is carried by gravitons, allowing it to mesh neatly with the rest of the machine.

So far that has been as insurmountable as Babson’s old dream. After nearly a century of trying, the best that physicists have come up with is superstring theory, a self-consistent but possibly hollow body of mathematics that depends on the existence of extra dimensions and implies that our universe is one of a multitude, each unknowable to the rest.

With all the accomplishments our species has achieved, we could be forgiven for concluding that we have reached a dead end. But human nature compels us to go on.

This year’s top gravity prize of $4,000 went to Lawrence Krauss and Frank Wilczek. Dr. Wilczek shared a Nobel Prize in 2004 for his part in developing the theory of the strong nuclear force, the one that holds quarks together and forms the cores of atoms.

So far gravitons have eluded science’s best detectors, like LIGO, the Laser Interferometer Gravitational-Wave Observatory. Mr. Dyson suggested at a recent talk that the search might be futile, requiring an instrument with mirrors so massive that they would collapse to form a black hole — gravity defeating its own understanding. But in their paper Dr. Krauss and Dr. Wilczek suggest how gravitons might leave their mark on cosmic background radiation, the afterglow of the Big Bang.

There are other mysteries to contend with. Despite the toll it took on Babson’s family, theorists remain puzzled over why gravity is so much weaker than electromagnetism. Hold a refrigerator magnet over a paper clip, and it will fly upward and away from Earth’s pull.
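
That weakness is easy to quantify with textbook constants. A back-of-envelope sketch (ours, not from the article) compares the electrostatic and gravitational attraction between a proton and an electron; since both forces fall off as 1/r^2, the ratio does not depend on the distance.

```python
G  = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k  = 8.988e9      # Coulomb constant, N m^2 C^-2
e  = 1.602e-19    # elementary charge, C
mp = 1.673e-27    # proton mass, kg
me = 9.109e-31    # electron mass, kg

# F_em / F_grav = k e^2 / (G mp me); the 1/r^2 factors cancel.
ratio = (k * e * e) / (G * mp * me)
print(f"electromagnetic / gravitational ~ {ratio:.1e}")   # ~2.3e39
```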

Reaching for an explanation, the physicists Lisa Randall and Raman Sundrum once proposed that gravity is diluted because it leaks into a parallel universe. Striking off in a different direction, Dr. Randall and another colleague, Matthew Reece, recently speculated that the pull of a disk of dark matter might be responsible for jostling the solar system and unleashing periodic comet storms like one that might have killed off the dinosaurs.

It was a young theorist named Bryce DeWitt who helped disabuse Babson of his dream of stopping such a mighty force. In “The Perfect Theory,” a new book about general relativity, the Oxford astrophysicist Pedro G. Ferreira tells how DeWitt, in need of a down payment for a house, entered the Gravity Research Foundation’s competition in 1953 with a paper showing why the attempt to make any kind of antigravity device was “a waste of time.”

He won the prize, the foundation became more respectable, and DeWitt went on to become one of the most prominent theorists of general relativity. Babson, however, was not entirely deterred. In 1962, after more than 100 prominent Atlantans were killed in a plane crash in Paris, he donated $5,000 to Emory University along with a marble monument “to remind students of the blessings forthcoming” once gravity is counteracted.

He paid for similar antigravity monuments at more than a dozen campuses, including one at Tufts University, where newly minted doctoral students in cosmology kneel before it in a ceremony in which an apple is dropped on their heads.

I thought of Babson recently during a poignant scene in the movie “Gravity,” in which two astronauts are floating high above Earth, stranded from home. During a moment of calm, one of them, Lt. Matt Kowalski (played by George Clooney), asks the other, Dr. Ryan Stone (Sandra Bullock), “What do you miss down there?”

She tells him about her daughter:

“She was 4. She was at school playing tag, slipped and hit her head, and that was it. The stupidest thing.” It was gravity that did her in.

Read the entire article here.

Image: Portrait of Isaac Newton (1642-1727) by Sir Godfrey Kneller (1646–1723). Courtesy of Wikipedia.

c2=e/m

Particle physicists will soon attempt to reverse the direction of Einstein’s famous equation delineating energy-matter equivalence, e=mc2. Next year, they plan to crash quanta of light into each other to create matter. Cool or what!

From the Guardian:

Researchers have worked out how to make matter from pure light and are drawing up plans to demonstrate the feat within the next 12 months.

The theory underpinning the idea was first described 80 years ago by two physicists who later worked on the first atomic bomb. At the time they considered the conversion of light into matter impossible in a laboratory.

But in a report published on Sunday, physicists at Imperial College London claim to have cracked the problem using high-powered lasers and other equipment now available to scientists.

“We have shown in principle how you can make matter from light,” said Steven Rose at Imperial. “If you do this experiment, you will be taking light and turning it into matter.”

The scientists are not on the verge of a machine that can create everyday objects from a sudden blast of laser energy. The kind of matter they aim to make comes in the form of subatomic particles invisible to the naked eye.

The original idea was written down by two US physicists, Gregory Breit and John Wheeler, in 1934. They worked out that – very rarely – two particles of light, or photons, could combine to produce an electron and its antimatter equivalent, a positron. Electrons are particles of matter that form the outer shells of atoms in the everyday objects around us.
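
The kinematics behind their result reduces to a one-line threshold: for a head-on collision, two photons can produce a pair only if E1 x E2 >= (m_e c^2)^2, so two equal photons need at least 511 keV each. A small sketch with standard constants (our illustration, not the Imperial team's calculation):

```python
ME_C2_KEV = 511.0   # electron rest energy, keV

def min_partner_energy_kev(e1_kev: float) -> float:
    """Head-on Breit-Wheeler threshold: smallest energy of the second
    photon, given the first, for electron-positron pair production."""
    return ME_C2_KEV ** 2 / e1_kev

print(min_partner_energy_kev(511.0))        # 511.0: two equal gamma rays
print(round(min_partner_energy_kev(2.0)))   # ~130,560 keV: a ~130 MeV gamma
                                            # ray against a soft 2 keV X-ray
```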

But Breit and Wheeler had no expectations that their theory would be proved any time soon. In their study, the physicists noted that the process was so rare and hard to produce that it would be “hopeless to try to observe the pair formation in laboratory experiments”.

Oliver Pike, the lead researcher on the study, said the process was one of the most elegant demonstrations of Einstein’s famous relationship that shows matter and energy are interchangeable currencies. “The Breit-Wheeler process is the simplest way matter can be made from light and one of the purest demonstrations of E=mc2,” he said.

Writing in the journal Nature Photonics, the scientists describe how they could turn light into matter through a number of separate steps. The first step fires electrons at a slab of gold to produce a beam of high-energy photons. Next, they fire a high-energy laser into a tiny gold capsule called a hohlraum, from the German for “empty room”. This produces light as bright as that emitted from stars. In the final stage, they send the first beam of photons into the hohlraum where the two streams of photons collide.

The scientists’ calculations show that the setup squeezes enough particles of light with high enough energies into a small enough volume to create around 100,000 electron-positron pairs.

The process is one of the most spectacular predictions of a theory called quantum electrodynamics (QED) that was developed in the run-up to the second world war. “You might call it the most dramatic consequence of QED and it clearly shows that light and matter are interchangeable,” Rose told the Guardian.

The scientists hope to demonstrate the process in the next 12 months. There are a number of sites around the world that have the technology. One is the huge Omega laser in Rochester, New York. But another is the Orion laser at Aldermaston, the atomic weapons facility in Berkshire.

A successful demonstration will encourage physicists who have been eyeing the prospect of a photon-photon collider as a tool to study how subatomic particles behave. “Such a collider could be used to study fundamental physics with a very clean experimental setup: pure light goes in, matter comes out. The experiment would be the first demonstration of this,” Pike said.

Read the entire story here.

Image: Feynman diagram for gluon radiation. Courtesy of Wikipedia.

95.5 Percent is Made Up and It’s Dark

Physicists and astronomers observe the very small and the very big. Although they are focused on very different areas of scientific endeavor and discovery, they tend to agree on one key observation: 95.5 percent of the cosmos is currently invisible to us. That is, only around 4.5 percent of our physical universe is made up of matter or energy that we can see or sense directly through experimental interaction. The rest, well, it’s all dark — so-called dark matter and dark energy. But nobody really knows what or how or why. Effectively, despite tremendous progress in our understanding of our world, we are still in a global “Dark Age”.

From the New Scientist:

TO OUR eyes, stars define the universe. To cosmologists they are just a dusting of glitter, an insignificant decoration on the true face of space. Far outweighing ordinary stars and gas are two elusive entities: dark matter and dark energy. We don’t know what they are… except that they appear to be almost everything.

These twin apparitions might be enough to give us pause, and make us wonder whether all is right with the model universe we have spent the past century so carefully constructing. And they are not the only thing. Our standard cosmology also says that space was stretched into shape just a split second after the big bang by a third dark and unknown entity called the inflaton field. That might imply the existence of a multiverse of countless other universes hidden from our view, most of them unimaginably alien – just to make models of our own universe work.

Are these weighty phantoms too great a burden for our observations to bear – a wholesale return of conjecture out of a trifling investment of fact, as Mark Twain put it?

The physical foundation of our standard cosmology is Einstein’s general theory of relativity. Einstein began with a simple observation: that any object’s gravitational mass is exactly equal to its resistance to acceleration, or inertial mass. From that he deduced equations that showed how space is warped by mass and motion, and how we see that bending as gravity. Apples fall to Earth because Earth’s mass bends space-time.

In a relatively low-gravity environment such as Earth, general relativity’s effects look very like those predicted by Newton’s earlier theory, which treats gravity as a force that travels instantaneously between objects. With stronger gravitational fields, however, the predictions diverge considerably. One extra prediction of general relativity is that large accelerating masses send out tiny ripples in the weave of space-time called gravitational waves. While these waves have never yet been observed directly, a pair of dense stars called pulsars, discovered in 1974, are spiralling in towards each other just as they should if they are losing energy by emitting gravitational waves.

Gravity is the dominant force of nature on cosmic scales, so general relativity is our best tool for modelling how the universe as a whole moves and behaves. But its equations are fiendishly complicated, with a frightening array of levers to pull. If you then give them a complex input, such as the details of the real universe’s messy distribution of mass and energy, they become effectively impossible to solve. To make a working cosmological model, we make simplifying assumptions.

The main assumption, called the Copernican principle, is that we are not in a special place. The cosmos should look pretty much the same everywhere – as indeed it seems to, with stuff distributed pretty evenly when we look at large enough scales. This means there’s just one number to put into Einstein’s equations: the universal density of matter.

Einstein’s own first pared-down model universe, which he filled with an inert dust of uniform density, turned up a cosmos that contracted under its own gravity. He saw that as a problem, and circumvented it by adding a new term into the equations by which empty space itself gains a constant energy density. Its gravity turns out to be repulsive, so adding the right amount of this “cosmological constant” ensured the universe neither expanded nor contracted. When observations in the 1920s showed it was actually expanding, Einstein described this move as his greatest blunder.

It was left to others to apply the equations of relativity to an expanding universe. They arrived at a model cosmos that grows from an initial point of unimaginable density, and whose expansion is gradually slowed down by matter’s gravity.

This was the birth of big bang cosmology. Back then, the main question was whether the expansion would ever come to a halt. The answer seemed to be no; there was just too little matter for gravity to rein in the fleeing galaxies. The universe would coast outwards forever.

Then the cosmic spectres began to materialise. The first emissary of darkness put a foot in the door as long ago as the 1930s, but was only fully seen in the late 1970s when astronomers found that galaxies are spinning too fast. The gravity of the visible matter would be too weak to hold these galaxies together according to general relativity, or indeed plain old Newtonian physics. Astronomers concluded that there must be a lot of invisible matter to provide extra gravitational glue.

The existence of dark matter is backed up by other lines of evidence, such as how groups of galaxies move, and the way they bend light on its way to us. It is also needed to pull things together to begin galaxy-building in the first place. Overall, there seems to be about five times as much dark matter as visible gas and stars.

Dark matter’s identity is unknown. It seems to be something beyond the standard model of particle physics, and despite our best efforts we have yet to see or create a dark matter particle on Earth (see “Trouble with physics: Smashing into a dead end”). But it changed cosmology’s standard model only slightly: its gravitational effect in general relativity is identical to that of ordinary matter, and even such an abundance of gravitating stuff is too little to halt the universe’s expansion.

The second form of darkness required a more profound change. In the 1990s, astronomers traced the expansion of the universe more precisely than ever before, using measurements of explosions called type Ia supernovae. They showed that the cosmic expansion is accelerating. It seems some repulsive force, acting throughout the universe, is now comprehensively trouncing matter’s attractive gravity.

This could be Einstein’s cosmological constant resurrected, an energy in the vacuum that generates a repulsive force, although particle physics struggles to explain why space should have the rather small implied energy density. So imaginative theorists have devised other ideas, including energy fields created by as-yet-unseen particles, and forces from beyond the visible universe or emanating from other dimensions.

Whatever it might be, dark energy seems real enough. The cosmic microwave background radiation, released when the first atoms formed just 370,000 years after the big bang, bears a faint pattern of hotter and cooler spots that reveals where the young cosmos was a little more or less dense. The typical spot sizes can be used to work out to what extent space as a whole is warped by the matter and motions within it. It appears to be almost exactly flat, meaning all these bending influences must cancel out. This, again, requires some extra, repulsive energy to balance the bending due to expansion and the gravity of matter. A similar story is told by the pattern of galaxies in space.

All of this leaves us with a precise recipe for the universe. The average density of ordinary matter in space is 0.426 yoctograms per cubic metre (a yoctogram is 10^-24 grams, and 0.426 of one is about a quarter of a proton's mass), making up 4.5 per cent of the total energy density of the universe. Dark matter makes up 22.5 per cent, and dark energy 73 per cent. Our model of a big-bang universe based on general relativity fits our observations very nicely – as long as we are happy to make 95.5 per cent of it up.
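
Those figures are easy to check (a sketch with standard constants; the quoted numbers are the article's):

```python
m_proton_yg = 1.6726      # proton mass in yoctograms (10^-24 g)
rho_ordinary = 0.426      # ordinary matter density, yoctograms per m^3

print(f"protons per cubic metre: {rho_ordinary / m_proton_yg:.2f}")   # ~0.25

ordinary, dark_matter, dark_energy = 4.5, 22.5, 73.0
print("total:", ordinary + dark_matter + dark_energy, "%")    # 100.0 %
print("dark fraction:", dark_matter + dark_energy, "%")       # 95.5 %
```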

Arguably, we must invent even more than that. To explain why the universe looks so extraordinarily uniform in all directions, today’s consensus cosmology contains a third exotic element. When the universe was just 10^-36 seconds old, an overwhelming force took over. Called the inflaton field, it was repulsive like dark energy, but far more powerful, causing the universe to expand explosively by a factor of more than 10^25, flattening space and smoothing out any gross irregularities.

When this period of inflation ended, the inflaton field transformed into matter and radiation. Quantum fluctuations in the field became slight variations in density, which eventually became the spots in the cosmic microwave background, and today’s galaxies. Again, this fantastic story seems to fit the observational facts. And again it comes with conceptual baggage. Inflation is no trouble for general relativity – mathematically it just requires an add-on term identical to the cosmological constant. But at one time this inflaton field must have made up 100 per cent of the contents of the universe, and its origin poses as much of a puzzle as either dark matter or dark energy. What’s more, once inflation has started it proves tricky to stop: it goes on to create a further legion of universes divorced from our own. For some cosmologists, the apparent prediction of this multiverse is an urgent reason to revisit the underlying assumptions of our standard cosmology (see “Trouble with physics: Time to rethink cosmic inflation?”).

The model faces a few observational niggles, too. The big bang makes much more lithium-7 in theory than the universe contains in practice. The model does not explain the possible alignment in some features in the cosmic background radiation, or why galaxies along certain lines of sight seem biased to spin left-handedly. A newly discovered supergalactic structure 4 billion light years long calls into question the assumption that the universe is smooth on large scales.

Read the entire story here.

Image: Petrarch, who first conceived the idea of a European “Dark Age”, by Andrea di Bartolo di Bargilla, c1450. Courtesy of Galleria degli Uffizi, Florence, Italy / Wikipedia.

Building a Memory Palace

Feats of memory have long been a staple of human endeavor — for instance, memorizing and recalling pi to hundreds of decimal places. Nowadays, however, memorization is a competitive sport replete with grand prizes, worthy of a place in an X-Games tournament.

From the NYT:

The last match of the tournament had all the elements of a classic showdown, pitting style versus stealth, quickness versus deliberation, and the world’s foremost card virtuoso against its premier numbers wizard.

If not quite Ali-Frazier or Williams-Sharapova, the duel was all the audience of about 100 could ask for. They had come to the first Extreme Memory Tournament, or XMT, to see a fast-paced, digitally enhanced memory contest, and that’s what they got.

The contest, an unusual collaboration between industry and academic scientists, featured one-minute matches between 16 world-class “memory athletes” from all over the world as they met in a World Cup-like elimination format. The grand prize was $20,000; the potential scientific payoff was large, too.

One of the tournament’s sponsors, the company Dart NeuroScience, is working to develop drugs for improved cognition. The other, Washington University in St. Louis, sent a research team with a battery of cognitive tests to determine what, if anything, sets memory athletes apart. Previous research was sparse and inconclusive.

Yet as the two finalists, both Germans, prepared to face off — Simon Reinhard, 35, a lawyer who holds the world record in card memorization (a deck in 21.19 seconds), and Johannes Mallow, 32, a teacher with the record for memorizing digits (501 in five minutes) — the Washington group had one preliminary finding that wasn’t obvious.

“We found that one of the biggest differences between memory athletes and the rest of us,” said Henry L. Roediger III, the psychologist who led the research team, “is in a cognitive ability that’s not a direct measure of memory at all but of attention.”

The Memory Palace

The technique the competitors use is no mystery.

People have been performing feats of memory for ages, scrolling out pi to hundreds of digits, or phenomenally long verses, or word pairs. Most store the studied material in a so-called memory palace, associating the numbers, words or cards with specific images they have already memorized; then they mentally place the associated pairs in a familiar location, like the rooms of a childhood home or the stops on a subway line.
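
In computational terms, a memory palace is just an ordered list of fixed "loci" onto which fresh associations are written and later read back in sequence. A toy illustration (the route, items, and images below are invented, not any competitor's actual system):

```python
from dataclasses import dataclass

@dataclass
class Locus:
    place: str        # a stop on a well-rehearsed route
    image: str = ""   # the composite image currently filed there

# The fixed route (the "palace"):
palace = [Locus("front door"), Locus("hallway mirror"),
          Locus("kitchen table"), Locus("back garden")]

# A stable, pre-memorized mapping from items to vivid images:
item_images = {"queen of spades": "toilet", "7": "axe",
               "42": "towel", "apple": "juggling clown"}

def memorize(items):
    """File one image per locus, in route order."""
    for locus, item in zip(palace, items):
        locus.image = f"{item_images[item]} ({item})"

def recall():
    """Walk the route, reading the images back in order."""
    return [locus.image for locus in palace if locus.image]

memorize(["queen of spades", "7", "42", "apple"])
print(recall())
```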

The Greek poet Simonides of Ceos is credited with first describing the method, in the fifth century B.C., and it has been vividly described in popular books, most recently “Moonwalking With Einstein,” by Joshua Foer.

Each competitor has his or her own variation. “When I see the eight of diamonds and the queen of spades, I picture a toilet, and my friend Guy Plowman,” said Ben Pridmore, 37, an accountant in Derby, England, and a former champion. “Then I put those pictures on High Street in Cambridge, which is a street I know very well.”

As these images accumulate during memorization, they tell an increasingly bizarre but memorable story. “I often use movie scenes as locations,” said James Paterson, 32, a high school psychology teacher in Ascot, near London, who competes in world events. “In the movie ‘Gladiator,’ which I use, there’s a scene where Russell Crowe is in a field, passing soldiers, inspecting weapons.”

Mr. Paterson uses superheroes to represent combinations of letters or numbers: “I might have Batman — one of my images — playing Russell Crowe, and something else playing the horse, and so on.”

The material that competitors attempt to memorize falls into several standard categories. Shuffled decks of cards. Random words. Names matched with faces. And numbers, either binary (ones and zeros) or integers. They are given a set amount of time to study — up to one minute in this tournament, an hour or more in others — before trying to reproduce as many cards, words or digits in the order presented.

Now and then, a challenger boasts online of having discovered an entirely new method, and shows up at competitions to demonstrate it.

“Those people are easy to find, because they come in last, or close to it,” said another world-class competitor, Boris Konrad, 29, a German postdoctoral student in neuroscience. “Everyone here uses this same type of technique.”

Anyone can learn to construct a memory palace, researchers say, and with practice remember far more detail of a particular subject than before. The technique is accessible enough that preteens pick it up quickly, and Mr. Paterson has integrated it into his teaching.

“I’ve got one boy, for instance, he has no interest in academics really, but he knows the Premier League, every team, every player,” he said. “I’m working with him, and he’s using that knowledge as scaffolding to help remember what he’s learning in class.”

Experts in Forgetting

The competitors gathered here for the XMT are not just anyone, however. This is the all-world team, an elite club of laser-smart types who take a nerdy interest in stockpiling facts and pushing themselves hard.

In his doctoral study of 30 world-class performers (most from Germany, which has by far the highest concentration because there are more competitions), Mr. Konrad has found as much. The average I.Q.: 130. Average study time: 1,000 to 2,000 hours and counting. The top competitors all use some variation of the memory-palace system and test, retest and tweak it.

“I started with my own system, but now I use his,” said Annalena Fischer, 20, pointing to her boyfriend, Christian Schäfer, 22, whom she met at a 2010 memory competition in Germany. “Except I don’t use the distance runners he uses; I don’t know anything about the distance runners.” Both are advanced science students and participants in Mr. Konrad’s study.

One of the Washington University findings is predictable, if still preliminary: Memory athletes score very highly on tests of working memory, the mental sketchpad that serves as a shopping list of information we can hold in mind despite distractions.

One way to measure working memory is to have subjects solve a list of equations (5 + 4 = x; 8 + 9 = y; 7 + 2 = z; and so on) while keeping the middle numbers in mind (4, 9 and 2 in the above example). Elite memory athletes can usually store seven items, the top score on the test the researchers used; the average for college students is around two.
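
Here is a rough sketch of how such an "operation span" trial might be scored (the scoring details are our assumptions, not the researchers' protocol): the equations serve as a distraction task, and the span is how many middle numbers come back in the right order before the first error.

```python
def operation_span(equations, recalled):
    """equations: list of (a, b, claimed_sum); recalled: the subject's
    report of the middle numbers, in order. Returns (math_ok, span)."""
    math_ok = all(a + b == s for a, b, s in equations)   # distraction task
    targets = [b for _, b, _ in equations]               # numbers to hold
    span = 0
    for got, want in zip(recalled, targets):
        if got != want:
            break
        span += 1
    return math_ok, span

eqs = [(5, 4, 9), (8, 9, 17), (7, 2, 9)]
print(operation_span(eqs, [4, 9, 2]))   # (True, 3): perfect recall
print(operation_span(eqs, [4, 2, 9]))   # (True, 1): recall broke after one
```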

“And college students tend to be good at this task,” said Dr. Roediger, a co-author of the new book “Make It Stick: The Science of Successful Learning.” “What I’d like to do is extend the scoring up to, say, 21, just to see how far the memory athletes can go.”

Yet this finding raises another question: Why don’t the competitors’ memory palaces ever fill up? Players usually have many favored locations to store studied facts, but they practice and compete repeatedly. They use and reuse the same blueprints hundreds of times, and the new images seem to overwrite the old ones — virtually without error.

“Once you’ve remembered the words or cards or whatever it is, and reported them, they’re just gone,” Mr. Paterson said.

Many competitors say the same: Once any given competition is over, the numbers or words or facts are gone. But this is one area in which they have less than precise insight.

In its testing, which began last year, the Washington University team has given memory athletes surprise tests on “old” material — lists of words they’d been tested on the day before. On Day 2, they recalled an average of about three-quarters of the words they memorized on Day 1 (college students remembered fewer than 5 percent). That is, despite what competitors say, the material is not gone; far from it.

Yet to install a fresh image-laden “story” in any given memory palace, a memory athlete must clear away the old one in its entirety. The same process occurs when we change a password: The old one must be suppressed, so it doesn’t interfere with the new one.

One term for that skill is “attentional control,” and psychologists have been measuring it for years with standardized tests. In the best known, the Stroop test, people see words flash by on a computer screen and name the color in which a word is presented. Answering is nearly instantaneous when the color and the word match — “red” displayed in red — but slower when there’s a mismatch, like “red” displayed in blue.
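
The Stroop logic is just as simple to express in code. A real test needs a graphical display and millisecond timing, but this rough Python sketch shows the congruent-versus-incongruent comparison being measured (the colour set and console presentation are illustrative assumptions):

```python
import random
import time

COLORS = ["red", "green", "blue", "yellow"]

def stroop_trial():
    """One trial: name the ink colour, not the word itself."""
    word, ink = random.choice(COLORS), random.choice(COLORS)
    start = time.perf_counter()
    answer = input(f"The word '{word.upper()}' appears in {ink} ink. Ink colour? ")
    reaction_time = time.perf_counter() - start
    congruent = word == ink          # "red" shown in red vs. "red" shown in blue
    correct = answer.strip().lower() == ink
    return congruent, correct, reaction_time

# Averaging reaction times over many congruent vs. incongruent trials
# reveals the Stroop cost, one index of attentional control.
```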

Read the entire article here.

The (Space) Explorers Club

clangers

Thirteen private companies recently met in New York City to present their plans and ideas for their commercial space operations. Ranging from space tourism and private exploration of the Moon to asteroid mining, the companies gathered at the Explorers Club to herald a new phase of human exploration.

From Technology Review:

It was a rare meeting of minds. Representatives from 13 commercial space companies gathered on May 1 at a place dedicated to going where few have gone before: the Explorers Club in New York.

Amid the mansions and high-end apartment buildings just off Central Park, executives from space-tourism companies, rocket-making startups, and even a business that hopes to make money by mining asteroids for useful materials showed off displays and gave presentations.

The Explorers Club event provided a snapshot of what may be a new industry in the making. In an era when NASA no longer operates manned space missions and government funding for unmanned missions is tight, a host of startups—most funded by space enthusiasts with very deep pockets—have stepped up in hope of filling the gap. In the past few years, several have proved themselves. Elon Musk’s SpaceX, for example, delivers cargo to the International Space Station for NASA. Both Richard Branson’s Virgin Galactic and rocket-plane builder XCOR Aerospace plan to perform demonstrations this year that will help catapult commercial spaceflight from the fringe into the mainstream.

The advancements being made by space companies could matter to more than the few who can afford tickets to space. SpaceX has already shaken incumbents in the $190 billion satellite launch industry by offering cheaper rides into space for communications, mapping, and research satellites.

However, space tourism also looks set to become significantly cheaper. “People don’t have to actually go up for it to impact them,” says David Mindell, an MIT professor of aeronautics and astronautics and a specialist in the history of engineering. “At $200,000 you’ll have a lot more ‘space people’ running around, and over time that could have a big impact.” One direct result, says Mindell, may be increased public support for human spaceflight, especially “when everyone knows someone who’s been into space.”

Along with reporters, Explorers Club members, and members of the public who had paid the $75 to $150 entry fee, several former NASA astronauts were in attendance to lend their endorsements—including the MC for the evening, Michael López-Alegría, veteran of the space shuttle and the ISS. Also on hand, highlighting the changing times with his very presence, was the world’s first second-generation astronaut, Richard Garriott. Garriott’s father flew missions on Skylab and the space shuttle in the 1970s and 1980s, respectively. However, Garriott paid his own way to the International Space Station in 2008 as a private citizen.

The evening was a whirlwind of activity, with customer testimonials and rapid-fire displays of rocket launches, spacecraft in orbit, and space ships under construction and being tested. It all painted a picture of an industry on the move, with multiple companies offering services from suborbital experiences and research opportunities to flights to Earth orbit and beyond.

The event also offered a glimpse at the plans of several key players.

Lauren De Niro Pipher, head of astronaut relations at Virgin Galactic, revealed that the company’s founder plans to fly with his family aboard the Virgin Galactic SpaceShipTwo rocket plane in November or December of this year. The flight will launch the company’s suborbital spaceflight business, for which De Niro Pipher said more than 700 customers have so far put down deposits on tickets costing $200,000 to $250,000.

The director of business development for Blue Origin, Bretton Alexander, announced his company’s intention to begin test flights of its first full-scale vehicle within the next year. “We have not publicly started selling rides in space as others have,” said Alexander during his question-and-answer session. “But that is our plan to do that, and we look forward to doing that, hopefully soon.”

Blue Origin is perhaps the most secretive of the commercial spaceflight companies, typically revealing little of its progress toward the services it plans to offer: suborbital manned spaceflight and, later, orbital flight. Like Virgin, it was founded by a wealthy entrepreneur, in this case Amazon founder Jeff Bezos. The company, which is headquartered in Kent, Washington, has so far conducted at least one supersonic test flight and a test of its escape rocket system, both at its West Texas test center.

Also on hand was the head of Planetary Resources, Chris Lewicki, a former spacecraft engineer and manager for Mars programs at NASA. He showed off a prototype of his company’s Arkyd 100, an asteroid-hunting space telescope the size of a toaster oven. If all goes according to plan, a fleet of Arkyd 100s will first scan the skies from Earth orbit in search of nearby asteroids that might be rich in mineral wealth and water, to be visited by the next generation of Arkyd probes. Water is potentially valuable for future space-based enterprises as rocket fuel (split into its constituent elements of hydrogen and oxygen) and for use in life support systems. Planetary Resources plans to “launch early, launch often,” Lewicki told me after his presentation. To that end, the company is building a series of CubeSat-size spacecraft dubbed Arkyd 3s, to be launched from the International Space Station by the end of this year.

Andrew Antonio, experience manager at a relatively new company, World View Enterprises, showed a computer-generated video of his company’s planned balloon flights to the edge of space. A manned capsule will ascend to 100,000 feet, or about 20 miles up, from which the curvature of Earth and the black sky of space are visible. At $75,000 per ticket (reduced to $65,000 for Explorers Club members), the flight will be more affordable than competing rocket-powered suborbital experiences but won’t go as high. Antonio said his company plans to launch a small test vehicle “in about a month.”

XCOR’s director of payload sales and operations, Khaki Rodway, showed video clips of the company’s Lynx suborbital rocket plane coming together in Mojave, California, as well as a profile of an XCOR spaceflight customer. Hangared just down the flight line at the same air and space port where Virgin Galactic’s SpaceShipTwo is undergoing flight testing, the Lynx offers seating for one paying customer per flight at $95,000. XCOR hopes the Lynx will begin flying by the end of this year.

Read the entire article here.

Image: Still from the Clangers TV show. Courtesy of BBC / Smallfilms.

DarwinTunes

Charles_Darwin

Researchers at Imperial College London recently posed an intriguing question and have since developed a cool experiment to test it. Does artistic endeavor, such as music, follow the same principles of evolutionary selection in biology, as described by Darwin? That is, does the funkiest survive? Though one has to wonder what the eminent scientist would have thought about some recent fusions of rap, dubstep and classical.

From the Guardian:

There were some funky beats at Imperial College London on Saturday at its annual science festival. As well as opportunities to create bogeys, see robots dance and try to get physics PhD students to explain their wacky world, this fascinating event included the chance to participate in a public game-like experiment called DarwinTunes.

Participants select tunes and “mate” them with other tunes to create musical offspring: if the offspring are in turn selected by other players, they “survive” and get the chance to reproduce their musical DNA. The experiment is online – you too can try to immortalise your selfish musical genes.
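
Mechanically, this is a genetic algorithm with listeners acting as the fitness function. A stripped-down sketch of the selection-and-mating loop might look like the following Python, where a numeric fitness stands in for listener ratings (all names here are illustrative, not DarwinTunes' actual code):

```python
import random

def mate(a, b, mutation_rate=0.1):
    """'Mate' two tunes: uniform crossover of their parameters, plus mutation."""
    child = [random.choice(pair) for pair in zip(a, b)]
    if random.random() < mutation_rate:
        child[random.randrange(len(child))] = random.random()
    return child

def evolve(population, fitness, generations=100, n_survivors=10):
    """Each generation, the best-rated tunes survive and reproduce."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:n_survivors]
        population = [mate(*random.sample(parents, 2))
                      for _ in range(len(population))]
    return population

# Toy run: a "tune" is a list of numbers standing in for loops and notes,
# and sum() stands in for the crowd's aggregate rating.
tunes = [[random.random() for _ in range(8)] for _ in range(50)]
evolved = evolve(tunes, fitness=sum)
```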

It is a model of evolution in practice that raises fascinating questions about culture and nature. These questions apply to all the arts, not just to dance beats. How does “cultural evolution” work? How close is the analogy between Darwin’s well-proven theory of evolution in nature and the evolution of art, literature and music?

The idea of cultural evolution was boldly defined by Jacob Bronowski as our fundamental human ability “not to accept the environment but to change it”. The moment the first stone tools appeared in Africa, about 2.5m years ago, a new, faster evolution, that of human culture, became visible on Earth: from cave paintings to the Renaissance, from Galileo to the 3D printer, this cultural evolution has advanced at breathtaking speed compared with the massive periods of time it takes nature to evolve new forms.

In DarwinTunes, cultural evolution is modelled as what the experimenters call “the survival of the funkiest”. Pulsing dance beats evolve through selections made by participants, and the music (it is claimed) becomes richer through this process of selection. Yet how does the model really correspond to the story of culture?

One way Darwin’s laws of nature apply to visual art is in the need for every successful form to adapt to its environment. In the forests of west and central Africa, wood carving was until recent times a flourishing art form. In the islands of Greece, where marble could be quarried easily, stone sculpture was more popular. In the modern technological world, the things that easily come to hand are not wood or stone but manufactured products and media images – so artists are inclined to work with the readymade.

At first sight, the thesis of DarwinTunes is a bit crude. Surely it is obvious that artists don’t just obey the selections made by their audience – that is, their consumers. To think they do is to apply the economic laws of our own consumer society across all history. Culture is a lot funkier than that.

Yet just because the laws of evolution need some adjustment to encompass art, that does not mean art is a mysterious spiritual realm impervious to scientific study. In fact, the evolution of evolution – the adjustments made by researchers to Darwin’s theory since it was unveiled in the Victorian age – offers interesting ways to understand culture.

One useful analogy between art and nature is the idea of punctuated equilibrium, introduced by some evolutionary scientists in the 1970s. Just as species may evolve not through a constant smooth process but by spectacular occasional leaps, so the history of art is punctuated by massively innovative eras followed by slower, more conventional periods.

Read the entire story here.

Image: Charles Darwin, 1868, photographed by Julia Margaret Cameron. Courtesy of Wikipedia.

You May Be Living Inside a Simulation

real-and-simulated-cosmos

Some theorists posit that we are living inside a simulation, that the entire universe is one giant, evolving model inside a grander reality. This is a fascinating idea, but may never be experimentally verifiable. So just relax — you and I may not be real, but we’ll never know.

In a similar vein, researchers have themselves developed the broadest and most detailed simulation of the universe to date. There are no “living” things inside this computer model yet, but it’s probably only a matter of time before our increasingly sophisticated simulations start wondering whether they are simulations as well.

From the BBC:

An international team of researchers has created the most complete visual simulation of how the Universe evolved.

The computer model shows how the first galaxies formed around clumps of a mysterious, invisible substance called dark matter.

It is the first time that the Universe has been modelled so extensively and to such great resolution.

The research has been published in the journal Nature.

The simulation will provide a test bed for emerging theories of what the Universe is made of and what makes it tick.

One of the world’s leading authorities on galaxy formation, Professor Richard Ellis of the California Institute of Technology (Caltech) in Pasadena, described the simulation as “fabulous”.

“Now we can get to grips with how stars and galaxies form and relate it to dark matter,” he told BBC News.

The computer model draws on the theories of Professor Carlos Frenk of Durham University, UK, who said he was “pleased” that a computer model should come up with such a good result assuming that it began with dark matter.

“You can make stars and galaxies that look like the real thing. But it is the dark matter that is calling the shots”.

Cosmologists have been creating computer models of how the Universe evolved for more than 20 years. It involves entering details of what the Universe was like shortly after the Big Bang, developing a computer programme which encapsulates the main theories of cosmology and then letting the programme run.

The simulated Universe that comes out at the other end is usually a very rough approximation of what astronomers really see.

The latest simulation, however, comes up with a Universe that is strikingly like the real one.

Immense computing power has been used to recreate this virtual Universe. It would take a normal laptop nearly 2,000 years to run the simulation. However, using state-of-the-art supercomputers and clever software called Arepo, researchers were able to crunch the numbers in three months.
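
As a rough sanity check on those figures: 2,000 years is about 8,000 times 3 months, so the supercomputers and Arepo's algorithms together delivered a speedup of roughly four orders of magnitude over a single laptop.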

Cosmic tree

In the beginning, it shows strands of mysterious material which cosmologists call “dark matter” sprawling across the emptiness of space like branches of a cosmic tree. As millions of years pass by, the dark matter clumps and concentrates to form seeds for the first galaxies.

Then the non-dark matter emerges, the stuff that will in time go on to make stars, planets and life.

But early on there is a series of cataclysmic explosions as this matter gets sucked into black holes and then spat out: a chaotic period that regulated the formation of stars and galaxies. Eventually, the simulation settles into a Universe that is similar to the one we see around us.

According to Dr Mark Vogelsberger of Massachusetts Institute of Technology (MIT), who led the research, the simulations back many of the current theories of cosmology.

“Many of the simulated galaxies agree very well with the galaxies in the real Universe. It tells us that the basic understanding of how the Universe works must be correct and complete,” he said.

In particular, it backs the theory that dark matter is the scaffold on which the visible Universe is hanging.

“If you don’t include dark matter (in the simulation) it will not look like the real Universe,” Dr Vogelsberger told BBC News.

Read the entire article here.

Image: On the left: the real universe imaged via the Hubble telescope. On the right: a view of what emerges from the computer simulation. Courtesy of BBC / Illustris Collaboration.

Metabolism Without Life

Glycolysis2-pathway

A remarkable chance discovery in a Cambridge University research lab shows that a number of life-sustaining metabolic processes can occur spontaneously and outside of living cells. This opens a rich, new vein of theories and approaches to studying the origin of life.

From the New Scientist:

Metabolic processes that underpin life on Earth have arisen spontaneously outside of cells. The serendipitous finding that metabolism – the cascade of reactions in all cells that provides them with the raw materials they need to survive – can happen in such simple conditions provides fresh insights into how the first life formed. It also suggests that the complex processes needed for life may have surprisingly humble origins.

“People have said that these pathways look so complex they couldn’t form by environmental chemistry alone,” says Markus Ralser at the University of Cambridge who supervised the research.

But his findings suggest that many of these reactions could have occurred spontaneously in Earth’s early oceans, catalysed by metal ions rather than the enzymes that drive them in cells today.

The origin of metabolism is a major gap in our understanding of the emergence of life. “If you look at many different organisms from around the world, this network of reactions always looks very similar, suggesting that it must have come into place very early on in evolution, but no one knew precisely when or how,” says Ralser.

Happy accident

One theory is that RNA was the first building block of life because it helps to produce the enzymes that could catalyse complex sequences of reactions. Another possibility is that metabolism came first, perhaps even generating the molecules needed to make RNA, and that cells later incorporated these processes – but there was little evidence to support this.

“This is the first experiment showing that it is possible to create metabolic networks in the absence of RNA,” Ralser says.

Remarkably, the discovery was an accident, stumbled on during routine quality control testing of the medium used to culture cells at Ralser’s laboratory. As a shortcut, one of his students decided to run unused media through a mass spectrometer, which spotted a signal for pyruvate – an end product of a metabolic pathway called glycolysis.

To test whether the same processes could have helped spark life on Earth, they approached colleagues in the Earth sciences department who had been working on reconstructing the chemistry of the Archean Ocean, which covered the planet almost 4 billion years ago. This was an oxygen-free world, predating photosynthesis, when the waters were rich in iron, as well as other metals and phosphate. All these substances could potentially facilitate chemical reactions like the ones seen in modern cells.

Metabolic backbone

Ralser’s team took early ocean solutions and added substances known to be starting points for modern metabolic pathways, before heating the samples to between 50 °C and 70 °C – the sort of temperatures you might have found near a hydrothermal vent – for 5 hours. Ralser then analysed the solutions to see what molecules were present.

“In the beginning we had hoped to find one reaction or two maybe, but the results were amazing,” says Ralser. “We could reconstruct two metabolic pathways almost entirely.”

The pathways they detected were glycolysis and the pentose phosphate pathway, “reactions that form the core metabolic backbone of every living cell,” Ralser adds. Together these pathways produce some of the most important materials in modern cells, including ATP – the molecule cells use to drive their machinery, the sugars that form DNA and RNA, and the molecules needed to make fats and proteins.

If these metabolic pathways were occurring in the early oceans, then the first cells could have enveloped them as they developed membranes.

In all, 29 metabolism-like chemical reactions were spotted, seemingly catalysed by iron and other metals that would have been found in early ocean sediments. The metabolic pathways aren’t identical to modern ones; some of the chemicals made by intermediate steps weren’t detected. However, “if you compare them side by side it is the same structure and many of the same molecules are formed,” Ralser says. These pathways could have been refined and improved once enzymes evolved within cells.

Read the entire article here.

Image: Glycolysis metabolic pathway. Courtesy of Wikipedia.

The Arrow of Time

Arthur_Stanley_Eddington

Einstein’s “spooky action at a distance” and quantum information theory (QIT) may help explain the so-called arrow of time — specifically, why it seems to flow in only one direction. Astronomer Arthur Eddington first described this asymmetry in 1927, and it has stumped theoreticians ever since.

At a macro-level the classic and simple example is that of an egg breaking when it hits your kitchen floor: repeat this over and over, and the egg will always make a scrambled mess on your clean tiles, but it will never rise up from the floor and spontaneously re-assemble in your slippery hand. Yet at the micro-level, physicists know that the underlying laws apply equally in both directions. Enter two pillars of the quantum world that may help us better understand this perplexing forward flow of time: entanglement and QIT.

From Wired:

Coffee cools, buildings crumble, eggs break and stars fizzle out in a universe that seems destined to degrade into a state of uniform drabness known as thermal equilibrium. The astronomer-philosopher Sir Arthur Eddington in 1927 cited the gradual dispersal of energy as evidence of an irreversible “arrow of time.”

But to the bafflement of generations of physicists, the arrow of time does not seem to follow from the underlying laws of physics, which work the same going forward in time as in reverse. By those laws, it seemed that if someone knew the paths of all the particles in the universe and flipped them around, energy would accumulate rather than disperse: Tepid coffee would spontaneously heat up, buildings would rise from their rubble and sunlight would slink back into the sun.

“In classical physics, we were struggling,” said Sandu Popescu, a professor of physics at the University of Bristol in the United Kingdom. “If I knew more, could I reverse the event, put together all the molecules of the egg that broke? Why am I relevant?”

Surely, he said, time’s arrow is not steered by human ignorance. And yet, since the birth of thermodynamics in the 1850s, the only known approach for calculating the spread of energy was to formulate statistical distributions of the unknown trajectories of particles, and show that, over time, the ignorance smeared things out.

Now, physicists are unmasking a more fundamental source for the arrow of time: Energy disperses and objects equilibrate, they say, because of the way elementary particles become intertwined when they interact — a strange effect called “quantum entanglement.”

“Finally, we can understand why a cup of coffee equilibrates in a room,” said Tony Short, a quantum physicist at Bristol. “Entanglement builds up between the state of the coffee cup and the state of the room.”

Popescu, Short and their colleagues Noah Linden and Andreas Winter reported the discovery in the journal Physical Review E in 2009, arguing that objects reach equilibrium, or a state of uniform energy distribution, within an infinite amount of time by becoming quantum mechanically entangled with their surroundings. Similar results by Peter Reimann of the University of Bielefeld in Germany appeared several months earlier in Physical Review Letters. Short and a collaborator strengthened the argument in 2012 by showing that entanglement causes equilibration within a finite time. And, in work that was posted on the scientific preprint site arXiv.org in February, two separate groups have taken the next step, calculating that most physical systems equilibrate rapidly, on time scales proportional to their size. “To show that it’s relevant to our actual physical world, the processes have to be happening on reasonable time scales,” Short said.

The tendency of coffee — and everything else — to reach equilibrium is “very intuitive,” said Nicolas Brunner, a quantum physicist at the University of Geneva. “But when it comes to explaining why it happens, this is the first time it has been derived on firm grounds by considering a microscopic theory.”

If the new line of research is correct, then the story of time’s arrow begins with the quantum mechanical idea that, deep down, nature is inherently uncertain. An elementary particle lacks definite physical properties and is defined only by probabilities of being in various states. For example, at a particular moment, a particle might have a 50 percent chance of spinning clockwise and a 50 percent chance of spinning counterclockwise. An experimentally tested theorem by the Northern Irish physicist John Bell says there is no “true” state of the particle; the probabilities are the only reality that can be ascribed to it.

Quantum uncertainty then gives rise to entanglement, the putative source of the arrow of time.

When two particles interact, they can no longer even be described by their own, independently evolving probabilities, called “pure states.” Instead, they become entangled components of a more complicated probability distribution that describes both particles together. It might dictate, for example, that the particles spin in opposite directions. The system as a whole is in a pure state, but the state of each individual particle is “mixed” with that of its acquaintance. The two could travel light-years apart, and the spin of each would remain correlated with that of the other, a feature Albert Einstein famously described as “spooky action at a distance.”
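
That distinction, a pure joint state whose individual parts are mixed, is easy to verify numerically. Here is a minimal NumPy sketch using the textbook singlet state; it is a toy illustration of the concept, not the calculation from the papers above.

```python
import numpy as np

# Singlet state of two spins: (|01> - |10>) / sqrt(2)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)                    # joint density matrix

# Purity Tr(rho^2): 1 for a pure state, 1/2 for a maximally mixed qubit
print(np.trace(rho @ rho))                  # ~1.0 -> the pair as a whole is pure

# Reduced state of one particle: trace out its partner
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(np.trace(rho_A @ rho_A))              # 0.5 -> each particle alone is mixed
```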

“Entanglement is in some sense the essence of quantum mechanics,” or the laws governing interactions on the subatomic scale, Brunner said. The phenomenon underlies quantum computing, quantum cryptography and quantum teleportation.

The idea that entanglement might explain the arrow of time first occurred to Seth Lloyd about 30 years ago, when he was a 23-year-old philosophy graduate student at Cambridge University with a Harvard physics degree. Lloyd realized that quantum uncertainty, and the way it spreads as particles become increasingly entangled, could replace human uncertainty in the old classical proofs as the true source of the arrow of time.

Using an obscure approach to quantum mechanics that treated units of information as its basic building blocks, Lloyd spent several years studying the evolution of particles in terms of shuffling 1s and 0s. He found that as the particles became increasingly entangled with one another, the information that originally described them (a “1” for clockwise spin and a “0” for counterclockwise, for example) would shift to describe the system of entangled particles as a whole. It was as though the particles gradually lost their individual autonomy and became pawns of the collective state. Eventually, the correlations contained all the information, and the individual particles contained none. At that point, Lloyd discovered, particles arrived at a state of equilibrium, and their states stopped changing, like coffee that has cooled to room temperature.

“What’s really going on is things are becoming more correlated with each other,” Lloyd recalls realizing. “The arrow of time is an arrow of increasing correlations.”

The idea, presented in his 1988 doctoral thesis, fell on deaf ears. When he submitted it to a journal, he was told that there was “no physics in this paper.” Quantum information theory “was profoundly unpopular” at the time, Lloyd said, and questions about time’s arrow “were for crackpots and Nobel laureates who have gone soft in the head,” he remembers one physicist telling him.

“I was darn close to driving a taxicab,” Lloyd said.

Advances in quantum computing have since turned quantum information theory into one of the most active branches of physics. Lloyd is now a professor at the Massachusetts Institute of Technology, recognized as one of the founders of the discipline, and his overlooked idea has resurfaced in a stronger form in the hands of the Bristol physicists. The newer proofs are more general, researchers say, and hold for virtually any quantum system.

“When Lloyd proposed the idea in his thesis, the world was not ready,” said Renato Renner, head of the Institute for Theoretical Physics at ETH Zurich. “No one understood it. Sometimes you have to have the idea at the right time.”

Read the entire article here.

Image: English astrophysicist Sir Arthur Stanley Eddington (1882–1944). Courtesy: George Grantham Bain Collection (Library of Congress).

Good Mutations and Breathing

Van_andel_113

Stem cells — the factories that manufacture all our component body parts — may hold a key to divining why our bodies gradually break down as we age. A new body of research shows how the body’s population of blood stem cells mutates, and gradually dies, over a typical lifespan. Sometimes these mutations turn cancerous, sometimes not. Luckily for us, the research is centered on the blood samples of Hendrikje van Andel-Schipper — she died in 2005 at the age of 115, and donated her body to science. Her body showed a remarkable resilience — no hardening of the arteries and no deterioration of her brain tissue. When quizzed about the secret of her longevity, she once retorted, “breathing”.

From the New Scientist:

Death is the one certainty in life – a pioneering analysis of blood from one of the world’s oldest and healthiest women has given clues to why it happens.

Born in 1890, Hendrikje van Andel-Schipper was at one point the oldest woman in the world. She was also remarkable for her health, with crystal-clear cognition until she was close to death, and a blood circulatory system free of disease. When she died in 2005, she bequeathed her body to science, with the full support of her living relatives that any outcomes of scientific analysis – as well as her name – be made public.

Researchers have now examined her blood and other tissues to see how they were affected by age.

What they found suggests, as we could perhaps expect, that our lifespan might ultimately be limited by the capacity for stem cells to keep replenishing tissues day in day out. Once the stem cells reach a state of exhaustion that imposes a limit on their own lifespan, they themselves gradually die out and steadily diminish the body’s capacity to keep regenerating vital tissues and cells, such as blood.

Two little cells

In van Andel-Schipper’s case, it seemed that in the twilight of her life, about two-thirds of the white blood cells remaining in her body at death originated from just two stem cells, implying that most or all of the blood stem cells she started life with had already burned out and died.

“Is there a limit to the number of stem cell divisions, and does that imply that there’s a limit to human life?” asks Henne Holstege of the VU University Medical Center in Amsterdam, the Netherlands, who headed the research team. “Or can you get round that by replenishment with cells saved from earlier in your life?” she says.

The other evidence for the stem cell fatigue came from observations that van Andel-Schipper’s white blood cells had drastically worn-down telomeres – the protective tips on chromosomes that burn down like wicks each time a cell divides. On average, the telomeres on the white blood cells were 17 times shorter than those on brain cells, which hardly replicate at all throughout life.

The team could establish the number of white blood cell-generating stem cells by studying the pattern of mutations found within the blood cells. The pattern was so similar in all cells that the researchers could conclude that they all came from one of two closely related “mother” stem cells.

Point of exhaustion

“It’s estimated that we’re born with around 20,000 blood stem cells, and at any one time, around 1000 are simultaneously active to replenish blood,” says Holstege. During life, the number of active stem cells shrinks, she says, and their telomeres shorten to the point at which they die – a point called stem-cell exhaustion.

Holstege says the other remarkable finding was that the mutations within the blood cells were harmless – all resulted from mistaken replication of DNA during van Andel-Schipper’s life as the “mother” blood stem cells multiplied to provide clones from which blood was repeatedly replenished.

She says this is the first time patterns of lifetime “somatic” mutations have been studied in such an old and such a healthy person. The absence of mutations posing dangers of disease and cancer suggests that van Andel-Schipper had a superior system for repairing or aborting cells with dangerous mutations.

Read the entire article here.

Image: Hendrikje van Andel-Schipper, aged 113. Courtesy of Wikipedia.

European Extremely Large Telescope

Rendering_of_the_E-ELT

When it is sited in the high mountains of the Chilean coastal desert, the European Extremely Large Telescope (or E-ELT) will be the biggest and the baddest telescope to date. With a mirror around 125 feet in diameter, the E-ELT will give observers unprecedented access to the vast panoramas of the cosmos. Astronomers are even confident that when it is fully operational, in about 2030, the telescope will be able to observe exoplanets directly, for the first time.

From the Observer:

Cerro Armazones is a crumbling dome of rock that dominates the parched peaks of the Chilean Coast Range north of Santiago. A couple of old concrete platforms and some rusty pipes, parts of the mountain’s old weather station, are the only hints that humans have ever taken an interest in this forbidding, arid place. Even the views look alien, with the surrounding boulder-strewn desert bearing a remarkable resemblance to the landscape of Mars.

Dramatic change is coming to Cerro Armazones, however – for in a few weeks, the 10,000ft mountain is going to have its top knocked off. “We are going to blast it with dynamite and then carry off the rubble,” says engineer Gird Hudepohl. “We will take about 80ft off the top of the mountain to create a plateau – and when we have done that, we will build the world’s biggest telescope there.”

Given the peak’s remote, inhospitable location that might sound an improbable claim – except for the fact that Hudepohl has done this sort of thing before. He is one of the European Southern Observatory’s most experienced engineers and was involved in the decapitation of another nearby mountain, Cerro Paranal, on which his team then erected one of the planet’s most sophisticated observatories.

The Paranal complex has been in operation for more than a decade and includes four giant instruments with eight-metre-wide mirrors – known as the Very Large Telescopes or VLTs – as well as control rooms and a labyrinth of underground tunnels linking its instruments. More than 100 astronomers, engineers and support staff work and live there. A few dozen metres below the telescopes, they have a sports complex with a squash court, an indoor football pitch, and a luxurious 110-room residence that has a central swimming pool and a restaurant serving meals and drinks around the clock. Built overlooking one of the world’s driest deserts, the place is an amazing oasis.

Now the European Southern Observatory, of which Britain is a key member state, wants Hudepohl and his team to repeat this remarkable trick and take the top off Cerro Armazones, which is 20km distant. Though this time they will construct an instrument so huge it will dwarf all the telescopes on Paranal put together, and any other telescope on the planet. When completed, the European Extremely Large Telescope (E-ELT) and its 39-metre mirror will allow astronomers to peer further into space and look further back into the history of the universe than any other astronomical device in existence. Its construction will push telescope-making to its limit, however. Its primary mirror will be made of almost 800 segments – each 1.4 metres in diameter but only a few centimetres thick – which will have to be aligned with microscopic precision.
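
Those segment numbers imply an enormous light-collecting surface. A back-of-the-envelope estimate, assuming the segments are regular hexagons and that the quoted 1.4 metres is the corner-to-corner width (the article does not specify which diameter is meant):

```python
import math

n_segments = 798                             # "almost 800" segments
d = 1.4                                      # metres, assumed corner-to-corner
hex_area = (3 * math.sqrt(3) / 8) * d ** 2   # area of a regular hexagon of width d
print(f"~{n_segments * hex_area:.0f} m^2")   # on the order of 1,000 square metres
```

That is roughly the area of four tennis courts, all of which must behave as a single optical surface.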

It is a remarkable juxtaposition: in the midst of utter desolation, scientists have built giant machines engineered to operate with smooth perfection and are now planning to top this achievement by building an even more vast device. The question is: for what purpose? Why go to a remote wilderness in northern Chile and chop down peaks to make homes for some of the planet’s most complex scientific hardware?

The answer is straightforward, says Cambridge University astronomer Professor Gerry Gilmore. It is all about water. “The atmosphere here is as dry as you can get and that is critically important. Water molecules obscure the view from telescopes on the ground. It is like trying to peer through mist – for mist is essentially a suspension of water molecules in the air, after all, and they obscure your vision. For a telescope based at sea level that is a major drawback.

“However, if you build your telescope where the atmosphere above you is completely dry, you will get the best possible views of the stars – and there is nowhere on Earth that has air drier than this place. For good measure, the high-altitude winds blow in a smooth, laminar manner above Paranal – like slabs of glass – so images of stars remain remarkably steady as well.”

The view of the heavens here is close to perfect, in other words – as an evening stroll around the viewing platform on Paranal demonstrates vividly. During my visit, the Milky Way hung over the observatory like a single white sheet. I could see the four main stars of the Southern Cross; Alpha Centauri, whose unseen companion Proxima Centauri is the closest star to our solar system; the two Magellanic Clouds, satellite galaxies of our own Milky Way; and the Coalsack, an interstellar dust cloud that forms a striking silhouette against the starry Milky Way. None are visible in northern skies and none appear with such brilliance anywhere else on the planet.

Hence the decision to build this extraordinary complex of VLTs. At sunset, each one’s housing is opened and the four great telescopes are brought slowly into operation. Each machine is made to rotate and swivel, like football players stretching muscles before a match. Each housing is the size of a block of flats. Yet they move in complete silence, so precise is their engineering.

Read the entire article here.

Image: Architectural rendering of ESO’s planned European Extremely Large Telescope (E-ELT) shows the telescope at work, with its dome open and its record-setting 42-metre primary mirror pointed to the sky. Courtesy of the European Southern Observatory (ESO) / Wikipedia.

A Subsurface Anomaly

Enceladusstripes_cassini

Researchers published details of this “subsurface anomaly” in the journal Science, on April 4, 2014. The summary reads as follows:

Our results indicate the presence of a negative mass anomaly in the south-polar region, largely compensated by a positive subsurface anomaly compatible with the presence of a regional subsurface sea at depths of 30 to 40 kilometers and extending up to south latitudes of about 50°. The estimated values for the largest quadrupole harmonic coefficients (10⁶ J₂ = 5435.2 ± 34.9, 10⁶ C₂₂ = 1549.8 ± 15.6, 1σ) and their ratio (J₂/C₂₂ = 3.51 ± 0.05) indicate that the body deviates mildly from hydrostatic equilibrium. The moment of inertia is around 0.335 MR², where M is the mass and R is the radius, suggesting a differentiated body with a low-density core.

In effect this means that the researchers are reasonably confident that an ocean of water lies below the icy surface of Enceladus, one of Saturn’s most intriguing moons.

From NYT:

Inside a moon of Saturn, beneath its icy veneer and above its rocky core, is a sea of water the size of Lake Superior, scientists announced on Thursday.

The findings, published in the journal Science, confirm what planetary scientists have suspected about the moon, Enceladus, ever since they were astonished in 2005 by photographs showing geysers of ice crystals shooting out of its south pole.

“What we’ve done is put forth a strong case for an ocean,” said David J. Stevenson, a professor of planetary science at the California Institute of Technology and an author of the Science paper.

For many researchers, this tiny, shiny cue ball of a moon, just over 300 miles wide, is now the most promising place to look for life elsewhere in the solar system, even more than Mars.

“Definitely Enceladus,” said Larry W. Esposito, a professor of astrophysical and planetary sciences at the University of Colorado, who was not involved in the research. “Because there’s warm water right there now.”

Enceladus (pronounced en-SELL-a-dus) is caught in a gravitational tug of war between Saturn and another moon, Dione, which bends its icy outer layer, creating friction and heat. In the years since discovering the geysers, NASA’s Cassini spacecraft has made repeated flybys of Enceladus, photographing the fissures (nicknamed tiger stripes) where the geysers originate, measuring temperatures and identifying carbon-based organic molecules that could serve as building blocks for life.

Cassini has no instruments that can directly detect water beneath the surface, but three flybys in the years 2010-12 were devoted to producing a map of the gravity field, noting where the pull was stronger or weaker.

During the flybys, lasting just a few minutes, radio telescopes that are part of NASA’s Deep Space Network broadcast a signal to the spacecraft, which echoed it back to Earth. As the pull of Enceladus’s gravity sped and then slowed the spacecraft, the frequency of the radio signal shifted, just as the pitch of a train whistle rises and falls as it passes by a listener.

Using atomic clocks on Earth, the scientists measured the radio frequency with enough precision that they could discern changes in the velocity of Cassini, hundreds of millions of miles away, as minuscule as 14 inches an hour.
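
To get a feel for that precision, consider the Doppler shift such a velocity change produces. The 8.4 GHz figure below is an assumption (a typical X-band deep-space downlink frequency), not a number from the article:

```python
c = 299_792_458.0          # speed of light, m/s
f = 8.4e9                  # assumed X-band downlink frequency, Hz

v = 14 * 0.0254 / 3600     # 14 inches per hour, in m/s (~1e-4 m/s)
delta_f = f * v / c        # non-relativistic Doppler shift

print(f"{v:.2e} m/s -> shift of {delta_f:.2e} Hz out of {f:.1e} Hz")
# ~3 millihertz out of 8.4 GHz, a few parts in 10^13: hence the atomic clocks
```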

They found that the moon’s gravity was weaker at the south pole. At first glance, that is not so surprising; there is a depression at the pole, and lower mass means less gravity. But the depression is so large that the gravity should actually have been weaker still.

“Then you say, ‘A-ha, there must be compensation,’ ” Dr. Stevenson said. “Something more dense under the ice. The natural candidate is water.”

Liquid water is 8 percent denser than ice, so the presence of a sea 20 to 25 miles below the surface fits the gravity measurements. “It’s an ocean that extends in all directions from the south pole to about halfway to the equator,” Dr. Stevenson said.

The underground sea is up to six miles thick, much deeper than a lake. “It’s a lot more water than Lake Superior,” Dr. Stevenson said. “It may even be bigger. The ocean could extend all the way to the north pole.”

The conclusion was not a surprise, said Christopher P. McKay, a planetary scientist at NASA Ames Research Center in Mountain View, Calif., who studies the possibility of life on other worlds, but “it confirms in a really robust way what has been sort of the standard model.”

It also makes Enceladus a more attractive destination for a future mission, especially one that would collect samples from the plumes and return them to Earth to see if they contain any microbes.

Read the entire article here.

Image: View of Saturn’s moon Enceladus on July 14, 2005, from the Cassini spacecraft. Courtesy of NASA / JPL / Space Science Institute.

Are You in the 18 Percent? A Cave Beckons

la-mar-lab

According to a recent survey, 18 percent of U.S. citizens believe that the sun revolves around the earth. And, another survey suggests that 30 percent believe in the literal “truth” of the bible and 40 percent believe in intelligent design. The surveys, apparently, were of functioning adults.

I have to suspect that a similar number of adults believe in the fat-reducing power of soap.

A number of vociferous advocates of creationism-as-science have recently taken to the airwaves to demand equal time — believing their (pseudo)-scientific views should stand on a par with real science.

Neil deGrasse Tyson, astrophysicist and presenter of the re-made Cosmos series, recently provided his eloquent take on these scientific naysayers:

“If you don’t know science in the 21st century, just move back to the cave, because that’s where we’re going to leave you as we move forward.”

My hat is off to Mr. Tyson. His curt reply is far more apt than lengthy debate over nonsense: it is time for believers in the scientific method to just move on, and move ahead.

From Salon:

We Americans pride ourselves on our ideals of free speech. We believe in spirited back-and-forth and the notion that we are all entitled to our opinions. We stack our media coverage of news events with “opposing views.” These ideals are deeply rooted in our cultural character. And they’re making us stupid.

Ever since it debuted earlier this month, Neil deGrasse Tyson’s blockbuster, multi-network reboot of “Cosmos” has been ruffling feathers with its crazy, brazen tactic of putting scientific facts forward as the truth. It’s infuriated religious conservatives by furthering “the Scientific Martyr Myth of Giordano Bruno” within its “glossy multi-million-dollar piece of agitprop for scientific materialism.” And this weekend, creationist astronomer and Answers in Genesis bigwig Danny Faulkner complained about “Cosmos” on “The Janet Mefferd Show” that “Creationists aren’t even on the radar screen; they wouldn’t even consider us plausible at all” and that “Consideration of creation is definitely not up for discussion,” leading Mefferd to suggest equal time for the opposing views. But on “Late Night With Seth Meyers” last week, Neil deGrasse Tyson shrugged off the naysayers, noting, “If you don’t know science in the 21st century, just move back to the cave, because that’s where we’re going to leave you as we move forward.” This is why he’s a treasure — he has proven himself a consistent and elegant beacon of how to respond to extremists and crazy talk – by acknowledging it but not wasting breath arguing it.

We can go round and round in endless circles about social and philosophical issues. We can debate all day about matters of faith and religion, if you’re up for it. But well-established scientific principles don’t lend themselves well to conversations in which I say something based on hard physical evidence and carefully analyzed data, and then you shoot back with a bunch of spurious nonsense.

Read the entire article here.

Image courtesy of La-Mar Laboratories.

eLiquid eQuals ePoison

Nicotine3Dan2

Many smokers are weaning themselves off tobacco, leaving the perils of carcinogenic tar and ash behind. Some are kicking the smoking habit for good. Others are dashing headlong towards another risk to health — e-cigarettes with tobacco substitutes.

The most prominent new danger comes from a class of substances called eLiquids, particularly liquid nicotine. Just as the tobacco industry was in its early days, eLiquid producers are poorly controlled and their products unregulated. A teaspoon of concentrated nicotine, even absorbed through the skin, can kill. Caveat emptor!

From NYT:

A dangerous new form of a powerful stimulant is hitting markets nationwide, for sale by the vial, the gallon and even the barrel.

The drug is nicotine, in its potent, liquid form — extracted from tobacco and tinctured with a cocktail of flavorings, colorings and assorted chemicals to feed the fast-growing electronic cigarette industry.

These “e-liquids,” the key ingredients in e-cigarettes, are powerful neurotoxins. Tiny amounts, whether ingested or absorbed through the skin, can cause vomiting and seizures and even be lethal. A teaspoon of even highly diluted e-liquid can kill a small child.

But, like e-cigarettes, e-liquids are not regulated by federal authorities. They are mixed on factory floors and in the back rooms of shops, and sold legally in stores and online in small bottles that are kept casually around the house for regular refilling of e-cigarettes.

Evidence of the potential dangers is already emerging. Toxicologists warn that e-liquids pose a significant risk to public health, particularly to children, who may be drawn to their bright colors and fragrant flavorings like cherry, chocolate and bubble gum.

“It’s not a matter of if a child will be seriously poisoned or killed,” said Lee Cantrell, director of the San Diego division of the California Poison Control System and a professor of pharmacy at the University of California, San Francisco. “It’s a matter of when.”

Reports of accidental poisonings, notably among children, are soaring. Since 2011, there appears to have been one death in the United States, a suicide by an adult who injected nicotine. But less serious cases have led to a surge in calls to poison control centers. Nationwide, the number of cases linked to e-liquids jumped to 1,351 in 2013, a 300 percent increase from 2012, and the number is on pace to double this year, according to information from the National Poison Data System. Of the cases in 2013, 365 were referred to hospitals, triple the previous year’s number.

Examples come from across the country. Last month, a 2-year-old girl in Oklahoma City drank a small bottle of a parent’s nicotine liquid, started vomiting and was rushed to an emergency room.

That case and age group are considered typical. Of the 74 e-cigarette and nicotine poisoning cases called into Minnesota poison control in 2013, 29 involved children age 2 and under. In Oklahoma, all but two of the 25 cases in the first two months of this year involved children age 4 and under.

In terms of the immediate poison risk, e-liquids are far more dangerous than tobacco, because the liquid is absorbed more quickly, even in diluted concentrations.

“This is one of the most potent naturally occurring toxins we have,” Mr. Cantrell said of nicotine. But e-liquids are now available almost everywhere. “It is sold all over the place. It is ubiquitous in society.”

The surge in poisonings reflects not only the growth of e-cigarettes but also a shift in technology. Initially, many e-cigarettes were disposable devices that looked like conventional cigarettes. Increasingly, however, they are larger, reusable gadgets that can be refilled with liquid, generally a combination of nicotine, flavorings and solvents. In Kentucky, where about 40 percent of cases involved adults, one woman was admitted to the hospital with cardiac problems after her e-cigarette broke in her bed, spilling the e-liquid, which was then absorbed through her skin.

The problems with adults, like those with children, owe to carelessness and lack of understanding of the risks. In the cases of exposure in children, “a lot of parents didn’t realize it was toxic until the kid started vomiting,” said Ashley Webb, director of the Kentucky Regional Poison Control Center at Kosair Children’s Hospital.

The increased use of liquid nicotine has, in effect, created a new kind of recreational drug category, and a controversial one. For advocates of e-cigarettes, liquid nicotine represents the fuel of a technology that might prompt people to quit smoking, and there is anecdotal evidence that is happening. But there are no long-term studies about whether e-cigarettes will be better than nicotine gum or patches at helping people quit. Nor are there studies about the long-term effects of inhaling vaporized nicotine.

 Unlike nicotine gums and patches, e-cigarettes and their ingredients are not regulated. The Food and Drug Administration has said it plans to regulate e-cigarettes but has not disclosed how it will approach the issue. Many e-cigarette companies hope there will be limited regulation.

“It’s the wild, wild west right now,” said Chip Paul, chief executive officer of Palm Beach Vapors, a company based in Tulsa, Okla., that operates 13 e-cigarette franchises nationwide and plans to open 50 more this year. “Everybody fears F.D.A. regulation, but honestly, we kind of welcome some kind of rules and regulations around this liquid.”

Mr. Paul estimated that this year in the United States there will be sales of one million to two million liters of liquid used to refill e-cigarettes, and it is widely available on the Internet. Liquid Nicotine Wholesalers, based in Peoria, Ariz., charges $110 for a liter with 10 percent nicotine concentration. The company says on its website that it also offers a 55 gallon size. Vaporworld.biz sells a gallon at 10 percent concentrations for $195.

Read the entire story here.

Image: Nicotine molecule. Courtesy of Wikipedia.

The Inflaton and the Multiverse

multiverse-illustration

Last week’s announcement that cosmologists had found signals of primordial gravitational waves in the cosmic microwave background, the afterglow of the Big Bang, made many headlines, even on cable news. If verified by separate experiments, this will be ground-breaking news indeed, much like the discovery of the Higgs boson in 2012. Should the result stand, it may well pave the way for new physics and lend greater support to the multiverse theory. So, in addition to the notion that we may not be alone in the vast cosmos, we will have to consider that our universe may not be alone either!

From the New Scientist:

Wave hello to the multiverse? Ripples in the very fabric of the cosmos, unveiled this week, are allowing us to peer further back in time than anyone thought possible, showing us what was happening in the first slivers of a second after the big bang.

The discovery of these primordial waves could solidify the idea that our young universe went through a rapid growth spurt called inflation. And that theory is linked to the idea that the universe is constantly giving birth to smaller “pocket” universes within an ever-expanding multiverse.

The waves in question are called gravitational waves, and they appear in Einstein’s highly successful theory of general relativity (see “A surfer’s guide to gravitational waves”). On 17 March, scientists working with the BICEP2 telescope in Antarctica announced the first indirect detection of primordial gravitational waves. This version of the ripples was predicted to be visible in maps of the cosmic microwave background (CMB), the earliest light emitted in the universe, roughly 380,000 years after the big bang.

Repulsive gravity

The BICEP2 team had spent three years analysing CMB data, looking for a distinctive curling pattern called B-mode polarisation. These swirls indicate that the light of the CMB has been twisted, or polarised, into specific curling alignments. In two papers published online on the BICEP project website, the team said they have high confidence the B-mode pattern is there, and that they can rule out alternative explanations such as dust in our own galaxy, distortions caused by the gravity of other galaxies and errors introduced by the telescope itself. That suggests the swirls could have been left only by the very first gravitational waves being stretched out by inflation.

“If confirmed, this result would constitute the most important breakthrough in cosmology over the past 15 years. It will open a new window into the beginning of our universe and have fundamental implications for extensions of the standard model of physics,” says Avi Loeb at Harvard University. “If it is real, the signal will likely lead to a Nobel prize.”

And for some theorists, simply proving that inflation happened at all would be a sign of the multiverse.

“If inflation is there, the multiverse is there,” said Andrei Linde of Stanford University in California, who is not on the BICEP2 team and is one of the originators of inflationary theory. “Each observation that brings better credence to inflation brings us closer to establishing that the multiverse is real.” (Watch video of Linde being surprised with the news that primordial gravitational waves have been detected.)

The simplest models of inflation, which the BICEP2 results seem to support, require a particle called an inflaton to push space-time apart at high speed.

“Inflation depends on a kind of material that turns gravity on its head and causes it to be repulsive,” says Alan Guth at the Massachusetts Institute of Technology, another author of inflationary theory. Theory says the inflaton particle decays over time like a radioactive element, so for inflation to work, these hypothetical particles would need to last longer than the period of inflation itself. Afterwards, inflatons would continue to drive inflation in whatever pockets of the universe they inhabit, repeatedly blowing new universes into existence that then rapidly inflate before settling down. This “eternal inflation” produces infinite pocket universes to create a multiverse.

Quantum harmony

For now, physicists don’t know how they might observe the multiverse and confirm that it exists. “But when the idea of inflation was proposed 30 years ago, it was a figment of theoretical imagination,” says Marc Kamionkowski at Johns Hopkins University in Baltimore, Maryland. “What I’m hoping is that with these results, other theorists out there will start to think deeply about the multiverse, so that 20 years from now we can have a press conference saying we’ve found evidence of it.”

In the meantime, studying the properties of the swirls in the CMB might reveal details of what the cosmos was like just after its birth. The power and frequency of the waves seen by BICEP2 show that they were rippling through a particle soup with an energy of about 10^16 gigaelectronvolts, or 10 trillion times the peak energy expected at the Large Hadron Collider. At such high energies, physicists expect that three of the four fundamental forces in physics – the strong, weak and electromagnetic forces – would be merged into one.
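
An aside from this blog, not from the article: it is easy to get a feel for how much energy 10^16 gigaelectronvolts really is. A minimal sketch in Python, assuming only the standard conversion 1 eV = 1.602e-19 joules:

    # Convert the quoted 10^16 GeV energy scale into everyday units.
    JOULES_PER_EV = 1.602e-19
    EV_PER_GEV = 1e9

    gut_scale_gev = 1e16
    energy_joules = gut_scale_gev * EV_PER_GEV * JOULES_PER_EV
    print(f"{energy_joules:.2e} J")   # ~1.6e6 J, roughly a stick of dynamite

That is a macroscopic amount of energy concentrated at the scale of individual particles, which is why no conceivable accelerator could probe it directly.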

The detection is also the first whiff of quantum gravity, one of the thorniest puzzles in modern physics. Right now, theories of quantum mechanics can explain the behaviour of elementary particles and those three fundamental forces, but the equations fall apart when the fourth force, gravity, is added to the mix. Seeing gravitational waves in the CMB means that gravity is probably linked to a particle called the graviton, which in turn is governed by quantum mechanics. Finding these primordial waves won’t tell us how quantum mechanics and gravity are unified, says Kamionkowski. “But it does tell us that gravity obeys quantum laws.”

“For the first time, we’re directly testing an aspect of quantum gravity,” says Frank Wilczek at MIT. “We’re seeing gravitons imprinted on the sky.”

Waiting for Planck

Given the huge potential of these results, scientists will be eagerly anticipating polarisation maps from projects such as the POLARBEAR experiment in Chile or the South Pole Telescope. The next full-sky CMB maps from the Planck space telescope are also expected to include polarisation data. Seeing a similar signal from one or more of these experiments would shore up the BICEP2 findings, make a firm case for inflation, and boost hints of the multiverse and quantum gravity.

One possible wrinkle is that previous temperature maps of the CMB suggested that the signal from primordial gravitational waves should be much weaker than what BICEP2 is seeing. Those results set theorists bickering about whether inflation really happened and whether it could create a multiverse. Several physicists suggested that we scrap the idea entirely for a new model of cosmic birth.

Taken alone, the BICEP2 results give a strong-enough signal to clinch inflation and put the multiverse back in the game. But the tension with previous maps is worrying, says Paul Steinhardt at Princeton University, who helped to develop the original theory of inflation but has since grown sceptical of it.

“If you look at the best-fit models with the new data added, they’re bizarre,” Steinhardt says. “If it remains like that, it requires adding extra fields, extra parameters, and you get really rather ugly-looking models.”

Forthcoming data from Planck should help resolve the issue, and we may not have long to wait. Olivier Doré at the California Institute of Technology is a member of the Planck collaboration. He says that the BICEP2 results are strong and that his group should soon be adding their data to the inflation debate: “Planck in particular will have something to say about it as soon as we publish our polarisation result in October 2014.”

Read the entire article here.

Image: Multiverse illustration. Courtesy of National Geographic.

Meet the Indestructible Life-form

water-bear

Meet the water bear or tardigrade. It may not be pretty, but it’s as close to indestructible as any life-form may ever come.

Cool it to a mere 1 degree above absolute zero or -458 F and it lives on. Heat it to 300 F and it lives on. Throw it out into the vacuum of space and it lives on. Irradiate it with hundreds of times the radiation that would kill a human and it lives on. Dehydrate it to 3 percent of its normal water content and it lives on.

From Wired:

In 1933, the owner of a New York City speakeasy and three cronies embarked on a rather unoriginal scheme to make a quick couple grand: Take out three life insurance policies on the bar’s deepest alcoholic, Mike Malloy, then kill him.

First, they pumped him full of ungodly amounts of liquor. When that didn’t work, they poisoned the hooch. Mike didn’t mind. Then came the sandwiches of rotten sardines and broken glass and metal shavings. Mike reportedly loved them. Next they dropped him in the snow and poured cold water on him. It didn’t faze Mike. Then they ran him over with a cab, which only broke his arm. The conspirators finally succeeded when they boozed Mike up, ran a tube down his throat, and pumped him full of carbon monoxide.

They don’t come much tougher than Mike the Durable, as he is remembered. Except in the microscopic world beneath our feet, where there lives what is perhaps the toughest creature on Earth: the tardigrade. Also known as the water bear (because it looks like an adorable little many-legged bear), this exceedingly tiny critter has an incredible resistance to just about everything. Go ahead and boil it, freeze it, irradiate it, and toss it into the vacuum of space — it won’t die. If it were big enough to eat a glass sandwich, it probably could survive that too.

The water bear’s trick is something called cryptobiosis, in which it brings its metabolic processes nearly to a halt. In this state it can dehydrate to 3 percent of its normal water content in what is called desiccation, becoming a husk of its former self. But just add water and the tardigrade roars back to life like Mike the Durable emerging from a bender and continues trudging along, puncturing algae and other organisms with a mouthpart called a stylet and sucking out the nutrients.

“They are probably the most extreme survivors that we know of among animals,” said biologist Bob Goldstein of the University of North Carolina at Chapel Hill. “People talk about cockroaches surviving anything. I think long after the cockroaches would be killed we’d still have dried water bears that could be rehydrated and be alive.”

“Is It Cold in Here?” Asked a Water Bear NEVER

This hibernation of sorts isn’t happening for a single season, like a true bear (tardigrades are invertebrates). As far as scientists can tell, water bears can be dried out for at least a decade and still revivify, only to find their clothes are suddenly out of style.

Mike the Durable did just fine in the freezing cold, but the temperatures the water bear endures in cryptobiosis defy belief. It can survive in a lab environment of just 1 degree kelvin. That’s an astonishing -458 degrees Fahrenheit, where matter goes bizarro, with gases becoming liquids and liquids becoming solids.
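
For anyone who wants to check that unit conversion, the standard kelvin-to-Fahrenheit formula is all that is needed; a quick sketch in Python:

    # Verify the -458 F figure quoted for 1 kelvin.
    def kelvin_to_fahrenheit(k):
        return (k - 273.15) * 9.0 / 5.0 + 32.0

    print(kelvin_to_fahrenheit(1.0))   # -457.87, which rounds to the -458 F above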

At this temperature the movements of the normally frenzied atoms inside the water bear come almost to a standstill, yet the creature endures. And that’s all the more incredible when you consider that the water bear indeed has a brain, a relatively simple one, sure, but a brain that somehow emerges from this unscathed.

Water bears also can tolerate pressures six times that of the deepest oceans. And a few of them once survived an experiment that subjected them to 10 days exposed to the vacuum of space. (While we’re on the topic, humans can survive for a couple minutes, max. One poor fellow at NASA accidentally depressurized his suit in a vacuum chamber in 1965 and lost consciousness after 15 seconds. When he woke up, he said his last memory was feeling the water on his tongue boiling, which I’m guessing felt a bit like Pop Rocks, only somehow even worse for your body.)

Anyway, tardigrades. They can take hundreds of times the radiation that would kill a human. Water bears don’t mind hot water either–like, 300 degrees Fahrenheit hot. So the question is: why? Why evolve to survive the kind of cold that only scientists can create in a lab, and pressures that have never even existed on our planet?

Water bears don’t even necessarily inhabit extreme habitats like, say, boiling springs where certain bacteria proliferate. Therefore the term “extremophile” that has been applied to tardigrades over the years isn’t entirely accurate. Just because they’re capable of surviving these harsh environments doesn’t mean they seek them out.

They actually prefer regular old dirt and sand and moss all over the world. I mean, would you rather stay in a Motel 6 in a lake of boiling acidic water or lounge around on a beach resort and drink algae cocktails? (Why this isn’t a BuzzFeed quiz yet is beyond me. It’s gold. There’s untold billions of water bears on Earth. Page views, BuzzFeed. What’s the sound of a billion water bears clicking? Boom, another quiz.)

But that isn’t to say there aren’t troubles in the tardigrade version of paradise. “If you’re living in dirt,” said Goldstein, “there’s a danger of desiccation all the time.” If, say, the sun starts drying out the surface, one option is to move farther down into the soil. But “if you go too far down, there’s not going to be much food. So they really probably have to live in a fringe where they need to get food, but there’s always danger of drying out.”

A Tiny Superhero That Could One Day Save Your Life

And so it could be that the water bear’s incredible feats of survival may simply stem from a tough life in the dirt. But there’s also the question of how it does this, and it’s a perplexing one at that. Goldstein’s lab is researching this, and he reckons that water bears don’t just have one simple trick, but a range of strategies to be able to endure drying out and eventually reanimating.

“There’s one that we know of, which is some animals that survive drying make a sugar called trehalose,” he said. “And trehalose sort of replaces water as they dry down, so it will make glassy surfaces where normally water would be sitting. That probably helps prevent a lot of the damage that normally occurs when you dry something down or when you rehydrate it.” Not all of the 1,000 or so species of water bears produce this sugar though, he says, so there must be some other trick going on.

Ironically enough, these incredibly hardy creatures are very difficult to grow in the lab, but Goldstein has had great success where many others have failed. And, like so many great things in this world, it all began in a shed in England, where a regular old chap mastered their breeding to sell them to local schools for scientific experiments. He was so good at it, in fact, that he never needed to venture out to recollect specimens. And their descendants now crawl around Goldstein’s lab, totally unaware of how incredibly lucky they are to not be tortured by school children day in and day out.

A scanning electron micrograph of three awkwardly cuddling water bears. “You know what they say: Two’s company, three’s a crowd. We’re looking at you, Paul. Seriously though, Paul. You need to scram.” Image: Willow Gabriel

“Some organisms just can’t be raised in labs,” Goldstein said. “You bring them in and try to mimic what’s going on outside and they just don’t grow up. So we were lucky, actually, people were having a hard time growing water bears in labs continuously. And this guy in England had figured it out.”

Thanks to this breakthrough, Goldstein and other scientists are exploring the possibility of utilizing the water bear as science’s next fruit fly, that ubiquitous test subject that has yielded so many momentous discoveries. The water bear’s small size means you can pack a ton of them into a lab, plus they reproduce quickly and have a relatively compact genome to work with. Also, they’re way cuter than fruit flies and they don’t fall into your sodas and stuff.

Read the entire article here.

Image: A scanning electron micrograph of a water bear.  Courtesy: Bob Goldstein and Vicky Madden / Wired.

Gravity Makes Some Waves

[tube]ZlfIVEy_YOA[/tube]

Gravity, the movie, made some “waves” at the recent Academy Awards ceremony in Hollywood. But the real star here is gravity itself, the force that holds all macroscopic things in the cosmos together, and the waves in question are real gravitational waves. A long-running experiment based at the South Pole has discerned a signal in the Cosmic Microwave Background that points to the existence of gravitational waves. This is a discovery of great significance, if upheld, and would confirm the inflationary theory of our universe’s exponential expansion just after the Big Bang. The theorists who first proposed this remarkable hypothesis, Alan Guth (1979) and Andrei Linde (1981), are probably popping some champagne right now.

From the New Statesman:

The announcement yesterday that scientists working on the BICEP2 experiment in Antarctica had detected evidence of “inflation” may not appear incredible, but it is. It appears to confirm longstanding hypotheses about the Big Bang and the earliest moments of our universe, and could open a new path to resolving some of physics’ most difficult mysteries.

Here’s the explainer. BICEP2, near the South Pole (where the sky is clearest of pollution), was scanning the visible universe for cosmic background radiation – that is, the fuzzy warmth left over from the Big Bang. It’s the oldest light in the universe, and as such our maps of it are our oldest glimpses of the young universe. Here’s a map created with data collected by the ESA’s Planck Surveyor probe last year:

ESA-Planck-Surveyor-image

What should be clear from this is that the universe is remarkably flat and regular – that is, there aren’t massive clumps of radiation in some areas and gaps in others. This doesn’t quite make intuitive sense.

If the Big Bang really was a chaotic event, with energy and matter being created and destroyed within tiny fractions of nanoseconds, then we would expect the net result to be a universe that’s similarly chaotic in its structure. Something happened to smooth everything out, and that something is inflation.

Inflation assumes that something must have happened to the rate of expansion of the universe, somewhere between 10^-35 and 10^-32 seconds after the big bang, to make it massively increase. It would mean that the size of the “lumps” would outpace the rate at which they appear in the cosmos, smoothing them out.

For an analogy, imagine if the Moon was suddenly stretched out to the size of the Sun. You’d see – just before it collapsed in on itself – that its rifts and craters had become, relative to its new size, barely perceptible. Just like a sheet being pulled tightly on a bed, a chaotic structure becomes more uniform.

Inflation, first theorised by Alan Guth in 1979 and refined by Andrei Linde in 1981, became the best hypothesis to explain what we were observing in the universe. It also seemed to offer a way to better understand how dark energy drove the expansion of the Big Bang, and even possibly point a way towards unifying quantum mechanics with general relativity. That is, if it was correct. And there have been plenty of theories which tied up some loose ends only to come apart with further observation.

The key evidence needed to verify inflation would be in the form of gravitational waves – that is, ripples in spacetime. Such waves were a part of Einstein’s theory of general relativity, and in the 90s scientists observed some for the first time, but until now there’s never been any evidence of them from inside the cosmic background radiation.

BICEP2, though, has found that evidence, and with it scientists now have a crucial piece of evidence that can falsify other theories about the early universe and potentially open up entirely new areas of investigation. This is why it’s being compared with the discovery of the Higgs Boson last year, as just as that particle was fundamental to our understanding of molecular physics, so too is inflation to our understanding of the wider universe.

Read the entire article here.

Video: Physicist Chao-Lin Kuo delivers news of results from his gravitational wave experiment, and Professor Andrei Linde reacts to the discovery, March 17, 2014. Courtesy of Stanford University.

Time Traveling Camels

camels_at_giza

Camels had no place in the Middle East of early biblical times. Forensic scientists, biologists, archeologists, geneticists and paleontologists all seem to agree that domesticated camels could not have been present during the events of Genesis and the early Old Testament; camels trotted into the region many hundreds of years later.

From the NYT:

There are too many camels in the Bible, out of time and out of place.

Camels probably had little or no role in the lives of such early Jewish patriarchs as Abraham, Jacob and Joseph, who lived in the first half of the second millennium B.C., and yet stories about them mention these domesticated pack animals more than 20 times. Genesis 24, for example, tells of Abraham’s servant going by camel on a mission to find a wife for Isaac.

These anachronisms are telling evidence that the Bible was written or edited long after the events it narrates and is not always reliable as verifiable history. These camel stories “do not encapsulate memories from the second millennium,” said Noam Mizrahi, an Israeli biblical scholar, “but should be viewed as back-projections from a much later period.”

Dr. Mizrahi likened the practice to a historical account of medieval events that veers off to a description of “how people in the Middle Ages used semitrailers in order to transport goods from one European kingdom to another.”

For two archaeologists at Tel Aviv University, the anachronisms were motivation to dig for camel bones at an ancient copper smelting camp in the Aravah Valley in Israel and in Wadi Finan in Jordan. They sought evidence of when domesticated camels were first introduced into the land of Israel and the surrounding region.

The archaeologists, Erez Ben-Yosef and Lidar Sapir-Hen, used radiocarbon dating to pinpoint the earliest known domesticated camels in Israel to the last third of the 10th century B.C. — centuries after the patriarchs lived and decades after the kingdom of David, according to the Bible. Some bones in deeper sediments, they said, probably belonged to wild camels that people hunted for their meat. Dr. Sapir-Hen could identify a domesticated animal by signs in leg bones that it had carried heavy loads.
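
The radiocarbon dating behind this result rests on a simple exponential decay law. Here is a minimal sketch in Python, using the accepted carbon-14 half-life of 5,730 years; the 70 percent input is illustrative, not a figure from the study:

    import math

    HALF_LIFE_YEARS = 5730.0   # carbon-14

    def radiocarbon_age(fraction_remaining):
        # Solve fraction = (1/2) ** (t / half_life) for t.
        return -HALF_LIFE_YEARS * math.log(fraction_remaining) / math.log(2.0)

    print(radiocarbon_age(0.70))   # ~2,950 years, reaching back to roughly the 10th century B.C.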

The findings were published recently in the journal Tel Aviv and in a news release from Tel Aviv University. The archaeologists said that the origin of the domesticated camel was probably in the Arabian Peninsula, which borders the Aravah Valley. Egyptians exploited the copper resources there and probably had a hand in introducing the camels. Earlier, people in the region relied on mules and donkeys as their beasts of burden.

“The introduction of the camel to our region was a very important economic and social development,” Dr. Ben-Yosef said in a telephone interview. “The camel enabled long-distance trade for the first time, all the way to India, and perfume trade with Arabia. It’s unlikely that mules and donkeys could have traversed the distance from one desert oasis to the next.”

Dr. Mizrahi, a professor of Hebrew culture studies at Tel Aviv University who was not directly involved in the research, said that by the seventh century B.C. camels had become widely employed in trade and travel in Israel and through the Middle East, from Africa as far as India. The camel’s influence on biblical research was profound, if confusing, for that happened to be the time that the patriarchal stories were committed to writing and eventually canonized as part of the Hebrew Bible.

“One should be careful not to rush to the conclusion that the new archaeological findings automatically deny any historical value from the biblical stories,” Dr. Mizrahi said in an email. “Rather, they established that these traditions were indeed reformulated in relatively late periods after camels had been integrated into the Near Eastern economic system. But this does not mean that these very traditions cannot capture other details that have an older historical background.”

Read the entire article here.

Image: Camels at the Great Pyramid of Giza, Egypt. Courtesy of Wikipedia.

Is Your City Killing You?

The stresses of modern-day living are taking a toll on your mind and body, and more so if you happen to live in a concrete jungle. The effects are even more pronounced for those of us living in the largest urban centers. That’s the finding of some fascinating new brain research out of Germany. The researchers’ simple prescription for a lower-stress life: move to the countryside.

From The Guardian:

You are lying down with your head in a noisy and tightfitting fMRI brain scanner, which is unnerving in itself. You agreed to take part in this experiment, and at first the psychologists in charge seemed nice.

They set you some rather confusing maths problems to solve against the clock, and you are doing your best, but they aren’t happy. “Can you please concentrate a little better?” they keep saying into your headphones. Or, “You are among the worst performing individuals to have been studied in this laboratory.” Helpful things like that. It is a relief when time runs out.

Few people would enjoy this experience, and indeed the volunteers who underwent it were monitored to make sure they had a stressful time. Their minor suffering, however, provided data for what became a major study, and a global news story. The researchers, led by Dr Andreas Meyer-Lindenberg of the Central Institute of Mental Health in Mannheim, Germany, were trying to find out more about how the brains of different people handle stress. They discovered that city dwellers’ brains, compared with people who live in the countryside, seem not to handle it so well.

To be specific, while Meyer-Lindenberg and his accomplices were stressing out their subjects, they were looking at two brain regions: the amygdalas and the perigenual anterior cingulate cortex (pACC). The amygdalas are known to be involved in assessing threats and generating fear, while the pACC in turn helps to regulate the amygdalas. In stressed citydwellers, the amygdalas appeared more active on the scanner; in people who lived in small towns, less so; in people who lived in the countryside, least of all.

And something even more intriguing was happening in the pACC. Here the important relationship was not with where the subjects lived at the time, but where they grew up. Again, those with rural childhoods showed the least active pACCs, those with urban ones the most. In the urban group moreover, there seemed not to be the same smooth connection between the behaviour of the two brain regions that was observed in the others. An erratic link between the pACC and the amygdalas is often seen in those with schizophrenia too. And schizophrenic people are much more likely to live in cities.

When the results were published in Nature, in 2011, media all over the world hailed the study as proof that cities send us mad. Of course it proved no such thing – but it did suggest it. Even allowing for all the usual caveats about the limitations of fMRI imaging, the small size of the study group and the huge holes that still remained in our understanding, the results offered a tempting glimpse at the kind of urban warping of our minds that some people, at least, have linked to city life since the days of Sodom and Gomorrah.

The year before the Meyer-Lindenberg study was published, the existence of that link had been established still more firmly by a group of Dutch researchers led by Dr Jaap Peen. In their meta-analysis (essentially a pooling together of many other pieces of research) they found that living in a city roughly doubles the risk of schizophrenia – around the same level of danger that is added by smoking a lot of cannabis as a teenager.

At the same time urban living was found to raise the risk of anxiety disorders and mood disorders by 21% and 39% respectively. Interestingly, however, a person’s risk of addiction disorders seemed not to be affected by where they live. At one time it was considered that those at risk of mental illness were just more likely to move to cities, but other research has now more or less ruled that out.

So why is it that the larger the settlement you live in, the more likely you are to become mentally ill? Another German researcher and clinician, Dr Mazda Adli, is a keen advocate of one theory, which implicates that most paradoxical urban mixture: loneliness in crowds. “Obviously our brains are not perfectly shaped for living in urban environments,” Adli says. “In my view, if social density and social isolation come at the same time and hit high-risk individuals … then city-stress related mental illness can be the consequence.”

Read the entire story here.

Mars Emigres Beware

MRO-Mars-impact-crater

The planners behind the proposed, private Mars One mission are still targeting 2024 for an initial settlement on the Red Planet, now a mere 10 years away. As of this writing, the field of potential settlers has been whittled down to around 2,000 from an initial pool of about 250,000 would-be explorers. While the selection process and planning continue, other objects continue to target Mars as well. Large space rocks seem to be hitting the planet more often, and more recently, than was first thought. So while such impacts are both beautiful and scientifically valuable, they may be rather unwelcome to the forthcoming human Martians.

From ars technica:

Yesterday [February 5, 2014], the team that runs the HiRISE camera on the Mars Reconnaissance Orbiter released the photo shown above. It’s a new impact crater on Mars, formed sometime early this decade. The crater at the center is about 30 meters in diameter, and the material ejected during its formation extends out as far as 15 kilometers.

The impact was originally spotted by the MRO’s Context Camera, a wide-field imaging system that (wait for it) provides the context—an image of the surrounding terrain—for the high-resolution images taken by HiRISE. The time window on the impact, between July 2010 and May 2012, simply represents the time between two different Context Camera photos of the same location. Once the crater was spotted, it took until November of 2013 for another pass of the region, at which point HiRISE was able to image it.

Read the entire article here.

Image: Impact crater from Mars Reconnaissance Orbiter. Courtesy of NASA / JPL.

13.6 Billion Versus 4004 BCE

The first number, 13.6 billion, is the age in years of the oldest known star in the cosmos. It was discovered recently by astronomers at the Australian National University using the SkyMapper telescope. The star is located in our Milky Way galaxy, about 6,000 light years away. A little closer to home, in Kentucky at the aptly named Creation Museum, the Synchronological Chart places the beginning of time and all things at 4004 BCE.

Interestingly enough, neither Australia nor Kentucky should exist according to the flat earth myth, or the widespread pre-Columbus view of a world with an edge at the visible horizon. Yet the evolution versus creationism debates continue unabated, and the chasm between the two camps remains a mere 13.6 billion years, give or take a handful of millennia. Perhaps over time those who subscribe to reason and the scientific method will prevail: an apt example of survival of the most adaptable at work.

Hitch, we still miss you!

From ars technica:

In 1878, the American scholar and minister Sebastian Adams put the final touches on the third edition of his grandest project: a massive Synchronological Chart that covers nothing less than the entire history of the world in parallel, with the deeds of kings and kingdoms running along together in rows over 25 horizontal feet of paper. When the chart reaches 1500 BCE, its level of detail becomes impressive; at 400 CE it becomes eyebrow-raising; at 1300 CE it enters the realm of the wondrous. No wonder, then, that in their 2010 book Cartographies of Time: A History of the Timeline, authors Daniel Rosenberg and Anthony Grafton call Adams’ chart “nineteenth-century America’s surpassing achievement in complexity and synthetic power… a great work of outsider thinking.”

The chart is also the last thing that visitors to Kentucky’s Creation Museum see before stepping into the gift shop, where full-sized replicas can be purchased for $40.

That’s because, in the world described by the museum, Adams’ chart is more than a historical curio; it remains an accurate timeline of world history. Time is said to have begun in 4004 BCE with the creation of Adam, who went on to live for 930 more years. In 2348 BCE, the Earth was then reshaped by a worldwide flood, which created the Grand Canyon and most of the fossil record even as Noah rode out the deluge in an 81,000 ton wooden ark. Pagan practices at the eight-story high Tower of Babel eventually led God to cause a “confusion of tongues” in 2247 BCE, which is why we speak so many different languages today.

Adams notes on the second panel of the chart that “all the history of man, before the flood, extant, or known to us, is found in the first six chapters of Genesis.”

Ken Ham agrees. Ham, CEO of Answers in Genesis (AIG), has become perhaps the foremost living young Earth creationist in the world. He has authored more books and articles than seems humanly possible and has built AIG into a creationist powerhouse. He also made national headlines when the slickly modern Creation Museum opened in 2007.

He has also been looking for the opportunity to debate a prominent supporter of evolution.

And so it was that, as a severe snow and sleet emergency settled over the Cincinnati region, 900 people climbed into cars and wound their way out toward the airport to enter the gates of the Creation Museum. They did not come for the petting zoo, the zip line, or the seasonal camel rides, nor to see the animatronic Noah chortle to himself about just how easy it had really been to get dinosaurs inside his ark. They did not come to see The Men in White, a 22-minute movie that plays in the museum’s halls in which a young woman named Wendy sees that what she’s been taught about evolution “doesn’t make sense” and is then visited by two angels who help her understand the truth of six-day special creation. They did not come to see the exhibits explaining how all animals had, before the Fall of humanity into sin, been vegetarians.

They came to see Ken Ham debate TV presenter Bill Nye the Science Guy—an old-school creation v. evolution throwdown for the Powerpoint age. Even before it began, the debate had been good for both men. Traffic to AIG’s website soared by 80 percent, Nye appeared on CNN, tickets sold out in two minutes, and post-debate interviews were lined up with Piers Morgan Live and MSNBC.

While plenty of Ham supporters filled the parking lot, so did people in bow ties and “Bill Nye is my Homeboy” T-shirts. They all followed the stamped dinosaur tracks to the museum’s entrance, where a pack of AIG staffers wearing custom debate T-shirts stood ready to usher them into “Discovery Hall.”

Security at the Creation Museum is always tight; the museum’s security force is made up of sworn (but privately funded) Kentucky peace officers who carry guns, wear flat-brimmed state trooper-style hats, and operate their own K-9 unit. For the debate, Nye and Ham had agreed to more stringent measures. Visitors passed through metal detectors complete with secondary wand screenings, packages were prohibited in the debate hall itself, and the outer gates were closed 15 minutes before the debate began.

Inside the hall, packed with bodies and the blaze of high-wattage lights, the temperature soared. The empty stage looked—as everything at the museum does—professionally designed, with four huge video screens, custom debate banners, and a pair of lecterns sporting Mac laptops. 20 different video crews had set up cameras in the hall, and 70 media organizations had registered to attend. More than 10,000 churches were hosting local debate parties. As AIG technical staffers made final preparations, one checked the YouTube-hosted livestream—242,000 people had already tuned in before start time.

An AIG official took the stage eight minutes before start time. “We know there are people who disagree with each other in this room,” he said. “No cheering or—please—any disruptive behavior.”

At 6:59pm, the music stopped and the hall fell silent but for the suddenly prominent thrumming of the air conditioning. For half a minute, the anticipation was electric, all eyes fixed on the stage, and then the countdown clock ticked over to 7:00pm and the proceedings snapped to life. Nye, wearing his traditional bow tie, took the stage from the left; Ham appeared from the right. The two shook hands in the center to sustained applause, and CNN’s Tom Foreman took up his moderating duties.

Ham had won the coin toss backstage and so stepped to his lectern to deliver brief opening remarks. “Creation is the only viable model of historical science confirmed by observational science in today’s modern scientific era,” he declared, blasting modern textbooks for “imposing the religion of atheism” on students.

“We’re teaching people to think critically!” he said. “It’s the creationists who should be teaching the kids out there.”

And we were off.

Two kinds of science

Digging in the fossil fields of Colorado or North Dakota, scientists regularly uncover the bones of ancient creatures. No one doubts the existence of the bones themselves; they lie on the ground for anyone to observe or weigh or photograph. But in which animal did the bones originate? How long ago did that animal live? What did it look like? One of Ham’s favorite lines is that the past “doesn’t come with tags”—so the prehistory of a stegosaurus thigh bone has to be interpreted by scientists, who use their positions in the present to reconstruct the past.

For mainstream scientists, this is simply an obvious statement of our existential position. Until a real-life Dr. Emmett “Doc” Brown finds a way to power a DeLorean with a 1.21 gigawatt flux capacitor in order to shoot someone back through time to observe the flaring-forth of the Universe, the formation of the Earth, or the origins of life, the prehistoric past can’t be known except by interpretation. Indeed, this isn’t true only of prehistory; as Nye tried to emphasize, forensic scientists routinely use what they know of nature’s laws to reconstruct past events like murders.

For Ham, though, science is broken into two categories, “observational” and “historical,” and only observational science is trustworthy. In the initial 30 minute presentation of his position, Ham hammered the point home.

“You don’t observe the past directly,” he said. “You weren’t there.”

Ham spoke with the polish of a man who has covered this ground a hundred times before, has heard every objection, and has a smooth answer ready for each one.

When Bill Nye talks about evolution, Ham said, that’s “Bill Nye the Historical Science Guy” speaking—with “historical” being a pejorative term.

In Ham’s world, only changes that we can observe directly are the proper domain of science. Thus, when confronted with the issue of speciation, Ham readily admits that contemporary lab experiments on fast-breeding creatures like mosquitoes can produce new species. But he says that’s simply “micro-evolution” below the family level. He doesn’t believe that scientists can observe “macro-evolution,” such as the alteration of a lobe-finned fish into a tiger over millions of years.

Because they can’t see historical events unfold, scientists must rely on reconstructions of the past. Those might be accurate, but they simply rely on too many “assumptions” for Ham to trust them. When confronted during the debate with evidence from ancient trees which have more rings than there are years on the Adams Synchronological Chart, Ham simply shrugged.

“We didn’t see those layers laid down,” he said.

To him, the calculus of “one ring, one year” is merely an assumption when it comes to the past—an assumption possibly altered by cataclysmic events such as Noah’s flood.

In other words, “historical science” is dubious; we should defer instead to the “observational” account of someone who witnessed all past events: God, said to have left humanity an eyewitness account of the world’s creation in the book of Genesis. All historical reconstructions should thus comport with this more accurate observational account.

Mainstream scientists don’t recognize this divide between observational and historical ways of knowing (much as they reject Ham’s distinction between “micro” and “macro” evolution). Dinosaur bones may not come with tags, but neither does observed contemporary reality—think of a doctor presented with a set of patient symptoms, who then has to interpret what she sees in order to arrive at a diagnosis.

Given that the distinction between two kinds of science provides Ham’s key reason for accepting the “eyewitness account” of Genesis as a starting point, it was unsurprising to see Nye take generous whacks at the idea. You can’t observe the past? “That’s what we do in astronomy,” said Nye in his opening presentation. Since light takes time to get here, “All we can do in astronomy is look at the past. By the way, you’re looking at the past right now.”

Those in the present can study the past with confidence, Nye said, because natural laws are generally constant and can be used to extrapolate into the past.

“This idea that you can separate the natural laws of the past from the natural laws you have now is at the heart of our disagreement,” Nye said. “For lack of a better word, it’s magical. I’ve appreciated magic since I was a kid, but it’s not what we want in mainstream science.”

How do scientists know that these natural laws are correctly understood in all their complexity and interplay? What operates as a check on their reconstructions? That’s where the predictive power of evolutionary models becomes crucial, Nye said. Those models of the past should generate predictions which can then be verified—or disproved—through observations in the present.

Read the entire article here.

Wolfgang Pauli’s Champagne

Pauli

Austrian theoretical physicist Wolfgang Pauli dreamed up neutrinos in 1930, and famously bet a case of fine champagne that these ghostly elementary particles would never be found. Pauli lost the bet in 1956. Since then, researchers have made great progress, both theoretical and experimental, in delving into the neutrino’s secrets. Two new books describe the ongoing quest.

From the Economist:

Neutrinos are weird. The wispy particles are far more abundant than the protons and electrons that make up atoms. Billions of them stream through every square centimetre of Earth’s surface each second, but they leave no trace and rarely interact with anything. Yet scientists increasingly agree that they could help unravel one of the biggest mysteries in physics: why the cosmos is made of matter.

Neutrinos’ scientific history is also odd, as two new books explain. The first is “Neutrino Hunters” by Ray Jayawardhana, a professor of astrophysics at the University of Toronto (and a former contributor to The Economist). The second, “The Perfect Wave”, is by Heinrich Päs, a neutrino theorist from the Technical University in the German city of Dortmund.

The particles were dreamed up in 1930 by Wolfgang Pauli, an Austrian, to account for energy that appeared to go missing in a type of radioactivity known as beta decay. Pauli apologised for what was a bold idea at a time when physicists knew of just two subatomic particles (protons and electrons), explaining that the missing energy was carried away by a new, electrically neutral and, he believed, undetectable subatomic species. He bet a case of champagne that it would never be found.

Pauli lost the wager in 1956 to two Americans, Frederick Reines and Clyde Cowan. The original experiment they came up with to test the hypothesis was unorthodox. It involved dropping a detector down a shaft within 40 metres of an exploding nuclear bomb, which would act as a source of neutrinos. Though Los Alamos National Laboratory approved the experiment, the pair eventually chose a more practical approach and buried a detector near a powerful nuclear reactor at Savannah River, South Carolina, instead. (Most neutrino detectors are deep underground to shield them from cosmic rays, which can cause similar signals.)

However, as other experiments, in particular those looking for neutrinos in the physical reactions which power the sun, strove to replicate Reines’s and Cowan’s result, they hit a snag. The number of solar neutrinos they recorded was persistently just one third of what theory said the sun ought to produce. Either the theorists had made a mistake, the thinking went, or the experiments had gone awry.

In fact, both were right all along. It was the neutrinos that, true to form, behaved oddly. As early as 1957 Bruno Pontecorvo, an Italian physicist who had defected to the Soviet Union seven years earlier, suggested that neutrinos could come in different types, known to physicists as “flavours”, and that they morph from one type to another on their way from the sun to Earth. Other scientists were sceptical. Their blueprint for how nature works at the subatomic level, called the Standard Model, assumed that neutrinos have no mass. This, as Albert Einstein showed, is the same as saying they travel at the speed of light. On reaching that speed time stops. If neutrinos switch flavours they would have to experience change, and thus time. That means they would have to be slower than light. In other words, they would have mass. (A claim in 2011 by Italian physicists working with CERN, Europe’s main physics laboratory, that neutrinos broke Einstein’s speed limit turned out to be the result of a loose cable.)

Pontecorvo’s hypothesis was proved only in 1998, in Japan. Others have since confirmed the phenomenon known as “oscillation”. The Standard Model had to be tweaked to make room for neutrino mass. But scientists still have little idea about how much any of the neutrinos actually weigh, besides being at least 1m times lighter than an electron.
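
The oscillation the article describes has a compact textbook form. In the two-flavour vacuum approximation, the chance that a neutrino keeps its original flavour depends on a mixing angle, the mass-squared splitting, and the ratio of distance travelled to energy. A toy sketch in Python, using the conventional units (eV^2 for the splitting, km for distance, GeV for energy), with maximal mixing and an atmospheric-scale splitting chosen purely for illustration:

    import math

    def survival_probability(theta, dm2_ev2, length_km, energy_gev):
        # Two-flavour vacuum oscillation; the 1.27 absorbs the unit conversions.
        phase = 1.27 * dm2_ev2 * length_km / energy_gev
        return 1.0 - math.sin(2.0 * theta) ** 2 * math.sin(phase) ** 2

    for length_km in (0.0, 250.0, 500.0, 1000.0):
        print(length_km, survival_probability(math.pi / 4.0, 2.5e-3, length_km, 1.0))

The survival probability swings between one and zero as the baseline grows. A persistent deficit, like the famous one-third solar result, is the statistical footprint of this kind of swing, though the real solar case also involves matter effects.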

The answer to the weight question, as well as a better understanding of neutrino oscillations, may help solve the puzzle of why the universe is full of matter. One explanation boffins like a lot because of its elegant maths invokes a whole new category of “heavy” neutrino decaying more readily into matter than antimatter. If that happened a lot when the universe began, then there would have been more matter around than antimatter, and when the matter and antimatter annihilated each other, as they are wont to do, some matter (ie, everything now visible) would be left over. The lighter the known neutrinos, according to this “seesaw” theory, the heftier the heavy sort would have to be. A heavy neutrino has yet to be observed, and may well, as Pauli described it, be unobservable. But a better handle on the light variety, Messrs Jayawardhana and Päs both agree, may offer important clues.
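
The seesaw logic can be written in one line of algebra: the light mass is roughly a Dirac mass squared divided by the heavy mass, so pushing one end down pushes the other up. A back-of-envelope sketch with assumed, illustrative inputs (an electroweak-scale Dirac mass and a light neutrino mass consistent with oscillation data):

    # Seesaw estimate: m_light ~ m_dirac**2 / m_heavy, rearranged for m_heavy.
    m_dirac_ev = 100e9    # ~100 GeV, expressed in eV (an assumed input)
    m_light_ev = 0.05     # eV (also assumed)

    m_heavy_ev = m_dirac_ev ** 2 / m_light_ev
    print(f"{m_heavy_ev / 1e9:.1e} GeV")   # ~2e14 GeV, hopelessly beyond any collider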

These two books complement each other. Mr Jayawardhana’s is stronger on the history (though his accounts of the neutrino hunters’ personal lives can read a little too much like a professional CV). It is also more comprehensive on the potential use of neutrinos in examining the innards of the sun, of distant exploding stars or of Earth, as well as more practical uses such as fingering illicit nuclear-enrichment programmes (since they spew out a telltale pattern of the particles).

Read the entire article here.

Image: Wolfgang Pauli, c1945. Courtesy of Wikipedia.

NASA’s 30-Year Roadmap

NASA-logo

While NASA vacillates over planned manned missions back to the Moon or to the Red Planet, the agency continues to think ahead. Despite perennial budget constraints and severe cuts, NASA still has some fascinating plans for unmanned exploration of our solar system and beyond, out to the very horizon of the visible universe.

In its latest 30-year roadmap, NASA maps out its long-term goals, which include examining the atmospheres of exoplanets, determining the structure of neutron stars and tracing the history of galactic formation.

Download the NASA roadmap directly from NASA here.

From Technology Review:

The past 30 years has seen a revolution in astronomy and our understanding of the Universe. That’s thanks in large part to a relatively small number of orbiting observatories that have changed the way we view our cosmos.

These observatories have contributed observations from every part of the electromagnetic spectrum, from NASA’s Compton Gamma Ray Observatory at the very high energy end to HALCA, a Japanese 8-metre radio telescope at the low energy end. Then there is the Hubble Space Telescope in the visible part of the spectrum, arguably the greatest telescope in history.

It’s fair to say that these observatories have had a profound effect not just on science, but on the history of humankind.

So an interesting question is: what next? Today, we find out, at least as far as NASA is concerned, with the publication of the organisation’s roadmap for astrophysics over the next 30 years. The future space missions identified in this document will have a profound influence not only on the future of astronomy but also on the way imaging technology develops in general.

So what has NASA got up its sleeve? To start off with, it says its goal in astrophysics is to answer three questions: Are we alone? How did we get here? And how does our universe work?

So let’s start with the first question. Perhaps the most important discovery in astronomy in recent years is that the Milky Way is littered with planets, many of which must have conditions ripe for life. So it’s no surprise that NASA aims first to understand the range of planets that exist and the types of planetary systems they form.

The James Webb Space Telescope, Hubble’s successor due for launch in 2018, will study the atmospheres of exoplanets, along with the Large UV Optical IR (LUVOIR) Surveyor due for launch in the 2020s. Together, these telescopes may produce results just as spectacular as Hubble’s.

To complement the Kepler mission, which has found numerous warm planets orbiting all kinds of stars, NASA is also planning the WFIRST-AFTA mission which will look for cold, free-floating planets using gravitational lensing. That’s currently scheduled for launch in the mid 2020s.
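
A back-of-envelope sketch shows why such microlensing surveys must watch so many stars at once. A foreground planet magnifies a background star only while the star sits within the planet’s tiny Einstein radius, and the crossing is quick. All of the inputs below are illustrative assumptions, not WFIRST-AFTA mission parameters:

    import math

    G, C = 6.674e-11, 2.998e8          # SI units
    KPC = 3.086e19                     # metres per kiloparsec
    M_JUPITER = 1.898e27               # kg

    # A Jupiter-mass lens halfway to a background star in the galactic bulge.
    d_lens, d_source = 4.0 * KPC, 8.0 * KPC
    theta_e = math.sqrt((4.0 * G * M_JUPITER / C ** 2)
                        * (d_source - d_lens) / (d_lens * d_source))
    r_e = theta_e * d_lens             # physical Einstein radius at the lens

    v_transverse = 200e3               # m/s, a typical galactic relative speed
    print(r_e / v_transverse / 86400)  # ~1 day per event

A day-long, once-off blip is easy to miss, hence the need for a wide-field telescope staring at millions of stars continuously.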

Beyond that, NASA hopes to build an ExoEarth Mapper mission that combines the observations from several large optical space telescopes to produce the first resolved images of other Earths. “For the first time, we will identify continents and oceans—and perhaps the signatures of life—on distant worlds,” says the report.

To tackle the second question—how did we get here?—NASA hopes to trace the origins of the first stars, star clusters and galaxies, again using JWST, LUVOIR and WFIRST-AFTA. “These missions will also directly trace the history of galaxies and intergalactic gas through cosmic time, peering nearly 14 billion years into the past,” it says.

And to understand how the universe works, NASA hopes to observe the most extreme events in the universe, by peering inside neutron stars, observing the collisions of black holes and even watching the first nanoseconds of time. Part of this will involve an entirely new way to observe the universe using gravitational waves (as long as today’s Earth-based gravitational wave detectors finally spot something of interest).

The technology challenges in all this will be immense. NASA needs everything from bigger, lighter optics and extremely high contrast imaging devices to smart materials and micro-thrusters with unprecedented positioning accuracy.

One thing NASA’s roadmap doesn’t mention though is money and management—the two thorniest issues in the space business. The likelihood is that NASA will not have to sweat too hard for the funds it needs to carry out these missions. Much more likely is that any sleep lost will be over the type of poor management and oversight that has brought many a multibillion dollar mission to its knees.

Read the entire article here.

Image: NASA logo. Courtesy of NASA / Wikipedia.

How to Rendezvous With a Comet

[tube]ktrtvCvZb28[/tube]

First, you will need a significant piece of space hardware. Second, you will need to launch it, having meticulously planned its convoluted trajectory through the solar system. Third, wait 12 years for the craft to reach the comet. Fourth, and with fingers crossed, launch a landing probe from the craft onto the 2.5-mile-wide comet 67P/Churyumov-Gerasimenko, while all are hurtling through space at around 25,000 miles per hour.

So far so good. The Rosetta spacecraft woke up from its self-induced 30-month hibernation on January 20, having slumbered to conserve energy. Now it continues on its final leg of the journey — a year-long trek to catch the comet.

Visit the European Space Agency (ESA) Rosetta mission home page here.

From ars technica:

The Rosetta spacecraft is due to wake up on the morning of January 20 after a 30-month hibernation in deep space. For the past ten years, the three-ton spacecraft has been on a one-way trip to a 4 km-wide comet. When it arrives, it will set about performing a maneuver that has never been done before: landing on a comet’s surface.

The spacecraft has already achieved some success on its long journey through the solar system. It has passed by two asteroids—Steins in 2008 and Lutetia in 2010—and it tried out some of its instruments on them. Because Rosetta’s journey is so protracted, however, preserving energy has been of the utmost importance, which is why it was put into hibernation in June 2011. The journey has taken so long because the spacecraft needed to be “gravity-assisted” by many planets in order to reach the necessary velocity to match the comet’s orbit.
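
The mechanics of those gravity assists fit in a few lines. In the planet’s reference frame the flyby merely rotates the spacecraft’s velocity vector, but transformed back to the Sun’s frame that rotation can appear as a net speed gain. A planar toy sketch in Python, with made-up numbers chosen only to show the effect:

    import math

    def flyby_speed_gain(v_craft, v_planet, turn_deg):
        # Patched-conic flyby in a plane: the speed relative to the planet
        # is conserved while its direction rotates by turn_deg.
        ux, uy = v_craft[0] - v_planet[0], v_craft[1] - v_planet[1]
        a = math.radians(turn_deg)
        rx = ux * math.cos(a) - uy * math.sin(a)
        ry = ux * math.sin(a) + uy * math.cos(a)
        vx, vy = rx + v_planet[0], ry + v_planet[1]
        return math.hypot(vx, vy) - math.hypot(*v_craft)

    # Toy geometry: craft at 30 km/s, planet at 30 km/s on a crossing track,
    # with a 60-degree deflection in the planet's frame.
    print(flyby_speed_gain((30.0, 0.0), (0.0, 30.0), 60.0))   # ~28 km/s gained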

When it wakes up, Rosetta is expected to take a few hours to establish contact with Earth, 673 million km (396 million mi) away. The scientists involved will wait with bated breath. Dan Andrews, part of a team at the Open University who built one of Rosetta’s on-board instruments, said, “If there isn’t sufficient power, Rosetta will go back to sleep and try again later. The wake-up process is driven by software commands already on the spacecraft. It will wake itself up autonomously and spend some time warming up and orienting its antenna toward Earth to ‘phone home.’”

If multiple attempts fail to wake Rosetta, it could mean the end of the mission.

Rosetta should reach comet 67P/Churyumov-Gerasimenko in May 2014, at which point it will decelerate to match the speed of the comet. In August 2014, Rosetta will enter orbit around the comet to scout 67P’s surface in search of a landing spot. Then, in November 2014, Rosetta’s on-board lander, Philae, will be ejected from the orbiting spacecraft onto the surface of the comet. There are a lot of things that need to come together perfectly for this to go smoothly, but space endeavors are designed to chart unknown territories, and Rosetta will be doing just that.

If Rosetta manages this mission successfully, it will make history as the first spacecraft to land on the surface of a comet. Success is by no means assured, as scientists have no idea what to expect when Rosetta arrives at the comet. Will the comet’s surface be icy, soft, hard, or rocky? This information will affect what kind of landing the spacecraft can expect and whether it will sink into the comet or bounce off. Another problem is that comet 67P is small and has a weak gravitational field, which will make holding the spacecraft on its surface challenging, even after a successful landing.
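
Just how weak that gravity is can be estimated in a few lines. Both inputs below are rough, assumed figures (a 4 km nucleus, and a density typical of guesses for porous cometary ice), not measured values:

    import math

    G = 6.674e-11                 # m^3 kg^-1 s^-2
    radius_m = 2000.0             # half of the ~4 km diameter
    density = 500.0               # kg/m^3, an assumed value

    mass = density * (4.0 / 3.0) * math.pi * radius_m ** 3
    v_escape = math.sqrt(2.0 * G * mass / radius_m)
    print(v_escape)               # ~1 m/s: a gentle hop could reach escape velocity

That is why Philae carries anchoring harpoons; anything arriving much faster than walking pace risks bouncing straight back into space.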

At a cost of €1 billion ($1.36 billion) it’s important that we get some value for our money with this mission. To ensure we do, Rosetta was designed to help answer some of the most basic questions about Earth and our solar system, such as where water and life originated, even if the landing doesn’t work out as well as we hope it will.

Comets are thought to have delivered some of the chemicals needed for life, including water to Earth and possibly other planets. This is why comet ISON, which sadly did not survive its close encounter with the Sun, had created excitement among scientists. If it had survived, it would have been the closest scientists could get to a comet with modern instruments.

Comet ISON’s demise means Rosetta is more important than ever. Without measuring the composition of comets, we won’t fully understand the origin of our planet. Comet 67P is thought to have preserved the very earliest ingredients of the solar system, acting as a small, deep-freeze time capsule. The hope is that it will now reveal its long-held secrets to Rosetta.

Andrews said, “It will be the first time a spacecraft will approach a comet and actually stay with it for a prolonged period of time, studying the processes whereby a comet ‘switches on’ as it approaches the Sun.”

Once on the comet’s surface, the Philae lander will deploy instruments to measure different forms of the elements hydrogen, carbon, nitrogen, and oxygen in the comet ice. This will allow scientists to understand the composition of the water and organic components that were collected by the comet 4.6 billion years ago, at the very start of the Solar System.

Read the entire article here.

Video: Rosetta’s Twelve-Year Journey to Land on a Comet. Courtesy of European Space Agency (ESA) Space Science.

God Is a Thermodynamicist

Physicists and cosmologists are constantly postulating and testing new ideas to explain the universe and everything within it. Over the last hundred years or so, two such ideas have grown to explain much about our cosmos, and do so very successfully — quantum mechanics, which describes the very small, and relativity, which describes the very large. However, these two views do not reconcile, leaving theoreticians and researchers looking for a more fundamental theory of everything. One possible idea banishes the notions of time and gravity, treating them both as emergent properties of a deeper reality.

From New Scientist:

As revolutions go, its origins were haphazard. It was, according to the ringleader Max Planck, an “act of desperation”. In 1900, he proposed the idea that energy comes in discrete chunks, or quanta, simply because the smooth delineations of classical physics could not explain the spectrum of energy re-radiated by an absorbing body.

Yet rarely was a revolution so absolute. Within a decade or so, the cast-iron laws that had underpinned physics since Newton’s day were swept away. Classical certainty ceded its stewardship of reality to the probabilistic rule of quantum mechanics, even as the parallel revolution of Einstein’s relativity displaced our cherished, absolute notions of space and time. This was complete regime change.

Except for one thing. A single relict of the old order remained, one that neither Planck nor Einstein nor any of their contemporaries had the will or means to remove. The British astrophysicist Arthur Eddington summed up the situation in 1928. “If your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation,” he wrote.

In this essay, I will explore the fascinating question of why, since their origins in the early 19th century, the laws of thermodynamics have proved so formidably robust. The journey traces the deep connections that were discovered in the 20th century between thermodynamics and information theory – connections that allow us to trace intimate links between thermodynamics and not only quantum theory but also, more speculatively, relativity. Ultimately, I will argue, those links show us how thermodynamics in the 21st century can guide us towards a theory that will supersede them both.

In its origins, thermodynamics is a theory about heat: how it flows and what it can be made to do. The French engineer Sadi Carnot formulated the second law in 1824 to characterise the mundane fact that the steam engines then powering the industrial revolution could never be perfectly efficient. Some of the heat you pumped into them always flowed into the cooler environment, rather than staying in the engine to do useful work. That is an expression of a more general rule: unless you do something to stop it, heat will naturally flow from hotter places to cooler places to even up any temperature differences it finds. The same principle explains why keeping the refrigerator in your kitchen cold means pumping energy into it; only that will keep warmth from the surroundings at bay.
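
Carnot’s ceiling can be written in one line; this is the standard textbook form rather than anything quoted in the essay. For an engine running between a hot reservoir at temperature \(T_h\) and a cold one at \(T_c\), both in kelvin, the maximum fraction of input heat that can be converted into work is

\[
\eta_{\max} = 1 - \frac{T_c}{T_h}.
\]

A boiler at 450 K exhausting to air at 300 K can therefore convert at most \(1 - 300/450 \approx 33\%\) of its heat into useful work, however cleverly the engine is built.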

A few decades after Carnot, the German physicist Rudolf Clausius explained such phenomena in terms of a quantity characterising disorder that he called entropy. In this picture, the universe works on the back of processes that increase entropy – for example dissipating heat from places where it is concentrated, and therefore more ordered, to cooler areas, where it is not.
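
In symbols, and again in standard modern notation rather than Clausius’s own: the entropy change when heat \(\delta Q\) flows reversibly at temperature \(T\) is

\[
\mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad \Delta S_{\mathrm{total}} \ge 0.
\]

This is why heat runs downhill: moving \(\delta Q\) from a hot body at \(T_h\) to a cold one at \(T_c\) changes the total entropy by \(\delta Q\,(1/T_c - 1/T_h)\), which is positive whenever \(T_h > T_c\).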

That predicts a grim fate for the universe itself. Once all heat is maximally dissipated, no useful process can happen in it any more: it dies a “heat death”. A perplexing question is raised at the other end of cosmic history, too. If nature always favours states of high entropy, how and why did the universe start in a state that seems to have been of comparatively low entropy? At present we have no answer, and later I will mention an intriguing alternative view.

Perhaps because of such undesirable consequences, the legitimacy of the second law was for a long time questioned. The charge was formulated with the most striking clarity by the British physicist James Clerk Maxwell in 1867. He was satisfied that inanimate matter presented no difficulty for the second law. In an isolated system, heat always passes from the hotter to the cooler, and a neat clump of dye molecules readily dissolves in water and disperses randomly, never the other way round. Disorder as embodied by entropy does always increase.

Maxwell’s problem was with life. Living things have “intentionality”: they deliberately do things to other things to make life easier for themselves. Conceivably, they might try to reduce the entropy of their surroundings and thereby violate the second law.

Information is power

Such a possibility is highly disturbing to physicists. Either something is a universal law or it is merely a cover for something deeper. Yet it was only in the late 1970s that Maxwell’s entropy-fiddling “demon” was laid to rest. Its slayer was the US physicist Charles Bennett, who built on work by his colleague at IBM, Rolf Landauer, using the theory of information developed a few decades earlier by Claude Shannon. An intelligent being can certainly rearrange things to lower the entropy of its environment. But to do this, it must first fill up its memory, gaining information as to how things are arranged in the first place.

This acquired information must be encoded somewhere, presumably in the demon’s memory. When this memory is finally full, or the being dies or otherwise expires, it must be reset. Dumping all this stored, ordered information back into the environment increases entropy – and this entropy increase, Bennett showed, will ultimately always be at least as large as the entropy reduction the demon originally achieved. Thus the status of the second law was assured, albeit anchored in a mantra of Landauer’s that would have been unintelligible to the 19th-century progenitors of thermodynamics: that “information is physical”.
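
Landauer’s mantra has a precise quantitative core, a standard result that the essay leaves implicit: erasing one bit of information at temperature \(T\) must dissipate at least

\[
E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\,\mathrm{J} \quad \text{at } T \approx 300\,\mathrm{K}.
\]

It is exactly this erasure cost, paid when the demon’s memory is reset, that balances the second law’s books.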

But how does this explain why thermodynamics survived the quantum revolution? Classical objects behave very differently to quantum ones, so the same is presumably true of classical and quantum information. After all, quantum computers are notoriously more powerful than classical ones (or would be if realised on a large scale).

The reason is subtle, and it lies in a connection between entropy and probability contained in perhaps the most profound and beautiful formula in all of science. Engraved on the tomb of the Austrian physicist Ludwig Boltzmann in Vienna’s central cemetery, it reads simply S = k log W. Here S is entropy – the macroscopic, measurable entropy of a gas, for example – while k is a constant of nature that today bears Boltzmann’s name. Log W is the mathematical logarithm of a microscopic, probabilistic quantity W – in a gas, this would be the number of ways the positions and velocities of its many individual atoms can be arranged.
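
A minimal worked example, using the modern convention in which the logarithm is natural (\(S = k_B \ln W\)): for a register of \(N\) independent two-state elements, coins or spins, the number of configurations is \(W = 2^N\), so

\[
S = k_B \ln 2^N = N k_B \ln 2.
\]

Every doubling of the number of accessible configurations adds exactly \(k_B \ln 2\) of entropy, the same quantum that Landauer’s one-bit erasure dissipates; the match is no coincidence.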

On a philosophical level, Boltzmann’s formula embodies the spirit of reductionism: the idea that we can, at least in principle, reduce our outward knowledge of a system’s activities to basic, microscopic physical laws. On a practical, physical level, it tells us that all we need to understand disorder and its increase is probabilities. Tot up the number of configurations the atoms of a system can be in and work out their probabilities, and what emerges is nothing other than the entropy that determines its thermodynamical behaviour. The equation asks no further questions about the nature of the underlying laws; we need not care if the dynamical processes that create the probabilities are classical or quantum in origin.

There is an important additional point to be made here. Probabilities are fundamentally different things in classical and quantum physics. In classical physics they are “subjective” quantities that constantly change as our state of knowledge changes. The probability that a coin toss will result in heads or tails, for instance, jumps from ½ to 1 when we observe the outcome. If there were a being who knew all the positions and momenta of all the particles in the universe – known as a “Laplace demon”, after the French mathematician Pierre-Simon Laplace, who first countenanced the possibility – it would be able to determine the course of all subsequent events in a classical universe, and would have no need for probabilities to describe them.

In quantum physics, however, probabilities arise from a genuine uncertainty about how the world works. States of physical systems in quantum theory are represented in what the quantum pioneer Erwin Schrödinger called catalogues of information, but they are catalogues in which adding information on one page blurs or scrubs it out on another. Knowing the position of a particle more precisely means knowing less well how it is moving, for example. Quantum probabilities are “objective”, in the sense that they cannot be entirely removed by gaining more information.
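
The trade-off described here is Heisenberg’s uncertainty relation, in its standard form

\[
\Delta x \, \Delta p \ge \frac{\hbar}{2},
\]

where \(\Delta x\) and \(\Delta p\) are the spreads in a particle’s position and momentum and \(\hbar\) is the reduced Planck constant. No amount of additional information squeezes both below this bound.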

That casts thermodynamics, as originally and classically formulated, in an intriguing light. There, the second law is little more than impotence written down in the form of an equation. It has no deep physical origin itself, but is an empirical bolt-on to express the otherwise unaccountable fact that we cannot know, predict or bring about everything that might happen, as classical dynamical laws suggest we can. But this changes as soon as you bring quantum physics into the picture, with its attendant notion that uncertainty is seemingly hardwired into the fabric of reality. Rooted in probabilities, entropy and thermodynamics acquire a new, more fundamental physical anchor.

It is worth pointing out, too, that this deep-rooted connection seems to be much more general. Recently, together with my colleagues Markus Müller of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, and Oscar Dahlsten at the Centre for Quantum Technologies in Singapore, I have looked at what happens to thermodynamical relations in a generalised class of probabilistic theories that embrace quantum theory and much more besides. There too, the crucial relationship between information and disorder, as quantified by entropy, survives (arxiv.org/abs/1107.6029).

One theory to rule them all

As for gravity – the only one of nature’s four fundamental forces not covered by quantum theory – a more speculative body of research suggests it might be little more than entropy in disguise. If so, that would also bring Einstein’s general theory of relativity, with which we currently describe gravity, firmly within the purview of thermodynamics.

Take all this together, and we begin to have a hint of what makes thermodynamics so successful. The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe – among other things, to construct theories to further our understanding of it. Thermodynamics is, in Einstein’s term, a “meta-theory”: one constructed from principles over and above the structure of any dynamical laws we devise to describe reality’s workings. In that sense we can argue that it is more fundamental than either quantum physics or general relativity.

If we can accept this and, like Eddington and his ilk, put all our trust in the laws of thermodynamics, I believe it may even afford us a glimpse beyond the current physical order. It seems unlikely that quantum physics and relativity represent the last revolutions in physics. New evidence could at any time foment their overthrow. Thermodynamics might help us discern what any usurping theory would look like.

For example, earlier this year, two of my colleagues in Singapore, Esther Hänggi and Stephanie Wehner, showed that a violation of the quantum uncertainty principle – the idea that you can never fully get rid of probabilities in a quantum context – would imply a violation of the second law of thermodynamics. Beating the uncertainty limit means extracting extra information about the system, which requires the system to do more work than thermodynamics allows it to do in the relevant state of disorder. So if thermodynamics is any guide, whatever any post-quantum world might look like, we are stuck with a degree of uncertainty (arxiv.org/abs/1205.6894).
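
The information-theoretic form of the uncertainty principle that such arguments typically invoke is an entropic one; the standard Maassen–Uffink relation (my gloss, not necessarily the authors’ exact starting point) reads

\[
H(X) + H(Z) \ge \log_2 \frac{1}{c}, \qquad c = \max_{j,k} \left| \langle x_j | z_k \rangle \right|^2,
\]

where \(H(X)\) and \(H(Z)\) are the Shannon entropies of the outcomes of two different measurements and \(c\) measures the overlap of the measurement bases. Beating the bound would make outcomes more predictable than quantum theory permits, precisely the surplus information that would let a machine perform thermodynamically forbidden work.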

My colleague at the University of Oxford, the physicist David Deutsch, thinks we should take things much further. Not only should any future physics conform to thermodynamics, but the whole of physics should be constructed in its image. The idea is to generalise the logic of the second law as it was stringently formulated by the mathematician Constantin Carathéodory in 1909: that in the vicinity of any state of a physical system, there are other states that cannot physically be reached if we forbid any exchange of heat with the environment.

James Joule’s 19th-century experiments with beer can be used to illustrate this idea. The English brewer, whose name lives on in the standard unit of energy, sealed beer in a thermally isolated tub containing a paddle wheel that was connected to weights falling under gravity outside. The wheel’s rotation warmed the beer, increasing the disorder of its molecules and therefore its entropy. But hard as we might try, we simply cannot use Joule’s set-up to decrease the beer’s temperature, even by a fraction of a millikelvin. Cooler beer is, in this instance, a state regrettably beyond the reach of physics.
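
Some illustrative numbers, mine rather than Joule’s: a 10 kg weight falling 2 m does \(W = mgh \approx 10 \times 9.8 \times 2 \approx 196\,\mathrm{J}\) of work on the paddle wheel, and stirring that work into 1 kg of beer with a specific heat near water’s (\(c \approx 4200\,\mathrm{J\,kg^{-1}\,K^{-1}}\)) warms it by

\[
\Delta T = \frac{W}{mc} \approx \frac{196}{1 \times 4200} \approx 0.05\,\mathrm{K}.
\]

The reverse process, the beer spontaneously cooling by 0.05 K to hoist the weight back up, violates no law of motion, only the second law of thermodynamics.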

God, the thermodynamicist

The question is whether we can express the whole of physics simply by enumerating possible and impossible processes in a given situation. This is very different from how physics is usually phrased, in both the classical and quantum regimes, in terms of states of systems and equations that describe how those states change in time. The blind alleys down which the standard approach can lead are easiest to understand in classical physics, where the dynamical equations we derive allow a whole host of processes that patently do not occur – the ones we have to conjure up the laws of thermodynamics expressly to forbid, such as dye molecules reclumping spontaneously in water.

By reversing the logic, our observations of the natural world can again take the lead in deriving our theories. We observe the prohibitions that nature puts in place, be it on decreasing entropy, getting energy from nothing, travelling faster than light or whatever. The ultimately “correct” theory of physics – the logically tightest – is the one from which the smallest deviation gives us something that breaks those taboos.

There are other advantages in recasting physics in such terms. Time is a perennially problematic concept in physical theories. In quantum theory, for example, it enters as an extraneous parameter of unclear origin that cannot itself be quantised. In thermodynamics, meanwhile, the passage of time is entropy increase by any other name. A process such as dissolved dye molecules forming themselves into a clump offends our sensibilities because it appears to amount to running time backwards as much as anything else, although the real objection is that it decreases entropy.

Apply this logic more generally, and time ceases to be an independent, fundamental entity and becomes one whose flow is defined purely in terms of allowed and disallowed processes. With it go problems such as the one I alluded to earlier, of why the universe started in a state of low entropy. If states and their dynamical evolution over time cease to be the question, then anything that does not break any transformational rules becomes a valid answer.

Such an approach would probably please Einstein, who once said: “What really interests me is whether God had any choice in the creation of the world.” A thermodynamically inspired formulation of physics might not answer that question directly, but leaves God with no choice but to be a thermodynamicist. That would be a singular accolade for those 19th-century masters of steam: that they stumbled upon the essence of the universe, entirely by accident. The triumph of thermodynamics would then be a revolution by stealth, 200 years in the making.

Read the entire article here.

A Kid’s Book For Adults

One of the most engaging new books for young children is a picture book that explains evolution. By way of whimsical illustrations and comparisons of animal skeletons, the book — Bone By Bone — delivers the story of evolutionary theory in an entertaining and compelling way.

Perhaps it could be used just as well for those adults who have trouble grappling with the fruits of the scientific method. The Texas State Board of Education would be an ideal place to begin.

Bone By Bone is written by veterinarian Sara Levine.

From Slate:

In some of the best children’s books, dandelions turn into stars, sharks and radishes merge, and pancakes fall from the sky. No one would confuse these magical tales for descriptions of nature. Small children can differentiate between “the real world and the imaginary world,” as psychologist Alison Gopnik has written. They just “don’t see any particular reason for preferring to live in the real one.”

Children’s nuanced understanding of the not-real surely extends to the towering heap of books that feature dinosaurs as playmates who fill buckets of sand or bake chocolate-chip cookies. The imaginative play of these books may be no different to kids than radishsharks and llama dramas.

But as a parent, friendly dinos never steal my heart. I associate them, just a little, with old creationist images of animals frolicking near the Garden of Eden, which carried the message that dinosaurs and man, both created by God on the sixth day, co-existed on the Earth until after the flood. (Never mind the evidence that dinosaurs went extinct millions of years before humans appeared.) The founder of the Creation Museum in Kentucky calls dinosaurs “missionary lizards,” and that phrase echoes in my head when I see all those goofy illustrations of dinosaurs in sunglasses and hats.

I’ve been longing for another kind of picture book: one that appeals to young children’s wildest imagination in service of real evolutionary thinking. Such a book could certainly include dinosaur skeletons or fossils. But Bone by Bone, by veterinarian and professor Sara Levine, fills the niche to near perfection by relying on dogs, rabbits, bats, whales, and humans. Levine plays with differences in their skeletons to groom kids for grand scientific concepts.

Bone by Bone asks kids to imagine what their bodies would look like if they had different configurations of bones, like extra vertebrae, longer limbs, or fewer fingers. “What if your vertebrae didn’t stop at your rear end? What if they kept going?” Levine writes, as a boy peers over his shoulder at the spinal column. “You’d have a tail!”

“What kind of animal would you be if your leg bones were much, much longer than your arm bones?” she wonders, as a girl in pink sneakers rises so tall her face disappears from the page. “A rabbit or a kangaroo!” she says, later adding a pika and a hare. “These animals need strong hind leg bones for jumping.” Levine’s questions and answers are delightfully simple for the scientific heft they carry.

With the lightest possible touch, Levine introduces the idea that bones in different vertebrates are related and that they morph over time. She starts with vertebrae, skulls and ribs. But other structures bear strong kinships in these animals, too. The bone in the center of a horse’s hoof, for instance, is related to a human finger. (“What would happen if your middle fingers and the middle toes were so thick that they supported your whole body?”) The bones that radiate out through a bat’s wing are linked to those in a human hand. (“A web of skin connects the bones to make wings so that a bat can fly.”) This is different from the wings of a bird or an insect; with bats, it’s almost as if they’re swimming through air.

Of course, human hands did not shape-shift into bats’ wings, or vice versa. Both derive from a common ancestral structure, which means they share an evolutionary past. Homology, as this kind of relatedness is called, is among “the first and in many ways the best evidence for evolution,” says Josh Rosenau of the National Center for Science Education. Comparing bones also paves the way for comparing genes and molecules, for grasping evolution at the next level of sophistication. Indeed, it’s hard to look at the bat wings and human hands as presented here without lighting up, at least a little, with these ideas. So many smart writers focus on preparing young kids to read or understand numbers. Why not do more to ready them for the big ideas of science? Why not pave the way for evolution? (This is easier to do with older kids, with books like The Evolution of Calpurnia Tate and Why Don’t Your Eyelashes Grow?)

Read the entire story here.

Image: Bone By Bone, book cover. Courtesy: Lerner Publishing Group

Time for the Neutrino

Enough of the Higgs boson, already! It’s time to shine a light on its smaller, swifter cousin, the neutrino.

From the NYT:

HAVE you noticed how the Higgs boson has been hogging the limelight lately? For a measly little invisible item, whose significance cannot be explained without appealing to thorny concepts of quantum field theory, it has done pretty well for itself. The struggling starlets of Hollywood could learn a thing or two about the dark art of self-promotion from this boson.

First, its elusiveness “sparked the greatest hunt in science,” as the subtitle of one popular book put it. Then came all the hoopla over its actual discovery. Or should I say discoveries? Because those clever, well-meaning folks at the CERN laboratory outside Geneva proclaimed their finding of the particle not once but twice. First in 2012, on the Fourth of July no less, they told the world that their supergigantic — and awesomely expensive — atom smasher had found tentative evidence of the Higgs. Eight months later, they made a second announcement, this time with more data in hand, to confirm that they had nabbed the beast for real. Just recently, there was yet more fanfare when two of the grandees who had predicted the particle’s existence back in 1964 shared a Nobel Prize for their insight.

In fact, ever since another Nobel-winning genius, Leon Lederman, branded it the “God particle” some 20 years ago, the Higgs boson has captured the public imagination and dominated the media coverage of physics. Some consider Professor Lederman’s moniker a brilliant P.R. move for physics, while others denounce it as a terrible gaffe that confuses people and cheapens a solemn scientific enterprise. Either way, it has been effective. Nobody ever talks about the fascinating lives of other subatomic particles on “Fox and Friends.”

Sure, the story of Higgs is a compelling one. The jaw-dropping $9 billion price tag of the machine built to chase it is enough to command our attention. Plus, there is the serene, wise man at the center of this epic saga: the octogenarian Peter Higgs, finally vindicated after waiting patiently for decades. Professor Higgs was seen to shed a tear of joy at a news conference announcing the discovery, adding tenderness to the triumphant moment and tugging ever so gently at our heartstrings. For reporters looking for a human-interest angle to this complicated scientific brouhaha, that was pure gold.

But I say enough is enough. It is time to give another particle a chance.

And have I got a terrific candidate for you! It moves in mysterious ways, passing right through wood, walls and even our bodies, with nary a bump. It morphs among three forms, like a cosmic chameleon evading capture. It brings us news from the sun’s scorching heart and from the spectacular death throes of monstrous stars. It could tell us why antimatter is so rare in the universe and illuminate the inner workings of our own planet. Someday, it may even help expose rogue nuclear reactors and secret bomb tests, thus promoting world peace. Most important, we might not be here without it.

WHAT is this magical particle, you ask? It is none other than the ghostly neutrino.
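
That shape-shifting is neutrino oscillation. In the standard two-flavor approximation, a textbook formula rather than anything from the article, the probability that a neutrino born as flavor \(\alpha\) is detected as flavor \(\beta\) after traveling a distance \(L\) is

\[
P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\, \sin^2\!\left( \frac{1.27\, \Delta m^2\,[\mathrm{eV}^2] \; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right),
\]

where \(\theta\) is a mixing angle, \(\Delta m^2\) is the difference of the squared masses, and \(E\) is the neutrino’s energy. Because the effect depends on the ratio \(L/E\), experiments range from detectors parked beside reactors to beams fired through hundreds of kilometers of rock.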

O.K., I admit that I am biased, having just written a book about it. But believe me, no other particle comes close to matching the incredibly colorful and quirky personality of the neutrino, or promises to reveal as much about a mind-boggling array of natural phenomena, both subatomic and cosmic. As one researcher told me, “Whenever anything cool happens in the universe, neutrinos are usually involved.” Besides, John Updike considered it worthy of celebrating in a delightful poem in The New Yorker, and on “The Big Bang Theory,” Sheldon Cooper’s idol Professor Proton chose Gino the Neutrino as his beloved puppet sidekick.

Granted, the neutrino does come with some baggage. Remember how it made headlines two years ago for possibly traveling faster than light? Back then, the prospects of time travel and breaking Einstein’s speed limit provided plenty of fodder for rampant speculation and a few bad jokes. In the end, the whole affair turned out to be much ado about a faulty cable. I maintain it is unfair to hold the poor little neutrino responsible for that commotion.

Generally speaking, the neutrino tends to shun the limelight. Actually, it is pathologically shy and hardly ever interacts with other particles. That makes it tough to pin down.

Thankfully, today’s neutrino hunters have a formidable arsenal at their disposal, including newfangled observatories buried deep underground or in the Antarctic ice. Neutrino chasing, once an esoteric sideline, has turned into one of the hottest occupations for the discerning nerd. More eager young ones will surely clamor for entry into the Promised Land now that the magazine Physics World has declared the recent detection of cosmic neutrinos to be the No. 1 physics breakthrough of the year.

Drum roll, please. The neutrino is ready to take center stage. But don’t blink: It zips by at nearly the speed of light.

Read the entire story here.