Tag Archives: cosmology

The Accelerated Acceleration


Until the mid-1990s, the accepted scientific understanding held simply that the cosmos was expanding. Scientists had accepted this since 1929, when Edwin Hubble's celestial observations showed that distant galaxies were all apparently moving away from us.

But, in 1998, two independent groups of cosmologists made a startling finding: the universe was not only expanding, its expansion was accelerating. Recent studies show that this accelerating expansion of spacetime is actually faster than first theorized and observed.

And, nobody knows why. This expansion, indeed the accelerating expansion, remains one of our current great scientific mysteries.

Cosmologists, astronomers and theoreticians of all stripes have proposed no shortage of possible explanations. But, there is still scant observational evidence to support any of the leading theories. The most popular revolves around the peculiar idea of dark energy.

From Scientific American:

Our universe is flying apart, with galaxies moving away from each other faster each moment than they were the moment before. Scientists have known about this acceleration since the late 1990s, but whatever is causing it—dubbed dark energy—remains a mystery. Now the latest measurement of how fast the cosmos is growing thickens the plot further: The universe appears to be ballooning more quickly than it should be, even after accounting for the accelerating expansion caused by dark energy.

Scientists came to this conclusion after comparing their new measurement of the cosmic expansion rate, called the Hubble constant, to predictions of what the Hubble constant should be based on evidence from the early universe. The puzzling conflict—which was hinted at in earlier data and confirmed in the new calculation—means that either one or both of the measurements are flawed, or that dark energy or some other aspect of nature acts differently than we think.

“The bottom line is that the universe looks like it’s expanding about eight percent faster than you would have expected based on how it looked in its youth and how we expect it to evolve,” says study leader Adam Riess of the Space Telescope Science Institute in Baltimore, Md. “We have to take this pretty darn seriously.” He and his colleagues described their findings, based on observations from the Hubble Space Telescope, in a paper submitted last week to the Astrophysical Journal and posted on the preprint server arXiv.
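The "eight percent" figure quoted above is easy to sanity-check with round, illustrative numbers (not the paper's exact values): a local measurement of the Hubble constant near 73 km/s/Mpc against an early-universe prediction near 67-68 km/s/Mpc.

```python
# Back-of-the-envelope check of the Hubble tension, using assumed
# round values rather than the paper's exact figures.
KM_PER_MPC = 3.086e19       # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

h0_local = 73.0             # km/s/Mpc, illustrative local measurement
h0_predicted = 67.5         # km/s/Mpc, illustrative early-universe prediction

excess = h0_local / h0_predicted - 1.0
print(f"local value exceeds prediction by ~{excess:.0%}")   # roughly 8%

# The inverse of H0 sets the characteristic expansion timescale,
# close to the universe's ~14-billion-year age:
hubble_time_yr = KM_PER_MPC / h0_local / SECONDS_PER_YEAR
print(f"Hubble time ~ {hubble_time_yr:.2e} years")
```

A sketch only: the real comparison involves careful error bars on both measurements, which is exactly why the discrepancy is newsworthy.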

One of the most exciting possibilities is that dark energy is even stranger than the leading theory suggests. Most observations support the idea that dark energy behaves like a “cosmological constant,” a term Albert Einstein inserted into his equations of general relativity and later removed. This kind of dark energy would arise from empty space, which, according to quantum mechanics, is not empty at all, but rather filled with pairs of “virtual” particles and antiparticles that constantly pop in and out of existence. These virtual particles would carry energy, which in turn might exert a kind of negative gravity that pushes everything in the universe outward.

Read the entire story here.

Image: The universe’s accelerated expansion. Courtesy: NASA and ESA.

A Googol Years From Now

If humanity makes it through the next few years and decades without destroying itself and the planet, we can ponder the broader fate of our universal home. Assuming humanity survives the death of our beautiful local star (in 4-5 billion years or so) and the merging of our very own Milky Way with the Andromeda galaxy (around 7-10 billion years from now), we'll be toast in a googol years. Actually, we and everything else in the cosmos will be more like a cold, dark particle soup. By the way, a googol is a rather large number: 10^100. That gives us plenty of time to fix ourselves.

From Space:

Yes, the universe is dying. Get over it.

 Well, let’s back up. The universe, as defined as “everything there is, in total summation,” isn’t going anywhere anytime soon. Or ever. If the universe changes into something else far into the future, well then, that’s just more universe, isn’t it?

But all the stuff in the universe? That’s a different story. When we’re talking all that stuff, then yes, everything in the universe is dying, one miserable day at a time.

You may not realize it by looking at the night sky, but the ultimate darkness is already settling in. Stars first appeared on the cosmic stage rather early — more than 13 billion years ago; just a few hundred million years into this Great Play. But there’s only so much stuff in the universe, and only so many opportunities to make balls of it dense enough to ignite nuclear fusion, creating the stars that fight against the relentless night.

The expansion of the universe dilutes everything in it, meaning there are fewer and fewer chances to make the nuclear magic happen. And around 10 billion years ago, the expansion reached a tipping point. The matter in the cosmos was spread too thin. The engines of creation shut off. The curtain was called: the epoch of peak star formation has already passed, and we are currently living in the wind-down stage. Stars are still born all the time, but the birth rate is dropping.

At the same time, that dastardly dark energy is causing the expansion of the universe to accelerate, ripping galaxies away from each other faster than the speed of light (go ahead, say that this violates some law of physics, I dare you), drawing them out of the range of any possible contact — and eventually, visibility — with their neighbors. With the exception of the Andromeda Galaxy and a few pathetic hangers-on, no other galaxies will be visible. We’ll become very lonely in our observable patch of the universe.

The infant universe was a creature of heat and light, but the cosmos of the ancient future will be a dim, cold animal.

The only consolation is the time scale involved. You thought 14 billion years was a long time? The numbers I’m going to present are ridiculous, even with exponential notation. You can’t wrap your head around it. They’re just … big.

For starters, we have at least 2 trillion years until the last sun is born, but the smallest stars will continue to burn slow and steady for another 100 trillion years in a cosmic Children of Men. Our own sun will be long gone by then, heaving off its atmosphere within the next 5 billion years and charcoaling the Earth. Around the same time, the Milky Way and Andromeda galaxies will collide, making a sorry mess of the local system.

At the end of this 100-trillion-year “stelliferous” era, the universe will only be left with the … well, leftovers: white dwarves (some cooled to black dwarves), neutron stars and black holes. Lots of black holes.

Welcome to the Degenerate Era, a state that is as sad as it sounds. But even that isn’t the end game. Oh no, it gets worse. After countless gravitational interactions, planets will get ejected from their decaying systems and galaxies themselves will dissolve. Losing cohesion, our local patch of the universe will be a disheveled wreck of a place, with dim, dead stars scattered about randomly and black holes haunting the depths.

The early universe was a very strange place, and the late universe will be equally bizarre. Given enough time, things that seem impossible become commonplace, and objects that appear immutable … uh, mutate. Through a process called quantum tunneling, any solid object will slowly “leak” atoms, dissolving. Because of this, gone will be the white dwarves, the planets, the asteroids, the solid.

Even fundamental particles are not immune: given 10^34 years, the neutrons in neutron stars will break apart into their constituent particles. We don’t yet know if the proton is stable, but if it isn’t, it’s only got 10^40 years before it meets its end.

With enough time (and trust me, we’ve got plenty of time), the universe will consist of nothing but light particles (electrons, neutrinos and their ilk), photons and black holes. The black holes themselves will probably dissolve via Hawking Radiation, briefly illuminating the impenetrable darkness as they decay.

After 10^100 years (but who’s keeping track at this point?), nothing macroscopic remains. Just a weak soup of particles and photons, spread so thin that they hardly ever interact.

Read the entire article here.

In case you've forgotten, a googol is 10^100 (10 to the power of 100), or 1 followed by 100 zeros. And, yes, that's how the company Google derived its name.
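For the skeptical, the definition above takes one line to verify:

```python
# A googol: 1 followed by 100 zeros.
googol = 10 ** 100
assert googol == int("1" + "0" * 100)
print(len(str(googol)))  # 101 digits: the leading 1 plus 100 zeros
```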

See, Earth is at the Center of the Cosmos

A single image of the entire universe from 2012 has been attracting lots of attention recently. Not only is it beautiful, it shows Earth and our solar system clearly in the correct location: at the rightful center!

Some seem to be using this to claim that the roughly 2,000-year-old geocentric view of the cosmos must be right.


Well, sorry creationists, flat-earthers, and followers of Ptolemy, this gorgeous image is a logarithmic illustration.

Image: Artist’s logarithmic scale conception of the observable universe with the Solar System at the center, inner and outer planets, Kuiper belt, Oort cloud, Alpha Centauri, Perseus Arm, Milky Way galaxy, Andromeda galaxy, nearby galaxies, Cosmic Web, Cosmic microwave radiation and Big Bang’s invisible plasma on the edge. Courtesy: Pablo Carlos Budassi / Wikipedia.

Neutrinos in the News

Something's up. Perhaps there's some hope that we may be reversing the tide of "dumbed-downness" in the stories that the media pumps through its many tubes to reach us. So, it comes as a welcome surprise to see articles about the very, very small making big news in publications like the New Yorker. Stories about neutrinos no less. Thank you, New Yorker, for dumbing us up. And, kudos to the latest Nobel laureates, Takaaki Kajita and Arthur B. McDonald, for helping us understand just a little bit more about our world.

From the New Yorker:

This week the 2015 Nobel Prize in Physics was awarded jointly to Takaaki Kajita and Arthur B. McDonald for their discovery that elementary particles called neutrinos have mass. This is, remarkably, the fourth Nobel Prize associated with the experimental measurement of neutrinos. One might wonder why we should care so much about these ghostly particles, which barely interact with normal matter.

Even though the existence of neutrinos was predicted in 1930, by Wolfgang Pauli, none were experimentally observed until 1956. That’s because neutrinos almost always pass through matter without stopping. Every second of every day, more than six trillion neutrinos stream through your body, coming directly from the fiery core of the sun—but most of them go right through our bodies, and the Earth, without interacting with the particles out of which those objects are made. In fact, on average, those neutrinos would be able to traverse more than one thousand light-years of lead before interacting with it even once.

The very fact that we can detect these ephemeral particles is a testament to human ingenuity. Because the rules of quantum mechanics are probabilistic, we know that, even though almost all neutrinos will pass right through the Earth, a few will interact with it. A big enough detector can observe such an interaction. The first detector of neutrinos from the sun was built in the nineteen-sixties, deep within a mine in South Dakota. An area of the mine was filled with a hundred thousand gallons of cleaning fluid. On average, one neutrino each day would interact with an atom of chlorine in the fluid, turning it into an atom of argon. Almost unfathomably, the physicist in charge of the detector, Raymond Davis, Jr., figured out how to detect these few atoms of argon, and, four decades later, in 2002, he was awarded the Nobel Prize in Physics for this amazing technical feat.

Because neutrinos interact so weakly, they can travel immense distances. They provide us with a window into places we would never otherwise be able to see. The neutrinos that Davis detected were emitted by nuclear reactions at the very center of the sun, escaping this incredibly dense, hot place only because they so rarely interact with other matter. We have been able to detect neutrinos emerging from the center of an exploding star more than a hundred thousand light-years away.

But neutrinos also allow us to observe the universe at its very smallest scales—far smaller than those that can be probed even at the Large Hadron Collider, in Geneva, which, three years ago, discovered the Higgs boson. It is for this reason that the Nobel Committee decided to award this year’s Nobel Prize for yet another neutrino discovery.

Read the entire story here.

The Emperor and/is the Butterfly

In an earlier post I touched on the notion proposed by some cosmologists that our entire universe is some kind of highly advanced simulation. The hypothesis is that perhaps we are merely information elements within a vast mathematical fabrication, playthings of a much superior consciousness. Some draw upon parallels to The Matrix movie franchise.

Follow some of the story and video interviews here to learn more of this fascinating and somewhat unsettling idea. More unsettling still: did our overlord programmers leave a backdoor?

[tube]NEokFnAmmFE[/tube]

Video: David Brin – Could Our Universe Be a Fake? Courtesy of Closer to Truth.

Where Are They?

Astrophysics professor Adam Frank reminds us to ponder Enrico Fermi's insightful question, posed in the middle of the last century. Fermi's question spawned his famous, eponymous paradox, which goes something like this:

Why is there no evidence of extraterrestrial civilizations in our Milky Way galaxy given the age of the universe and vast number of stars within it?

Based on simple assumptions and fairly accurate estimates of the universe's age, the number of galaxies and stars within it, the probability of Earth-like planets, and the development of intelligent life on those planets, it should be highly likely that some civilizations have already developed the capability for interstellar travel. In fact, even a slow pace of intra-galactic travel should have led to the colonization of our entire galaxy within just a few tens of millions of years, a blink of an eye on a cosmological timescale. Yet we see no evidence on Earth or anywhere beyond. And therein lies the conundrum.

The doomsayers might have us believe that extraterrestrial civilizations have indeed developed numerous times throughout our galaxy, but that none made the crucial leap beyond ecological catastrophe and technological self-destruction to escape the bonds of their home planet. Do we have the power to avoid the same fate? I hope so.

From 13.7:

The story begins like this: In 1950, a group of high-powered physicists were lunching together near the Los Alamos National Laboratory.

Among those in attendance were Edward Teller (father of the nuclear bomb) and the Nobel Prize-winning Enrico Fermi. The discussion turned to a spate of recent UFO sightings and, then, on to the possibility of seeing an object (made by aliens) move faster than light. The conversation eventually turned to other topics when, out of the blue, Fermi suddenly asked: “Where is everybody?”

While he’d startled his colleagues, they all quickly understood what he was referring to: Where are all the aliens?

What Fermi realized in his burst of insight was simple: If the universe was teeming with intelligent technological civilizations, why hadn’t they already made it to Earth? Indeed, why hadn’t they made it everywhere?

This question, known as “Fermi’s paradox,” is now a staple of astrobiological/SETI thinking. And while it might seem pretty abstract and inconsequential to our day-to-day existence, within Fermi’s paradox there lies a terrible possibility that haunts the fate of humanity.

Enough issues are packed into Fermi’s paradox for more than one post and — since Caleb Scharf and I are just starting a research project related to the question — I am sure to return to it. Today, however, I just want to unpack the basics of Fermi’s paradox and its consequences.

The most important thing to understand about Fermi’s paradox is that you don’t need faster-than-light travel, a warp drive or other exotic technology to take it seriously. Even if a technological civilization built ships that reached only a fraction of the speed of light, we might still expect all the stars (and the planets) to be “colonized.”

For example, let’s imagine that just one high-tech alien species emerges and starts sending ships out at one-hundredth of the speed of light. With that technology, they’d cross the typical distance between stars in “just” a few centuries to a millennium. If, once they got to a new solar system, they began using its resources to build more ships, then we can imagine how a wave of colonization begins propagating across the galaxy.

But how long does it take this colonization wave to spread?

Remarkably, it would only take a fraction of our galaxy’s lifetime before all the stars are inhabited. Depending on what you assume, the propagating wave of colonization could make it from one end of our Milky Way to the other in just 10 million years. While that might seem very long to you, it’s really just a blink of the eye to the 10-billion-year-old Milky Way (in other words, the colonization wave crosses in 0.001 times the age of the galaxy). That means if an alien civilization began at some random moment in the Milky Way’s history, odds are it has had time to colonize the entire galaxy.
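The arithmetic behind the quoted figures is simple enough to check directly, using the article's own round numbers for ship speed, star spacing, and galaxy size:

```python
# Back-of-the-envelope check of the colonization-wave arithmetic above.
ship_speed = 0.01               # fraction of light speed (1/100 c)
star_spacing_ly = 5.0           # typical distance between neighboring stars, ly
galaxy_diameter_ly = 100_000.0  # rough diameter of the Milky Way, ly
galaxy_age_yr = 10e9            # the ~10-billion-year-old Milky Way

# One star-to-star hop: distance in light-years divided by speed in c
# gives travel time in years.
hop_time_yr = star_spacing_ly / ship_speed          # "a few centuries to a millennium"
crossing_time_yr = galaxy_diameter_ly / ship_speed  # edge-to-edge, nonstop
fraction_of_age = crossing_time_yr / galaxy_age_yr

print(hop_time_yr)       # 500 years per hop
print(crossing_time_yr)  # 10 million years to cross the galaxy
print(fraction_of_age)   # 0.001 of the galaxy's age
```

The nonstop crossing time alone lands on the article's 10-million-year figure; pausing at each system to build new ships lengthens the wave somewhat, but not by enough to change the conclusion.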

You can choose your favorite sci-fi trope for what’s going on with these alien “slow ships.” Maybe they use cryogenic suspension. Maybe they’re using generation ships — mobile worlds whose inhabitants live out entire lives during the millennia-long crossing. Maybe the aliens don’t go themselves but send fully autonomous machines. Whatever scenario you choose, simple calculations, like the one above, tend to imply the aliens should be here already.

Of course, you can also come up with lots of resolutions to Fermi’s paradox. Maybe the aliens don’t want to colonize other worlds. Maybe none of the technologies for the ships described above really work. Maybe, maybe, maybe. We can take up some of those solutions in later 13.7 posts.

For today, however, let’s just consider the one answer that really matters for us, the existential one that is very, very freaky indeed: The aliens aren’t here because they don’t exist. We are the only sentient, technological species that exists in the entire galaxy.

It’s hard to overstate how profound this conclusion would be.

The consequences cut both ways. On the one hand, it’s possible that no other species has ever reached our state of development. Our galaxy with its 300 billion stars — meaning 300 billion chances for self-consciousness — has never awakened anywhere else. We would be the only ones looking into the night sky and asking questions. How impossibly lonely that would be.

Read the entire article here.

 

It’s Official — Big Rip Coming!

The UK's Daily Telegraph newspaper just published this article, so it must be true. After all, the broadsheet has been a stalwart of conservative British journalism since, well, the dawn of time, some 6,000 years ago.

Apparently our universe will end in a so-called Big Rip, and not in a Big Freeze. Nor will it end in a Big Crunch, which is like the Big Bang in reverse. The Big Rip seems to be a rather calm and quiet version of the impending cosmological apocalypse. So, I’m all for it. I can’t wait… 22 billion years and counting.

From the Daily Telegraph:

A group of scientists claim to have evidence supporting the Big Rip theory, explaining how the universe will end – in 22 billion years.

Researchers at Vanderbilt University in Nashville, Tennessee, have discovered a new mathematical formulation that supports the Big Rip theory – that as the universe expands, it will eventually be ripped apart.

“The idea of the Big Rip is that eventually even the constituents of matter would start separating from each other. You’d be seeing all the atoms being ripped apart … it’s fair to say that it’s a dramatic scenario,” Dr Marcelo Disconzi told the Guardian.

Scientists observed distant supernovae to examine whether the Big Rip theory, which was first suggested in 2003, was possible.

The theory relies on the assumption that the universe continues to expand faster and faster, eventually causing the Big Rip.

“Mathematically we know what this means. But what it actually means in physical terms is hard to fathom,” said Dr Disconzi.

Conflicting theories for how the universe will end include the Big Crunch, whereby the Big Bang reverses and everything contracts, and the Big Freeze, where as the universe slowly expands it eventually becomes too cold to sustain life.

Previous questions raised over the Big Rip theory include explaining how sticky fluids – that have high levels of viscosity – can travel faster than the speed of light, defying the laws of physics.

However, the Vanderbilt team combined a series of equations, including some dating back to 1955, to show that viscosity may not be a barrier to a rapidly expanding universe.

“My result by no means settles the question of what the correct formulation of relativistic viscous fluids is. What it shows is that, under some assumptions, the equations put forward by Lichnerowicz have solutions and the solutions do not predict faster-than-light signals. But we still don’t know if these results remain valid under the most general situations relevant to physics,” Dr Disconzi told the New Statesman.

Read the story here.

Image: Cementerio de Polloe, en Donostia-San Sebastián, 2014. Courtesy of Zarateman. Public domain.

Dark Matter May Cause Cancer and Earthquakes


Leave aside the fact that there is no direct evidence for the existence of dark matter. In fact, theories that indirectly point to its existence seem rather questionable as well. That said, cosmologists are increasingly convinced that dark matter's gravitational effects can be inferred from recent observations of gravitationally lensed galaxy clusters. Some researchers postulate that this eerily murky non-substance, which doesn't interact with anything in our visible universe except, perhaps, through gravity, may be a cause of phenomena much closer to home. All very interesting.

From NYT:

Earlier this year, Dr. Sabine Hossenfelder, a theoretical physicist in Stockholm, made the jarring suggestion that dark matter might cause cancer. She was not talking about the “dark matter” of the genome (another term for junk DNA) but about the hypothetical, lightless particles that cosmologists believe pervade the universe and hold the galaxies together.

Though it has yet to be directly detected, dark matter is presumed to exist because we can see the effects of its gravity. As its invisible particles pass through our bodies, they could be mutating DNA, the theory goes, adding at an extremely low level to the overall rate of cancer.

It was unsettling to see two such seemingly different realms, cosmology and oncology, suddenly juxtaposed. But that was just the beginning. Shortly after Dr. Hossenfelder broached her idea in an online essay, Michael Rampino, a professor at New York University, added geology and paleontology to the picture.

Dark matter, he proposed in an article for the Royal Astronomical Society, is responsible for the mass extinctions that have periodically swept Earth, including the one that killed the dinosaurs.

His idea is based on speculations by other scientists that the Milky Way is sliced horizontally through its center by a thin disk of dark matter. As the sun, traveling around the galaxy, bobs up and down through this darkling plane, it generates gravitational ripples strong enough to dislodge distant comets from their orbits, sending them hurtling toward Earth.

An earlier version of this hypothesis was put forth last year by the Harvard physicists Lisa Randall and Matthew Reece. But Dr. Rampino has added another twist: During Earth’s galactic voyage, dark matter accumulates in its core. There the particles self-destruct, generating enough heat to cause deadly volcanic eruptions. Struck from above and below, the dinosaurs succumbed.

It is surprising to see something as abstract as dark matter take on so much solidity, at least in the human mind. The idea was invented in the early 1930s as a theoretical contrivance — a means of explaining observations that otherwise didn’t make sense.

Galaxies appear to be rotating so fast that they should have spun apart long ago, throwing off stars like sparks from a Fourth of July pinwheel. There just isn’t enough gravity to hold a galaxy together, unless you assume that it hides a huge amount of unseen matter — particles that neither emit nor absorb light.

Some mavericks propose alternatives, attempting to tweak the equations of gravity to account for what seems like missing mass. But for most cosmologists, the idea of unseeable matter has become so deeply ingrained that it has become almost impossible to do without it.

Said to be five times more abundant than the stuff we can see, dark matter is a crucial component of the theory behind gravitational lensing, in which large masses like galaxies can bend light beams and cause stars to appear in unexpected parts of the sky.

That was the explanation for the spectacular observation of an “Einstein Cross” reported last month. Acting like an enormous lens, a cluster of galaxies deflected the light of a supernova into four images — a cosmological mirage. The light for each reflection followed a different path, providing glimpses of four different moments of the explosion.


But not even a galactic cluster exerts enough gravity to bend light so severely unless you postulate that most of its mass consists of hypothetical dark matter. In fact, astronomers are so sure that dark matter exists that they have embraced gravitational lensing as a tool to map its extent.

Dark matter, in other words, is used to explain gravitational lensing, and gravitational lensing is taken as more evidence for dark matter.

Some skeptics have wondered if this is a modern-day version of what ancient astronomers called “saving the phenomena.” With enough elaborations, a theory can account for what we see without necessarily describing reality. The classic example is the geocentric model of the heavens that Ptolemy laid out in the Almagest, with the planets orbiting Earth along paths of complex curlicues.

Ptolemy apparently didn’t care whether his filigrees were real. What was important to him was that his model worked, predicting planetary movements with great precision.

Modern scientists are not ready to settle for such subterfuge. To show that dark matter resides in the world and not just in their equations, they are trying to detect it directly.

Though its identity remains unknown, most theorists are betting that dark matter consists of WIMPs — weakly interacting massive particles. If they really exist, it might be possible to glimpse them when they interact with ordinary matter.

Read the entire article here.

Image: Abell 1689 galaxy cluster. Courtesy of NASA, ESA, and D. Coe (NASA JPL/Caltech and STScI).

The Big Crunch


It may just be possible that prophetic doomsayers have been right all along. The end is coming… well, in a few tens of billions of years. A group of physicists propose that the cosmos will soon begin collapsing in on itself. Keep in mind that soon in cosmological terms runs into the billions of years. So, it does appear that we still have some time to crunch down our breakfast cereal a few more times before the ultimate universal apocalypse. Clearly this may not please those who seek the end of days within their lifetimes, and for rather different — scientific — reasons, cosmologists seem to be unhappy too.

From Phys:

Physicists have proposed a mechanism for “cosmological collapse” that predicts that the universe will soon stop expanding and collapse in on itself, obliterating all matter as we know it. Their calculations suggest that the collapse is “imminent”—on the order of a few tens of billions of years or so—which may not keep most people up at night, but for the physicists it’s still much too soon.

In a paper published in Physical Review Letters, physicists Nemanja Kaloper at the University of California, Davis; and Antonio Padilla at the University of Nottingham have proposed the cosmological collapse mechanism and analyzed its implications, which include an explanation of dark energy.

“The fact that we are seeing dark energy now could be taken as an indication of impending doom, and we are trying to look at the data to put some figures on the end date,” Padilla told Phys.org. “Early indications suggest the collapse will kick in in a few tens of billions of years, but we have yet to properly verify this.”

The main point of the paper is not so much when exactly the universe will end, but that the mechanism may help resolve some of the unanswered questions in physics. In particular, why is the universe expanding at an accelerating rate, and what is the dark energy causing this acceleration? These questions are related to the cosmological constant problem, which is that the predicted vacuum energy density of the universe causing the expansion is much larger than what is observed.

“I think we have opened up a brand new approach to what some have described as ‘the mother of all physics problems,’ namely the cosmological constant problem,” Padilla said. “It’s way too early to say if it will stand the test of time, but so far it has stood up to scrutiny, and it does seem to address the issue of vacuum energy contributions from the standard model, and how they gravitate.”

The collapse mechanism builds on the physicists’ previous research on vacuum energy sequestering, which they proposed to address the cosmological constant problem. The dynamics of vacuum energy sequestering predict that the universe will collapse, but don’t provide a specific mechanism for how collapse will occur.

According to the new mechanism, the universe originated under a set of specific initial conditions so that it naturally evolved to its present state of acceleration and will continue on a path toward collapse. In this scenario, once the collapse trigger begins to dominate, it does so in a period of “slow roll” that brings about the accelerated expansion we see today. Eventually the universe will stop expanding and reach a turnaround point at which it begins to shrink, culminating in a “big crunch.”

Read the entire article here.

Image: Image of the Cosmic Microwave Background (CMB) from nine years of WMAP data. The image reveals 13.77 billion year old temperature fluctuations (shown as color differences) that correspond to the seeds that grew to become the galaxies. Courtesy of NASA.

The Religion of String Theory

Read anything about string theory and you'll soon learn that it resembles a religion more than a scientific principle. String theory researchers and their supporters will be the first to tell you that this elegant, but extremely complex, integration of gravity and quantum field theory cannot be confirmed through experiment. Nor can it be disproved through experiment.

So, while the promise of string theory — to bring us one unified understanding of the entire universe — is deliciously tantalizing, it nonetheless forces us to take a giant leap of faith. I suppose that would put string theory originators, physicists Michael Green and John Schwarz, somewhere in the same pantheon as Moses and Joseph Smith.

From Quanta:

Thirty years have passed since a pair of physicists, working together on a stormy summer night in Aspen, Colo., realized that string theory might have what it takes to be the “theory of everything.”

“We must be getting pretty close,” Michael Green recalls telling John Schwarz as the thunder raged and they hammered away at a proof of the theory’s internal consistency, “because the gods are trying to prevent us from completing this calculation.”

Their mathematics that night suggested that all phenomena in nature, including the seemingly irreconcilable forces of gravity and quantum mechanics, could arise from the harmonics of tiny, vibrating loops of energy, or “strings.” The work touched off a string theory revolution and spawned a generation of specialists who believed they were banging down the door of the ultimate theory of nature. But today, there’s still no answer. Because the strings that are said to quiver at the core of elementary particles are too small to detect — probably ever — the theory cannot be experimentally confirmed. Nor can it be disproven: Almost any observed feature of the universe jibes with the strings’ endless repertoire of tunes.

The publication of Green and Schwarz’s paper “was 30 years ago this month,” the string theorist and popular-science author Brian Greene wrote in Smithsonian Magazine in January, “making the moment ripe for taking stock: Is string theory revealing reality’s deep laws? Or, as some detractors have claimed, is it a mathematical mirage that has sidetracked a generation of physicists?” Greene had no answer, expressing doubt that string theory will “confront data” in his lifetime.

Recently, however, some string theorists have started developing a new tactic that gives them hope of someday answering these questions. Lacking traditional tests, they are seeking validation of string theory by a different route. Using a strange mathematical dictionary that translates between laws of gravity and those of quantum mechanics, the researchers have identified properties called “consistency conditions” that they say any theory combining quantum mechanics and gravity must meet. And in certain highly simplified imaginary worlds, they claim to have found evidence that the only consistent theories of “quantum gravity” involve strings.

According to many researchers, the work provides weak but concrete support for the decades-old suspicion that string theory may be the only mathematically consistent theory of quantum gravity capable of reproducing gravity’s known form on the scale of galaxies, stars and planets, as captured by Albert Einstein’s theory of general relativity. And if string theory is the only possible approach, then its proponents say it must be true — with or without physical evidence. String theory, by this account, is “the only game in town.”

“Proving that a big class of stringlike models are the only things consistent with general relativity and quantum mechanics would be a way, to some extent, of confirming it,” said Tom Hartman, a theoretical physicist at Cornell University who has been following the recent work.

If they are successful, the researchers acknowledge that such a proof will be seen as controversial evidence that string theory is correct. “‘Correct’ is a loaded word,” said Mukund Rangamani, a professor at Durham University in the United Kingdom and the co-author of a paper posted recently to the physics preprint site arXiv.org that finds evidence of “string universality” in a class of imaginary universes.

So far, the theorists have shown that string theory is the only “game” meeting certain conditions in “towns” wildly different from our universe, but they are optimistic that their techniques will generalize to somewhat more realistic physical worlds. “We will continue to accumulate evidence for the ‘string universality’ conjecture in different settings and for different classes of theories,” said Alex Maloney, a professor of physics at McGill University in Montreal and co-author of another recent paper touting evidence for the conjecture, “and eventually a larger picture will become clear.”

Meanwhile, outside experts caution against jumping to conclusions based on the findings to date. “It’s clear that these papers are an interesting attempt,” said Matt Strassler, a visiting professor at Harvard University who has worked on string theory and particle physics. “But these aren’t really proofs; these are arguments. They are calculations, but there are weasel words in certain places.”

Proponents of string theory’s rival, an underdog approach called “loop quantum gravity,” believe that the work has little to teach us about the real world. “They should try to solve the problems of their theory, which are many,” said Carlo Rovelli, a loop quantum gravity researcher at the Center for Theoretical Physics in Marseille, France, “instead of trying to score points by preaching around that they are ‘the only game in town.’”

Mystery Theory

Over the past century, physicists have traced three of the four forces of nature — strong, weak and electromagnetic — to their origins in the form of elementary particles. Only gravity remains at large. Albert Einstein, in his theory of general relativity, cast gravity as smooth curves in space and time: An apple falls toward the Earth because the space-time fabric warps under the planet’s weight. This picture perfectly captures gravity on macroscopic scales.

But in small enough increments, space and time lose meaning, and the laws of quantum mechanics — in which particles have no definite properties like “location,” only probabilities — take over. Physicists use a mathematical framework called quantum field theory to describe the probabilistic interactions between particles. A quantum theory of gravity would describe gravity’s origin in particles called “gravitons” and reveal how their behavior scales up to produce the space-time curves of general relativity. But unifying the laws of nature in this way has proven immensely difficult.

String theory first arose in the 1960s as a possible explanation for why elementary particles called quarks never exist in isolation but instead bind together to form protons, neutrons and other composite “hadrons.” The theory held that quarks are unable to pull apart because they form the ends of strings rather than being free-floating points. But the argument had a flaw: While some hadrons do consist of pairs of quarks and anti-quarks and plausibly resemble strings, protons and neutrons contain three quarks apiece, invoking the ugly and uncertain picture of a string with three ends. Soon, a different theory of quarks emerged. But ideas die hard, and some researchers, including Green, then at the University of London, and Schwarz, at the California Institute of Technology, continued to develop string theory.

Problems quickly stacked up. For the strings’ vibrations to make physical sense, the theory calls for many more spatial dimensions than the length, width and depth of everyday experience, forcing string theorists to postulate that six extra dimensions must be knotted up at every point in the fabric of reality, like the pile of a carpet. And because each of the innumerable ways of knotting up the extra dimensions corresponds to a different macroscopic pattern, almost any discovery made about our universe can seem compatible with string theory, crippling its predictive power. Moreover, as things stood in 1984, all known versions of string theory included a nonsensical mathematical term known as an “anomaly.”

On the plus side, researchers realized that a certain vibration mode of the string fit the profile of a graviton, the coveted quantum purveyor of gravity. And on that stormy night in Aspen in 1984, Green and Schwarz discovered that the graviton contributed a term to the equations that, for a particular version of string theory, exactly canceled out the problematic anomaly. The finding raised the possibility that this version was the one, true, mathematically consistent theory of quantum gravity, and it helped usher in a surge of activity known as the “first superstring revolution.”

But only a year passed before another version of string theory was also certified anomaly-free. In all, five consistent string theories were discovered by the end of the decade. Some conceived of particles as closed strings, others described them as open strings with dangling ends, and still others generalized the concept of a string to higher-dimensional objects known as “D-branes,” which resemble quivering membranes in any number of dimensions. Five string theories seemed an embarrassment of riches.

Read the entire story here.

Image: Image of (1 + 1)-dimensional anti-de Sitter space embedded in flat (1 + 2)-dimensional space. The embedded surface contains closed timelike curves circling the x1 axis. Courtesy of Wikipedia.

Universal Amniotic Fluid

Another day, another physics paper describing the origin of the universe. This is no wonder. Since the development of general relativity and quantum mechanics — two mutually incompatible descriptions of our reality — theoreticians have been scurrying to come up with a grand theory, a rapprochement of sorts. This one describes the universe as a quantum fluid, perhaps made up of hypothesized gravitons.

From Nature Asia:

The prevailing model of cosmology, based on Einstein’s theory of general relativity, puts the universe at around 13.8 billion years old and suggests it originated from a “singularity” – an infinitely small and dense point – at the Big Bang.

To understand what happened inside that tiny singularity, physicists must marry general relativity with quantum mechanics – the laws that govern small objects. Applying both of these disciplines has challenged physicists for decades. “The Big Bang singularity is the most serious problem of general relativity, because the laws of physics appear to break down there,” says Ahmed Farag Ali, a physicist at Zewail City of Science and Technology, Egypt.

In an effort to bring together the laws of quantum mechanics and general relativity, and to solve the singularity puzzle, Ali and Saurya Das, a physicist at the University of Lethbridge in Alberta, Canada, employed an equation that predicts the development of singularities in general relativity. That equation had been developed by Das’s former professor, Amal Kumar Raychaudhuri, when Das was an undergraduate student at Presidency University in Kolkata, India, so Das was particularly familiar with, and fascinated by, it.

 When Ali and Das made small quantum corrections to the Raychaudhuri equation, they realised it described a fluid, made up of small particles, that pervades space. Physicists have long believed that a quantum version of gravity would include a hypothetical particle, called the graviton, which generates the force of gravity. In their new model — which will appear in Physics Letters B in February — Ali and Das propose that such gravitons could form this fluid.
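
For context, the classical Raychaudhuri equation that Ali and Das corrected can be written, in one standard form for a congruence of timelike geodesics, as:

```latex
\frac{d\theta}{d\tau} = -\frac{1}{3}\theta^{2}
  - \sigma_{\mu\nu}\sigma^{\mu\nu}
  + \omega_{\mu\nu}\omega^{\mu\nu}
  - R_{\mu\nu}u^{\mu}u^{\nu}
```

Here θ is the expansion of the congruence, σ the shear, ω the vorticity, R_μν the Ricci tensor and u^μ the four-velocity. Because the θ² and shear terms enter with negative signs, ordinary matter generically focuses geodesics — which is why, classically, the equation predicts singularities, and why quantum corrections to it are a natural place to look for a way out.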

To understand the origin of the universe, they used this corrected equation to trace the behaviour of the fluid back through time. Surprisingly, they found that it did not converge into a singularity. Instead, the universe appears to have existed forever. Although it was smaller in the past, it never quite crunched down to nothing, says Das.

“Our theory serves to complement Einstein’s general relativity, which is very successful at describing physics over large distances,” says Ali. “But physicists know that to describe short distances, quantum mechanics must be accommodated, and the quantum Raychaudhuri equation is a big step towards that.”

The model could also help solve two other cosmic mysteries. In the late 1990s, astronomers discovered that the expansion of the universe is accelerating due to the presence of a mysterious dark energy, the origin of which is not known. The model has the potential to explain it since the fluid creates a minor but constant outward force that expands space. “This is a happy offshoot of our work,” says Das.

Astronomers also now know that most matter in the universe is in an invisible mysterious form called dark matter, only perceptible through its gravitational effect on visible matter such as stars. When Das and a colleague set the mass of the graviton in the model to a small level, they could make the density of their fluid match the universe’s observed density of dark matter, while also providing the right value for dark energy’s push.

Read the entire article here.

 

MondayMap: Our New Address — Laniakea

laniakea_nrao

Once upon a time we humans sat smugly at the center of the universe. Now, many of us (though not yet all) know better. Over the last several centuries we learned and accepted that the Earth spins around its nearest star, the Sun, and not the converse. We then learned that the Sun formed part of an immense galaxy, the Milky Way, itself spinning in a vast cosmological dance. More recently, we learned that the Milky Way formed part of a larger cluster of galaxies, known as the Local Group.

Now we find that our Local Group is a mere speck within an immense supercluster containing around 100,000 galaxies and spanning half a billion light years. Researchers have dubbed this galactic supercluster, rather aptly, Laniakea, Hawaiian for “immeasurable heaven”. Laniakea is your new address. And, fascinatingly, Laniakea is moving towards an even larger grouping of galaxies named the Shapley supercluster.

From the Guardian:

In what amounts to a back-to-school gift for pupils with nerdier leanings, researchers have added a fresh line to the cosmic address of humanity. No longer will a standard home address followed by “the Earth, the solar system, the Milky Way, the universe” suffice for aficionados of the extended astronomical location system.

The extra line places the Milky Way in a vast network of neighbouring galaxies or “supercluster” that forms a spectacular web of stars and planets stretching across 520m light years of our local patch of universe. Named Laniakea, meaning “immeasurable heaven” in Hawaiian, the supercluster contains 100,000 large galaxies that together have the mass of 100 million billion suns.

Our home galaxy, the Milky Way, lies on the far outskirts of Laniakea near the border with another supercluster of galaxies named Perseus-Pisces. “When you look at it in three dimensions, it looks like a sphere that’s been badly beaten up and we are over near the edge, being pulled towards the centre,” said Brent Tully, an astronomer at the University of Hawaii in Honolulu.

Astronomers have long known that just as the solar system is part of the Milky Way, so the Milky Way belongs to a cosmic structure that is much larger still. But their attempts to define the larger structure had been thwarted because it was impossible to work out where one cluster of galaxies ended and another began.

Tully’s team gathered measurements on the positions and movement of more than 8,000 galaxies and, after discounting the expansion of the universe, worked out which were being pulled towards us and which were being pulled away. This allowed the scientists to define superclusters of galaxies that all moved in the same direction.
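
The core of that "discounting the expansion" step is simple to sketch: subtract the smooth Hubble-flow velocity, H0 × distance, from each galaxy's observed radial velocity, and what remains is the "peculiar" velocity driven by gravity alone. The code below is an illustrative toy with made-up galaxies and a round Hubble constant, not the actual survey data or method details:

```python
# Sketch of the idea behind Tully's analysis (illustrative values only):
# subtract the Hubble-flow velocity H0 * d from each galaxy's observed
# radial velocity, leaving the gravity-driven "peculiar" velocity.

H0 = 70.0  # Hubble constant in km/s per Mpc (assumed round value)

# (distance in Mpc, observed radial velocity in km/s) -- hypothetical galaxies
galaxies = [(10.0, 750.0), (50.0, 3400.0), (120.0, 8300.0)]

for dist, v_obs in galaxies:
    v_hubble = H0 * dist          # velocity expected from expansion alone
    v_pec = v_obs - v_hubble      # residual "peculiar" velocity
    direction = "toward us" if v_pec < 0 else "away from us"
    print(f"d={dist:6.1f} Mpc: peculiar velocity {v_pec:+7.1f} km/s ({direction})")
```

Galaxies whose peculiar velocities all point toward the same basin of attraction are then grouped into one supercluster.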

The work published in Nature gives astronomers their first look at the vast group of galaxies to which the Milky Way belongs. A narrow arch of galaxies connects Laniakea to the neighbouring Perseus-Pisces supercluster, while two other superclusters called Shapley and Coma lie on the far side of our own.

Tully said the research will help scientists understand why the Milky Way is hurtling through space at 600km a second towards the constellation of Centaurus. Part of the reason is the gravitational pull of other galaxies in our supercluster.

“But our whole supercluster is being pulled in the direction of this other supercluster, Shapley, though it remains to be seen if that’s all that’s going on,” said Tully.

Read the entire article here or the nerdier paper here.

Image: Laniakea: Our Home Supercluster of Galaxies. The blue dot represents the location of the Milky Way. Courtesy: R. Brent Tully (U. Hawaii) et al., SDvision, DP, CEA/Saclay.

The Next (and Final) Doomsday Scenario

Personally, I love dystopian visions and apocalyptic nightmares. So, news that the famed Higgs boson may ultimately cause our demise, and incidentally the end of the entire cosmos, caught my attention.

Apparently theoreticians have calculated that the Higgs potential, of which the Higgs boson is a manifestation, has characteristics that make the universe unstable. (The Higgs was discovered in 2012 by teams at CERN’s Large Hadron Collider.) Luckily for those wishing to avoid the final catastrophe, this instability may leave the universe intact for billions of years to come; and were the Higgs suddenly to trigger the final apocalypse, it would arrive at the speed of light, with no warning.

From Popular Mechanics:

In July 2012, when scientists at CERN’s Large Hadron Collider culminated decades of work with their discovery of the Higgs boson, most physicists celebrated. Stephen Hawking did not. The famed theorist expressed his disappointment that nothing more unusual was found, calling the discovery “a pity in a way.” But did he ever say the Higgs could destroy the universe?

That’s what many reports in the media said earlier this week, quoting a preface Hawking wrote to a book called Starmus. According to The Australian, the preface reads in part: “The Higgs potential has the worrisome feature that it might become metastable at energies above 100 [billion] gigaelectronvolts (GeV). This could mean that the universe could undergo catastrophic vacuum decay, with a bubble of the true vacuum expanding at the speed of light. This could happen at any time and we wouldn’t see it coming.”

What Hawking is talking about here is not the Higgs boson but what’s called the Higgs potential, which are “totally different concepts,” says Katie Mack, a theoretical astrophysicist at Melbourne University. The Higgs field permeates the entire universe, and the Higgs boson is an excitation of that field, just like an electron is an excitation of an electric field. In this analogy, the Higgs potential is like the voltage, determining the value of the field.

Once physicists began to close in on the mass of the Higgs boson, they were able to work out the Higgs potential. That value seemed to reveal that the universe exists in what’s known as a meta-stable vacuum state, or false vacuum, a state that’s stable for now but could slip into the “true” vacuum at any time. This is the catastrophic vacuum decay in Hawking’s warning, though he is not the first to posit the idea.

Is he right?

“There are a couple of really good reasons to think that’s not the end of the story,” Mack says. There are two ways for a meta-stable state to fall off into the true vacuum—one classical way, and one quantum way. The first would occur via a huge energy boost, the 100 billion GeVs Hawking mentions. But, Mack says, the universe already experienced such high energies during the period of inflation just after the big bang. Particles in cosmic rays from space also regularly collide with these kinds of high energies, and yet the vacuum hasn’t collapsed (otherwise, we wouldn’t be here).

“Imagine that somebody hands you a piece of paper and says, ‘This piece of paper has the potential to spontaneously combust,’ and so you might be worried,” Mack says. “But then they tell you 20 years ago it was in a furnace.” If it didn’t combust in the furnace, it’s not likely to combust sitting in your hand.

Of course, there’s always the quantum world to consider, and that’s where things always get weirder. In the quantum world, where the smallest of particles interact, it’s possible for a particle on one side of a barrier to suddenly appear on the other side of the barrier without actually going through it, a phenomenon known as quantum tunneling. If our universe was in fact in a meta-stable state, it could quantum tunnel through the barrier to the vacuum on the other side with no warning, destroying everything in an instant. And while that is theoretically possible, predictions show that if it were to happen, it’s not likely for billions of billions of years. By then, the sun and Earth and you and I and Stephen Hawking will be a distant memory, so it’s probably not worth losing sleep over it.

What’s more likely, Mack says, is that there is some new physics not yet understood that makes our vacuum stable. Physicists know there are parts of the model missing; mysteries like quantum gravity and dark matter that still defy explanation. When two physicists published a paper documenting the Higgs potential conundrum in March, their conclusion was that an explanation lies beyond the Standard Model, not that the universe may collapse at any time.

Read the article here.

The Cosmological Axis of Evil

WMAP_temp-anisotropy

The cosmos seems remarkably uniform — look in any direction with the naked eye or the most powerful telescopes and you’ll see much the same as in any other direction. Yet, on a grand scale, our universe shows some peculiar fluctuations that have cosmologists scratching their heads. The temperature of the universe, as described by the cosmic microwave background (CMB), shows some interesting fluctuations in specific, vast regions. It is the distribution of these temperature variations that shows what seem to be non-random patterns. Cosmologists have dubbed the pattern the “axis of evil”.

From ars technica:

The Universe is incredibly regular. The variation of the cosmos’ temperature across the entire sky is tiny: a few millionths of a degree, no matter which direction you look. Yet the same light from the very early cosmos that reveals the Universe’s evenness also tells astronomers a great deal about the conditions that gave rise to irregularities like stars, galaxies, and (incidentally) us.

That light is the cosmic microwave background, and it provides some of the best knowledge we have about the structure, content, and history of the Universe. But it also contains a few mysteries: on very large scales, the cosmos seems to have a certain lopsidedness. That slight asymmetry is reflected in temperature fluctuations much larger than any galaxy, aligned on the sky in a pattern facetiously dubbed “the axis of evil.”

The lopsidedness is real, but cosmologists are divided over whether it reveals anything meaningful about the fundamental laws of physics. The fluctuations are sufficiently small that they could arise from random chance. We have just one observable Universe, but nobody sensible believes we can see all of it. With a sufficiently large cosmos beyond the reach of our telescopes, the rest of the Universe may balance the oddity that we can see, making it a minor, local variation.

However, if the asymmetry can’t be explained away so simply, it could indicate that some new physical mechanisms were at work in the early history of the Universe. As Amanda Yoho, a graduate student in cosmology at Case Western Reserve University, told Ars, “I think the alignments, in conjunction with all of the other large angle anomalies, must point to something we don’t know, whether that be new fundamental physics, unknown astrophysical or cosmological sources, or something else.”

Over the centuries, astronomers have provided increasing evidence that Earth, the Solar System, and the Milky Way don’t occupy a special position in the cosmos. Not only are we not at the center of existence—much less the corrupt sinkhole surrounded by the pure crystal heavens, as in early geocentric Christian theology—the Universe has no center and no edge.

In cosmology, that’s elevated to a principle. The Universe is isotropic, meaning it’s (roughly) the same in every direction. The cosmic microwave background (CMB) is the strongest evidence for the isotropic principle: the spectrum of the light reaching Earth from every direction indicates that it was emitted by matter at almost exactly the same temperature.

The Big Bang model explains why. In the early years of the Universe’s history, matter was very dense and hot, forming an opaque plasma of electrons, protons, and helium nuclei. The expansion of space-time thinned the plasma out until it cooled enough that stable atoms could form. That event, which ended roughly 380,000 years after the Big Bang, is known as recombination. The immediate side effect was to make the Universe transparent and liberate vast numbers of photons, most of which have traveled through space unmolested ever since.

We observe the relics of recombination in the form of the CMB. The temperature of the Universe today is about 2.73 degrees above absolute zero in every part of the sky. The lack of variation makes the cosmos nearly as close to a perfect thermal body as possible. However, measurements show anisotropies—tiny fluctuations in temperature, roughly 10 millionths of a degree or less. These irregularities later gave rise to areas where mass gathered. A perfectly featureless, isotropic cosmos would have no stars, galaxies, or planets full of humans.

To measure the physical size of these anisotropies, researchers turn the whole-sky map of temperature fluctuations into something called a power spectrum. That’s akin to the process of taking light from a galaxy and finding the component wavelengths (colors) that make it up. The power spectrum encompasses fluctuations over the whole sky down to very small variations in temperature. (For those with some higher mathematics knowledge, this process involves decomposing the temperature fluctuations in spherical harmonics.)
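
Concretely, the decomposition expresses the temperature map as a sum of spherical harmonics, ΔT(θ, φ) = Σ a_lm Y_lm(θ, φ), and the power spectrum C_l averages |a_lm|² over the 2l + 1 values of m at each multipole l (small l means large angular scales). The toy below uses random stand-in coefficients rather than real CMB data — a real analysis would extract the a_lm from the map itself, e.g. with a package such as healpy:

```python
import numpy as np

# Toy illustration of turning spherical-harmonic coefficients a_lm into
# a power spectrum C_l, the quantity cosmologists plot for the CMB.
# The coefficients here are random stand-ins, NOT real CMB data.

rng = np.random.default_rng(42)

def power_spectrum(alm_by_l):
    """C_l = average of |a_lm|^2 over the 2l+1 values of m."""
    return {l: np.mean(np.abs(alm) ** 2) for l, alm in alm_by_l.items()}

# For each multipole l there are 2l+1 coefficients (m = -l, ..., l)
alm_by_l = {l: rng.normal(size=2 * l + 1) for l in range(2, 6)}

for l, cl in power_spectrum(alm_by_l).items():
    print(f"l = {l}: C_l = {cl:.3f}")
```

The “axis of evil” anomaly lives at the lowest multipoles — the quadrupole (l = 2) and octupole (l = 3) — where the alignments discussed below appear.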

Smaller details in the fluctuations tell cosmologists the relative amounts of ordinary matter, dark matter, and dark energy. However, some of the largest fluctuations—covering one-fourth, one-eighth, and one-sixteenth of the sky—are bigger than any structure in the Universe, therefore representing temperature variations across the whole sky.

Those large-scale fluctuations in the power spectrum are where something weird happens. The temperature variations are both larger than expected and aligned with each other to a high degree. That’s at odds with theoretical expectations: the CMB anisotropies should be randomly oriented, not aligned. In fact, the smaller-scale variations are random, which makes the deviation at larger scales that much stranger.

Kate Land and Joao Magueijo jokingly dubbed the strange alignment “the axis of evil” in a 2005 paper (freely available on the arXiv), riffing on an infamous statement by then-US President George W. Bush. Their findings were based on data from an earlier observatory, the Wilkinson Microwave Anisotropy Probe (WMAP), but the follow-up Planck mission found similar results. There’s no question that the “axis of evil” is there; cosmologists just have to figure out what to think about it.

The task of interpretation is complicated by what’s called “cosmic variance,” or the fact that our observable Universe is just one region in a larger Universe. Random chance dictates that some pockets of the whole Universe will have larger or smaller fluctuations than others, and those fluctuations might even be aligned entirely by coincidence.

In other words, the “axis of evil” could very well be an illusion, a pattern that wouldn’t seem amiss if we could see more of the Universe. However, cosmic variance also predicts how big those local, random deviations should be—and the fluctuations in the CMB data are larger. They’re not so large as to rule out the possibility of a local variation entirely—they’re above-average height—but cosmologists can’t easily dismiss the possibility that something else is going on.

Read the entire article here.

Image courtesy of Hinshaw et al., WMAP paper.

95.5 Percent is Made Up and It’s Dark

Petrarch_by_Bargilla

Physicists and astronomers observe the very small and the very big. Although they are focused on very different areas of scientific endeavor and discovery, they tend to agree on one key observation: 95.5 percent of the cosmos is currently invisible to us. That is, only around 4.5 percent of our physical universe is made up of matter or energy that we can see or sense directly through experimental interaction. The rest, well, it’s all dark — so-called dark matter and dark energy. But nobody really knows what or how or why. Effectively, despite tremendous progress in our understanding of our world, we are still in a global “Dark Age”.
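
The arithmetic behind that 95.5 percent is worth making explicit. Taking the ~4.5 percent visible-matter figure above, together with the commonly quoted ratio of roughly five parts dark matter to one part visible matter (which the New Scientist piece below also cites), the rest of the budget falls out as follows — rough WMAP-era numbers, used purely for illustration:

```python
# Back-of-envelope cosmic energy budget (rough WMAP-era figures,
# purely illustrative).

visible = 4.5                               # percent: ordinary gas and stars
dark_matter = 5 * visible                   # ~five times the visible matter
dark_energy = 100 - visible - dark_matter   # whatever is left over

print(f"visible matter: {visible:.1f}%")                      # 4.5%
print(f"dark matter:    {dark_matter:.1f}%")                  # 22.5%
print(f"dark energy:    {dark_energy:.1f}%")                  # 73.0%
print(f"total dark:     {dark_matter + dark_energy:.1f}%")    # 95.5%
```

So the "dark" 95.5 percent is itself split very unevenly, with dark energy dominating the whole budget.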

From the New Scientist:

TO OUR eyes, stars define the universe. To cosmologists they are just a dusting of glitter, an insignificant decoration on the true face of space. Far outweighing ordinary stars and gas are two elusive entities: dark matter and dark energy. We don’t know what they are… except that they appear to be almost everything.

These twin apparitions might be enough to give us pause, and make us wonder whether all is right with the model universe we have spent the past century so carefully constructing. And they are not the only thing. Our standard cosmology also says that space was stretched into shape just a split second after the big bang by a third dark and unknown entity called the inflaton field. That might imply the existence of a multiverse of countless other universes hidden from our view, most of them unimaginably alien – just to make models of our own universe work.

Are these weighty phantoms too great a burden for our observations to bear – a wholesale return of conjecture out of a trifling investment of fact, as Mark Twain put it?

The physical foundation of our standard cosmology is Einstein’s general theory of relativity. Einstein began with a simple observation: that any object’s gravitational mass is exactly equal to its resistance to acceleration, or inertial mass. From that he deduced equations that showed how space is warped by mass and motion, and how we see that bending as gravity. Apples fall to Earth because Earth’s mass bends space-time.

In a relatively low-gravity environment such as Earth, general relativity’s effects look very like those predicted by Newton’s earlier theory, which treats gravity as a force that travels instantaneously between objects. With stronger gravitational fields, however, the predictions diverge considerably. One extra prediction of general relativity is that large accelerating masses send out tiny ripples in the weave of space-time called gravitational waves. While these waves have never yet been observed directly, a pair of dense stars called pulsars, discovered in 1974, are spiralling in towards each other just as they should if they are losing energy by emitting gravitational waves.

Gravity is the dominant force of nature on cosmic scales, so general relativity is our best tool for modelling how the universe as a whole moves and behaves. But its equations are fiendishly complicated, with a frightening array of levers to pull. If you then give them a complex input, such as the details of the real universe’s messy distribution of mass and energy, they become effectively impossible to solve. To make a working cosmological model, we make simplifying assumptions.

The main assumption, called the Copernican principle, is that we are not in a special place. The cosmos should look pretty much the same everywhere – as indeed it seems to, with stuff distributed pretty evenly when we look at large enough scales. This means there’s just one number to put into Einstein’s equations: the universal density of matter.

Einstein’s own first pared-down model universe, which he filled with an inert dust of uniform density, turned up a cosmos that contracted under its own gravity. He saw that as a problem, and circumvented it by adding a new term into the equations by which empty space itself gains a constant energy density. Its gravity turns out to be repulsive, so adding the right amount of this “cosmological constant” ensured the universe neither expanded nor contracted. When observations in the 1920s showed it was actually expanding, Einstein described this move as his greatest blunder.
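
The move described above amounts to adding a constant term to the field equations, which in modern notation read:

```latex
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

Here G_μν encodes the curvature of space-time, T_μν its matter and energy content, and Λ is the cosmological constant. For Λ > 0 the new term acts as a repulsion that can balance — or, as the accelerating expansion discussed below suggests, overwhelm — the attraction of matter.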

It was left to others to apply the equations of relativity to an expanding universe. They arrived at a model cosmos that grows from an initial point of unimaginable density, and whose expansion is gradually slowed down by matter’s gravity.

This was the birth of big bang cosmology. Back then, the main question was whether the expansion would ever come to a halt. The answer seemed to be no; there was just too little matter for gravity to rein in the fleeing galaxies. The universe would coast outwards forever.

Then the cosmic spectres began to materialise. The first emissary of darkness put a foot in the door as long ago as the 1930s, but was only fully seen in the late 1970s when astronomers found that galaxies are spinning too fast. The gravity of the visible matter would be too weak to hold these galaxies together according to general relativity, or indeed plain old Newtonian physics. Astronomers concluded that there must be a lot of invisible matter to provide extra gravitational glue.

The existence of dark matter is backed up by other lines of evidence, such as how groups of galaxies move, and the way they bend light on its way to us. It is also needed to pull things together to begin galaxy-building in the first place. Overall, there seems to be about five times as much dark matter as visible gas and stars.

Dark matter’s identity is unknown. It seems to be something beyond the standard model of particle physics, and despite our best efforts we have yet to see or create a dark matter particle on Earth (see “Trouble with physics: Smashing into a dead end”). But it changed cosmology’s standard model only slightly: its gravitational effect in general relativity is identical to that of ordinary matter, and even such an abundance of gravitating stuff is too little to halt the universe’s expansion.

The second form of darkness required a more profound change. In the 1990s, astronomers traced the expansion of the universe more precisely than ever before, using measurements of explosions called type Ia supernovae. They showed that the cosmic expansion is accelerating. It seems some repulsive force, acting throughout the universe, is now comprehensively trouncing matter’s attractive gravity.

This could be Einstein’s cosmological constant resurrected, an energy in the vacuum that generates a repulsive force, although particle physics struggles to explain why space should have the rather small implied energy density. So imaginative theorists have devised other ideas, including energy fields created by as-yet-unseen particles, and forces from beyond the visible universe or emanating from other dimensions.

Whatever it might be, dark energy seems real enough. The cosmic microwave background radiation, released when the first atoms formed just 370,000 years after the big bang, bears a faint pattern of hotter and cooler spots that reveals where the young cosmos was a little more or less dense. The typical spot sizes can be used to work out to what extent space as a whole is warped by the matter and motions within it. It appears to be almost exactly flat, meaning all these bending influences must cancel out. This, again, requires some extra, repulsive energy to balance the bending due to expansion and the gravity of matter. A similar story is told by the pattern of galaxies in space.

All of this leaves us with a precise recipe for the universe. The average density of ordinary matter in space is 0.426 yoctograms per cubic metre (a yoctogram is 10^-24 grams, so 0.426 of one is about a quarter of a proton’s mass), making up 4.5 per cent of the total energy density of the universe. Dark matter makes up 22.5 per cent, and dark energy 73 per cent (see diagram). Our model of a big-bang universe based on general relativity fits our observations very nicely – as long as we are happy to make 95.5 per cent of it up.
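This recipe can be sanity-checked with a little arithmetic. The only assumed constant below is the proton mass, about 1.67 yoctograms; everything else comes from the figures quoted above.

```python
# Back-of-envelope check of the cosmic recipe quoted above.
# Assumed constant: proton mass ~1.6726 yoctograms (1.6726e-24 g).

proton_mass_yg = 1.6726          # proton mass in yoctograms
ordinary_density_yg = 0.426      # quoted ordinary-matter density, yg per m^3

protons_per_m3 = ordinary_density_yg / proton_mass_yg
print(f"~{protons_per_m3:.2f} proton masses per cubic metre")  # ~0.25

# If ordinary matter is 4.5% of the total energy density, the dark
# sectors scale accordingly (expressed as an equivalent mass density):
total = ordinary_density_yg / 0.045
dark_matter = 0.225 * total
dark_energy = 0.73 * total
print(f"dark matter ~{dark_matter:.2f} yg/m^3, dark energy ~{dark_energy:.2f} yg/m^3")
```

In other words, ordinary matter amounts to roughly a quarter of a proton’s mass per cubic metre of space, with the dark components carrying the remaining 95.5 per cent of the energy budget.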

Arguably, we must invent even more than that. To explain why the universe looks so extraordinarily uniform in all directions, today’s consensus cosmology contains a third exotic element. When the universe was just 10^-36 seconds old, an overwhelming force took over. Called the inflaton field, it was repulsive like dark energy, but far more powerful, causing the universe to expand explosively by a factor of more than 10^25, flattening space and smoothing out any gross irregularities.

When this period of inflation ended, the inflaton field transformed into matter and radiation. Quantum fluctuations in the field became slight variations in density, which eventually became the spots in the cosmic microwave background, and today’s galaxies. Again, this fantastic story seems to fit the observational facts. And again it comes with conceptual baggage. Inflation is no trouble for general relativity – mathematically it just requires an add-on term identical to the cosmological constant. But at one time this inflaton field must have made up 100 per cent of the contents of the universe, and its origin poses as much of a puzzle as either dark matter or dark energy. What’s more, once inflation has started it proves tricky to stop: it goes on to create a further legion of universes divorced from our own. For some cosmologists, the apparent prediction of this multiverse is an urgent reason to revisit the underlying assumptions of our standard cosmology (see “Trouble with physics: Time to rethink cosmic inflation?”).

The model faces a few observational niggles, too. The big bang makes much more lithium-7 in theory than the universe contains in practice. The model does not explain the possible alignment in some features in the cosmic background radiation, or why galaxies along certain lines of sight seem biased to spin left-handedly. A newly discovered supergalactic structure 4 billion light years long calls into question the assumption that the universe is smooth on large scales.

Read the entire story here.

Image: Petrarch, who first conceived the idea of a European “Dark Age”, by Andrea di Bartolo di Bargilla, c1450. Courtesy of Galleria degli Uffizi, Florence, Italy / Wikipedia.

You May Be Living Inside a Simulation

real-and-simulated-cosmos

Some theorists posit that we are living inside a simulation, that the entire universe is one giant, evolving model inside a grander reality. This is a fascinating idea, but may never be experimentally verifiable. So just relax — you and I may not be real, but we’ll never know.

In a similar vein, however, researchers have themselves developed the broadest and most detailed simulation of the universe to date. There are no “living” things yet inside this computer model, but it’s probably only a matter of time before our increasingly sophisticated simulations start wondering if they are simulations as well.

From the BBC:

An international team of researchers has created the most complete visual simulation of how the Universe evolved.

The computer model shows how the first galaxies formed around clumps of a mysterious, invisible substance called dark matter.

It is the first time that the Universe has been modelled so extensively and to such great resolution.

The research has been published in the journal Nature.

The simulation will provide a test bed for emerging theories of what the Universe is made of and what makes it tick.

One of the world’s leading authorities on galaxy formation, Professor Richard Ellis of the California Institute of Technology (Caltech) in Pasadena, described the simulation as “fabulous”.

“Now we can get to grips with how stars and galaxies form and relate it to dark matter,” he told BBC News.

The computer model draws on the theories of Professor Carlos Frenk of Durham University, UK, who said he was “pleased” that a computer model should come up with such a good result assuming that it began with dark matter.

“You can make stars and galaxies that look like the real thing. But it is the dark matter that is calling the shots”.

Cosmologists have been creating computer models of how the Universe evolved for more than 20 years. It involves entering details of what the Universe was like shortly after the Big Bang, developing a computer programme which encapsulates the main theories of cosmology and then letting the programme run.

The simulated Universe that comes out at the other end is usually a very rough approximation of what astronomers really see.

The latest simulation, however, comes up with a Universe that is strikingly like the real one.

Immense computing power has been used to recreate this virtual Universe. It would take a normal laptop nearly 2,000 years to run the simulation. However, using state-of-the-art supercomputers and clever software called Arepo, researchers were able to crunch the numbers in three months.
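The quoted runtimes imply a rough effective speed-up, which a line of arithmetic makes concrete (taking “nearly 2,000 years” and “three months” at face value):

```python
# Rough speed-up implied by the quoted runtimes for the Illustris-style
# simulation: ~2,000 laptop-years vs 3 months on supercomputers.
laptop_years = 2000
supercomputer_months = 3

speedup = laptop_years * 12 / supercomputer_months
print(f"effective speed-up: ~{speedup:,.0f}x")  # ~8,000x
```

That factor of several thousand comes from both raw hardware and the Arepo code’s moving-mesh efficiency, though the article does not break down how much each contributes.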

Cosmic tree

In the beginning, it shows strands of mysterious material which cosmologists call “dark matter” sprawling across the emptiness of space like branches of a cosmic tree. As millions of years pass by, the dark matter clumps and concentrates to form seeds for the first galaxies.

Then the non-dark matter emerges: the stuff that will in time go on to form stars, planets and life.

But early on there is a series of cataclysmic explosions as matter gets sucked into black holes and then spat out: a chaotic period which regulated the formation of stars and galaxies. Eventually, the simulation settles into a Universe that is similar to the one we see around us.

According to Dr Mark Vogelsberger of Massachusetts Institute of Technology (MIT), who led the research, the simulations back many of the current theories of cosmology.

“Many of the simulated galaxies agree very well with the galaxies in the real Universe. It tells us that the basic understanding of how the Universe works must be correct and complete,” he said.

In particular, it backs the theory that dark matter is the scaffold on which the visible Universe is hanging.

“If you don’t include dark matter (in the simulation) it will not look like the real Universe,” Dr Vogelsberger told BBC News.

Read the entire article here.

Image: On the left: the real universe imaged via the Hubble telescope. On the right: a view of what emerges from the computer simulation. Courtesy of BBC / Illustris Collaboration.

The Inflaton and the Multiverse

multiverse-illustration

Last week’s announcement that cosmologists had found signals of primordial gravitational waves in the cosmic microwave background, the afterglow of the Big Bang, made many headlines, even on cable news. If verified by separate experiments this will be ground-breaking news indeed, much like the discovery of the Higgs boson in 2012. Should the result stand, it may well pave the way for new physics and lend greater support to the multiverse theory. So, in addition to the notion that we may not be alone in the vast cosmos, we may now have to consider that our universe itself is not alone.

From the New Scientist:

Wave hello to the multiverse? Ripples in the very fabric of the cosmos, unveiled this week, are allowing us to peer further back in time than anyone thought possible, showing us what was happening in the first slivers of a second after the big bang.

The discovery of these primordial waves could solidify the idea that our young universe went through a rapid growth spurt called inflation. And that theory is linked to the idea that the universe is constantly giving birth to smaller “pocket” universes within an ever-expanding multiverse.

The waves in question are called gravitational waves, and they appear in Einstein’s highly successful theory of general relativity (see “A surfer’s guide to gravitational waves”). On 17 March, scientists working with the BICEP2 telescope in Antarctica announced the first indirect detection of primordial gravitational waves. This version of the ripples was predicted to be visible in maps of the cosmic microwave background (CMB), the earliest light emitted in the universe, roughly 380,000 years after the big bang.

Repulsive gravity

The BICEP2 team had spent three years analysing CMB data, looking for a distinctive curling pattern called B-mode polarisation. These swirls indicate that the light of the CMB has been twisted, or polarised, into specific curling alignments. In two papers published online on the BICEP project website, the team said they have high confidence the B-mode pattern is there, and that they can rule out alternative explanations such as dust in our own galaxy, distortions caused by the gravity of other galaxies and errors introduced by the telescope itself. That suggests the swirls could have been left only by the very first gravitational waves being stretched out by inflation.

“If confirmed, this result would constitute the most important breakthrough in cosmology over the past 15 years. It will open a new window into the beginning of our universe and have fundamental implications for extensions of the standard model of physics,” says Avi Loeb at Harvard University. “If it is real, the signal will likely lead to a Nobel prize.”

And for some theorists, simply proving that inflation happened at all would be a sign of the multiverse.

“If inflation is there, the multiverse is there,” said Andrei Linde of Stanford University in California, who is not on the BICEP2 team and is one of the originators of inflationary theory. “Each observation that brings better credence to inflation brings us closer to establishing that the multiverse is real.” (Watch video of Linde being surprised with the news that primordial gravitational waves have been detected.)

The simplest models of inflation, which the BICEP2 results seem to support, require a particle called an inflaton to push space-time apart at high speed.

“Inflation depends on a kind of material that turns gravity on its head and causes it to be repulsive,” says Alan Guth at the Massachusetts Institute of Technology, another author of inflationary theory. Theory says the inflaton particle decays over time like a radioactive element, so for inflation to work, these hypothetical particles would need to last longer than the period of inflation itself. Afterwards, inflatons would continue to drive inflation in whatever pockets of the universe they inhabit, repeatedly blowing new universes into existence that then rapidly inflate before settling down. This “eternal inflation” produces infinite pocket universes to create a multiverse.

Quantum harmony

For now, physicists don’t know how they might observe the multiverse and confirm that it exists. “But when the idea of inflation was proposed 30 years ago, it was a figment of theoretical imagination,” says Marc Kamionkowski at Johns Hopkins University in Baltimore, Maryland. “What I’m hoping is that with these results, other theorists out there will start to think deeply about the multiverse, so that 20 years from now we can have a press conference saying we’ve found evidence of it.”

In the meantime, studying the properties of the swirls in the CMB might reveal details of what the cosmos was like just after its birth. The power and frequency of the waves seen by BICEP2 show that they were rippling through a particle soup with an energy of about 10^16 gigaelectronvolts, or 10 trillion times the peak energy expected at the Large Hadron Collider. At such high energies, physicists expect that three of the four fundamental forces in physics – the strong, weak and electromagnetic forces – would be merged into one.

The detection is also the first whiff of quantum gravity, one of the thorniest puzzles in modern physics. Right now, theories of quantum mechanics can explain the behaviour of elementary particles and those three fundamental forces, but the equations fall apart when the fourth force, gravity, is added to the mix. Seeing gravitational waves in the CMB means that gravity is probably linked to a particle called the graviton, which in turn is governed by quantum mechanics. Finding these primordial waves won’t tell us how quantum mechanics and gravity are unified, says Kamionkowski. “But it does tell us that gravity obeys quantum laws.”

“For the first time, we’re directly testing an aspect of quantum gravity,” says Frank Wilczek at MIT. “We’re seeing gravitons imprinted on the sky.”

Waiting for Planck

Given the huge potential of these results, scientists will be eagerly anticipating polarisation maps from projects such as the POLARBEAR experiment in Chile or the South Pole Telescope. The next full-sky CMB maps from the Planck space telescope are also expected to include polarisation data. Seeing a similar signal from one or more of these experiments would shore up the BICEP2 findings, make a firm case for inflation, and boost hints of the multiverse and quantum gravity.

One possible wrinkle is that previous temperature maps of the CMB suggested that the signal from primordial gravitational waves should be much weaker than what BICEP2 is seeing. Those results set theorists bickering about whether inflation really happened and whether it could create a multiverse. Several physicists suggested that we scrap the idea entirely for a new model of cosmic birth.

Taken alone, the BICEP2 results give a strong-enough signal to clinch inflation and put the multiverse back in the game. But the tension with previous maps is worrying, says Paul Steinhardt at Princeton University, who helped to develop the original theory of inflation but has since grown sceptical of it.

“If you look at the best-fit models with the new data added, they’re bizarre,” Steinhardt says. “If it remains like that, it requires adding extra fields, extra parameters, and you get really rather ugly-looking models.”

Forthcoming data from Planck should help resolve the issue, and we may not have long to wait. Olivier Doré at the California Institute of Technology is a member of the Planck collaboration. He says that the BICEP2 results are strong and that his group should soon be adding their data to the inflation debate: “Planck in particular will have something to say about it as soon as we publish our polarisation result in October 2014.”

Read the entire article here.

Image: Multiverse illustration. Courtesy of National Geographic.

Gravity Makes Some Waves

[tube]ZlfIVEy_YOA[/tube]

Gravity, the movie, made some “waves” at the recent Academy Awards ceremony in Hollywood. But the real star here is gravity itself, which seems to hold all macroscopic things in the cosmos together. And the waves in this case are real gravitational waves. A long-running experiment based at the South Pole has discerned a signal in the Cosmic Microwave Background that points to the existence of gravitational waves. This is a discovery of great significance, if upheld, and it confirms the inflationary theory of our universe’s exponential expansion just after the Big Bang. The theorists who first proposed this remarkable hypothesis, Alan Guth (1979) and Andrei Linde (1981), are probably popping some champagne right now.

From the New Statesman:

The announcement yesterday that scientists working on the BICEP2 experiment in Antarctica had detected evidence of “inflation” may not appear incredible, but it is. It appears to confirm longstanding hypotheses about the Big Bang and the earliest moments of our universe, and could open a new path to resolving some of physics’ most difficult mysteries.

Here’s the explainer. BICEP2, near the South Pole (where the sky is clearest of pollution), was scanning the visible universe for cosmic background radiation – that is, the fuzzy warmth left over from the Big Bang. It’s the oldest light in the universe, and as such our maps of it are our oldest glimpses of the young universe. Here’s a map created with data collected by the ESA’s Planck Surveyor probe last year:

ESA-Planck-Surveyor-image

What should be clear from this is that the universe is remarkably flat and regular – that is, there aren’t massive clumps of radiation in some areas and gaps in others. This doesn’t quite make intuitive sense.

If the Big Bang really was a chaotic event, with energy and matter being created and destroyed within tiny fractions of nanoseconds, then we would expect the net result to be a universe that’s similarly chaotic in its structure. Something happened to smooth everything out, and that something is inflation.

Inflation assumes that something must have happened to the rate of expansion of the universe, somewhere between 10^-35 and 10^-32 seconds after the Big Bang, to make it massively increase. It would mean that the size of the “lumps” would outpace the rate at which they appear in the cosmos, smoothing them out.

For an analogy, imagine if the Moon was suddenly stretched out to the size of the Sun. You’d see – just before it collapsed in on itself – that its rifts and craters had become, relative to its new size, barely perceptible. Just like a sheet being pulled tightly on a bed, a chaotic structure becomes more uniform.
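The Moon-to-Sun analogy can be put in numbers. Taking standard mean radii (roughly 1,737 km for the Moon and 696,000 km for the Sun, both well-established figures), the stretch factor works out to about 400:

```python
# The Moon-to-Sun analogy in numbers.
# Assumed radii: Moon ~1,737 km, Sun ~696,000 km (standard mean values).
moon_radius_km = 1_737
sun_radius_km = 696_000

stretch = sun_radius_km / moon_radius_km
print(f"linear stretch: ~{stretch:.0f}x")  # ~401x

# A 100 km crater would then look as small, relative to the whole disc,
# as a feature of this size does on the Moon today:
print(f"a 100 km crater shrinks, relatively, to {100 / stretch:.2f} km")
```

Inflation’s stretch factor of more than 10^25 dwarfs this factor of a few hundred, which is why even large primordial irregularities end up imperceptibly smooth.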

Inflation, first theorised by Alan Guth in 1979 and refined by Andrei Linde in 1981, became the best hypothesis to explain what we were observing in the universe. It also seemed to offer a way to better understand how dark energy drove the expansion of the Big Bang, and possibly even lead the way towards unifying quantum mechanics with general relativity. That is, if it was correct. And there have been plenty of theories which tied up some loose ends only to come apart with further observation.

The key evidence needed to verify inflation would be in the form of gravitational waves – that is, ripples in spacetime. Such waves were a part of Einstein’s theory of general relativity, and in the 1990s scientists found indirect evidence of them for the first time, but until now there’s never been any evidence of them from inside the cosmic background radiation.

BICEP2, though, has found that evidence, and with it scientists now have a crucial piece of fact that can falsify other theories about the early universe and potentially open up entirely new areas of investigation. This is why it’s being compared with the discovery of the Higgs Boson last year: just as that particle was fundamental to our understanding of particle physics, so too is inflation to our understanding of the wider universe.

Read the entire article here.

Video: Physicist Chao-Lin Kuo delivers news of results from his gravitational wave experiment; Professor Andrei Linde reacts to the discovery, March 17, 2014. Courtesy of Stanford University.

NASA’s 30-Year Roadmap

NASA-logo

While NASA vacillates over planned manned missions back to the Moon or to the Red Planet, the agency continues to think ahead. Despite perennial budget constraints and severe cuts, NASA still has some fascinating plans for unmanned exploration of our solar system and beyond, out to the very horizon of the visible universe.

In its latest 30 year roadmap, NASA maps out its long-term goals, which include examining the atmospheres of exoplanets, determining the structure of neutron stars and tracing the history of galactic formation.

Download the NASA roadmap directly from NASA here.

From Technology Review:

The past 30 years has seen a revolution in astronomy and our understanding of the Universe. That’s thanks in large part to a relatively small number of orbiting observatories that have changed the way we view our cosmos.

These observatories have contributed observations from every part of the electromagnetic spectrum, from NASA’s Compton Gamma Ray Observatory at the very high energy end to HALCA, a Japanese 8-metre radio telescope at the low energy end.  Then there is the Hubble Space Telescope in the visible part of the spectrum, arguably the greatest telescope in history.

It’s fair to say that these observatories have had a profound effect not just on science, but on the history of humankind.

So an interesting question is: what next?  Today, we find out, at least as far as NASA is concerned, with the publication of the organisation’s roadmap for astrophysics over the next 30 years. The future space missions identified in this document will have a profound influence on the future of astronomy but also on the way imaging technology develops in general.

So what has NASA got up its sleeve? To start off with, it says its goal in astrophysics is to answer three questions: Are we alone? How did we get here? And how does our universe work?

So let’s start with the first question. Perhaps the most important discovery in astronomy in recent years is that the Milky Way is littered with planets, many of which must have conditions ripe for life. So it’s no surprise that NASA aims first to understand the range of planets that exist and the types of planetary systems they form.

The James Webb Space Telescope, Hubble’s successor due for launch in 2018, will study the atmospheres of exoplanets, along with the Large UV Optical IR (LUVOIR) Surveyor due for launch in the 2020s. Together, these telescopes may produce results just as spectacular as Hubble’s.

To complement the Kepler mission, which has found numerous warm planets orbiting all kinds of stars, NASA is also planning the WFIRST-AFTA mission which will look for cold, free-floating planets using gravitational lensing. That’s currently scheduled for launch in the mid 2020s.

Beyond that, NASA hopes to build an ExoEarth Mapper mission that combines the observations from several large optical space telescopes to produce the first resolved images of other Earths. “For the first time, we will identify continents and oceans—and perhaps the signatures of life—on distant worlds,” says the report.

To tackle the second question—how did we get here?—NASA hopes to trace the origins of the first stars, star clusters and galaxies, again using JWST, LUVOIR and WFIRST-AFTA. “These missions will also directly trace the history of galaxies and intergalactic gas through cosmic time, peering nearly 14 billion years into the past,” it says.

And to understand how the universe works, NASA hopes to observe the most extreme events in the universe, by peering inside neutron stars, observing the collisions of black holes and even watching the first nanoseconds of time. Part of this will involve an entirely new way to observe the universe using gravitational waves (as long as today’s Earth-based gravitational wave detectors finally spot something of interest).

The technology challenges in all this will be immense. NASA needs everything from bigger, lighter optics and extremely high contrast imaging devices to smart materials and micro-thrusters with unprecedented positioning accuracy.

One thing NASA’s roadmap doesn’t mention though is money and management—the two thorniest issues in the space business. The likelihood is that NASA will not have to sweat too hard for the funds it needs to carry out these missions. Much more likely is that any sleep lost will be over the type of poor management and oversight that has brought many a multibillion dollar mission to its knees.

Read the entire article here.

Image: NASA logo. Courtesy of NASA / Wikipedia.

God Is a Thermodynamicist

Physicists and cosmologists are constantly postulating and testing new ideas to explain the universe and everything within it. Over the last hundred years or so, two such ideas have grown to explain much about our cosmos, and do so very successfully — quantum mechanics, which describes the very small, and relativity, which describes the very large. However, these two views do not reconcile, leaving theoreticians and researchers looking for a more fundamental theory of everything. One possible idea banishes the notions of time and gravity — treating them both as emergent properties of a deeper reality.

From New Scientist:

As revolutions go, its origins were haphazard. It was, according to the ringleader Max Planck, an “act of desperation”. In 1900, he proposed the idea that energy comes in discrete chunks, or quanta, simply because the smooth delineations of classical physics could not explain the spectrum of energy re-radiated by an absorbing body.

Yet rarely was a revolution so absolute. Within a decade or so, the cast-iron laws that had underpinned physics since Newton’s day were swept away. Classical certainty ceded its stewardship of reality to the probabilistic rule of quantum mechanics, even as the parallel revolution of Einstein’s relativity displaced our cherished, absolute notions of space and time. This was complete regime change.

Except for one thing. A single relict of the old order remained, one that neither Planck nor Einstein nor any of their contemporaries had the will or means to remove. The British astrophysicist Arthur Eddington summed up the situation in 1928. “If your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation,” he wrote.

In this essay, I will explore the fascinating question of why, since their origins in the early 19th century, the laws of thermodynamics have proved so formidably robust. The journey traces the deep connections that were discovered in the 20th century between thermodynamics and information theory – connections that allow us to trace intimate links between thermodynamics and not only quantum theory but also, more speculatively, relativity. Ultimately, I will argue, those links show us how thermodynamics in the 21st century can guide us towards a theory that will supersede them both.

In its origins, thermodynamics is a theory about heat: how it flows and what it can be made to do (see diagram). The French engineer Sadi Carnot formulated the second law in 1824 to characterise the mundane fact that the steam engines then powering the industrial revolution could never be perfectly efficient. Some of the heat you pumped into them always flowed into the cooler environment, rather than staying in the engine to do useful work. That is an expression of a more general rule: unless you do something to stop it, heat will naturally flow from hotter places to cooler places to even up any temperature differences it finds. The same principle explains why keeping the refrigerator in your kitchen cold means pumping energy into it; only that will keep warmth from the surroundings at bay.
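Carnot’s insight has a precise quantitative form: an ideal engine running between a hot reservoir at temperature T_hot and a cold one at T_cold can convert at most a fraction 1 − T_cold/T_hot of its input heat into work. A small sketch (the temperatures below are illustrative, not from the article):

```python
# Carnot's limit: even a perfect, frictionless heat engine cannot
# convert all of its input heat into work. Temperatures in kelvin.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of input heat convertible to useful work."""
    return 1.0 - t_cold_k / t_hot_k

# An illustrative steam boiler at ~450 K exhausting to ~300 K surroundings:
eta = carnot_efficiency(450.0, 300.0)
print(f"ideal efficiency: {eta:.0%}")  # 33%
```

The remaining two-thirds of the heat must flow into the cooler environment, which is exactly the waste Carnot identified in the steam engines of his day.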

A few decades after Carnot, the German physicist Rudolf Clausius explained such phenomena in terms of a quantity characterising disorder that he called entropy. In this picture, the universe works on the back of processes that increase entropy – for example dissipating heat from places where it is concentrated, and therefore more ordered, to cooler areas, where it is not.

That predicts a grim fate for the universe itself. Once all heat is maximally dissipated, no useful process can happen in it any more: it dies a “heat death”. A perplexing question is raised at the other end of cosmic history, too. If nature always favours states of high entropy, how and why did the universe start in a state that seems to have been of comparatively low entropy? At present we have no answer, and later I will mention an intriguing alternative view.

Perhaps because of such undesirable consequences, the legitimacy of the second law was for a long time questioned. The charge was formulated with the most striking clarity by the British physicist James Clerk Maxwell in 1867. He was satisfied that inanimate matter presented no difficulty for the second law. In an isolated system, heat always passes from the hotter to the cooler, and a neat clump of dye molecules readily dissolves in water and disperses randomly, never the other way round. Disorder as embodied by entropy does always increase.

Maxwell’s problem was with life. Living things have “intentionality”: they deliberately do things to other things to make life easier for themselves. Conceivably, they might try to reduce the entropy of their surroundings and thereby violate the second law.

Information is power

Such a possibility is highly disturbing to physicists. Either something is a universal law or it is merely a cover for something deeper. Yet it was only in the late 1970s that Maxwell’s entropy-fiddling “demon” was laid to rest. Its slayer was the US physicist Charles Bennett, who built on work by his colleague at IBM, Rolf Landauer, using the theory of information developed a few decades earlier by Claude Shannon. An intelligent being can certainly rearrange things to lower the entropy of its environment. But to do this, it must first fill up its memory, gaining information as to how things are arranged in the first place.

This acquired information must be encoded somewhere, presumably in the demon’s memory. When this memory is finally full, or the being dies or otherwise expires, it must be reset. Dumping all this stored, ordered information back into the environment increases entropy – and this entropy increase, Bennett showed, will ultimately always be at least as large as the entropy reduction the demon originally achieved. Thus the status of the second law was assured, albeit anchored in a mantra of Landauer’s that would have been unintelligible to the 19th-century progenitors of thermodynamics: that “information is physical”.
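Landauer's mantra has a quantitative counterpart, now known as the Landauer bound: erasing information carries a minimum thermodynamic cost. In modern notation,

```latex
E_{\text{erase}} \;\ge\; k T \ln 2 \quad \text{per bit erased},
```

where k is Boltzmann's constant and T is the temperature of the environment. At room temperature this works out to roughly 3 × 10⁻²¹ joules per bit, which is why the demon's memory-clearing always costs at least as much entropy as its sorting saved.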

But how does this explain why thermodynamics survived the quantum revolution? Classical objects behave very differently to quantum ones, so the same is presumably true of classical and quantum information. After all, quantum computers are notoriously more powerful than classical ones (or would be if realised on a large scale).

The reason is subtle, and it lies in a connection between entropy and probability contained in perhaps the most profound and beautiful formula in all of science. Engraved on the tomb of the Austrian physicist Ludwig Boltzmann in Vienna’s central cemetery, it reads simply S = k log W. Here S is entropy – the macroscopic, measurable entropy of a gas, for example – while k is a constant of nature that today bears Boltzmann’s name. Log W is the mathematical logarithm of a microscopic, probabilistic quantity W – in a gas, this would be the number of ways the positions and velocities of its many individual atoms can be arranged.

On a philosophical level, Boltzmann’s formula embodies the spirit of reductionism: the idea that we can, at least in principle, reduce our outward knowledge of a system’s activities to basic, microscopic physical laws. On a practical, physical level, it tells us that all we need to understand disorder and its increase is probabilities. Tot up the number of configurations the atoms of a system can be in and work out their probabilities, and what emerges is nothing other than the entropy that determines its thermodynamical behaviour. The equation asks no further questions about the nature of the underlying laws; we need not care if the dynamical processes that create the probabilities are classical or quantum in origin.
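The counting at the heart of Boltzmann's formula can be made concrete with a toy system of coins in place of gas atoms (an illustrative sketch; the coin example is mine, not Boltzmann's):

```python
import math

# Boltzmann's S = k log W for a toy system: N coins, of which n show heads.
# W is the number of distinct arrangements compatible with that head count.
K_BOLTZMANN = 1.380649e-23  # J/K

def entropy(n_coins: int, n_heads: int) -> float:
    w = math.comb(n_coins, n_heads)  # number of microstates W
    return K_BOLTZMANN * math.log(w)

# The even split admits the most arrangements, so it has the highest entropy;
# an all-tails state has exactly one arrangement (W = 1) and zero entropy.
assert entropy(100, 50) > entropy(100, 10)
assert entropy(100, 0) == 0.0
```

Nothing in the counting cares how the coins came to be arranged, which is the point: the recipe works whether the underlying dynamics is classical or quantum.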

There is an important additional point to be made here. Probabilities are fundamentally different things in classical and quantum physics. In classical physics they are “subjective” quantities that constantly change as our state of knowledge changes. The probability that a coin toss will result in heads or tails, for instance, jumps from ½ to 1 when we observe the outcome. If there were a being who knew all the positions and momenta of all the particles in the universe – known as a “Laplace demon”, after the French mathematician Pierre-Simon Laplace, who first countenanced the possibility – it would be able to determine the course of all subsequent events in a classical universe, and would have no need for probabilities to describe them.

In quantum physics, however, probabilities arise from a genuine uncertainty about how the world works. States of physical systems in quantum theory are represented in what the quantum pioneer Erwin Schrödinger called catalogues of information, but they are catalogues in which adding information on one page blurs or scrubs it out on another. Knowing the position of a particle more precisely means knowing less well how it is moving, for example. Quantum probabilities are “objective”, in the sense that they cannot be entirely removed by gaining more information.

That casts thermodynamics, as originally and classically formulated, in an intriguing light. There, the second law is little more than impotence written down in the form of an equation. It has no deep physical origin itself, but is an empirical bolt-on to express the otherwise unaccountable fact that we cannot know, predict or bring about everything that might happen, as classical dynamical laws suggest we can. But this changes as soon as you bring quantum physics into the picture, with its attendant notion that uncertainty is seemingly hardwired into the fabric of reality. Rooted in probabilities, entropy and thermodynamics acquire a new, more fundamental physical anchor.

It is worth pointing out, too, that this deep-rooted connection seems to be much more general. Recently, together with my colleagues Markus Müller of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, and Oscar Dahlsten at the Centre for Quantum Technologies in Singapore, I have looked at what happens to thermodynamical relations in a generalised class of probabilistic theories that embrace quantum theory and much more besides. There too, the crucial relationship between information and disorder, as quantified by entropy, survives (arxiv.org/abs/1107.6029).

One theory to rule them all

As for gravity – the only one of nature’s four fundamental forces not covered by quantum theory – a more speculative body of research suggests it might be little more than entropy in disguise (see “Falling into disorder”). If so, that would also bring Einstein’s general theory of relativity, with which we currently describe gravity, firmly within the purview of thermodynamics.

Take all this together, and we begin to have a hint of what makes thermodynamics so successful. The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe – among other things, to construct theories to further our understanding of it. Thermodynamics is, in Einstein’s term, a “meta-theory”: one constructed from principles over and above the structure of any dynamical laws we devise to describe reality’s workings. In that sense we can argue that it is more fundamental than either quantum physics or general relativity.

If we can accept this and, like Eddington and his ilk, put all our trust in the laws of thermodynamics, I believe it may even afford us a glimpse beyond the current physical order. It seems unlikely that quantum physics and relativity represent the last revolutions in physics. New evidence could at any time foment their overthrow. Thermodynamics might help us discern what any usurping theory would look like.

For example, earlier this year, two of my colleagues in Singapore, Esther Hänggi and Stephanie Wehner, showed that a violation of the quantum uncertainty principle – the idea that you can never fully get rid of probabilities in a quantum context – would imply a violation of the second law of thermodynamics. Beating the uncertainty limit means extracting extra information about the system, which requires the system to do more work than thermodynamics allows it to do in the relevant state of disorder. So if thermodynamics is any guide, whatever any post-quantum world might look like, we are stuck with a degree of uncertainty (arxiv.org/abs/1205.6894).

My colleague at the University of Oxford, the physicist David Deutsch, thinks we should take things much further. Not only should any future physics conform to thermodynamics, but the whole of physics should be constructed in its image. The idea is to generalise the logic of the second law as it was stringently formulated by the mathematician Constantin Carathéodory in 1909: that in the vicinity of any state of a physical system, there are other states that cannot physically be reached if we forbid any exchange of heat with the environment.

James Joule’s 19th-century experiments with beer can be used to illustrate this idea. The English brewer, whose name lives on in the standard unit of energy, sealed beer in a thermally isolated tub containing a paddle wheel that was connected to weights falling under gravity outside. The wheel’s rotation warmed the beer, increasing the disorder of its molecules and therefore its entropy. But hard as we might try, we simply cannot use Joule’s set-up to decrease the beer’s temperature, even by a fraction of a millikelvin. Cooler beer is, in this instance, a state regrettably beyond the reach of physics.
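The one-way bookkeeping of Joule's set-up is easy to sketch: the falling weight's potential energy can only end up as heat in the beer, never the reverse. All the numbers below are illustrative round figures, not Joule's actual apparatus:

```python
# Energy balance for a Joule-style paddle-wheel experiment: work done by a
# falling weight is dissipated entirely as heat in the liquid.
# Illustrative values only, not Joule's apparatus.
G = 9.81              # m/s^2, gravitational acceleration
C_WATER = 4186.0      # J/(kg K), treating the beer as water

def temperature_rise(weight_kg: float, drop_m: float, beer_kg: float) -> float:
    work_j = weight_kg * G * drop_m      # work delivered by the falling weight
    return work_j / (beer_kg * C_WATER)  # resulting warming of the beer

print(temperature_rise(20.0, 2.0, 5.0))  # roughly 0.019 K
```

A 20 kg weight falling two metres warms five kilograms of beer by less than a fiftieth of a degree, which is a reminder of how delicate Joule's thermometry had to be; and no rearrangement of the apparatus makes that number come out negative.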

God, the thermodynamicist

The question is whether we can express the whole of physics simply by enumerating possible and impossible processes in a given situation. This is very different from how physics is usually phrased, in both the classical and quantum regimes, in terms of states of systems and equations that describe how those states change in time. The blind alleys down which the standard approach can lead are easiest to understand in classical physics, where the dynamical equations we derive allow a whole host of processes that patently do not occur – the ones we have to conjure up the laws of thermodynamics expressly to forbid, such as dye molecules reclumping spontaneously in water.

By reversing the logic, our observations of the natural world can again take the lead in deriving our theories. We observe the prohibitions that nature puts in place, be it on decreasing entropy, getting energy from nothing, travelling faster than light or whatever. The ultimately “correct” theory of physics – the logically tightest – is the one from which the smallest deviation gives us something that breaks those taboos.

There are other advantages in recasting physics in such terms. Time is a perennially problematic concept in physical theories. In quantum theory, for example, it enters as an extraneous parameter of unclear origin that cannot itself be quantised. In thermodynamics, meanwhile, the passage of time is entropy increase by any other name. A process such as dissolved dye molecules forming themselves into a clump offends our sensibilities because it appears to amount to running time backwards as much as anything else, although the real objection is that it decreases entropy.

Apply this logic more generally, and time ceases to exist as an independent, fundamental entity, becoming instead one whose flow is determined purely in terms of allowed and disallowed processes. With it go problems such as the one I alluded to earlier: why the universe started in a state of low entropy. If states and their dynamical evolution over time cease to be the question, then anything that does not break any transformational rules becomes a valid answer.

Such an approach would probably please Einstein, who once said: “What really interests me is whether God had any choice in the creation of the world.” A thermodynamically inspired formulation of physics might not answer that question directly, but leaves God with no choice but to be a thermodynamicist. That would be a singular accolade for those 19th-century masters of steam: that they stumbled upon the essence of the universe, entirely by accident. The triumph of thermodynamics would then be a revolution by stealth, 200 years in the making.

Read the entire article here.

The Universe of Numbers

There is no doubt that mathematics — some of it very complex — has been able to explain much of what we consider the universe. In reality, and perhaps surprisingly, only a small subset of equations is required to explain everything around us, from atoms and their constituents to the vast cosmos. Why is that? And what is the fundamental relationship between mathematics and our current physical understanding of all things great and small?

From the New Scientist:

When Albert Einstein finally completed his general theory of relativity in 1916, he looked down at the equations and discovered an unexpected message: the universe is expanding.

Einstein didn’t believe the physical universe could shrink or grow, so he ignored what the equations were telling him. Thirteen years later, Edwin Hubble found clear evidence of the universe’s expansion. Einstein had missed the opportunity to make the most dramatic scientific prediction in history.

How did Einstein’s equations “know” that the universe was expanding when he did not? If mathematics is nothing more than a language we use to describe the world, an invention of the human brain, how can it possibly churn out anything beyond what we put in? “It is difficult to avoid the impression that a miracle confronts us here,” wrote physicist Eugene Wigner in his classic 1960 paper “The unreasonable effectiveness of mathematics in the natural sciences” (Communications on Pure and Applied Mathematics, vol 13, p 1).

The prescience of mathematics seems no less miraculous today. At the Large Hadron Collider at CERN, near Geneva, Switzerland, physicists recently observed the fingerprints of a particle that was arguably discovered 48 years ago lurking in the equations of particle physics.

How is it possible that mathematics “knows” about Higgs particles or any other feature of physical reality? “Maybe it’s because math is reality,” says physicist Brian Greene of Columbia University, New York. Perhaps if we dig deep enough, we would find that physical objects like tables and chairs are ultimately not made of particles or strings, but of numbers.

“These are very difficult issues,” says philosopher of science James Ladyman of the University of Bristol, UK, “but it might be less misleading to say that the universe is made of maths than to say it is made of matter.”

Difficult indeed. What does it mean to say that the universe is “made of mathematics”? An obvious starting point is to ask what mathematics is made of. The late physicist John Wheeler said that the “basis of all mathematics is 0 = 0”. All mathematical structures can be derived from something called “the empty set”, the set that contains no elements. Say this set corresponds to zero; you can then define the number 1 as the set that contains only the empty set, 2 as the set containing the sets corresponding to 0 and 1, and so on. Keep nesting the nothingness like invisible Russian dolls and eventually all of mathematics appears. Mathematician Ian Stewart of the University of Warwick, UK, calls this “the dreadful secret of mathematics: it’s all based on nothing” (New Scientist, 19 November 2011, p 44). Reality may come down to mathematics, but mathematics comes down to nothing at all.
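The "nesting nothingness" construction Wheeler and Stewart allude to can be written down directly (a sketch of the standard set-theoretic construction, using Python sets merely as notation):

```python
# Building the natural numbers from the empty set: 0 is {}, and each
# successor n+1 is the set of all the numbers built so far (n+1 = n ∪ {n}).
def ordinal(n: int) -> frozenset:
    s = frozenset()             # 0 = the empty set
    for _ in range(n):
        s = frozenset({*s, s})  # n+1 contains everything in n, plus n itself
    return s

assert ordinal(0) == frozenset()   # zero is nothing at all
assert len(ordinal(3)) == 3        # 3 = {0, 1, 2}
assert ordinal(2) in ordinal(3)    # each number contains its predecessors
```

Every number in the tower is, when unpacked all the way down, just differently arranged emptiness; that is the "dreadful secret" in executable form.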

That may be the ultimate clue to existence – after all, a universe made of nothing doesn’t require an explanation. Indeed, mathematical structures don’t seem to require a physical origin at all. “A dodecahedron was never created,” says Max Tegmark of the Massachusetts Institute of Technology. “To be created, something first has to not exist in space or time and then exist.” A dodecahedron doesn’t exist in space or time at all, he says – it exists independently of them. “Space and time themselves are contained within larger mathematical structures,” he adds. These structures just exist; they can’t be created or destroyed.

That raises a big question: why is the universe only made of some of the available mathematics? “There’s a lot of math out there,” Greene says. “Today only a tiny sliver of it has a realisation in the physical world. Pull any math book off the shelf and most of the equations in it don’t correspond to any physical object or physical process.”

It is true that seemingly arcane and unphysical mathematics does, sometimes, turn out to correspond to the real world. Imaginary numbers, for instance, were once considered totally deserving of their name, but are now used to describe the behaviour of elementary particles; non-Euclidean geometry eventually showed up as gravity. Even so, these phenomena represent a tiny slice of all the mathematics out there.

Not so fast, says Tegmark. “I believe that physical existence and mathematical existence are the same, so any structure that exists mathematically is also real,” he says.

So what about the mathematics our universe doesn’t use? “Other mathematical structures correspond to other universes,” Tegmark says. He calls this the “level 4 multiverse”, and it is far stranger than the multiverses that cosmologists often discuss. Their common-or-garden multiverses are governed by the same basic mathematical rules as our universe, but Tegmark’s level 4 multiverse operates with completely different mathematics.

All of this sounds bizarre, but the hypothesis that physical reality is fundamentally mathematical has passed every test. “If physics hits a roadblock at which point it turns out that it’s impossible to proceed, we might find that nature can’t be captured mathematically,” Tegmark says. “But it’s really remarkable that that hasn’t happened. Galileo said that the book of nature was written in the language of mathematics – and that was 400 years ago.”

Read the entire article here.

Bert and Ernie and Friends

The universe is a very strange place, stranger than Washington D.C., stranger than most reality TV shows.

And it keeps getting stranger as astronomers and cosmologists continue to make ever more head-scratching discoveries. The latest: a pair of super-high-energy neutrinos, followed by another 26. It seems that these tiny, almost massless particles are reaching Earth from an unknown source, or sources, of immense power outside of our own galaxy.

The neutrinos were spotted by the IceCube detector, which is buried beneath about 1.5 miles of solid ice in an Antarctic glacier.

From io9:

By drilling a 1.5 mile hole deep into an Antarctic glacier, physicists working at the IceCube South Pole Observatory have captured 28 extraterrestrial neutrinos — those mysterious and extremely powerful subatomic particles that can pass straight through solid matter. Welcome to an entirely new age of astronomy.

Back in April of this year, the same team of physicists captured the highest energy neutrinos ever detected. Dubbed Bert and Ernie, the elusive subatomic particles likely originated from beyond our solar system, and possibly even our galaxy.

Neutrinos are extremely tiny and prolific subatomic particles that are born in nuclear reactions, including those that occur inside of stars. And because they’re practically massless (together they contain only a tiny fraction of the mass of a single electron), they can pass through normal matter, which is why they’re dubbed ‘ghost particles.’ Neutrinos are able to do this because they don’t carry an electric charge, so they’re immune to electromagnetic forces that influence charged particles like electrons and protons.

A Billion Times More Powerful

But not all neutrinos are the same. The ones discovered by the IceCube team are about a billion times more energetic than the ones coming out of our sun. A pair of them had energies above an entire petaelectron volt. That’s more than 1,000 times the energy produced by protons smashed at CERN’s Large Hadron Collider.

So whatever created them must have been extremely powerful. Like, mind-bogglingly powerful — probably the remnants of supernova explosions. Indeed, as a recent study has shown, these cosmic explosions are more powerful than we could have ever imagined — to the point where they’re defying known physics.

Other candidates for neutrino production include black holes, pulsars, galactic nuclei — or even the cataclysmic merger of two black holes.

That’s why the discovery of these 28 new neutrinos, and the construction of the IceCube facility, is so important. It’s still a mystery, but these new findings, and the new detection technique, will help.

Back in April, the IceCube project looked for neutrinos above one petaelectronvolt, which is how Bert and Ernie were detected. But the team went back and searched through their data and found 26 neutrinos with slightly lower energies, though still above 30 teraelectronvolts, that were detected between May 2010 and May 2012. While it’s possible that some of these lower-energy neutrinos could have been produced by cosmic rays in the Earth’s atmosphere, the researchers say that most of them likely came from space. And in fact, the data was analyzed in such a way as to exclude neutrinos that didn’t come from space and other types of particles that may have tripped off the detector.

The Dawn of a New Field

“This is a landmark discovery — possibly a Nobel Prize in the making,” said Alexander Kusenko, a UCLA astroparticle physicist who was not involved in the IceCube collaboration. Thanks to the remarkable IceCube facility, where neutrinos are captured in holes drilled 1.5 miles down into the Antarctic glacier, astronomers have a completely new way to scope out the cosmos. It’s both literally and figuratively changing the way we see the universe.

“It really is the dawn of a new field,” said Darren Grant, a University of Alberta physicist, and a member of the IceCube team.

Read the entire article here.

Above and Beyond

According to NASA, Voyager 1 officially left the protection of the solar system on or about August 25, 2012, and is now heading into interstellar space. It is the first and only human-made object to leave the solar system.

Perhaps, one day in the distant future real human voyagers — or their android cousins — will come across the little probe as it continues on its lonely journey.

From Space:

A spacecraft from Earth has left its cosmic backyard and taken its first steps in interstellar space.

After streaking through space for nearly 35 years, NASA’s robotic Voyager 1 probe finally left the solar system in August 2012, a study published today (Sept. 12) in the journal Science reports.

“Voyager has boldly gone where no probe has gone before, marking one of the most significant technological achievements in the annals of the history of science, and as it enters interstellar space, it adds a new chapter in human scientific dreams and endeavors,” NASA science chief John Grunsfeld said in a statement. “Perhaps some future deep-space explorers will catch up with Voyager, our first interstellar envoy, and reflect on how this intrepid spacecraft helped enable their future.”

A long and historic journey

Voyager 1 launched on Sept. 5, 1977, about two weeks after its twin, Voyager 2. Together, the two probes conducted a historic “grand tour” of the outer planets, giving scientists some of their first up-close looks at Jupiter, Saturn, Uranus, Neptune and the moons of these faraway worlds.

The duo completed its primary mission in 1989, and then kept on flying toward the edge of the heliosphere, the huge bubble of charged particles and magnetic fields that the sun puffs out around itself. Voyager 1 has now popped free of this bubble into the exotic and unexplored realm of interstellar space, scientists say.

They reached this historic conclusion with a little help from the sun. A powerful solar eruption caused electrons in Voyager 1’s location to vibrate significantly between April 9 and May 22 of this year. The probe’s plasma wave instrument detected these oscillations, and researchers used the measurements to figure out that Voyager 1’s surroundings contained about 1.3 electrons per cubic inch (0.08 electrons per cubic centimeter).

That’s far higher than the density observed in the outer regions of the heliosphere (roughly 0.03 electrons per cubic inch, or 0.002 electrons per cubic cm) and very much in line with the 1.6 electrons per cubic inch (0.10 electrons per cubic cm) or so expected in interstellar space.
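The paired figures quoted above are just unit conversions of one another: a cubic inch is (2.54 cm)³, so a count per cubic inch is about 16.4 times the count per cubic centimetre. A quick check (the helper function is mine, for illustration):

```python
# Converting electron number densities between per-cm^3 and per-cubic-inch,
# to check the paired figures quoted in the article.
CM3_PER_IN3 = 2.54 ** 3  # one cubic inch is about 16.39 cubic centimetres

def per_cubic_inch(per_cubic_cm: float) -> float:
    return per_cubic_cm * CM3_PER_IN3

print(per_cubic_inch(0.08))   # ~1.3  (Voyager 1's measured surroundings)
print(per_cubic_inch(0.002))  # ~0.03 (outer heliosphere)
print(per_cubic_inch(0.10))   # ~1.6  (expected interstellar value)
```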

“We literally jumped out of our seats when we saw these oscillations in our data — they showed us that the spacecraft was in an entirely new region, comparable to what was expected in interstellar space, and totally different than in the solar bubble,” study lead author Don Gurnett of the University of Iowa, the principal investigator of Voyager 1’s plasma wave instrument, said in a statement.

It may seem surprising that electron density is higher beyond the solar system than in its extreme outer reaches. Interstellar space is, indeed, emptier than the regions in Earth’s neighborhood, but the density inside the solar bubble drops off dramatically at great distances from the sun, researchers said.

Calculating a departure date

The study team wanted to know if Voyager 1 left the solar system sometime before April 2013, so they combed through some of the probe’s older data. They found a monthlong period of electron oscillations in October-November 2012 that translated to a density of 0.98 electrons per cubic inch (0.06 electrons per cubic centimeter).

Using these numbers and the amount of ground that Voyager 1 covers — about 325 million miles (520 million kilometers) per year — the researchers calculated that the spacecraft likely left the solar system in August 2012.

That time frame matches up well with several other important changes Voyager 1 observed. On Aug. 25, 2012, the probe recorded a 1,000-fold drop in the number of charged solar particles while also measuring a 9 percent increase in fast-moving galactic cosmic rays, which originate beyond the solar system.

“These results, and comparison with previous heliospheric radio measurements, strongly support the view that Voyager 1 crossed the heliopause into the interstellar plasma on or about Aug. 25, 2012,” Gurnett and his colleagues write in the new study.

At that point, Voyager 1 was about 11.25 billion miles (18.11 billion km) from the sun, or roughly 121 times the distance between Earth and the sun. The probe is now 11.66 billion miles (18.76 billion km) from the sun. (Voyager 2, which took a different route through the solar system, is currently 9.54 billion miles, or 15.35 billion km, from the sun.)
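Those mileages can be sanity-checked against the astronomical unit, the mean Earth-sun distance of roughly 92.96 million miles (a quick illustrative conversion, not from the article):

```python
# Converting the quoted probe distances into astronomical units.
AU_MILES = 92.96e6  # mean Earth-sun distance in miles

def in_au(miles: float) -> float:
    return miles / AU_MILES

print(in_au(11.25e9))  # ~121 AU: Voyager 1 at the heliopause crossing
print(in_au(11.66e9))  # ~125 AU: Voyager 1 at the time of the study
print(in_au(9.54e9))   # ~103 AU: Voyager 2
```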

Read the entire article here.

Image: Voyager Gold Disk. Courtesy of Wikipedia.

Everywhere And Nowhere

Most physicists believe that dark matter exists, but no one has ever seen it; its existence is only deduced. This is a rather unsettling state of affairs, since by most estimates dark matter and dark energy together account for 95 percent of the universe. The stuff we are made from, interact with and see on a daily basis — atoms, their constituents and the forces between them — is a mere 5 percent.

From the Atlantic:

Here’s a little experiment.

Hold up your hand.

Now put it back down.

In that window of time, your hand somehow interacted with dark matter — the mysterious stuff that comprises the vast majority of the universe. “Our best guess,” according to Dan Hooper, an astronomy professor at the University of Chicago and a theoretical astrophysicist at the Fermi National Accelerator Laboratory, “is that a million particles of dark matter passed through your hand just now.”

Dark matter, in other words, is not merely the stuff of black holes and deep space. It is all around us. Somehow. We’re pretty sure.

But if you did the experiment — as the audience at Hooper’s talk on dark matter and other cosmic mysteries did at the Aspen Ideas Festival today — you didn’t feel those million particles. We humans have no sense of their existence, Hooper said, in part because they don’t hew to the forces that regulate our movement in the world — gravity, electromagnetism, the forces we can, in some way, feel. Dark matter, instead, is “this ghostly, elusive stuff that dominates our universe,” Hooper said.

It’s everywhere. And it’s also, as far as human knowledge is concerned, nowhere.

And yet, despite its mysteries, we know it’s out there. “All astronomers are in complete conviction that there is dark matter,” said Richard Massey, the lead author of a recent study mapping the dark matter of the universe, and Hooper’s co-panelist. The evidence for its existence, Hooper agreed, is “overwhelming.” And yet it’s evidence based on deduction: through our examinations of the observable universe, we make assumptions about the unobservable version.

Dark matter, in other words, is aptly named. A full 95 percent of the universe — the dark matter, the stuff that both is and is not — is effectively unknown to us. “All the science that we’ve ever done only ever examines five percent of the universe,” Massey said. Which means that there are still mysteries to be unraveled, and dark truths to be brought to light.

And it also means, Massey pointed out, that for scientists, “the job security is great.”

You might be wondering, though: given how little we know about dark matter, how is it that Hooper knew that a million particles of the stuff passed through your hand as you raised and lowered it?

“I cheated a little,” Hooper admitted. He assumed a particular mass for the individual particles. “We know what the density of dark matter is on Earth from watching how the Milky Way rotates. And we know roughly how fast they’re going. So you take those two bits of information, and all you need to know is how much mass each individual particle has, and then I can get the million number. And I assumed a kind of traditional guess. But it could be 10,000 higher; it could be 10,000 lower.”
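Hooper's recipe — local density from the Milky Way's rotation, a typical speed, and a guessed particle mass — is easy to reproduce as an order-of-magnitude estimate. Every input below is an assumed round number (a commonly quoted local density of 0.3 GeV/cm³, a speed of ~220 km/s, a 100 GeV particle mass, and a hand of ~100 cm²), not a figure from the talk:

```python
# Back-of-envelope flux of dark matter particles through a hand, following
# the recipe described above. Every input is an assumed round number.
RHO_GEV_PER_CM3 = 0.3    # commonly quoted local dark matter mass density
MASS_GEV = 100.0         # assumed particle mass (a "traditional guess")
SPEED_CM_PER_S = 2.2e7   # ~220 km/s, a typical galactic orbital speed
HAND_AREA_CM2 = 100.0    # rough cross-section of a hand

number_density = RHO_GEV_PER_CM3 / MASS_GEV            # particles per cm^3
flux_per_second = number_density * SPEED_CM_PER_S * HAND_AREA_CM2
print(flux_per_second)  # millions of particles per second
```

The answer scales inversely with the assumed mass, which is exactly why Hooper says the true number could be many orders of magnitude higher or lower.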

Read the entire article here.

General Relativity Lives on For Now

Since Einstein first published his elegant theory of General Relativity almost 100 years ago, it has proved to be one of the most powerful and enduring cornerstones of modern science. Yet theorists and researchers the world over know that it cannot possibly remain the sole answer to our cosmological questions. It answers questions about the very, very large — galaxies, stars and planets and the gravitational relationship between them. But it fails to tackle the science of the very, very small — atoms, their constituents and the forces that unite and repel them, which is addressed by the elegant and complex Quantum Theory, a framework that remains incompatible with General Relativity.

So, scientists continue to push their measurements to ever greater levels of precision across both greater and smaller distances with one aim in mind — to test the limits of each theory and to see which one breaks down first.

A recent, highly precise and very-long-distance experiment confirmed that Einstein’s theory still rules the heavens.

From ars technica:

The general theory of relativity is a remarkably successful model for gravity. However, many of the best tests for it don’t push its limits: they measure phenomena where gravity is relatively weak. Some alternative theories predict different behavior in areas subject to very strong gravity, like near the surface of a pulsar—the compact, rapidly rotating remnant of a massive star (also called a neutron star). For that reason, astronomers are very interested in finding a pulsar paired with another high-mass object. One such system has now provided an especially sensitive test of strong gravity.

The system is a binary consisting of a high-mass pulsar and a bright white dwarf locked in mutual orbit with a period of about 2.5 hours. Using optical and radio observations, John Antoniadis and colleagues measured its properties as it spirals toward merger by emitting gravitational radiation. After monitoring the system for a number of orbits, the researchers determined its behavior is in complete agreement with general relativity to a high level of precision.

The binary system was first detected in a survey of pulsars by the Green Bank Telescope (GBT). The pulsar in the system, memorably labeled PSR J0348+0432, emits radio pulses about once every 39 milliseconds (0.039 seconds). Fluctuations in the pulsar’s output indicated that it is in a binary system, though its companion lacked radio emissions. However, the GBT’s measurements were precise enough to pinpoint its location in the sky, which enabled the researchers to find the system in the archives of the Sloan Digital Sky Survey (SDSS). They determined the companion object was a particularly bright white dwarf, the remnant of the core of a star similar to our Sun. It and the pulsar are locked in a mutual orbit about 2.46 hours in length.

Following up with the Very Large Telescope (VLT) in Chile, the astronomers built up enough data to model the system. Pulsars are extremely dense, packing a star’s worth of mass into a sphere roughly 10 kilometers in radius—far too small to see directly. White dwarfs are less extreme, but they still involve stellar masses in a volume roughly equivalent to Earth’s. That means the objects in the PSR J0348+0432 system can orbit much closer to each other than stars could—as little as 0.5 percent of the average Earth-Sun separation, or 1.2 times the Sun’s radius.
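As a back-of-the-envelope check, the quoted separation follows from Kepler’s third law. The sketch below assumes a white-dwarf mass of about 0.17 solar masses, a value not given in the excerpt:

```python
import math

# Kepler's third law: a^3 = G (M1 + M2) P^2 / (4 pi^2)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.957e8        # solar radius, m
AU = 1.496e11          # astronomical unit, m

P = 2.46 * 3600.0                  # orbital period from the article, s
M_total = (2.0 + 0.17) * M_SUN     # pulsar mass plus an assumed white-dwarf mass, kg

a = (G * M_total * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# Roughly 1.2 solar radii and about half a percent of an AU,
# consistent with the figures quoted in the article.
print(f"separation: {a / R_SUN:.2f} solar radii ({a / AU * 100:.2f}% of an AU)")
```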

The pulsar itself was interesting because of its relatively high mass: about 2.0 times that of the Sun (most observed pulsars are about 1.4 solar masses). Unlike more mundane objects, pulsar size doesn’t grow with mass; according to some models, a higher mass pulsar may actually be smaller than one with lower mass. As a result, the gravity at the surface of PSR J0348+0432 is far more intense than at a lower-mass counterpart, providing a laboratory for testing general relativity (GR). The gravitational intensity near PSR J0348+0432 is about twice that of other pulsars in binary systems, creating a more extreme environment than previously measured.
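To put “far more intense” in numbers: taking the article’s figures of about 2 solar masses inside a roughly 10 km radius, a quick Newtonian estimate gives the surface gravity and the relativistic compactness (the choice of exactly 10 km is an assumption for illustration):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

M = 2.0 * M_SUN     # pulsar mass, from the article
R = 1.0e4           # ~10 km radius, from the article (assumed exact here), m

g = G * M / R**2                   # Newtonian surface gravity, m/s^2
compactness = G * M / (R * C**2)   # GM/(Rc^2); ~0.5 would mean a black hole

print(f"surface gravity ~ {g:.1e} m/s^2 ({g / 9.81:.1e} Earth gravities)")
print(f"compactness GM/(Rc^2) ~ {compactness:.2f}")
```

The compactness of about 0.3 is why such systems probe the strong-gravity regime that weak-field tests never reach.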

According to GR, a binary emits gravitational waves that carry energy away from the system, causing the size of the orbit to shrink. For most binaries, the effect is small, but for compact systems like the one containing PSR J0348+0432, it is measurable. The first such system was found by Russell Hulse and Joseph Taylor; its discovery won the two astronomers the Nobel Prize.

The shrinking of the orbit results in a decrease in the orbital period as the two objects revolve around each other more quickly. In this case, the researchers measured the effect by studying the change in the spectrum of light emitted by the white dwarf, as well as fluctuations in the emissions from the pulsar. (This study also helped demonstrate the two objects were in mutual orbit, rather than being coincidentally in the same part of the sky.)
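The size of the GR prediction can be estimated with the standard quadrupole (Peters) formula for a circular orbit. This is a minimal sketch, again assuming a white-dwarf mass of about 0.17 solar masses, which the excerpt does not state:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

m1 = 2.0 * M_SUN    # pulsar mass, from the article
m2 = 0.17 * M_SUN   # white-dwarf mass: an assumed value, not in the excerpt
P = 2.46 * 3600.0   # orbital period, s

# The "chirp mass" is the mass combination that controls the radiated power.
chirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Peters formula for a circular orbit:
#   dP/dt = -(192 pi / 5) * (2 pi G Mc / (c^3 P))^(5/3)
pdot = -(192 * math.pi / 5) * (2 * math.pi * G * chirp / (C**3 * P)) ** (5 / 3)

print(f"dP/dt ~ {pdot:.2e} s/s")   # of order -1e-13: tiny, but measurable
```

An orbital period shrinking by a few times 10^-13 seconds per second amounts to only microseconds per year, which is why years of precision timing are needed.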

To test agreement with GR, physicists established a set of observable quantities. These include the rate of orbit decrease (which is a reflection of the energy loss to gravitational radiation) and something called the Shapiro delay. The latter phenomenon occurs because light emitted from the pulsar must travel through the intense gravitational field of its companion when exiting the system. This effect depends on the relative orientation of the pulsar to us, but alternative models also predict different observable results.

In the case of the PSR J0348+0432 system, the change in orbital period and the Shapiro delay agreed with the predictions of GR, placing strong constraints on alternative theories. The researchers were also able to rule out energy loss from other, non-gravitational sources (rotation or electromagnetic phenomena). If the system continues as models predict, the white dwarf and pulsar will merge in about 400 million years—we don’t know what the product of that merger will be, so astronomers are undoubtedly marking their calendars now.

The results are of potential use for the Laser Interferometer Gravitational-wave Observatory (LIGO) and other ground-based gravitational-wave detectors. These instruments are sensitive to the final death spiral of binaries like the one containing PSR J0348+0432. The current detection and observation strategies involve “templates,” or theoretical models of the gravitational wave signal from binaries. All information about the behavior of close pulsar binaries helps gravitational-wave astronomers refine those templates, which should improve the chances of detection.

Of course, no theory can be “proven right” by experiment or observation—data provides evidence in support of or against the predictions of a particular model. However, the PSR J0348+0432 binary results placed stringent constraints on any alternative model to GR in the strong-gravity regime. (Certain other alternative models focus on altering gravity on large scales to explain dark energy and the accelerating expansion of the Universe.) Based on this new data, only theories that agree with GR to high precision are still standing—leaving general relativity the continuing champion theory of gravity.

Read the entire article after the jump.

Image: Artist’s impression of the PSR J0348+0432 system. The compact pulsar (with beams of radio emission) produces a strong distortion of spacetime (illustrated by the green mesh). Courtesy of Science Mag.

Shedding Light on Dark Matter

Scientists are cautiously optimistic that results from a particle experiment circling the Earth onboard the International Space Station (ISS) hint at the existence of dark matter.

From Symmetry:

The space-based Alpha Magnetic Spectrometer experiment could be building toward evidence of dark matter, judging by its first result.

The AMS detector does its work more than 200 miles above Earth, latched to the side of the International Space Station. It detects charged cosmic rays, high-energy particles that for the most part originate outside our solar system.

The experiment’s first result, released today, showed an excess of antimatter particles—over the number expected to come from cosmic-ray collisions—in a certain energy range.

There are two competing explanations for this excess. Extra antimatter particles called positrons could be forming in collisions between unseen dark-matter particles and their antiparticles in space. Or an astronomical object such as a pulsar could be firing them into our solar system.

Luckily, there are a couple of ways to find out which explanation is correct.

If dark-matter particles are the culprits, the excess of positrons should sink suddenly above a certain energy. But if a pulsar is responsible, at higher energies the excess will only gradually disappear.
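The qualitative difference between the two hypotheses can be caricatured in a toy model. The functional forms and the 400 GeV scale below are purely illustrative and are not the AMS analysis:

```python
import math

def dark_matter_excess(energy_gev, m_dm=400.0):
    """Toy excess from annihilating dark matter: roughly flat, then a
    sharp kinematic cutoff at the particle mass (400 GeV is an
    arbitrary illustrative choice)."""
    return 0.1 if energy_gev < m_dm else 0.0

def pulsar_excess(energy_gev, scale=400.0):
    """Toy excess from a pulsar: same overall size, but fading
    gradually (an exponential tail) instead of cutting off."""
    return 0.1 * math.exp(-energy_gev / scale)

# Below the cutoff the two are hard to tell apart; above it they diverge.
for energy in (100, 300, 500, 800):
    print(energy, dark_matter_excess(energy), round(pulsar_excess(energy), 4))
```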

“The way they drop off tells you everything,” said AMS Spokesperson and Nobel laureate Sam Ting, in today’s presentation at CERN, the European center for particle physics.

The AMS result, to be published in Physical Review Letters on April 5, includes data from the energy range between 0.5 and 350 GeV. A graph of the flux of positrons over the flux of electrons and positrons takes the shape of a valley, dipping in the energy range between 0.5 and 10 GeV and then increasing steadily between 10 and 250 GeV. After that point, it begins to dip again—but the graph cuts off just before one can tell whether this is the great drop-off expected in dark matter models or the gradual fade-out expected in pulsar models. This confirms previous results from the PAMELA experiment, with greater precision.

Ting smiled slightly while presenting this cliffhanger, pointing to the empty edge of the graph. “In here, what happens is of great interest,” he said.

“We, of course, have a feeling what is happening,” he said. “But probably it is too early to discuss that.”

Ting kept mum about any data collected so far above that energy, telling curious audience members to wait until the experiment had enough information to present a statistically significant result.

“I’ve been working at CERN for many years. I’ve never made a mistake on an experiment,” he said. “And this is a very difficult experiment.”

A second way to determine the origin of the excess of positrons is to consider where they’re coming from. If positrons are hitting the detector from all directions at random, they could be coming from something as diffuse as dark matter. But if they are arriving from one preferred direction, they might be coming from a pulsar.

So far, the result leans toward the dark-matter explanation, with positrons coming from all directions. But AMS scientists will need to collect more data to say this for certain.

Read the entire article following the jump.

Image: Alpha Magnetic Spectrometer (AMS) detector latched on to the International Space Station. Courtesy of NASA / AMS-02.

MondayMap: Quiet News Day = Map of the Universe

It was surely a quiet news day on March 21, 2013: most major online news outlets showed a fresh map of the Cosmic Microwave Background (CMB) on the front page. The map was captured by the Planck telescope, operated by the European Space Agency, over a period of 15 months. The image shows a landscape of primordial cosmic microwaves from when the universe was only around 380,000 years old, and is often referred to as “first light”.

From ESA:

Acquired by ESA’s Planck space telescope, the most detailed map ever created of the cosmic microwave background – the relic radiation from the Big Bang – was released today revealing the existence of features that challenge the foundations of our current understanding of the Universe.

The image is based on the initial 15.5 months of data from Planck and is the mission’s first all-sky picture of the oldest light in our Universe, imprinted on the sky when it was just 380 000 years old.

At that time, the young Universe was filled with a hot dense soup of interacting protons, electrons and photons at about 2700ºC. When the protons and electrons joined to form hydrogen atoms, the light was set free. As the Universe has expanded, this light today has been stretched out to microwave wavelengths, equivalent to a temperature of just 2.7 degrees above absolute zero.
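The two temperatures quoted imply the stretch factor directly, since the radiation temperature scales as 1/(1+z). A quick check:

```python
# Temperature at recombination vs. today gives the CMB redshift.
T_then = 2700.0 + 273.15   # ~2700 degrees C at recombination, in kelvin
T_now = 2.725              # present-day CMB temperature, in kelvin

z = T_then / T_now - 1     # CMB temperature scales as 1/(1+z)
print(f"redshift of the CMB: z ~ {z:.0f}")   # about 1090
```

So the wavelengths of that first light have been stretched by a factor of roughly 1,100 on the journey to us.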

This ‘cosmic microwave background’ – CMB – shows tiny temperature fluctuations that correspond to regions of slightly different densities at very early times, representing the seeds of all future structure: the stars and galaxies of today.

According to the standard model of cosmology, the fluctuations arose immediately after the Big Bang and were stretched to cosmologically large scales during a brief period of accelerated expansion known as inflation.

Planck was designed to map these fluctuations across the whole sky with greater resolution and sensitivity than ever before. By analysing the nature and distribution of the seeds in Planck’s CMB image, we can determine the composition and evolution of the Universe from its birth to the present day.

Overall, the information extracted from Planck’s new map provides an excellent confirmation of the standard model of cosmology at an unprecedented accuracy, setting a new benchmark in our manifest of the contents of the Universe.

But because the precision of Planck’s map is so high, it also made it possible to reveal some peculiar unexplained features that may well require new physics to be understood.

“The extraordinary quality of Planck’s portrait of the infant Universe allows us to peel back its layers to the very foundations, revealing that our blueprint of the cosmos is far from complete. Such discoveries were made possible by the unique technologies developed for that purpose by European industry,” says Jean-Jacques Dordain, ESA’s Director General.

“Since the release of Planck’s first all-sky image in 2010, we have been carefully extracting and analysing all of the foreground emissions that lie between us and the Universe’s first light, revealing the cosmic microwave background in the greatest detail yet,” adds George Efstathiou of the University of Cambridge, UK.

One of the most surprising findings is that the fluctuations in the CMB temperatures at large angular scales do not match those predicted by the standard model – their signals are not as strong as expected from the smaller scale structure revealed by Planck.

Read the entire article after the jump.

Image: Cosmic microwave background (CMB) seen by Planck. Courtesy of ESA (European Space Agency).

Shedding Some Light On Dark Matter

Cosmologists theorized the need for dark matter to account for hidden mass in our universe. Yet, as the name implies, it is proving rather hard to find. Now astronomers believe they see hints of it in ancient galactic collisions.

From New Scientist:

Colliding clusters of galaxies may hold clues to a mysterious dark force at work in the universe. This force would act only on invisible dark matter, the enigmatic stuff that makes up 86 per cent of the mass in the universe.

Dark matter famously refuses to interact with ordinary matter except via gravity, so theorists had assumed that its particles would be just as aloof with each other. But new observations suggest that dark matter interacts significantly with itself, while leaving regular matter out of the conversation.

“There could be a whole class of dark particles that don’t interact with normal matter but do interact with themselves,” says James Bullock of the University of California, Irvine. “Dark matter could be doing all sorts of interesting things, and we’d never know.”

Some of the best evidence for dark matter’s existence came from the Bullet cluster, a smash-up in which a small galaxy cluster plunged through a larger one about 100 million years ago. Separated by hundreds of thousands of light years, the individual galaxies sailed right past each other, and the two clusters parted ways. But intergalactic gas collided and pooled on the trailing ends of each cluster.

Mass maps of the Bullet cluster showed that dark matter stayed in line with the galaxies instead of pooling with the gas, proving that it can separate from ordinary matter. This also hinted that dark matter wasn’t interacting with itself, and was affected by gravity alone.

Musket shot

Last year William Dawson of the University of California, Davis, and colleagues found an older set of clusters seen about 700 million years after their collision. Nicknamed the Musket Ball cluster, this smash-up told a different tale. When Dawson’s team analysed the concentration of matter in the Musket Ball, they found that galaxies are separated from dark matter by about 19,000 light years.

“The galaxies outrun the dark matter. That’s what creates the offset,” Dawson said. “This is fitting that picture of self-interacting dark matter.” If dark matter particles do interact, perhaps via a dark force, they would slow down like the gas.

This new picture could solve some outstanding mysteries in cosmology, Dawson said this week during a meeting of the American Astronomical Society in Long Beach, California. Non-interacting dark matter should sink to the cores of star clusters and dwarf galaxies, but observations show that it is more evenly distributed. If it interacts with itself, it could puff up and spread outward like a gas.

So why doesn’t the Bullet cluster show the same separation between dark matter and galaxies? Dawson thinks it’s a question of age – dark matter in the younger Bullet simply hasn’t had time to separate.

Read the entire article after the jump.

Image: An overlay of an optical image of a cluster of galaxies with an x-ray image of hot gas lying within the cluster. Courtesy of NASA.

Engage the Warp Engines

According to Star Trek’s fictional history, warp engines were invented in 2063. That gives us just over 50 years. While very unlikely given our current technological prowess and general lack of understanding of the cosmos, warp engines are perhaps edging a little closer to being realized. But, please, no photon torpedoes!

From Wired:

NASA scientists now think that the famous warp drive concept is a realistic possibility, and that in the far future humans could regularly travel faster than the speed of light.

A warp drive would work by “warping” spacetime around any spaceship, which physicist Miguel Alcubierre showed was theoretically possible in 1994, albeit well beyond the current technical capabilities of humanity. However, any such Alcubierre drive was assumed to require more energy — equivalent to the mass-energy of the whole planet of Jupiter — than could ever possibly be supplied, rendering it impossible to build.

But now scientists believe that those requirements might not be so vast, making warp travel a tangible possibility. Harold White, from NASA’s Johnson Space Centre, revealed the news on Sept. 14 at the 100 Year Starship Symposium, a gathering to discuss the possibilities and challenges of interstellar space travel. Space.com reports that White and his team have calculated that the amount of energy required to create an Alcubierre drive may be smaller than first thought.

The drive works by using a wave to compress the spacetime in front of the spaceship while expanding the spacetime behind it. The ship itself would float in a “bubble” of normal spacetime that would float along the wave of compressed spacetime, like the way a surfer rides a break. The ship, inside the warp bubble, would be going faster than the speed of light relative to objects outside the bubble.

By changing the shape of the warp bubble from a sphere to more of a rounded doughnut, White claims that the energy requirements will be far, far smaller for any faster-than-light ship — merely equivalent to the mass-energy of an object the size of Voyager 1.
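Just how big a reduction that is can be estimated from E = mc². The Voyager 1 mass below (~825 kg) is an approximate figure assumed for illustration:

```python
C = 2.998e8            # speed of light, m/s
M_JUPITER = 1.898e27   # mass of Jupiter, kg
M_VOYAGER = 825.0      # approximate mass of Voyager 1, kg (an assumption)

# E = m c^2 for each benchmark mass
E_jupiter = M_JUPITER * C**2
E_voyager = M_VOYAGER * C**2

print(f"Jupiter mass-energy:   {E_jupiter:.1e} J")
print(f"Voyager-scale energy:  {E_voyager:.1e} J")
print(f"reduction factor:      {E_jupiter / E_voyager:.1e}")
```

A reduction of some twenty-four orders of magnitude, which is why the doughnut-shaped bubble result caused such a stir.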

Alas, before you start plotting which stars you want to visit first, don’t expect one appearing within our lifetimes. Any warp drive big enough to transport a ship would still require vast amounts of energy by today’s standards, which would probably necessitate exploiting dark energy — but we don’t know yet what, exactly, dark energy is, nor whether it’s something a spaceship could easily harness. There’s also the issue that we have no idea how to create or maintain a warp bubble, let alone what it would be made out of. It could even potentially, if not constructed properly, create unintended black holes.

Read the entire article after the jump.

Image: U.S.S. Enterprise D. Courtesy of Startrek.com.