Tag Archives: physics

Cause and Effect

One of the most fundamental tenets of our macroscopic world is the notion that an effect has a cause. Throw a pebble (cause) into a still pond and the ripples (effect) will be visible for all to see. Down at the microscopic level, however, physicists have determined through their mathematical convolutions that no such rule applies — nothing precludes the laws of physics from running in reverse. Yet we never witness ripples in a pond diminishing and ejecting a pebble, which then finds its way back to a catcher.

Of course, this quandary has kept many a philosopher’s pencil well sharpened while physicists continue to scratch their heads. So, is cause and effect merely a coincidental illusion? Or does our physics operate in only one direction, determined by a yet-to-be-discovered fundamental law?

Philosopher Mathias Frisch, author of Causal Reasoning in Physics, offers a great summary of current thinking, but no fundamental breakthrough.

From Aeon:

Do early childhood vaccinations cause autism, as the American model Jenny McCarthy maintains? Are human carbon emissions at the root of global warming? Come to that, if I flick this switch, will it make the light on the porch come on? Presumably I don’t need to persuade you that these would be incredibly useful things to know.

Since anthropogenic greenhouse gas emissions do cause climate change, cutting our emissions would make a difference to future warming. By contrast, autism cannot be prevented by leaving children unvaccinated. Now, there’s a subtlety here. For our judgments to be much use to us, we have to distinguish between causal relations and mere correlations. From 1999 to 2009, the number of people in the US who fell into a swimming pool and drowned varied with the number of films in which Nicolas Cage appeared – but it seems unlikely that we could reduce the number of pool drownings by keeping Cage off the screen, desirable as the remedy might be for other reasons.

In short, a working knowledge of the way in which causes and effects relate to one another seems indispensable to our ability to make our way in the world. Yet there is a long and venerable tradition in philosophy, dating back at least to David Hume in the 18th century, that finds the notion of causality to be dubious. And that might be putting it kindly.

Hume argued that when we seek causal relations, we can never discover the real power; the, as it were, metaphysical glue that binds events together. All we are able to see are regularities – the ‘constant conjunction’ of certain sorts of observation. He concluded from this that any talk of causal powers is illegitimate. Which is not to say that he was ignorant of the central importance of causal reasoning; indeed, he said that it was only by means of such inferences that we can ‘go beyond the evidence of our memory and senses’. Causal reasoning was somehow both indispensable and illegitimate. We appear to have a dilemma.

Hume’s remedy for such metaphysical quandaries was arguably quite sensible, as far as it went: have a good meal, play backgammon with friends, and try to put it out of your mind. But in the late 19th and 20th centuries, his causal anxieties were reinforced by another problem, arguably harder to ignore. According to this new line of thought, causal notions seemed peculiarly out of place in our most fundamental science – physics.

There were two reasons for this. First, causes seemed too vague for a mathematically precise science. If you can’t observe them, how can you measure them? If you can’t measure them, how can you put them in your equations? Second, causality has a definite direction in time: causes have to happen before their effects. Yet the basic laws of physics (as distinct from such higher-level statistical generalisations as the laws of thermodynamics) appear to be time-symmetric: if a certain process is allowed under the basic laws of physics, a video of the same process played backwards will also depict a process that is allowed by the laws.
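To make the time symmetry concrete: Newton’s second law contains time only inside a second derivative, so substituting t with -t leaves the equation untouched,

\[ m\,\frac{d^{2}x}{dt^{2}} = F(x) \quad\longrightarrow\quad m\,\frac{d^{2}x}{d(-t)^{2}} = m\,\frac{d^{2}x}{dt^{2}} = F(x). \]

If x(t) is a solution, so is x(-t): the reversed trajectory obeys the same law, which is exactly why the backwards-played video depicts an allowed process.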

The 20th-century English philosopher Bertrand Russell concluded from these considerations that, since cause and effect play no fundamental role in physics, they should be removed from the philosophical vocabulary altogether. ‘The law of causality,’ he said with a flourish, ‘like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed not to do harm.’

Neo-Russellians in the 21st century express their rejection of causes with no less rhetorical vigour. The philosopher of science John Earman of the University of Pittsburgh maintains that the wooliness of causal notions makes them inappropriate for physics: ‘A putative fundamental law of physics must be stated as a mathematical relation without the use of escape clauses or words that require a PhD in philosophy to apply (and two other PhDs to referee the application, and a third referee to break the tie of the inevitable disagreement of the first two).’

This is all very puzzling. Is it OK to think in terms of causes or not? If so, why, given the apparent hostility to causes in the underlying laws? And if not, why does it seem to work so well?

A clearer look at the physics might help us to find our way. Even though (most of) the basic laws are symmetrical in time, there are many arguably non-thermodynamic physical phenomena that can happen only one way. Imagine a stone thrown into a still pond: after the stone breaks the surface, waves spread concentrically from the point of impact. A common enough sight.

Now, imagine a video clip of the spreading waves played backwards. What we would see are concentrically converging waves. For some reason this second process, which is the time-reverse of the first, does not seem to occur in nature. The process of waves spreading from a source looks irreversible. And yet the underlying physical law describing the behaviour of waves – the wave equation – is as time-symmetric as any law in physics. It allows for both diverging and converging waves. So, given that the physical laws equally allow phenomena of both types, why do we frequently observe organised waves diverging from a source but never coherently converging waves?
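The mathematics makes the puzzle vivid. For radiation from a point source, the wave equation admits both an outgoing (‘retarded’) and an incoming (‘advanced’) spherical solution on an exactly equal footing:

\[ \frac{\partial^{2}\psi}{\partial t^{2}} = c^{2}\nabla^{2}\psi, \qquad \psi_{\text{out}} = \frac{f(t - r/c)}{r}, \qquad \psi_{\text{in}} = \frac{f(t + r/c)}{r}. \]

Nothing in the equation itself prefers the diverging solution; the preference we actually observe has to be explained by something beyond the law.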

Physicists and philosophers disagree on the correct answer to this question – which might be fine if it applied only to stones in ponds. But the problem also crops up with electromagnetic waves and the emission of light or radio waves: anywhere, in fact, that we find radiating waves. What to say about it?

On the one hand, many physicists (and some philosophers) invoke a causal principle to explain the asymmetry. Consider an antenna transmitting a radio signal. Since the source causes the signal, and since causes precede their effects, the radio waves diverge from the antenna after it is switched on simply because they are the repercussions of an initial disturbance, namely the switching on of the antenna. Imagine the time-reverse process: a radio wave steadily collapses into an antenna before the latter has been turned on. On the face of it, this conflicts with the idea of causality, because the wave would be present before its cause (the antenna) had done anything. David Griffiths, Emeritus Professor of Physics at Reed College in Oregon and the author of a widely used textbook on classical electrodynamics, favours this explanation, going so far as to call a time-asymmetric principle of causality ‘the most sacred tenet in all of physics’.

On the other hand, some physicists (and many philosophers) reject appeals to causal notions and maintain that the asymmetry ought to be explained statistically. The reason why we find coherently diverging waves but never coherently converging ones, they maintain, is not that wave sources cause waves, but that a converging wave would require the co-ordinated behaviour of ‘wavelets’ coming in from multiple different directions of space – delicately co-ordinated behaviour so improbable that it would strike us as nearly miraculous.
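That improbability can be illustrated with a few lines of code. The sketch below is only an illustration, not anyone’s published model: N wavelets are summed at a focal point, once with coordinated phases and once with random ones. The coherent amplitude grows like N, the incoherent one only like the square root of N, so a chance convergence becomes absurdly unlikely as N grows.

import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # number of incoming wavelets

# Coordinated wavelets: every phase aligned at the focal point.
coordinated = np.sum(np.exp(1j * np.zeros(N)))

# Uncoordinated wavelets: independent random phases.
uncoordinated = np.sum(np.exp(1j * rng.uniform(0, 2 * np.pi, N)))

print(abs(coordinated))    # ~10000, i.e. N: a coherent converging wave
print(abs(uncoordinated))  # ~100, i.e. sqrt(N): mutually cancelling noise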

It so happens that this wave controversy has quite a distinguished history. In 1909, a few years before Russell’s pointed criticism of the notion of cause, Albert Einstein took part in a published debate concerning the radiation asymmetry. His opponent was the Swiss physicist Walther Ritz, a name you might not recognise.

It is in fact rather tragic that Ritz did not make larger waves in his own career, because his early reputation surpassed Einstein’s. The physicist Hermann Minkowski, who taught both Ritz and Einstein in Zurich, called Einstein a ‘lazy dog’ but had high praise for Ritz.  When the University of Zurich was looking to appoint its first professor of theoretical physics in 1909, Ritz was the top candidate for the position. According to one member of the hiring committee, he possessed ‘an exceptional talent, bordering on genius’. But he suffered from tuberculosis, and so, due to his failing health, he was passed over for the position, which went to Einstein instead. Ritz died that very year at age 31.

Months before his death, however, Ritz published a joint letter with Einstein summarising their disagreement. While Einstein thought that the irreversibility of radiation processes could be explained probabilistically, Ritz proposed what amounted to a causal explanation. He maintained that the reason for the asymmetry is that an elementary source of radiation has an influence on other sources in the future and not in the past.

This joint letter is something of a classic text, widely cited in the literature. What is less well-known is that, in the very same year, Einstein demonstrated a striking reversibility of his own. In a second published letter, he appears to take a position very close to Ritz’s – the very view he had dismissed just months earlier. According to the wave theory of light, Einstein now asserted, a wave source ‘produces a spherical wave that propagates outward. The inverse process does not exist as elementary process’. The only way in which converging waves can be produced, Einstein claimed, was by combining a very large number of coherently operating sources. He appears to have changed his mind.

Given Einstein’s titanic reputation, you might think that such a momentous shift would occasion a few ripples in the history of science. But I know of only one significant reference to his later statement: a letter from the philosopher Karl Popper to the journal Nature in 1956. In this letter, Popper describes the wave asymmetry in terms very similar to Einstein’s. And he also makes one particularly interesting remark, one that might help us to unpick the riddle. Coherently converging waves, Popper insisted, ‘would demand a vast number of distant coherent generators of waves the co-ordination of which, to be explicable, would have to be shown as originating from the centre’ (my italics).

This is, in fact, a particular instance of a much broader phenomenon. Consider two events that are spatially distant yet correlated with one another. If they are not related as cause and effect, they tend to be joint effects of a common cause. If, for example, two lamps in a room go out suddenly, it is unlikely that both bulbs just happened to burn out simultaneously. So we look for a common cause – perhaps a circuit breaker that tripped.

Common-cause inferences are so pervasive that it is difficult to imagine what we could know about the world beyond our immediate surroundings without them. Hume was right: judgments about causality are absolutely essential in going ‘beyond the evidence of the senses’. In his book The Direction of Time (1956), the philosopher Hans Reichenbach formulated a principle underlying such inferences: ‘If an improbable coincidence has occurred, there must exist a common cause.’ To the extent that we are bound to apply Reichenbach’s rule, we are all like the hard-boiled detective who doesn’t believe in coincidences.
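Reichenbach also gave the principle a precise probabilistic form: a genuine common cause C ‘screens off’ the correlation between its joint effects A and B, so that conditional on C the effects are independent,

\[ P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C), \]

even though unconditionally P(A and B) exceeds P(A)P(B). In the two-lamps example: once you know the breaker tripped, learning that one lamp is out tells you nothing further about the other.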

Read the entire article here.

The Big Crunch

It may just be possible that prophetic doomsayers have been right all along. The end is coming… well, in a few tens of billions of years. A group of physicists proposes that the cosmos will soon begin collapsing in on itself. Keep in mind that “soon” in cosmological terms runs into the billions of years. So, it does appear that we still have some time to crunch down our breakfast cereal a few more times before the ultimate universal apocalypse. Clearly this may not please those who seek the end of days within their lifetimes; and for rather different — scientific — reasons, cosmologists seem to be unhappy too.

From Phys.org:

Physicists have proposed a mechanism for “cosmological collapse” that predicts that the universe will soon stop expanding and collapse in on itself, obliterating all matter as we know it. Their calculations suggest that the collapse is “imminent”—on the order of a few tens of billions of years or so—which may not keep most people up at night, but for the physicists it’s still much too soon.

In a paper published in Physical Review Letters, physicists Nemanja Kaloper at the University of California, Davis; and Antonio Padilla at the University of Nottingham have proposed the cosmological collapse mechanism and analyzed its implications, which include an explanation of dark energy.

“The fact that we are seeing dark energy now could be taken as an indication of impending doom, and we are trying to look at the data to put some figures on the end date,” Padilla told Phys.org. “Early indications suggest the collapse will kick in in a few tens of billions of years, but we have yet to properly verify this.”

The main point of the paper is not so much when exactly the universe will end, but that the mechanism may help resolve some of the unanswered questions in physics. In particular, why is the universe expanding at an accelerating rate, and what is the dark energy causing this acceleration? These questions are related to the cosmological constant problem, which is that the predicted vacuum energy density of the universe causing the expansion is much larger than what is observed.

“I think we have opened up a brand new approach to what some have described as ‘the mother of all physics problems,’ namely the cosmological constant problem,” Padilla said. “It’s way too early to say if it will stand the test of time, but so far it has stood up to scrutiny, and it does seem to address the issue of vacuum energy contributions from the standard model, and how they gravitate.”

The collapse mechanism builds on the physicists’ previous research on vacuum energy sequestering, which they proposed to address the cosmological constant problem. The dynamics of vacuum energy sequestering predict that the universe will collapse, but don’t provide a specific mechanism for how collapse will occur.

According to the new mechanism, the universe originated under a set of specific initial conditions so that it naturally evolved to its present state of acceleration and will continue on a path toward collapse. In this scenario, once the collapse trigger begins to dominate, it does so in a period of “slow roll” that brings about the accelerated expansion we see today. Eventually the universe will stop expanding and reach a turnaround point at which it begins to shrink, culminating in a “big crunch.”

Read the entire article here.

Image: Image of the Cosmic Microwave Background (CMB) from nine years of WMAP data. The image reveals 13.77 billion year old temperature fluctuations (shown as color differences) that correspond to the seeds that grew to become the galaxies. Courtesy of NASA.

A Physics Based Theory of Life

Those who subscribe to a non-creationist theory of the origins of life tend to gravitate towards the idea of the assembly of self-replicating, organic molecules in our primeval oceans — the so-called primordial soup theory. Recently, however, Professor Jeremy England of MIT has proposed a thermodynamic explanation, which posits that inorganic matter tends to organize — under the right conditions — in a way that enables it to dissipate increasing amounts of energy. This is one of the fundamental attributes of living organisms.

Could we be the product of the Second Law of Thermodynamics, nothing more than the expression of increasing entropy?

Read more of this fascinating new hypothesis below or check out England’s paper on the Statistical Physics of Self-replication.

From Quanta:

Why does life exist?

Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.
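England’s derivation builds on a known result of non-equilibrium statistical mechanics, Gavin Crooks’ fluctuation theorem, which ties how irreversible a transition is to how much entropy it generates. Very schematically (a paraphrase, not the notation of England’s paper):

\[ \frac{\pi(\mathrm{I} \to \mathrm{II})}{\pi(\mathrm{II} \to \mathrm{I})} \sim e^{\Delta S_{\mathrm{tot}}}, \]

where the π are the probabilities of the forward and reverse transitions between macrostates I and II, and ΔS_tot is the total entropy produced, including the heat dumped into the surrounding bath. The more heat a configuration dissipates while forming, the more statistically favored, and the more irreversible, its formation becomes.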

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.
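A toy calculation makes the counting argument vivid. The sketch below is the standard textbook ‘Einstein solid’ exercise rather than anything from the article: it counts the ways a fixed budget of indivisible energy quanta can be arranged among two blocks of oscillators, and shows that configurations sharing the energy evenly utterly swamp those concentrating it in one block.

from math import comb

def multiplicity(quanta, oscillators):
    # Ways to distribute `quanta` among `oscillators` (stars and bars).
    return comb(quanta + oscillators - 1, quanta)

N = 100  # oscillators in each of two identical blocks
q = 200  # total energy quanta shared between the blocks

concentrated = multiplicity(q, N) * multiplicity(0, N)
spread_out = multiplicity(q // 2, N) ** 2

print(f"all energy in one block: {concentrated:.3e} microstates")
print(f"energy split evenly:     {spread_out:.3e} microstates")
# The even split wins by roughly 37 orders of magnitude, which is why
# energy is overwhelmingly likely to be found spread out.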

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.
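Schrödinger’s bookkeeping can be checked with one line of arithmetic. Taking round numbers, a joule of sunlight leaves the Sun’s surface at roughly 5,800 kelvin and is ultimately re-radiated by the plant and its surroundings at roughly 300 kelvin, so the entropy of the universe changes by about

\[ \Delta S \approx \frac{Q}{T_{\mathrm{cold}}} - \frac{Q}{T_{\mathrm{hot}}} = 1\,\mathrm{J}\left(\frac{1}{300\,\mathrm{K}} - \frac{1}{5800\,\mathrm{K}}\right) \approx +3\times 10^{-3}\ \mathrm{J/K}. \]

That positive balance is the budget out of which the plant can pay for its own internal order.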

Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

Read the entire story here.

Image: Carnot engine diagram, where an amount of heat QH flows from a high temperature TH furnace through the fluid of the “working body” (working substance) and the remaining heat QC flows into the cold sink TC, thus forcing the working substance to do mechanical work W on the surroundings, via cycles of contractions and expansions. Courtesy of Wikipedia.

A Godless Universe: Mind or Mathematics

In his science column for the NYT George Johnson reviews several recent books by noted thinkers who for different reasons believe science needs to expand its borders. Philosopher Thomas Nagel and physicist Max Tegmark both agree that our current understanding of the universe is rather limited and that science needs to turn to new or alternate explanations. Nagel, still an atheist, suggests in his book Mind and Cosmos that the mind somehow needs to be considered a fundamental structure of the universe. While Tegmark in his book Our Mathematical Universe: My Quest for the Ultimate Nature of Reality suggests that mathematics is the core, irreducible framework of the cosmos. Two radically different ideas — yet both are correct in one respect: we still know so very little about ourselves and our surroundings.

From the NYT:

Though he probably didn’t intend anything so jarring, Nicolaus Copernicus, in a 16th-century treatise, gave rise to the idea that human beings do not occupy a special place in the heavens. Nearly 500 years after replacing the Earth with the sun as the center of the cosmic swirl, we’ve come to see ourselves as just another species on a planet orbiting a star in the boondocks of a galaxy in the universe we call home. And this may be just one of many universes — what cosmologists, some more skeptically than others, have named the multiverse.

Despite the long string of demotions, we remain confident, out here on the edge of nowhere, that our band of primates has what it takes to figure out the cosmos — what the writer Timothy Ferris called “the whole shebang.” New particles may yet be discovered, and even new laws. But it is almost taken for granted that everything from physics to biology, including the mind, ultimately comes down to four fundamental concepts: matter and energy interacting in an arena of space and time.

There are skeptics who suspect we may be missing a crucial piece of the puzzle. Recently, I’ve been struck by two books exploring that possibility in very different ways. There is no reason why, in this particular century, Homo sapiens should have gathered all the pieces needed for a theory of everything. In displacing humanity from a privileged position, the Copernican principle applies not just to where we are in space but to when we are in time.

Since it was published in 2012, “Mind and Cosmos,” by the philosopher Thomas Nagel, is the book that has caused the most consternation. With his taunting subtitle — “Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False” — Dr. Nagel was rejecting the idea that there was nothing more to the universe than matter and physical forces. He also doubted that the laws of evolution, as currently conceived, could have produced something as remarkable as sentient life. That idea borders on anathema, and the book quickly met with a blistering counterattack. Steven Pinker, a Harvard psychologist, denounced it as “the shoddy reasoning of a once-great thinker.”

What makes “Mind and Cosmos” worth reading is that Dr. Nagel is an atheist, who rejects the creationist idea of an intelligent designer. The answers, he believes, may still be found through science, but only by expanding it further than it may be willing to go.

“Humans are addicted to the hope for a final reckoning,” he wrote, “but intellectual humility requires that we resist the temptation to assume that the tools of the kind we now have are in principle sufficient to understand the universe as a whole.”

Dr. Nagel finds it astonishing that the human brain — this biological organ that evolved on the third rock from the sun — has developed a science and a mathematics so in tune with the cosmos that it can predict and explain so many things.

Neuroscientists assume that these mental powers somehow emerge from the electrical signaling of neurons — the circuitry of the brain. But no one has come close to explaining how that occurs.

That, Dr. Nagel proposes, might require another revolution: showing that mind, along with matter and energy, is “a fundamental principle of nature” — and that we live in a universe primed “to generate beings capable of comprehending it.” Rather than being a blind series of random mutations and adaptations, evolution would have a direction, maybe even a purpose.

“Above all,” he wrote, “I would like to extend the boundaries of what is not regarded as unthinkable, in light of how little we really understand about the world.”

Dr. Nagel is not alone in entertaining such ideas. While rejecting anything mystical, the biologist Stuart Kauffman has suggested that Darwinian theory must somehow be expanded to explain the emergence of complex, intelligent creatures. And David J. Chalmers, a philosopher, has called on scientists to seriously consider “panpsychism” — the idea that some kind of consciousness, however rudimentary, pervades the stuff of the universe.

Some of this is a matter of scientific taste. It can be just as exhilarating, as Stephen Jay Gould proposed in “Wonderful Life,” to consider the conscious mind as simply a fluke, no more inevitable than the human appendix or a starfish’s five legs. But it doesn’t seem so crazy to consider alternate explanations.

Heading off in another direction, a new book by the physicist Max Tegmark suggests that a different ingredient — mathematics — needs to be admitted into science as one of nature’s irreducible parts. In fact, he believes, it may be the most fundamental of all.

In a well-known 1960 essay, the physicist Eugene Wigner marveled at “the unreasonable effectiveness of mathematics” in explaining the world. It is “something bordering on the mysterious,” he wrote, for which “there is no rational explanation.”

The best he could offer was that mathematics is “a wonderful gift which we neither understand nor deserve.”

Dr. Tegmark, in his new book, “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality,” turns the idea on its head: The reason mathematics serves as such a forceful tool is that the universe is a mathematical structure. Going beyond Pythagoras and Plato, he sets out to show how matter, energy, space and time might emerge from numbers.

Read the entire article here.

Defying Enemy Number One

Enemy number one in this case is not your favorite team’s arch-rival or your political nemesis or your neighbor’s nocturnal barking dog. It is not sugar, nor is it trans-fat. Enemy number one is not North Korea (close), nor is it the latest group of murderous terrorists (closer).

The real enemy is gravity. Not the movie, that is, but the natural phenomenon.

Gravity is constricting: it anchors us to our measly home planet, making extra-terrestrial exploration rather difficult. Gravity is painful: it drags us down, it makes us fall — and when we’re down, it helps other things fall on top of us. Gravity is an enigma.

But help may not be too distant; enter the Gravity Research Foundation. While the foundation’s mission may no longer be to counteract gravity, it still aims to help us better understand it.

From the NYT:

Not long after the bombings of Hiroshima and Nagasaki, while the world was reckoning with the specter of nuclear energy, a businessman named Roger Babson was worrying about another of nature’s forces: gravity.

It had been 55 years since his sister Edith drowned in the Annisquam River, in Gloucester, Mass., when gravity, as Babson later described it, “came up and seized her like a dragon and brought her to the bottom.” Later on, the dragon took his grandson, too, as he tried to save a friend during a boating mishap.

Something had to be done.

“It seems as if there must be discovered some partial insulator of gravity which could be used to save millions of lives and prevent accidents,” Babson wrote in a manifesto, “Gravity — Our Enemy Number One.” In 1949, drawing on his considerable wealth, he started the Gravity Research Foundation and began awarding annual cash prizes for the best new ideas for furthering his cause.

It turned out to be a hopeless one. By the time the 2014 awards were announced last month, the foundation was no longer hoping to counteract gravity — it forms the very architecture of space-time — but to better understand it. What began as a crank endeavor has become mainstream. Over the years, winners of the prizes have included the likes of Stephen Hawking, Freeman Dyson, Roger Penrose and Martin Rees.

With his theory of general relativity, Einstein described gravity with an elegance that has not been surpassed. A mass like the sun makes the universe bend, causing smaller masses like planets to move toward it.

The problem is that nature’s other three forces are described in an entirely different way, by quantum mechanics. In this system forces are conveyed by particles. Photons, the most familiar example, are the carriers of light. For many scientists, the ultimate prize would be proof that gravity is carried by gravitons, allowing it to mesh neatly with the rest of the machine.

So far that has been as insurmountable as Babson’s old dream. After nearly a century of trying, the best physicists have come up with is superstring theory, a self-consistent but possibly hollow body of mathematics that depends on the existence of extra dimensions and implies that our universe is one of a multitude, each unknowable to the rest.

With all the accomplishments our species has achieved, we could be forgiven for concluding that we have reached a dead end. But human nature compels us to go on.

This year’s top gravity prize of $4,000 went to Lawrence Krauss and Frank Wilczek. Dr. Wilczek shared a Nobel Prize in 2004 for his part in developing the theory of the strong nuclear force, the one that holds quarks together and forms the cores of atoms.

So far gravitons have eluded science’s best detectors, like LIGO, the Laser Interferometer Gravitational-Wave Observatory. Mr. Dyson suggested at a recent talk that the search might be futile, requiring an instrument with mirrors so massive that they would collapse to form a black hole — gravity defeating its own understanding. But in their paper Dr. Krauss and Dr. Wilczek suggest how gravitons might leave their mark on cosmic background radiation, the afterglow of the Big Bang.

There are other mysteries to contend with. Despite the toll it took on Babson’s family, theorists remain puzzled over why gravity is so much weaker than electromagnetism. Hold a refrigerator magnet over a paper clip, and it will fly upward and away from Earth’s pull.
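The refrigerator-magnet observation can be sharpened into a number. For two protons, the ratio of their electrostatic repulsion to their gravitational attraction is the same at any separation:

\[ \frac{F_{e}}{F_{g}} = \frac{e^{2}}{4\pi\varepsilon_{0}\,G\,m_{p}^{2}} \approx 1.2 \times 10^{36}. \]

Some thirty-six orders of magnitude separate the two forces, and it is this unexplained hierarchy that the proposals described next try to address.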

Reaching for an explanation, the physicists Lisa Randall and Raman Sundrum once proposed that gravity is diluted because it leaks into a parallel universe. Striking off in a different direction, Dr. Randall and another colleague, Matthew Reece, recently speculated that the pull of a disk of dark matter might be responsible for jostling the solar system and unleashing periodic comet storms like one that might have killed off the dinosaurs.

It was a young theorist named Bryce DeWitt who helped disabuse Babson of his dream of stopping such a mighty force. In “The Perfect Theory,” a new book about general relativity, the Oxford astrophysicist Pedro G. Ferreira tells how DeWitt, in need of a down payment for a house, entered the Gravity Research Foundation’s competition in 1953 with a paper showing why the attempt to make any kind of antigravity device was “a waste of time.”

He won the prize, the foundation became more respectable, and DeWitt went on to become one of the most prominent theorists of general relativity. Babson, however, was not entirely deterred. In 1962 after more than 100 prominent Atlantans were killed in a plane crash in Paris, he donated $5,000 to Emory University along with a marble monument “to remind students of the blessings forthcoming” once gravity is counteracted.

He paid for similar antigravity monuments at more than a dozen campuses, including one at Tufts University, where newly minted doctoral students in cosmology kneel before it in a ceremony in which an apple is dropped on their heads.

I thought of Babson recently during a poignant scene in the movie “Gravity,” in which two astronauts are floating high above Earth, stranded from home. During a moment of calm, one of them, Lt. Matt Kowalski (played by George Clooney), asks the other, Dr. Ryan Stone (Sandra Bullock), “What do you miss down there?”

She tells him about her daughter:

“She was 4. She was at school playing tag, slipped and hit her head, and that was it. The stupidest thing.” It was gravity that did her in.

Read the entire article here.

Image: Portrait of Isaac Newton (1642-1727) by Sir Godfrey Kneller (1646–1723). Courtesy of Wikipedia.

95.5 Percent is Made Up and It’s Dark

Physicists and astronomers observe the very small and the very big. Although they are focused on very different areas of scientific endeavor and discovery, they tend to agree on one key observation: 95.5 percent of the cosmos is currently invisible to us. That is, only around 4.5 percent of our physical universe is made up of matter or energy that we can see or sense directly through experimental interaction. The rest, well, it’s all dark — so-called dark matter and dark energy. But nobody really knows what or how or why. Effectively, despite tremendous progress in our understanding of our world, we are still in a global “Dark Age”.

From the New Scientist:

TO OUR eyes, stars define the universe. To cosmologists they are just a dusting of glitter, an insignificant decoration on the true face of space. Far outweighing ordinary stars and gas are two elusive entities: dark matter and dark energy. We don’t know what they are… except that they appear to be almost everything.

These twin apparitions might be enough to give us pause, and make us wonder whether all is right with the model universe we have spent the past century so carefully constructing. And they are not the only thing. Our standard cosmology also says that space was stretched into shape just a split second after the big bang by a third dark and unknown entity called the inflaton field. That might imply the existence of a multiverse of countless other universes hidden from our view, most of them unimaginably alien – just to make models of our own universe work.

Are these weighty phantoms too great a burden for our observations to bear – a wholesale return of conjecture out of a trifling investment of fact, as Mark Twain put it?

The physical foundation of our standard cosmology is Einstein’s general theory of relativity. Einstein began with a simple observation: that any object’s gravitational mass is exactly equal to its resistance to acceleration, or inertial mass. From that he deduced equations that showed how space is warped by mass and motion, and how we see that bending as gravity. Apples fall to Earth because Earth’s mass bends space-time.
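That equality is what makes free fall universal. Setting the gravitational pull on a body equal to its inertial response,

\[ m_{g}\,g = m_{i}\,a, \qquad m_{g} = m_{i} \;\Rightarrow\; a = g, \]

every object falls with the same acceleration whatever its mass, the observation Einstein elevated into the equivalence principle at the core of general relativity.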

In a relatively low-gravity environment such as Earth, general relativity’s effects look very like those predicted by Newton’s earlier theory, which treats gravity as a force that travels instantaneously between objects. With stronger gravitational fields, however, the predictions diverge considerably. One extra prediction of general relativity is that large accelerating masses send out tiny ripples in the weave of space-time called gravitational waves. While these waves have never yet been observed directly, a pair of dense stars called pulsars, discovered in 1974, are spiralling in towards each other just as they should if they are losing energy by emitting gravitational waves.

Gravity is the dominant force of nature on cosmic scales, so general relativity is our best tool for modelling how the universe as a whole moves and behaves. But its equations are fiendishly complicated, with a frightening array of levers to pull. If you then give them a complex input, such as the details of the real universe’s messy distribution of mass and energy, they become effectively impossible to solve. To make a working cosmological model, we make simplifying assumptions.

The main assumption, called the Copernican principle, is that we are not in a special place. The cosmos should look pretty much the same everywhere – as indeed it seems to, with stuff distributed pretty evenly when we look at large enough scales. This means there’s just one number to put into Einstein’s equations: the universal density of matter.

Einstein’s own first pared-down model universe, which he filled with an inert dust of uniform density, turned up a cosmos that contracted under its own gravity. He saw that as a problem, and circumvented it by adding a new term into the equations by which empty space itself gains a constant energy density. Its gravity turns out to be repulsive, so adding the right amount of this “cosmological constant” ensured the universe neither expanded nor contracted. When observations in the 1920s showed it was actually expanding, Einstein described this move as his greatest blunder.
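In this pared-down, uniform universe, general relativity reduces to the Friedmann equation for the cosmic scale factor a(t). With Einstein’s extra term included it reads, in a common convention,

\[ \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^{2}}{a^{2}} \;+\; \frac{\Lambda c^{2}}{3}, \]

where ρ is the matter density, k the spatial curvature and Λ the cosmological constant. With Λ = 0 a dust-filled universe cannot sit still; Einstein tuned Λ against the matter term to hold the scale factor fixed, which is the balancing act just described.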

It was left to others to apply the equations of relativity to an expanding universe. They arrived at a model cosmos that grows from an initial point of unimaginable density, and whose expansion is gradually slowed down by matter’s gravity.

This was the birth of big bang cosmology. Back then, the main question was whether the expansion would ever come to a halt. The answer seemed to be no; there was just too little matter for gravity to rein in the fleeing galaxies. The universe would coast outwards forever.

Then the cosmic spectres began to materialise. The first emissary of darkness put a foot in the door as long ago as the 1930s, but was only fully seen in the late 1970s when astronomers found that galaxies are spinning too fast. The gravity of the visible matter would be too weak to hold these galaxies together according to general relativity, or indeed plain old Newtonian physics. Astronomers concluded that there must be a lot of invisible matter to provide extra gravitational glue.

The existence of dark matter is backed up by other lines of evidence, such as how groups of galaxies move, and the way they bend light on its way to us. It is also needed to pull things together to begin galaxy-building in the first place. Overall, there seems to be about five times as much dark matter as visible gas and stars.

Dark matter’s identity is unknown. It seems to be something beyond the standard model of particle physics, and despite our best efforts we have yet to see or create a dark matter particle on Earth (see “Trouble with physics: Smashing into a dead end”). But it changed cosmology’s standard model only slightly: its gravitational effect in general relativity is identical to that of ordinary matter, and even such an abundance of gravitating stuff is too little to halt the universe’s expansion.

The second form of darkness required a more profound change. In the 1990s, astronomers traced the expansion of the universe more precisely than ever before, using measurements of explosions called type Ia supernovae. They showed that the cosmic expansion is accelerating. It seems some repulsive force, acting throughout the universe, is now comprehensively trouncing matter’s attractive gravity.
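The sense in which this force is ‘repulsive’ is made precise by the acceleration form of the Friedmann equations: the expansion speeds up or slows down according to density plus pressure,

\[ \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right), \]

so ordinary matter, with p ≥ 0, always decelerates the expansion, while a component with pressure below -ρc²/3 accelerates it. The supernova results therefore point to something with strongly negative pressure dominating the universe today.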

This could be Einstein’s cosmological constant resurrected, an energy in the vacuum that generates a repulsive force, although particle physics struggles to explain why space should have the rather small implied energy density. So imaginative theorists have devised other ideas, including energy fields created by as-yet-unseen particles, and forces from beyond the visible universe or emanating from other dimensions.

Whatever it might be, dark energy seems real enough. The cosmic microwave background radiation, released when the first atoms formed just 370,000 years after the big bang, bears a faint pattern of hotter and cooler spots that reveals where the young cosmos was a little more or less dense. The typical spot sizes can be used to work out to what extent space as a whole is warped by the matter and motions within it. It appears to be almost exactly flat, meaning all these bending influences must cancel out. This, again, requires some extra, repulsive energy to balance the bending due to expansion and the gravity of matter. A similar story is told by the pattern of galaxies in space.

All of this leaves us with a precise recipe for the universe. The average density of ordinary matter in space is 0.426 yoctograms per cubic metre (a yoctogram is 10^-24 grams, so that equates to about a quarter of a proton mass), making up 4.5 per cent of the total energy density of the universe. Dark matter makes up 22.5 per cent, and dark energy 73 per cent. Our model of a big-bang universe based on general relativity fits our observations very nicely – as long as we are happy to make 95.5 per cent of it up.
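Those numbers can be cross-checked in a few lines. The sketch below is my own arithmetic, assuming a Hubble constant of about 70 km/s/Mpc (close to the value behind such estimates): it computes the critical density from rho_c = 3H^2 / (8 pi G) and takes the 4.5 per cent slice of it.

import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22        # one megaparsec in metres
H0 = 70e3 / Mpc       # Hubble constant, 1/s
m_proton = 1.673e-27  # kg

rho_crit = 3 * H0**2 / (8 * math.pi * G)  # kg per cubic metre
rho_ordinary = 0.045 * rho_crit           # the 4.5 per cent slice

print(rho_ordinary * 1e27)      # ~0.41 yoctograms per cubic metre
print(rho_ordinary / m_proton)  # ~0.25 protons per cubic metre

The small mismatch with 0.426 simply reflects the assumed value of the Hubble constant.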

Arguably, we must invent even more than that. To explain why the universe looks so extraordinarily uniform in all directions, today’s consensus cosmology contains a third exotic element. When the universe was just 10^-36 seconds old, an overwhelming force took over. Called the inflaton field, it was repulsive like dark energy, but far more powerful, causing the universe to expand explosively by a factor of more than 10^25, flattening space and smoothing out any gross irregularities.
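The outlandish factor follows from the exponential character of inflation. While the inflaton dominates, the scale factor grows as

\[ a(t) \propto e^{Ht}, \]

so N ‘e-folds’ of inflation stretch lengths by e^N; the commonly quoted minimum of about 60 e-folds gives e^60, roughly 10^26, in line with the factor of more than 10^25 above.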

When this period of inflation ended, the inflaton field transformed into matter and radiation. Quantum fluctuations in the field became slight variations in density, which eventually became the spots in the cosmic microwave background, and today’s galaxies. Again, this fantastic story seems to fit the observational facts. And again it comes with conceptual baggage. Inflation is no trouble for general relativity – mathematically it just requires an add-on term identical to the cosmological constant. But at one time this inflaton field must have made up 100 per cent of the contents of the universe, and its origin poses as much of a puzzle as either dark matter or dark energy. What’s more, once inflation has started it proves tricky to stop: it goes on to create a further legion of universes divorced from our own. For some cosmologists, the apparent prediction of this multiverse is an urgent reason to revisit the underlying assumptions of our standard cosmology (see “Trouble with physics: Time to rethink cosmic inflation?”).

The model faces a few observational niggles, too. The big bang makes much more lithium-7 in theory than the universe contains in practice. The model does not explain the possible alignment in some features in the cosmic background radiation, or why galaxies along certain lines of sight seem biased to spin left-handedly. A newly discovered supergalactic structure 4 billion light years long calls into question the assumption that the universe is smooth on large scales.

Read the entire story here.

Image: Petrarch, who first conceived the idea of a European “Dark Age”, by Andrea di Bartolo di Bargilla, c1450. Courtesy of Galleria degli Uffizi, Florence, Italy / Wikipedia.

The Arrow of Time

Einstein’s “spooky action at a distance” and quantum information theory (QIT) may help explain the so-called arrow of time — specifically, why it seems to flow in only one direction. Astronomer Arthur Eddington first described this asymmetry in 1927, and it has stumped theoreticians ever since.

At the macro level the classic and simple example is that of an egg breaking when it hits your kitchen floor: repeat this over and over, and the egg will always make a scrambled mess on your clean tiles, but it will never rise up from the floor and spontaneously re-assemble in your slippery hand. Yet at the micro level, physicists know their underlying laws apply equally in both directions. Enter two newer ideas from the quantum world that may help us better understand this perplexing forward flow of time: entanglement and QIT.

From Wired:

Coffee cools, buildings crumble, eggs break and stars fizzle out in a universe that seems destined to degrade into a state of uniform drabness known as thermal equilibrium. The astronomer-philosopher Sir Arthur Eddington in 1927 cited the gradual dispersal of energy as evidence of an irreversible “arrow of time.”

But to the bafflement of generations of physicists, the arrow of time does not seem to follow from the underlying laws of physics, which work the same going forward in time as in reverse. By those laws, it seemed that if someone knew the paths of all the particles in the universe and flipped them around, energy would accumulate rather than disperse: Tepid coffee would spontaneously heat up, buildings would rise from their rubble and sunlight would slink back into the sun.

“In classical physics, we were struggling,” said Sandu Popescu, a professor of physics at the University of Bristol in the United Kingdom. “If I knew more, could I reverse the event, put together all the molecules of the egg that broke? Why am I relevant?”

Surely, he said, time’s arrow is not steered by human ignorance. And yet, since the birth of thermodynamics in the 1850s, the only known approach for calculating the spread of energy was to formulate statistical distributions of the unknown trajectories of particles, and show that, over time, the ignorance smeared things out.

Now, physicists are unmasking a more fundamental source for the arrow of time: Energy disperses and objects equilibrate, they say, because of the way elementary particles become intertwined when they interact — a strange effect called “quantum entanglement.”

“Finally, we can understand why a cup of coffee equilibrates in a room,” said Tony Short, a quantum physicist at Bristol. “Entanglement builds up between the state of the coffee cup and the state of the room.”

Popescu, Short and their colleagues Noah Linden and Andreas Winter reported the discovery in the journal Physical Review E in 2009, arguing that objects reach equilibrium, or a state of uniform energy distribution, within an infinite amount of time by becoming quantum mechanically entangled with their surroundings. Similar results by Peter Reimann of the University of Bielefeld in Germany appeared several months earlier in Physical Review Letters. Short and a collaborator strengthened the argument in 2012 by showing that entanglement causes equilibration within a finite time. And, in work that was posted on the scientific preprint site arXiv.org in February, two separate groups have taken the next step, calculating that most physical systems equilibrate rapidly, on time scales proportional to their size. “To show that it’s relevant to our actual physical world, the processes have to be happening on reasonable time scales,” Short said.

The tendency of coffee — and everything else — to reach equilibrium is “very intuitive,” said Nicolas Brunner, a quantum physicist at the University of Geneva. “But when it comes to explaining why it happens, this is the first time it has been derived on firm grounds by considering a microscopic theory.”

If the new line of research is correct, then the story of time’s arrow begins with the quantum mechanical idea that, deep down, nature is inherently uncertain. An elementary particle lacks definite physical properties and is defined only by probabilities of being in various states. For example, at a particular moment, a particle might have a 50 percent chance of spinning clockwise and a 50 percent chance of spinning counterclockwise. An experimentally tested theorem by the Northern Irish physicist John Bell says there is no “true” state of the particle; the probabilities are the only reality that can be ascribed to it.

Quantum uncertainty then gives rise to entanglement, the putative source of the arrow of time.

When two particles interact, they can no longer even be described by their own, independently evolving probabilities, called “pure states.” Instead, they become entangled components of a more complicated probability distribution that describes both particles together. It might dictate, for example, that the particles spin in opposite directions. The system as a whole is in a pure state, but the state of each individual particle is “mixed” with that of its acquaintance. The two could travel light-years apart, and the spin of each would remain correlated with that of the other, a feature Albert Einstein famously described as “spooky action at a distance.”
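A short calculation shows what ‘pure whole, mixed parts’ means in practice. The sketch below is a generic textbook exercise, not taken from the papers discussed: two spins are prepared in the Bell state (|01> - |10>)/sqrt(2); the pair is in a definite pure state, yet tracing out one particle leaves the other in a featureless 50/50 mixture carrying a full bit of entropy.

import numpy as np

# Bell singlet (|01> - |10>)/sqrt(2) in the two-qubit basis {00, 01, 10, 11}
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # density matrix of the pair: a pure state

# Partial trace over the second qubit gives the state of the first alone
rho_1 = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_1)  # [[0.5, 0], [0, 0.5]]: maximally mixed

# Von Neumann entropy of the reduced state, in bits
p = np.linalg.eigvalsh(rho_1)
print(-sum(x * np.log2(x) for x in p if x > 0))
# 1.0 bit: all the information about the pair sits in the correlations,
# none in either particle on its own.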

“Entanglement is in some sense the essence of quantum mechanics,” or the laws governing interactions on the subatomic scale, Brunner said. The phenomenon underlies quantum computing, quantum cryptography and quantum teleportation.

The idea that entanglement might explain the arrow of time first occurred to Seth Lloyd about 30 years ago, when he was a 23-year-old philosophy graduate student at Cambridge University with a Harvard physics degree. Lloyd realized that quantum uncertainty, and the way it spreads as particles become increasingly entangled, could replace human uncertainty in the old classical proofs as the true source of the arrow of time.

Using an obscure approach to quantum mechanics that treated units of information as its basic building blocks, Lloyd spent several years studying the evolution of particles in terms of shuffling 1s and 0s. He found that as the particles became increasingly entangled with one another, the information that originally described them (a “1” for clockwise spin and a “0” for counterclockwise, for example) would shift to describe the system of entangled particles as a whole. It was as though the particles gradually lost their individual autonomy and became pawns of the collective state. Eventually, the correlations contained all the information, and the individual particles contained none. At that point, Lloyd discovered, particles arrived at a state of equilibrium, and their states stopped changing, like coffee that has cooled to room temperature.

“What’s really going on is things are becoming more correlated with each other,” Lloyd recalls realizing. “The arrow of time is an arrow of increasing correlations.”

The idea, presented in his 1988 doctoral thesis, fell on deaf ears. When he submitted it to a journal, he was told that there was “no physics in this paper.” Quantum information theory “was profoundly unpopular” at the time, Lloyd said, and questions about time’s arrow “were for crackpots and Nobel laureates who have gone soft in the head,” he remembers one physicist telling him.

“I was darn close to driving a taxicab,” Lloyd said.

Advances in quantum computing have since turned quantum information theory into one of the most active branches of physics. Lloyd is now a professor at the Massachusetts Institute of Technology, recognized as one of the founders of the discipline, and his overlooked idea has resurfaced in a stronger form in the hands of the Bristol physicists. The newer proofs are more general, researchers say, and hold for virtually any quantum system.

“When Lloyd proposed the idea in his thesis, the world was not ready,” said Renato Renner, head of the Institute for Theoretical Physics at ETH Zurich. “No one understood it. Sometimes you have to have the idea at the right time.”

Read the entire article here.

Image: English astrophysicist Sir Arthur Stanley Eddington (1882–1944). Courtesy: George Grantham Bain Collection (Library of Congress).

What of Consciousness?

As we dig into the traditional holiday fare surrounded by family and friends, it is useful to ponder whether any of it is actually real or whether it is all inside the mind. The in-laws may be a figment of the brain, but the wine probably is real.

From the New Scientist:

Descartes might have been onto something with “I think therefore I am”, but surely “I think therefore you are” is going a bit far? Not for some of the brightest minds of 20th-century physics as they wrestled mightily with the strange implications of the quantum world.

According to prevailing wisdom, a quantum particle such as an electron or photon can only be properly described as a mathematical entity known as a wave function. Wave functions can exist as “superpositions” of many states at once. A photon, for instance, can circulate in two different directions around an optical fibre; or an electron can simultaneously spin clockwise and anticlockwise or be in two positions at once.

When any attempt is made to observe these simultaneous existences, however, something odd happens: we see only one. How do many possibilities become one physical reality?

This is the central question in quantum mechanics, and has spawned a plethora of proposals, or interpretations. The most popular is the Copenhagen interpretation, which says nothing is real until it is observed, or measured. Observing a wave function causes the superposition to collapse.
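Whatever its interpretation, the recipe itself is simple to state. The Born rule says a superposition a|0> + b|1> yields outcome 0 with probability |a|^2 and outcome 1 with probability |b|^2, after which the wave function is replaced by the state actually seen. A minimal simulation of that textbook recipe (an illustration only, not a model of any particular interpretation):

import numpy as np

rng = np.random.default_rng()

def measure(state):
    # Born-rule measurement of one qubit in the {|0>, |1>} basis.
    probs = np.abs(state) ** 2
    outcome = rng.choice(2, p=probs)
    collapsed = np.zeros(2)
    collapsed[outcome] = 1.0  # the superposition collapses to one state
    return outcome, collapsed

state = np.array([1, 1]) / np.sqrt(2)  # both possibilities at once
outcomes = [measure(state)[0] for _ in range(10_000)]
print(sum(outcomes) / len(outcomes))  # ~0.5: each run finds just one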

However, Copenhagen says nothing about what exactly constitutes an observation. John von Neumann broke this silence and suggested that observation is the action of a conscious mind. It’s an idea also put forward by Max Planck, the founder of quantum theory, who said in 1931, “I regard consciousness as fundamental. I regard matter as derivative from consciousness.”

That argument relies on the view that there is something special about consciousness, especially human consciousness. Von Neumann argued that everything in the universe that is subject to the laws of quantum physics creates one vast quantum superposition. But the conscious mind is somehow different. It is thus able to select out one of the quantum possibilities on offer, making it real – to that mind, at least.

Henry Stapp of the Lawrence Berkeley National Laboratory in California is one of the few physicists who still subscribe to this notion: we are “participating observers” whose minds cause the collapse of superpositions, he says. Before human consciousness appeared, there existed a multiverse of potential universes, Stapp says. The emergence of a conscious mind in one of these potential universes, ours, gives it a special status: reality.

There are many objectors. One problem is that many of the phenomena involved are poorly understood. “There’s a big question in philosophy about whether consciousness actually exists,” says Matthew Donald, a philosopher of physics at the University of Cambridge. “When you add on quantum mechanics it all gets a bit confused.”

Donald prefers an interpretation that is arguably even more bizarre: “many minds”. This idea – related to the “many worlds” interpretation of quantum theory, which has each outcome of a quantum decision happen in a different universe – argues that an individual observing a quantum system sees all the many states, but each in a different mind. These minds all arise from the physical substance of the brain, and share a past and a future, but cannot communicate with each other about the present.

Though it sounds hard to swallow, this and other approaches to understanding the role of the mind in our perception of reality are all worthy of attention, Donald reckons. “I take them very seriously,” he says.

Read the entire article here.

Image courtesy of Google Search.

The Promise of Quantum Computation

Advances in quantum physics and in the associated realm of quantum information promise to revolutionize computing. Imagine a computer several trillion times faster than present-day supercomputers — well, that’s where we are heading.

[div class=attrib]From the New York Times:[end-div]

THIS summer, physicists celebrated a triumph that many consider fundamental to our understanding of the physical world: the discovery, after a multibillion-dollar effort, of the Higgs boson.

Given its importance, many of us in the physics community expected the event to earn this year’s Nobel Prize in Physics. Instead, the award went to achievements in a field far less well known and vastly less expensive: quantum information.

It may not catch as many headlines as the hunt for elusive particles, but the field of quantum information may soon answer questions even more fundamental — and upsetting — than the ones that drove the search for the Higgs. It could well usher in a radical new era of technology, one that makes today’s fastest computers look like hand-cranked adding machines.

The basis for both the work behind the Higgs search and quantum information theory is quantum physics, the most accurate and powerful theory in all of science. With it we created remarkable technologies like the transistor and the laser, which, in time, were transformed into devices — computers and iPhones — that reshaped human culture.

But the very usefulness of quantum physics masked a disturbing dissonance at its core. There are mysteries — summed up neatly in Werner Heisenberg’s famous adage “atoms are not things” — lurking at the heart of quantum physics suggesting that our everyday assumptions about reality are no more than illusions.

Take the “principle of superposition,” which holds that things at the subatomic level can literally be in two places at once. Worse, it means they can be two things at once. This superposition animates the famous parable of Schrödinger’s cat, whereby a wee kitty is left both living and dead at the same time because its fate depends on a superposed quantum particle.

For decades such mysteries were debated but never pushed toward resolution, in part because no resolution seemed possible and, in part, because useful work could go on without resolving them (an attitude sometimes called “shut up and calculate”). Scientists could attract money and press with ever larger supercolliders while ignoring such pesky questions.

But as this year’s Nobel recognizes, that’s starting to change. Increasingly clever experiments are exploiting advances in cheap, high-precision lasers and atomic-scale transistors. Quantum information studies often require nothing more than some equipment on a table and a few graduate students. In this way, quantum information’s progress has come not by bludgeoning nature into submission but by subtly tricking it to step into the light.

Take the superposition debate. One camp claims that a deeper level of reality lies hidden beneath all the quantum weirdness. Once the so-called hidden variables controlling reality are exposed, they say, the strangeness of superposition will evaporate.

Another camp claims that superposition shows us that potential realities matter just as much as the single, fully manifested one we experience. But what collapses the potential electrons in their two locations into the one electron we actually see? According to this interpretation, it is the very act of looking; the measurement process collapses an ethereal world of potentials into the one real world we experience.

And a third major camp argues that particles can be two places at once only because the universe itself splits into parallel realities at the moment of measurement, one universe for each particle location — and thus an infinite number of ever splitting parallel versions of the universe (and us) are all evolving alongside one another.

These fundamental questions might have lived forever at the intersection of physics and philosophy. Then, in the 1980s, a steady advance of low-cost, high-precision lasers and other “quantum optical” technologies began to appear. With these new devices, researchers, including this year’s Nobel laureates, David J. Wineland and Serge Haroche, could trap and subtly manipulate individual atoms or light particles. Such exquisite control of the nano-world allowed them to design subtle experiments probing the meaning of quantum weirdness.

Soon at least one interpretation, the most common-sense version of hidden variables, was completely ruled out.

At the same time new and even more exciting possibilities opened up as scientists began thinking of quantum physics in terms of information, rather than just matter — in other words, asking if physics fundamentally tells us more about our interaction with the world (i.e., our information) than the nature of the world by itself (i.e., matter). And so the field of quantum information theory was born, with very real new possibilities in the very real world of technology.

What does this all mean in practice? Take one area where quantum information theory holds promise, that of quantum computing.

Classical computers use “bits” of information that can be either 0 or 1. But quantum-information technologies let scientists consider “qubits,” quantum bits of information that are both 0 and 1 at the same time. Logic circuits, made of qubits directly harnessing the weirdness of superpositions, allow a quantum computer to calculate vastly faster than anything existing today. A quantum machine using no more than 300 qubits would be a million, trillion, trillion, trillion times faster than the most modern supercomputer.
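
For a sense of the scale behind that claim, an n-qubit register spans a superposition over 2^n basis states. A minimal sketch in Python (only the 300-qubit figure is taken from the article):

    # Number of classical basis states spanned by an n-qubit register.
    # For n = 300 this dwarfs the estimated number of atoms in the
    # observable universe (roughly 10**80).
    n = 300
    states = 2 ** n
    print(f"2**{n} = {states:.3e}")  # about 2.037e+90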

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bloch sphere representation of a qubit, the fundamental building block of quantum computers. Courtesy of Wikipedia.[end-div]

Engage the Warp Engines

According to Star Trek’s fictional history, warp engines were invented in 2063. That gives us just over 50 years. Though very unlikely given our current technological prowess and general lack of understanding of the cosmos, warp engines may be inching just a little closer to being realized. But, please, no photon torpedoes!

[div class=attrib]From Wired:[end-div]

NASA scientists now think that the famous warp drive concept is a realistic possibility, and that in the far future humans could regularly travel faster than the speed of light.

A warp drive would work by “warping” spacetime around any spaceship, which physicist Miguel Alcubierre showed was theoretically possible in 1994, albeit well beyond the current technical capabilities of humanity. However, any such Alcubierre drive was assumed to require more energy — equivalent to the mass-energy of the whole planet of Jupiter — than could ever possibly be supplied, rendering it impossible to build.

But now scientists believe that those requirements might not be so vast, making warp travel a tangible possibility. Harold White, from NASA’s Johnson Space Centre, revealed the news on Sept. 14 at the 100 Year Starship Symposium, a gathering to discuss the possibilities and challenges of interstellar space travel. Space.com reports that White and his team have calculated that the amount of energy required to create an Alcubierre drive may be smaller than first thought.

The drive works by using a wave to compress the spacetime in front of the spaceship while expanding the spacetime behind it. The ship itself would float in a “bubble” of normal spacetime that would ride along the wave of compressed spacetime, the way a surfer rides a break. The ship, inside the warp bubble, would be going faster than the speed of light relative to objects outside the bubble.

By changing the shape of the warp bubble from a sphere to more of a rounded doughnut, White claims that the energy requirements will be far, far smaller for any faster-than-light ship — merely equivalent to the mass-energy of an object the size of Voyager 1.
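
To put the claimed reduction in perspective, here is a back-of-envelope comparison using E = mc². The masses below are rough public figures, not numbers from White's calculation:

    # Rough mass-energy comparison of the two benchmarks named above.
    # Masses are approximate; this illustrates the scale, nothing more.
    c = 2.998e8            # speed of light, m/s
    m_jupiter = 1.9e27     # mass of Jupiter, kg (approximate)
    m_voyager1 = 7.2e2     # mass of Voyager 1, kg (approximate)
    e_jupiter = m_jupiter * c**2
    e_voyager1 = m_voyager1 * c**2
    print(f"Jupiter:   {e_jupiter:.2e} J")
    print(f"Voyager 1: {e_voyager1:.2e} J")
    print(f"Reduction: {e_jupiter / e_voyager1:.1e}x")  # about 2.6e24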

Alas, before you start plotting which stars you want to visit first, don’t expect one to appear within our lifetimes. Any warp drive big enough to transport a ship would still require vast amounts of energy by today’s standards, which would probably necessitate exploiting dark energy — but we don’t know yet what, exactly, dark energy is, nor whether it’s something a spaceship could easily harness. There’s also the issue that we have no idea how to create or maintain a warp bubble, let alone what it would be made out of. It could even, if not constructed properly, create unintended black holes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: U.S.S Enterprise D. Courtesy of Startrek.com.[end-div]

First Ever Demonstration of Time Cloaking

[div class=attrib]From the Physics arXiv for Technology Review:[end-div]

Physicists have created a “hole in time” using the temporal equivalent of an invisibility cloak.

Invisibility cloaks are the result of physicists’ newfound ability to distort electromagnetic fields in extreme ways. The idea is to steer light around a volume of space so that anything inside this region is essentially invisible.

The effect has generated huge interest. The first invisibility cloaks worked only at microwave frequencies, but in only a few years physicists have found ways to create cloaks that work for visible light, for sound and for ocean waves. They’ve even designed illusion cloaks that can make one object look like another.

Today, Moti Fridman and buddies, at Cornell University in Ithaca, go a step further. These guys have designed and built a cloak that hides events in time.

Time cloaking is possible because of a kind of duality between space and time in electromagnetic theory. In particular, the diffraction of a beam of light in space is mathematically equivalent to the temporal propagation of light through a dispersive medium. In other words, diffraction and dispersion are symmetric in spacetime.
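
The article does not write the equations out, but the duality it invokes is the standard one between paraxial diffraction and group-velocity dispersion: both are governed by the same Schrödinger-like evolution, with the transverse coordinate traded for time (signs depend on convention),

$$\frac{\partial A}{\partial z} = \frac{i}{2k}\,\frac{\partial^2 A}{\partial x^2} \quad\longleftrightarrow\quad \frac{\partial A}{\partial z} = -\frac{i\beta_2}{2}\,\frac{\partial^2 A}{\partial \tau^2},$$

where A is the field envelope, k the wavenumber, and β₂ the group-velocity dispersion of the medium.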

That immediately leads to an interesting idea. Just as it’s easy to make a lens that focuses light in space using diffraction, so it is possible to use dispersion to make a lens that focuses in time.

Such a time-lens can be made using an electro-optic modulator, for example, and has a variety of familiar properties. “This time-lens can, for example, magnify or compress in time,” say Fridman and co.

This magnifying and compressing in time is important.

The trick to building a temporal cloak is to place two time-lenses in series and then send a beam of light through them. The first compresses the light in time while the second decompresses it again.

But this leaves a gap. For a short period, there is a kind of hole in time in which any event goes unrecorded.

So to an observer, the light coming out of the second time-lens appears undistorted, as if no event has occurred.

In effect, the space between the two lenses is a kind of spatio-temporal cloak that deletes changes that occur in short periods of time.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Original paper from arXiv.org here.[end-div]

Richard Feynman on the Ascendant

Genius – The Life and Science of Richard Feynman by James Gleick was a good first course for those fascinated by Richard Feynman’s significant contributions to physics, cosmology (and percussion).

Now, nearly two decades later, come two more biographies that observe Richard Feynman from very different perspectives, reviewed in the New York Review of Books. The first, Lawrence Krauss’s Quantum Man, is the weighty main course; the second, by writer Jim Ottaviani and artist Leland Myrick, is a graphic-book (as in comic) biography, and a delicious dessert.

In his review — The ‘Dramatic Picture’ of Richard Feynman — Freeman Dyson rightly posits that Richard Feynman’s star may now, or soon, be in the same exalted sphere as Einstein and Hawking. Though, type “Richard” into Google search, wait for its predictive text to fill in the rest, and you’ll find that Richard Nixon, Richard Dawkins and Richard Branson still rank higher than this giant of physics.

[div class=attrib]Freeman Dyson for the New York Review of Books:[end-div]

In the last hundred years, since radio and television created the modern worldwide mass-market entertainment industry, there have been two scientific superstars, Albert Einstein and Stephen Hawking. Lesser lights such as Carl Sagan and Neil Tyson and Richard Dawkins have a big public following, but they are not in the same class as Einstein and Hawking. Sagan, Tyson, and Dawkins have fans who understand their message and are excited by their science. Einstein and Hawking have fans who understand almost nothing about science and are excited by their personalities.

On the whole, the public shows good taste in its choice of idols. Einstein and Hawking earned their status as superstars, not only by their scientific discoveries but by their outstanding human qualities. Both of them fit easily into the role of icon, responding to public adoration with modesty and good humor and with provocative statements calculated to command attention. Both of them devoted their lives to an uncompromising struggle to penetrate the deepest mysteries of nature, and both still had time left over to care about the practical worries of ordinary people. The public rightly judged them to be genuine heroes, friends of humanity as well as scientific wizards.

Two new books now raise the question of whether Richard Feynman is rising to the status of superstar. The two books are very different in style and in substance. Lawrence Krauss’s book, Quantum Man, is a narrative of Feynman’s life as a scientist, skipping lightly over the personal adventures that have been emphasized in earlier biographies. Krauss succeeds in explaining in nontechnical language the essential core of Feynman’s thinking.

… The other book, by writer Jim Ottaviani and artist Leland Myrick, is very different. It is a comic-book biography of Feynman, containing 266 pages of pictures of Feynman and his legendary adventures. In every picture, bubbles of text record Feynman’s comments, mostly taken from stories that he and others had told and published in earlier books.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Shelley Gazin/Corbis.[end-div]

When the multiverse and many-worlds collide

[div class=attrib]From the New Scientist:[end-div]

TWO of the strangest ideas in modern physics – that the cosmos constantly splits into parallel universes in which every conceivable outcome of every event happens, and the notion that our universe is part of a larger multiverse – have been unified into a single theory. This solves a bizarre but fundamental problem in cosmology and has set physics circles buzzing with excitement, as well as some bewilderment.

The problem is the observability of our universe. While most of us simply take it for granted that we should be able to observe our universe, it is a different story for cosmologists. When they apply quantum mechanics – which successfully describes the behaviour of very small objects like atoms – to the entire cosmos, the equations imply that it must exist in many different states simultaneously, a phenomenon called a superposition. Yet that is clearly not what we observe.

Cosmologists reconcile this seeming contradiction by assuming that the superposition eventually “collapses” to a single state. But they tend to ignore the problem of how or why such a collapse might occur, says cosmologist Raphael Bousso at the University of California, Berkeley. “We’ve no right to assume that it collapses. We’ve been lying to ourselves about this,” he says.

In an attempt to find a more satisfying way to explain the universe’s observability, Bousso, together with Leonard Susskind at Stanford University in California, turned to the work of physicists who have puzzled over the same problem but on a much smaller scale: why tiny objects such as electrons and photons exist in a superposition of states but larger objects like footballs and planets apparently do not.

This problem is captured in the famous thought experiment of Schrödinger’s cat. This unhappy feline is inside a sealed box containing a vial of poison that will break open when a radioactive atom decays. Being a quantum object, the atom exists in a superposition of states – so it has both decayed and not decayed at the same time. This implies that the vial must be in a superposition of states too – both broken and unbroken. And if that’s the case, then the cat must be both dead and alive as well.

[div class=attrib]More from theSource here.[end-div]

The Cutting-Edge Physics of Jackson Pollock

 

Image: Untitled, ca. 1948-49, by Jackson Pollock.

[div class=attrib]From Wired:[end-div]

Jackson Pollock, famous for his deceptively random-seeming drip paintings, took advantage of certain features of fluid dynamics years before physicists thought to study them.

“His particular painting technique essentially lets physics be a player in the creative process,” said physicist Andrzej Herczynski of Boston College, coauthor of a new paper in Physics Today that analyzes the physics in Pollock’s art. “To the degree that he lets physics take a role in the painting process, he is inviting physics to be a coauthor of his pieces.”

Pollock’s unique technique — letting paint drip and splatter on the floor rather than spreading it on a vertical canvas — revolutionized the art world in the 1940s. The resulting streaks and blobs look haphazard, but art historians and, more recently, physicists argue they’re anything but. Some have suggested that the snarls of paint have lasting appeal because they reflect fractal geometry that shows up in clouds and coastlines.

Now, Boston College art historian Claude Cernuschi, Harvard mathematician Lakshminarayanan Mahadevan and Herczynski have turned the tools of physics on Pollock’s painting process. In what they believe is the first quantitative analysis of drip painting, the researchers derived an equation for how Pollock spread paint.

The team focused on the painting Untitled 1948-49, which features wiggling lines and curlicues of red paint. Those loops formed through a fluid instability called coiling, in which thick fluids fold onto themselves like coils of rope.

“People thought perhaps Pollock created this effect by wiggling his hand in a sinusoidal way, but he didn’t,” Herczynski said.

Coiling is familiar to anyone who’s ever squeezed honey on toast, but it has only recently grabbed the attention of physicists. Recent studies have shown that the patterns fluids form as they fall depend on their viscosity and their speed. Viscous liquids fall in straight lines when moving quickly, but form loops, squiggles and figure eights when poured slowly, as seen in this video of honey falling on a conveyor belt.

The first physics papers that touched on this phenomenon appeared in the late 1950s, but Pollock knew all about it in 1948. Pollock was famous for seeking out different kinds of paints than anyone else in the art world used, and for mixing his paints with solvents to make them thicker or thinner. Instead of using a brush or pouring paint directly from a can, he lifted paint with a rod and let it dribble onto the canvas in continuous streams. By moving his arm at different speeds and using paints of different thicknesses, he could control how much coiling showed up in the final painting.
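
The equation the researchers derived is not reproduced in the excerpt, but the speed dependence described above can be illustrated with a toy kinematic model: steady hand motion plus a circular coiling oscillation. A minimal sketch under those assumptions only:

    import numpy as np

    def drip_trace(v, R=1.0, omega=2 * np.pi, t_max=10.0, n=2000):
        """Toy trace of dripped paint: a hand moving at speed v plus a
        coiling oscillation of radius R and angular frequency omega.
        Illustrative only; this is not the equation from the paper."""
        t = np.linspace(0.0, t_max, n)
        x = v * t + R * np.cos(omega * t)
        y = R * np.sin(omega * t)
        return x, y  # plot these to see the pattern

    # dx/dt = v - R*omega*sin(omega*t) changes sign only when v < R*omega,
    # so a slow pour doubles back on itself and leaves loops.
    for v in (0.5, 20.0):
        x, y = drip_trace(v)
        print(f"v = {v:5.1f}: {'loops' if v < 2 * np.pi else 'nearly straight'}")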

[div class=attrib]More from theSource here.[end-div]

More subatomic spot changing

[div class=attrib]From the Economist:[end-div]

IN THIS week’s print edition we report a recent result from the T2K collaboration in Japan which has found strong hints that neutrinos, the elusive particles theorists believe to be as abundant in the universe as photons, but which almost never interact with anything, are as fickle as they are coy.

It has been known for some time that neutrinos switch between three types, or flavours, as they zip through space at a smidgen below the speed of light. The flavours are distinguished by the particles which emerge on the rare occasion a neutrino does bump into something. And so, an electron-neutrino conjures up an electron, a muon-neutrino, a muon, and a tau-neutrino, a tau particle (muons and taus are a lot like electrons, but heavier and less stable). Researchers at T2K observed, for the first time, muon-neutrinos transmuting into the electron variety—the one sort of spot-changing that had not been seen before. But their result, with a 0.7% chance of being a fluke, was, by the elevated standards of particle physics, tenuous.

Now, T2K’s rival across the Pacific has made it less so. MINOS beams muon-neutrinos from Fermilab, America’s biggest particle-physics lab located near Chicago, to a 5,000-tonne detector sitting in the Soudan mine in Minnesota, 735km (450 miles) to the north-west. On June 24th its researchers announced that they, too, had witnessed some of their muon-neutrinos change to the electron variety along the way. To be precise, the experiment recorded 62 events which could have been caused by electron-neutrinos. If the proposed transmutation does not occur in nature, it ought to have seen no more than 49 (the result of electron-neutrinos streaming in from space or radioactive rocks on Earth). Were the T2K figures spot on, as it were, it should have seen 71.
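
As a rough plausibility check on those numbers (a back-of-envelope estimate, not the collaboration's statistical treatment), one can ask how often a Poisson background with mean 49 would fluctuate up to 62 events or more:

    # Chance that a background mean of 49 events yields 62 or more
    # by fluctuation alone, treating the count as Poisson-distributed.
    from scipy.stats import poisson

    background = 49   # expected events without transmutation
    observed = 62     # events MINOS recorded
    p = poisson.sf(observed - 1, background)  # P(N >= 62 | mean = 49)
    print(f"p = {p:.3f}")  # a few per cent: suggestive, not conclusive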

As such, the result from MINOS, which uses different methods to study the same phenomenon, puts the transmutation hypothesis on a firmer footing. This advances the search for a number known as delta (δ). This is one of the parameters of the formula which physicists think describes neutrinos’ spot-changing antics. Physicists are keen to pin it down, since it also governs the description of the putative asymmetry between matter and antimatter that left matter as the dominant feature of the universe after the Big Bang.

In light of the latest result, it remains unclear whether either the American or the Japanese experiment is precise enough to measure delta. In 2013, however, MINOS will be supplanted by NOvA, a fancier device located in another Minnesota mine 810km from Fermilab’s muon-neutrino cannon. That ought to do the trick. Then again, nature has the habit of springing surprises.

And in more ways than one. Days after T2K’s run was cut short by the earthquake that shook Japan in March, devastating the muon-neutrino source at J-PARC, the country’s main particle-accelerator complex, MINOS had its own share of woe when the Soudan mine sustained significant flooding. Fortunately, the experiment itself escaped relatively unscathed. But the eerie coincidence spurred some boffins, not a particularly superstitious bunch, to speak of a neutrino curse. Fingers crossed that isn’t the case.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Fermilab.[end-div]

Cosmic Smoothness

Simulations based on the standard cosmological model, as shown here, indicate that on very large distance scales, galaxies should be uniformly distributed. But observations show a clumpier distribution than expected. (The length bar represents about 2.3 billion light years.)

[div class=attrib]From American Physical Society, Michael J. Hudson:[end-div]

The universe is expected to be very nearly homogeneous in density on large scales. In Physical Review Letters, Shaun Thomas and colleagues from University College London analyze measurements of the density of galaxies on the largest spatial scales so far—billions of light years—and find that the universe is less smooth than expected. If it holds up, this result will have important implications for our understanding of dark matter, dark energy, and perhaps gravity itself.

In the current standard cosmological model, the average mass-energy density of the observable universe consists of 5% normal matter (most of which is hydrogen and helium), 23% dark matter, and 72% dark energy. The dark energy is assumed to be uniform, but the normal and dark matter are not. The balance between matter and dark energy determines both how the universe expands and how regions of unusually high or low matter density evolve with time.

The same cosmological model predicts the statistics of the nonuniform structure and their dependence on spatial scale. On scales that are small by cosmological standards, fluctuations in the matter density are comparable to its mean, in agreement with what is seen: matter is clumped into galaxies, clusters of galaxies, and filaments of the “cosmic web.” On larger scales, however, the contrast of the structures compared to the mean density decreases. On the largest cosmological scales, these density fluctuations are small in amplitude compared to the average density of the universe and so are well described by linear perturbation theory (see simulation results in Fig. 1). Moreover, these perturbations can be calibrated at early times directly from the cosmic microwave background (CMB), a snapshot of the universe from when it was only 380,000 years old. Despite the fact that only 5% of the Universe is well understood, this model is an excellent fit to data spanning a wide range of spatial scales as the fluctuations evolved from the time of the CMB to the present age of the universe, some 13.8 billion years. On the largest scales, dark energy drives accelerated expansion of the universe. Because this aspect of the standard model is least understood, it is important to test it on these scales.

Thomas et al. use publicly-released catalogs from the Sloan Digital Sky Survey to select more than 700,000 galaxies whose observed colors indicate a significant redshift and are therefore presumed to be at large cosmological distances. They use the redshift of the galaxies, combined with their observed positions on the sky, to create a rough three-dimensional map of the galaxies in space and to assess the homogeneity on scales of a couple of billion light years. One complication is that Thomas et al. measure the density of galaxies, not the density of all matter, but we expect the fluctuations of these two densities about their means to be proportional; the constant of proportionality can be calibrated by observations on smaller scales. Indeed, on small scales the galaxy data are in good agreement with the standard model. On the largest scales, the fluctuations in galaxy density are expected to be of order a percent of the mean density, but Thomas et al. find fluctuations double this prediction. This result then suggests that the universe is less homogeneous than expected.
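
In the usual cosmologists' notation (my gloss; the article states the relation only in words), that proportionality is written with a bias factor b:

$$\delta_g = b\,\delta_m, \qquad \delta \equiv \frac{\rho - \bar{\rho}}{\bar{\rho}},$$

where δ_g and δ_m are the fractional fluctuations of the galaxy and matter densities about their respective means.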

This result is not entirely new: previous studies based on subsets of the data studied by Thomas et al. showed the same effect, albeit with a lower statistical significance. In addition, there are other ways of probing the large-scale mass distribution. For example, inhomogeneities in the mass distribution lead to inhomogeneities in the local rate of expansion. Some studies have suggested that, on very large scales, this expansion too is less homogeneous than the model predictions.

Future large-scale surveys will produce an avalanche of data. These surveys will allow the methods employed by Thomas et al. and others to be extended to still larger scales. Of course, the challenge for these future surveys will be to correct for the systematic effects to even greater accuracy.

[div class=attrib]More from theSource here.[end-div]

The Evolution of the Physicist’s Picture of Nature

[div class=attrib]From Scientific American:[end-div]

Editor’s Note: We are republishing this article by Paul Dirac from the May 1963 issue of Scientific American, as it might be of interest to listeners to the June 24, 2010, and June 25, 2010 Science Talk podcasts, featuring award-winning writer and physicist Graham Farmelo discussing The Strangest Man, his biography of the Nobel Prize-winning British theoretical physicist.

In this article I should like to discuss the development of general physical theory: how it developed in the past and how one may expect it to develop in the future. One can look on this continual development as a process of evolution, a process that has been going on for several centuries.

The first main step in this process of evolution was brought about by Newton. Before Newton, people looked on the world as being essentially two-dimensional — the two dimensions in which one can walk about — and the up-and-down dimension seemed to be something essentially different. Newton showed how one can look on the up-and-down direction as being symmetrical with the other two directions, by bringing in gravitational forces and showing how they take their place in physical theory. One can say that Newton enabled us to pass from a picture with two-dimensional symmetry to a picture with three-dimensional symmetry.

Einstein made another step in the same direction, showing how one can pass from a picture with three-dimensional symmetry to a picture with four-dimensional symmetry. Einstein brought in time and showed how it plays a role that is in many ways symmetrical with the three space dimensions. However, this symmetry is not quite perfect. With Einstein’s picture one is led to think of the world from a four-dimensional point of view, but the four dimensions are not completely symmetrical. There are some directions in the four-dimensional picture that are different from others: directions that are called null directions, along which a ray of light can move; hence the four-dimensional picture is not completely symmetrical. Still, there is a great deal of symmetry among the four dimensions. The only lack of symmetry, so far as concerns the equations of physics, is in the appearance of a minus sign in the equations with respect to the time dimension as compared with the three space dimensions [see top equation in diagram].
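
The diagram from the original article is not reproduced here, but the “top equation” with its lone minus sign is presumably the spacetime interval of special relativity,

$$ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2,$$

in which the time term carries the opposite sign to the three space terms.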

Image: four-dimensional symmetry equation and Schrödinger’s equations (diagram not reproduced).

We have, then, the development from the three-dimensional picture of the world to the four-dimensional picture. The reader will probably not be happy with this situation, because the world still appears three-dimensional to his consciousness. How can one bring this appearance into the four-dimensional picture that Einstein requires the physicist to have?

What appears to our consciousness is really a three-dimensional section of the four-dimensional picture. We must take a three-dimensional section to give us what appears to our consciousness at one time; at a later time we shall have a different three-dimensional section. The task of the physicist consists largely of relating events in one of these sections to events in another section referring to a later time. Thus the picture with four-dimensional symmetry does not give us the whole situation. This becomes particularly important when one takes into account the developments that have been brought about by quantum theory. Quantum theory has taught us that we have to take the process of observation into account, and observations usually require us to bring in the three-dimensional sections of the four-dimensional picture of the universe.

The special theory of relativity, which Einstein introduced, requires us to put all the laws of physics into a form that displays four-dimensional symmetry. But when we use these laws to get results about observations, we have to bring in something additional to the four-dimensional symmetry, namely the three-dimensional sections that describe our consciousness of the universe at a certain time.

Einstein made another most important contribution to the development of our physical picture: he put forward the general theory of relativity, which requires us to suppose that the space of physics is curved. Before this physicists had always worked with a flat space, the three-dimensional flat space of Newton which was then extended to the four-dimensional flat space of special relativity. General relativity made a really important contribution to the evolution of our physical picture by requiring us to go over to curved space. The general requirements of this theory mean that all the laws of physics can be formulated in curved four-dimensional space, and that they show symmetry among the four dimensions. But again, when we want to bring in observations, as we must if we look at things from the point of view of quantum theory, we have to refer to a section of this four-dimensional space. With the four-dimensional space curved, any section that we make in it also has to be curved, because in general we cannot give a meaning to a flat section in a curved space. This leads us to a picture in which we have to take curved three-dimensional sections in the curved four-dimensional space and discuss observations in these sections.

During the past few years people have been trying to apply quantum ideas to gravitation as well as to the other phenomena of physics, and this has led to a rather unexpected development, namely that when one looks at gravitational theory from the point of view of the sections, one finds that there are some degrees of freedom that drop out of the theory. The gravitational field is a tensor field with 10 components. One finds that six of the components are adequate for describing everything of physical importance and the other four can be dropped out of the equations. One cannot, however, pick out the six important components from the complete set of 10 in any way that does not destroy the four-dimensional symmetry. Thus if one insists on preserving four-dimensional symmetry in the equations, one cannot adapt the theory of gravitation to a discussion of measurements in the way quantum theory requires without being forced to a more complicated description than is needed by the physical situation. This result has led me to doubt how fundamental the four-dimensional requirement in physics is. A few decades ago it seemed quite certain that one had to express the whole of physics in four-dimensional form. But now it seems that four-dimensional symmetry is not of such overriding importance, since the description of nature sometimes gets simplified when one departs from it.
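
The count of ten follows from the symmetry of the field tensor: a symmetric 4 × 4 array has

$$\frac{4 \times (4 + 1)}{2} = 10$$

independent components, of which, on Dirac’s argument, six suffice for everything of physical importance.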

Now I should like to proceed to the developments that have been brought about by quantum theory. Quantum theory is the discussion of very small things, and it has formed the main subject of physics for the past 60 years. During this period physicists have been amassing quite a lot of experimental information and developing a theory to correspond to it, and this combination of theory and experiment has led to important developments in the physicist’s picture of the world.

[div class=attrib]More from theSource here.[end-div]

Is Quantum Mechanics Controlling Your Thoughts?

[div class=attrib]From Discover:[end-div]

Graham Fleming sits down at an L-shaped lab bench, occupying a footprint about the size of two parking spaces. Alongside him, a couple of off-the-shelf lasers spit out pulses of light just millionths of a billionth of a second long. After snaking through a jagged path of mirrors and lenses, these minuscule flashes disappear into a smoky black box containing proteins from green sulfur bacteria, which ordinarily obtain their energy and nourishment from the sun. Inside the black box, optics manufactured to billionths-of-a-meter precision detect something extraordinary: Within the bacterial proteins, dancing electrons make seemingly impossible leaps and appear to inhabit multiple places at once.

Peering deep into these proteins, Fleming and his colleagues at the University of California at Berkeley and at Washington University in St. Louis have discovered the driving engine of a key step in photosynthesis, the process by which plants and some microorganisms convert water, carbon dioxide, and sunlight into oxygen and carbohydrates. More efficient by far in its ability to convert energy than any operation devised by man, this cascade helps drive almost all life on earth. Remarkably, photosynthesis appears to derive its ferocious efficiency not from the familiar physical laws that govern the visible world but from the seemingly exotic rules of quantum mechanics, the physics of the subatomic world. Somehow, in every green plant or photosynthetic bacterium, the two disparate realms of physics not only meet but mesh harmoniously. Welcome to the strange new world of quantum biology.

On the face of things, quantum mechanics and the biological sciences do not mix. Biology focuses on larger-scale processes, from molecular interactions between proteins and DNA up to the behavior of organisms as a whole; quantum mechanics describes the often-strange nature of electrons, protons, muons, and quarks—the smallest of the small. Many events in biology are considered straightforward, with one reaction begetting another in a linear, predictable way. By contrast, quantum mechanics is fuzzy because when the world is observed at the subatomic scale, it is apparent that particles are also waves: A dancing electron is both a tangible nugget and an oscillation of energy. (Larger objects also exist in particle and wave form, but the effect is not noticeable in the macroscopic world.)

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Dylan Burnette/Olympus Bioscapes Imaging Competition.[end-div]

Raiders of the lost dimension

[div class=attrib]From Los Alamos National Laboratory:[end-div]

A team of scientists working at the National High Magnetic Field Laboratory’s Pulsed Field Facility at Los Alamos has uncovered an intriguing phenomenon while studying magnetic waves in barium copper silicate, a 2,500-year-old pigment known as Han purple. The researchers discovered that when they exposed newly grown crystals of the pigment to very high magnetic fields at very low temperatures, the material entered a rarely observed state of matter. At the threshold of that matter state — called the quantum critical point — the waves actually lose a dimension. That is, the magnetic waves go from a three-dimensional to a two-dimensional pattern. The discovery is yet another step toward understanding the quantum mechanics of the universe.

Writing about the work in today’s issue of the scientific journal Nature, the researchers describe how they discovered that at high magnetic fields (above 23 Tesla) and at temperatures between 1 and 3 degrees Kelvin (or roughly minus 460 degrees Fahrenheit), the magnetic waves in Han purple crystals “exist” in a unique state of matter called a Bose-Einstein condensate (BEC). In the BEC state, magnetic waves propagate simultaneously in all three directions (up-down, forward-backward and left-right). At the quantum critical point, however, the waves stop propagating in the up-down dimension, causing the magnetic ripples to exist in only two dimensions, much the same way as ripples are confined to the surface of a pond.

“The reduced dimensionality really came as a surprise,” said Neil Harrison, an experimental physicist at the Los Alamos Pulsed Field Facility, “just when we thought we had reached an understanding of the quantum nature of its magnetic BEC.”

[div class=attrib]More from theSource here.[end-div]

Quantum Trickery: Testing Einstein’s Strangest Theory

[div class=attrib]From the New York Times:[end-div]

Einstein said there would be days like this.

This fall scientists announced that they had put a half dozen beryllium atoms into a “cat state.”

No, they were not sprawled along a sunny windowsill. To a physicist, a “cat state” is the condition of being in two diametrically opposed conditions at once, like black and white, up and down, or dead and alive.

These atoms were each spinning clockwise and counterclockwise at the same time. Moreover, like miniature Rockettes they were all doing whatever it was they were doing together, in perfect synchrony. Should one of them realize, like the cartoon character who runs off a cliff and doesn’t fall until he looks down, that it is in a metaphysically untenable situation and decide to spin only one way, the rest would instantly fall in line, whether they were across a test tube or across the galaxy.
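
In state-vector notation, which the article describes only in words, the six-atom “cat state” is an equal superposition of all atoms spinning one way and all spinning the other:

$$|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|\circlearrowright\circlearrowright\circlearrowright\circlearrowright\circlearrowright\circlearrowright\rangle + |\circlearrowleft\circlearrowleft\circlearrowleft\circlearrowleft\circlearrowleft\circlearrowleft\rangle\big),$$

so finding one atom spinning clockwise instantly fixes the direction of all six.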

The idea that measuring the properties of one particle could instantaneously change the properties of another one (or a whole bunch) far away is strange to say the least – almost as strange as the notion of particles spinning in two directions at once. The team that pulled off the beryllium feat, led by Dietrich Leibfried at the National Institute of Standards and Technology, in Boulder, Colo., hailed it as another step toward computers that would use quantum magic to perform calculations.

But it also served as another demonstration of how weird the world really is according to the rules, known as quantum mechanics.

The joke is on Albert Einstein, who, back in 1935, dreamed up this trick of synchronized atoms – “spooky action at a distance,” as he called it – as an example of the absurdity of quantum mechanics.

“No reasonable definition of reality could be expected to permit this,” he, Boris Podolsky and Nathan Rosen wrote in a paper in 1935.

Today that paper, written when Einstein was a relatively ancient 56 years old, is the most cited of Einstein’s papers. But far from demolishing quantum theory, that paper wound up as the cornerstone for the new field of quantum information.

Nary a week goes by that does not bring news of another feat of quantum trickery once only dreamed of in thought experiments: particles (or at least all their properties) being teleported across the room in a microscopic version of Star Trek beaming; electrical “cat” currents that circle a loop in opposite directions at the same time; more and more particles farther and farther apart bound together in Einstein’s spooky embrace now known as “entanglement.” At the University of California, Santa Barbara, researchers are planning an experiment in which a small mirror will be in two places at once.

[div class=attrib]More from theSource here.[end-div]