Tag Archives: Einstein

A Gravitational Wave Comes Ashore


On February 11, 2016, a historic day for astronomers the world over, scientists announced a monumental discovery, one actually made on September 14, 2015. Thank you, LIGO: the era of gravitational wave (G-Wave) astronomy has begun.

One hundred years after a prediction from Einstein’s theory of general relativity, scientists have their first direct evidence of gravitational waves. These waves are ripples in the fabric of spacetime itself rather than the movement of fields and particles, as with electromagnetic radiation. The ripples arise when gravitationally immense bodies warp the structure of the space in which they sit, for example through collisions or acceleration.


As you might imagine, observing such disturbances here on Earth over distances of tens to hundreds of millions of light-years requires not only vastly powerful forces at one end but immensely sensitive instruments at the other. The detector credited with the discovery in this case is the Laser Interferometer Gravitational-Wave Observatory, or LIGO. It is so sensitive it can detect a change in the length of its laser interferometer arms 10,000 times smaller than the width of a proton. LIGO is operated by Caltech and MIT and supported by the U.S. National Science Foundation.
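To get a feel for how small that change is, here is a quick back-of-the-envelope sketch in Python. The 4 km arm length and the approximate proton width are assumed figures for illustration only, not values quoted in the announcement.

# Rough estimate of the fractional length change (strain) LIGO must resolve.
# Assumed, approximate inputs for illustration:
arm_length_m = 4.0e3       # each LIGO interferometer arm is roughly 4 km long
proton_width_m = 1.7e-15   # a proton is roughly 1.7 femtometres across

delta_L = proton_width_m / 1.0e4   # a change 10,000 times smaller than a proton
strain = delta_L / arm_length_m    # fractional change in arm length

print(f"delta L ~ {delta_L:.1e} m, strain ~ {strain:.1e}")
# roughly: delta L ~ 1.7e-19 m, strain ~ 4e-23

In other words, the instrument tracks a fractional length change of a few parts in 10^23, which is why it took a century to build a detector sensitive enough to do the job.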

Prof Kip Thorne, one of the founders of LIGO, said that until now astronomers had looked at the universe as if on a calm sea. That has now changed. He added:

“The colliding black holes that produced these gravitational waves created a violent storm in the fabric of space and time, a storm in which time speeded up and slowed down, and speeded up again, a storm in which the shape of space was bent in this way and that way.”

And, as Prof Stephen Hawking remarked:

“Gravitational waves provide a completely new way of looking at the universe. The ability to detect them has the potential to revolutionise astronomy. This discovery is the first detection of a black hole binary system and the first observation of black holes merging.”

Congratulations to the many hundreds of engineers, technicians, researchers and theoreticians who have collaborated on this ground-breaking experiment. Particular congratulations go to LIGO’s three principal instigators: Rainer Weiss, Kip Thorne, and Ronald Drever.

This discovery paves the way for deeper understanding of our cosmos and lays the foundation for a new and rich form of astronomy through gravitational observations.

Galileo’s first telescopes opened our eyes to the visual splendor of our solar system and its immediate neighborhood. More recently, radio-wave, x-ray and gamma-ray astronomy have allowed us to discover wonders further afield: star-forming nebulae, neutron stars, black holes, active galactic nuclei, the Cosmic Microwave Background (CMB). Now, through LIGO and its increasingly sensitive descendants we are likely to make even more breathtaking discoveries, some of which, courtesy of gravitational waves, may let us peer at the very origin of the universe itself — the Big Bang.

How brilliant is that!

Image 1: The historic detection of gravitational waves by the Laser Interferometer Gravitational-Wave Observatory (LIGO) is shown in this plot, presented during a press conference in Washington, D.C., on Feb. 11, 2016. Courtesy: National Science Foundation.

Image 2: LIGO Laboratory operates two detector sites 1,800 miles apart: one near Hanford in eastern Washington, and another near Livingston, Louisiana. This photo shows the Hanford detector. Courtesy of LIGO Caltech.


When 8 Equals 16


I’m sure that most, if not all, mathematicians would tell you that their calling is at the heart of our understanding of the universe. Mathematics describes our world precisely and logically. But mix it with the world of women’s fashion and this rigorous discipline becomes rather squishy and far from absolute. A case in point: a women’s size 8 today is equivalent to a size 16 from 1958.

This makes me wonder what the fundamental measurements and equations describing our universe would look like if they were controlled by advertisers and marketers. Then again, Einstein’s work on special and general relativity may seem to fit the fashion industry quite well: one of the central tenets of relativity holds that measurements of various quantities (read: dress size) are relative to the velocities (market size) of observers (retailers). In particular, space (dress size) contracts and time (waist size) dilates.
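For the record, the actual relativistic relations are anything but squishy. In standard special relativity (the textbook formulas, included here only as a reference point), an object moving at speed v relative to an observer is measured to be contracted, and its clock to run slow:

\[
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
L = \frac{L_0}{\gamma} \ \ \text{(length contraction)}, \qquad
\Delta t = \gamma\,\Delta t_0 \ \ \text{(time dilation)},
\]

where \(L_0\) and \(\Delta t_0\) are the length and time interval measured in the object’s own rest frame. No marketing department required.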

From the Washington Post:

Here are some numbers that illustrate the insanity of women’s clothing sizes: A size 8 dress today is nearly the equivalent of a size 16 dress in 1958. And a size 8 dress of 1958 doesn’t even have a modern-day equivalent — the waist and bust measurements of a Mad Men-era 8 come in smaller than today’s size 00.

These measurements come from official sizing standards once maintained by the National Bureau of Standards (now the National Institute of Standards and Technology) and taken over in recent years by the American Society for Testing and Materials. Data visualizer Max Galka recently unearthed them for a blog post on America’s obesity epidemic.

Centers for Disease Control and Prevention data show that the average American woman today weighs about as much as the average 1960s man. And while the weight story is pretty straightforward — Americans got heavier — the story behind the dress sizes is a little more complicated, as any woman who’s ever shopped for clothes could probably tell you.

As Julia Felsenthal detailed over at Slate, today’s women’s clothing sizes have their roots in a depression-era government project to define the “Average American Woman” by sending a pair of statisticians to survey and measure nearly 15,000 women. They “hoped to determine whether any proportional relationships existed among measurements that could be broadly applied to create a simple, standardized system of sizing,” Felsenthal writes.

Sadly, they failed. Not surprisingly, women’s bodies defied standardization. The project did yield one lasting contribution to women’s clothing: The statisticians were the first to propose the notion of arbitrary numerical sizes that weren’t based on any specific measurement — similar to shoe sizes.

The government didn’t return to the project until the late 1950s, when the National Bureau of Standards published “Body Measurements for the Sizing of Women’s Patterns and Apparel” in 1958. The standard was based on the 15,000 women interviewed previously, with the addition of a group of women who had been in the Army during World War II. The document’s purpose? “To provide the consumer with a means of identifying her body type and size from the wide range of body types covered, and enable her to be fitted properly by the same size regardless of price, type of apparel, or manufacturer of the garment.”

Read the entire article here.

Image: Diagram from “Body Measurements for the Sizing of Women’s Patterns and Apparel”, 1958. Courtesy of National Bureau of Standards / National Institute of Standards and Technology (NIST).

Questioning Quantum Orthodoxy

Physics works very well in explaining our world, yet it is also broken — it cannot, at the moment, reconcile our views of the very small (quantum theory) with those of the very large (relativity theory).

So although the probabilistic underpinnings of quantum theory have done wonders in allowing physicists to construct the Standard Model, gaps remain.

Back in the mid-1920s, the probabilistic worldview proposed by Niels Bohr and others gained favor and took hold. A competing theory, known as the pilot wave theory and proposed by a young Louis de Broglie, was given short shrift. Yet some theorists maintain that it may do a better job of bridging this core gap in our understanding, so it is time to revisit pilot wave theory and breathe fresh life into it.

From Wired / Quanta:

For nearly a century, “reality” has been a murky concept. The laws of quantum physics seem to suggest that particles spend much of their time in a ghostly state, lacking even basic properties such as a definite location and instead existing everywhere and nowhere at once. Only when a particle is measured does it suddenly materialize, appearing to pick its position as if by a roll of the dice.

This idea that nature is inherently probabilistic — that particles have no hard properties, only likelihoods, until they are observed — is directly implied by the standard equations of quantum mechanics. But now a set of surprising experiments with fluids has revived old skepticism about that worldview. The bizarre results are fueling interest in an almost forgotten version of quantum mechanics, one that never gave up the idea of a single, concrete reality.

The experiments involve an oil droplet that bounces along the surface of a liquid. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet’s interaction with its own ripples, which form what’s known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles — including behaviors seen as evidence that these particles are spread through space like waves, without any specific location, until they are measured.

Particles at the quantum scale seem to do things that human-scale objects do not do. They can tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels. This new body of research reveals that oil droplets, when guided by pilot waves, also exhibit these quantum-like features.

To some researchers, the experiments suggest that quantum objects are as definite as droplets, and that they too are guided by pilot waves — in this case, fluid-like undulations in space and time. These arguments have injected new life into a deterministic (as opposed to probabilistic) theory of the microscopic world first proposed, and rejected, at the birth of quantum mechanics.

“This is a classical system that exhibits behavior that people previously thought was exclusive to the quantum realm, and we can say why,” said John Bush, a professor of applied mathematics at the Massachusetts Institute of Technology who has led several recent bouncing-droplet experiments. “The more things we understand and can provide a physical rationale for, the more difficult it will be to defend the ‘quantum mechanics is magic’ perspective.”

Magical Measurements

The orthodox view of quantum mechanics, known as the “Copenhagen interpretation” after the home city of Danish physicist Niels Bohr, one of its architects, holds that particles play out all possible realities simultaneously. Each particle is represented by a “probability wave” weighting these various possibilities, and the wave collapses to a definite state only when the particle is measured. The equations of quantum mechanics do not address how a particle’s properties solidify at the moment of measurement, or how, at such moments, reality picks which form to take. But the calculations work. As Seth Lloyd, a quantum physicist at MIT, put it, “Quantum mechanics is just counterintuitive and we just have to suck it up.”

A classic experiment in quantum mechanics that seems to demonstrate the probabilistic nature of reality involves a beam of particles (such as electrons) propelled one by one toward a pair of slits in a screen. When no one keeps track of each electron’s trajectory, it seems to pass through both slits simultaneously. In time, the electron beam creates a wavelike interference pattern of bright and dark stripes on the other side of the screen. But when a detector is placed in front of one of the slits, its measurement causes the particles to lose their wavelike omnipresence, collapse into definite states, and travel through one slit or the other. The interference pattern vanishes. The great 20th-century physicist Richard Feynman said that this double-slit experiment “has in it the heart of quantum mechanics,” and “is impossible, absolutely impossible, to explain in any classical way.”
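As an aside (a textbook result, not part of the quoted article): in the idealized two-slit arrangement, with slit separation d, wavelength \(\lambda\) and screen distance D, the interference pattern on the screen follows

\[
I(x) \;\propto\; \cos^{2}\!\left(\frac{\pi d x}{\lambda D}\right),
\]

vanishing wherever the two paths differ by half a wavelength. It is this pattern that disappears the moment a detector identifies which slit each particle passed through.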

Some physicists now disagree. “Quantum mechanics is very successful; nobody’s claiming that it’s wrong,” said Paul Milewski, a professor of mathematics at the University of Bath in England who has devised computer models of bouncing-droplet dynamics. “What we believe is that there may be, in fact, some more fundamental reason why [quantum mechanics] looks the way it does.”

Riding Waves

The idea that pilot waves might explain the peculiarities of particles dates back to the early days of quantum mechanics. The French physicist Louis de Broglie presented the earliest version of pilot-wave theory at the 1927 Solvay Conference in Brussels, a famous gathering of the founders of the field. As de Broglie explained that day to Bohr, Albert Einstein, Erwin Schrödinger, Werner Heisenberg and two dozen other celebrated physicists, pilot-wave theory made all the same predictions as the probabilistic formulation of quantum mechanics (which wouldn’t be referred to as the “Copenhagen” interpretation until the 1950s), but without the ghostliness or mysterious collapse.

The probabilistic version, championed by Bohr, involves a single equation that represents likely and unlikely locations of particles as peaks and troughs of a wave. Bohr interpreted this probability-wave equation as a complete definition of the particle. But de Broglie urged his colleagues to use two equations: one describing a real, physical wave, and another tying the trajectory of an actual, concrete particle to the variables in that wave equation, as if the particle interacts with and is propelled by the wave rather than being defined by it.
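For the curious, the modern (de Broglie-Bohm) form of those two equations is usually written as follows; this is the standard single-particle, non-relativistic textbook version, included here as a sketch rather than quoted from the article. The familiar Schrödinger equation governs the wave, and a separate guidance equation steers the particle along a definite trajectory \(\mathbf{Q}(t)\):

\[
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi,
\qquad
\frac{d\mathbf{Q}}{dt} = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\Bigg|_{\mathbf{Q}(t)}.
\]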

For example, consider the double-slit experiment. In de Broglie’s pilot-wave picture, each electron passes through just one of the two slits, but is influenced by a pilot wave that splits and travels through both slits. Like flotsam in a current, the particle is drawn to the places where the two wavefronts cooperate, and does not go where they cancel out.

De Broglie could not predict the exact place where an individual particle would end up — just like Bohr’s version of events, pilot-wave theory predicts only the statistical distribution of outcomes, or the bright and dark stripes — but the two men interpreted this shortcoming differently. Bohr claimed that particles don’t have definite trajectories; de Broglie argued that they do, but that we can’t measure each particle’s initial position well enough to deduce its exact path.

In principle, however, the pilot-wave theory is deterministic: The future evolves dynamically from the past, so that, if the exact state of all the particles in the universe were known at a given instant, their states at all future times could be calculated.

At the Solvay conference, Einstein objected to a probabilistic universe, quipping, “God does not play dice,” but he seemed ambivalent about de Broglie’s alternative. Bohr told Einstein to “stop telling God what to do,” and (for reasons that remain in dispute) he won the day. By 1932, when the Hungarian-American mathematician John von Neumann claimed to have proven that the probabilistic wave equation in quantum mechanics could have no “hidden variables” (that is, missing components, such as de Broglie’s particle with its well-defined trajectory), pilot-wave theory was so poorly regarded that most physicists believed von Neumann’s proof without even reading a translation.

More than 30 years would pass before von Neumann’s proof was shown to be false, but by then the damage was done. The physicist David Bohm resurrected pilot-wave theory in a modified form in 1952, with Einstein’s encouragement, and made clear that it did work, but it never caught on. (The theory is also known as de Broglie-Bohm theory, or Bohmian mechanics.)

Later, the Northern Irish physicist John Stewart Bell went on to prove a seminal theorem that many physicists today misinterpret as rendering hidden variables impossible. But Bell supported pilot-wave theory. He was the one who pointed out the flaws in von Neumann’s original proof. And in 1986 he wrote that pilot-wave theory “seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored.”

The neglect continues. A century down the line, the standard, probabilistic formulation of quantum mechanics has been combined with Einstein’s theory of special relativity and developed into the Standard Model, an elaborate and precise description of most of the particles and forces in the universe. Acclimating to the weirdness of quantum mechanics has become a physicists’ rite of passage. The old, deterministic alternative is not mentioned in most textbooks; most people in the field haven’t heard of it. Sheldon Goldstein, a professor of mathematics, physics and philosophy at Rutgers University and a supporter of pilot-wave theory, blames the “preposterous” neglect of the theory on “decades of indoctrination.” At this stage, Goldstein and several others noted, researchers risk their careers by questioning quantum orthodoxy.

A Quantum Drop

Now at last, pilot-wave theory may be experiencing a minor comeback — at least, among fluid dynamicists. “I wish that the people who were developing quantum mechanics at the beginning of last century had access to these experiments,” Milewski said. “Because then the whole history of quantum mechanics might be different.”

The experiments began a decade ago, when Yves Couder and colleagues at Paris Diderot University discovered that vibrating a silicone oil bath up and down at a particular frequency can induce a droplet to bounce along the surface. The droplet’s path, they found, was guided by the slanted contours of the liquid’s surface generated from the droplet’s own bounces — a mutual particle-wave interaction analogous to de Broglie’s pilot-wave concept.

Read the entire article here.

Image: Louis de Broglie. Courtesy of Wikipedia.

c2=e/m

Particle physicists will soon attempt to reverse the direction of Einstein’s famous equation of energy-matter equivalence, E=mc2. Next year, they plan to crash quanta of light into each other to create matter. Cool or what!

From the Guardian:

Researchers have worked out how to make matter from pure light and are drawing up plans to demonstrate the feat within the next 12 months.

The theory underpinning the idea was first described 80 years ago by two physicists who later worked on the first atomic bomb. At the time they considered the conversion of light into matter impossible in a laboratory.

But in a report published on Sunday, physicists at Imperial College London claim to have cracked the problem using high-powered lasers and other equipment now available to scientists.

“We have shown in principle how you can make matter from light,” said Steven Rose at Imperial. “If you do this experiment, you will be taking light and turning it into matter.”

The scientists are not on the verge of a machine that can create everyday objects from a sudden blast of laser energy. The kind of matter they aim to make comes in the form of subatomic particles invisible to the naked eye.

The original idea was written down by two US physicists, Gregory Breit and John Wheeler, in 1934. They worked out that – very rarely – two particles of light, or photons, could combine to produce an electron and its antimatter equivalent, a positron. Electrons are particles of matter that form the outer shells of atoms in the everyday objects around us.

But Breit and Wheeler had no expectations that their theory would be proved any time soon. In their study, the physicists noted that the process was so rare and hard to produce that it would be “hopeless to try to observe the pair formation in laboratory experiments”.

Oliver Pike, the lead researcher on the study, said the process was one of the most elegant demonstrations of Einstein’s famous relationship that shows matter and energy are interchangeable currencies. “The Breit-Wheeler process is the simplest way matter can be made from light and one of the purest demonstrations of E=mc2,” he said.
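A quick kinematic aside (a standard result, not taken from the Guardian piece): for two photons colliding head-on, pair production is possible only if the product of their energies clears the electron rest-mass threshold,

\[
E_{1}\,E_{2} \;\ge\; (m_e c^{2})^{2} \approx (0.511\ \text{MeV})^{2},
\]

which is why the experiment needs both a high-energy photon beam and the intensely hot photon gas of the hohlraum described below.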

Writing in the journal Nature Photonics, the scientists describe how they could turn light into matter through a number of separate steps. The first step fires electrons at a slab of gold to produce a beam of high-energy photons. Next, they fire a high-energy laser into a tiny gold capsule called a hohlraum, from the German for “empty room”. This produces light as bright as that emitted from stars. In the final stage, they send the first beam of photons into the hohlraum where the two streams of photons collide.

The scientists’ calculations show that the setup squeezes enough particles of light with high enough energies into a small enough volume to create around 100,000 electron-positron pairs.

The process is one of the most spectacular predictions of a theory called quantum electrodynamics (QED) that was developed in the run up to the second world war. “You might call it the most dramatic consequence of QED and it clearly shows that light and matter are interchangeable,” Rose told the Guardian.

The scientists hope to demonstrate the process in the next 12 months. There are a number of sites around the world that have the technology. One is the huge Omega laser in Rochester, New York. But another is the Orion laser at Aldermaston, the atomic weapons facility in Berkshire.

A successful demonstration will encourage physicists who have been eyeing the prospect of a photon-photon collider as a tool to study how subatomic particles behave. “Such a collider could be used to study fundamental physics with a very clean experimental setup: pure light goes in, matter comes out. The experiment would be the first demonstration of this,” Pike said.

Read the entire story here.

Image: Feynman diagram for gluon radiation. Courtesy of Wikipedia.


General Relativity Lives on For Now

Since Einstein first published his elegant theory of general relativity almost 100 years ago, it has proved to be one of the most powerful and enduring cornerstones of modern science. Yet theorists and researchers the world over know that it cannot possibly remain the sole answer to our cosmological questions. It answers questions about the very, very large — galaxies, stars and planets and the gravitational relationships between them. But it fails to tackle the science of the very, very small — atoms, their constituents and the forces that unite and repel them. That domain is addressed by quantum theory, a framework as elegant and complex as relativity yet mutually incompatible with it.

So, scientists continue to push their measurements to ever greater levels of precision across both greater and smaller distances with one aim in mind — to test the limits of each theory and to see which one breaks down first.

A recent, highly precise and yet very long-distance experiment confirmed that Einstein’s theory still rules the heavens.

From ars technica:

The general theory of relativity is a remarkably successful model for gravity. However, many of the best tests for it don’t push its limits: they measure phenomena where gravity is relatively weak. Some alternative theories predict different behavior in areas subject to very strong gravity, like near the surface of a pulsar—the compact, rapidly rotating remnant of a massive star (also called a neutron star). For that reason, astronomers are very interested in finding a pulsar paired with another high-mass object. One such system has now provided an especially sensitive test of strong gravity.

The system is a binary consisting of a high-mass pulsar and a bright white dwarf locked in mutual orbit with a period of about 2.5 hours. Using optical and radio observations, John Antoniadis and colleagues measured its properties as it spirals toward merger by emitting gravitational radiation. After monitoring the system for a number of orbits, the researchers determined its behavior is in complete agreement with general relativity to a high level of precision.

The binary system was first detected in a survey of pulsars by the Green Bank Telescope (GBT). The pulsar in the system, memorably labeled PSR J0348+0432, emits radio pulses about once every 39 milliseconds (0.039 seconds). Fluctuations in the pulsar’s output indicated that it is in a binary system, though its companion lacked radio emissions. However, the GBT’s measurements were precise enough to pinpoint its location in the sky, which enabled the researchers to find the system in the archives of the Sloan Digital Sky Survey (SDSS). They determined the companion object was a particularly bright white dwarf, the remnant of the core of a star similar to our Sun. It and the pulsar are locked in a mutual orbit about 2.46 hours in length.

Following up with the Very Large Telescope (VLT) in Chile, the astronomers built up enough data to model the system. Pulsars are extremely dense, packing a star’s worth of mass into a sphere roughly 10 kilometers in radius—far too small to see directly. White dwarfs are less extreme, but they still involve stellar masses in a volume roughly equivalent to Earth’s. That means the objects in the PSR J0348+0432 system can orbit much closer to each other than stars could—as little as 0.5 percent of the average Earth-Sun separation, or 1.2 times the Sun’s radius.

The pulsar itself was interesting because of its relatively high mass: about 2.0 times that of the Sun (most observed pulsars have about 1.4 times the Sun’s mass). Unlike more mundane objects, pulsar size doesn’t grow with mass; according to some models, a higher mass pulsar may actually be smaller than one with lower mass. As a result, the gravity at the surface of PSR J0348+0432 is far more intense than at a lower-mass counterpart, providing a laboratory for testing general relativity (GR). The gravitational intensity near PSR J0348+0432 is about twice that of other pulsars in binary systems, creating a more extreme environment than previously measured.

According to GR, a binary emits gravitational waves that carry energy away from the system, causing the size of the orbit to shrink. For most binaries, the effect is small, but for compact systems like the one containing PSR J0348+0432, it is measurable. The first such system was found by Russell Hulse and Joseph Taylor; its discovery won the two astronomers the Nobel Prize.
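To give a sense of the numbers involved, the leading-order general-relativistic prediction for the orbital decay of a circular binary (Peters, 1964) can be evaluated in a few lines of Python. The sketch below uses the orbital period and pulsar mass quoted in the article; the white-dwarf mass of roughly 0.17 solar masses is an assumed value for illustration, not a figure given in the text.

import math

# Leading-order GR prediction for the orbital period decay of a circular binary.
T_SUN = 4.925490947e-6     # G * M_sun / c^3, in seconds
Pb = 2.46 * 3600.0         # orbital period in seconds (~2.46 hours, from the article)
m_p = 2.0                  # pulsar mass in solar masses (from the article)
m_c = 0.17                 # white-dwarf mass in solar masses (ASSUMED for illustration)

Pb_dot = (-(192.0 * math.pi / 5.0)
          * (Pb / (2.0 * math.pi)) ** (-5.0 / 3.0)
          * T_SUN ** (5.0 / 3.0)
          * m_p * m_c / (m_p + m_c) ** (1.0 / 3.0))

print(f"predicted dPb/dt ~ {Pb_dot:.1e} s/s")
# roughly -2.5e-13 seconds per second, i.e. the 2.46-hour orbit
# shrinks by a few microseconds per year

The formula is the circular-orbit limit of the standard quadrupole result; the PSR J0348+0432 system is close enough to circular for this to be a reasonable first estimate.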

The shrinking of the orbit results in a decrease in the orbital period as the two objects revolve around each other more quickly. In this case, the researchers measured the effect by studying the change in the spectrum of light emitted by the white dwarf, as well as fluctuations in the emissions from the pulsar. (This study also helped demonstrate the two objects were in mutual orbit, rather than being coincidentally in the same part of the sky.)

To test agreement with GR, physicists established a set of observable quantities. These include the rate of orbit decrease (which is a reflection of the energy loss to gravitational radiation) and something called the Shapiro delay. The latter phenomenon occurs because light emitted from the pulsar must travel through the intense gravitational field of its white dwarf companion when exiting the system. This effect depends on the relative orientation of the pulsar to us, but alternative models also predict different observable results.

In the case of the PSR J0348+0432 system, the change in orbital period and the Shapiro delay agreed with the predictions of GR, placing strong constraints on alternative theories. The researchers were also able to rule out energy loss from other, non-gravitational sources (rotation or electromagnetic phenomena). If the system continues as models predict, the white dwarf and pulsar will merge in about 400 million years—we don’t know what the product of that merger will be, so astronomers are undoubtedly marking their calendars now.

The results are of potential use for the Laser Interferometer Gravitational-wave Observatory (LIGO) and other ground-based gravitational-wave detectors. These instruments are sensitive to the final death spiral of binaries like the one containing PSR J0348+0432. The current detection and observation strategies involve “templates,” or theoretical models of the gravitational wave signal from binaries. All information about the behavior of close pulsar binaries helps gravitational-wave astronomers refine those templates, which should improve the chances of detection.

Of course, no theory can be “proven right” by experiment or observation—data provides evidence in support of or against the predictions of a particular model. However, the PSR J0348+0432 binary results placed stringent constraints on any alternative model to GR in the strong-gravity regime. (Certain other alternative models focus on altering gravity on large scales to explain dark energy and the accelerating expansion of the Universe.) Based on this new data, only theories that agree with GR to high precision are still standing—leaving general relativity the continuing champion theory of gravity.

Read the entire article after the jump.

Image: Artist’s impression of the PSR J0348+0432 system. The compact pulsar (with beams of radio emission) produces a strong distortion of spacetime (illustrated by the green mesh). Courtesy of Science Mag.

The Death of Scientific Genius

There is a certain school of thought that asserts that scientific genius is a thing of the past. After all, we haven’t seen the recent emergence of pivotal talents such as Galileo, Newton, Darwin or Einstein. Is it possible that fundamentally new ways of looking at our world — a new mathematics or a new physics — are no longer possible?

In a recent essay in Nature, Dean Keith Simonton, professor of psychology at UC Davis, argues that such fundamental and singular originality is a thing of the past.

From ars technica:

Einstein, Darwin, Galileo, Mendeleev: the names of the great scientific minds throughout history inspire awe in those of us who love science. However, according to Dean Keith Simonton, a psychology professor at UC Davis, the era of the scientific genius may be over. In a comment paper published in Nature last week, he explains why.

The “scientific genius” Simonton refers to is a particular type of scientist; their contributions “are not just extensions of already-established, domain-specific expertise.” Instead, “the scientific genius conceives of a novel expertise.” Simonton uses words like “groundbreaking” and “overthrow” to illustrate the work of these individuals, explaining that they each contributed to science in one of two major ways: either by founding an entirely new field or by revolutionizing an already-existing discipline.

Today, according to Simonton, there just isn’t room to create new disciplines or overthrow the old ones. “It is difficult to imagine that scientists have overlooked some phenomenon worthy of its own discipline,” he writes. Furthermore, most scientific fields aren’t in the type of crisis that would enable paradigm shifts, according to Thomas Kuhn’s classic view of scientific revolutions. Simonton argues that instead of finding big new ideas, scientists currently work on the details in increasingly specialized and precise ways.

And to some extent, this argument is demonstrably correct. Science is becoming more and more specialized. The largest scientific fields are currently being split into smaller sub-disciplines: microbiology, astrophysics, neuroscience, and paleogeography, to name a few. Furthermore, researchers have more tools and the knowledge to hone in on increasingly precise issues and questions than they did a century—or even a decade—ago.

But other aspects of Simonton’s argument are a matter of opinion. To me, separating scientists who “build on what’s already known” from those who “alter the foundations of knowledge” is a false dichotomy. Not only is it possible to do both, but it’s impossible to establish—or even make a novel contribution to—a scientific field without piggybacking on the work of others to some extent. After all, it’s really hard to solve the problems that require new solutions if other people haven’t done the work to identify them. Plate tectonics, for example, was built on observations that were already widely known.

And scientists aren’t done altering the foundations of knowledge, either. In science, as in many other walks of life, we don’t yet know everything we don’t know. Twenty years ago, exoplanets were hypothetical. Dark energy, as far as we knew, didn’t exist.

Simonton points out that “cutting-edge work these days tends to emerge from large, well-funded collaborative teams involving many contributors” rather than a single great mind. This is almost certainly true, especially in genomics and physics. However, it’s this collaboration and cooperation between scientists, and between fields, that has helped science progress past where we ever thought possible. While Simonton uses “hybrid” fields like astrophysics and biochemistry to illustrate his argument that there is no room for completely new scientific disciplines, I see these fields as having room for growth. Here, diverse sets of ideas and methodologies can mix and lead to innovation.

Simonton is quick to assert that the end of scientific genius doesn’t mean science is at a standstill or that scientists are no longer smart. In fact, he argues the opposite: scientists are probably more intelligent now, since they must master more theoretical work, more complicated methods, and more diverse disciplines. In fact, Simonton himself would like to be wrong; “I hope that my thesis is incorrect. I would hate to think that genius in science has become extinct,” he writes.

Read the entire article after the jump.

Image: Einstein 1921 by F. Schmutzer. Courtesy of Wikipedia.

Spooky Action at a Distance Explained

From Scientific American:

Quantum entanglement is such a mainstay of modern physics that it is worth reflecting on how long it took to emerge. What began as a perceptive but vague insight by Albert Einstein languished for decades before becoming a branch of experimental physics and, increasingly, modern technology.

Einstein’s two most memorable phrases perfectly capture the weirdness of quantum mechanics. “I cannot believe that God plays dice with the universe” expressed his disbelief that randomness in quantum physics was genuine and impervious to any causal explanation. “Spooky action at a distance” referred to the fact that quantum physics seems to allow influences to travel faster than the speed of light. This was, of course, disturbing to Einstein, whose theory of relativity prohibited any such superluminal propagation.

These arguments were qualitative. They were targeted at the worldview offered by quantum theory rather than its predictive power. Niels Bohr is commonly seen as the patron saint of quantum physics, defending it against Einstein’s repeated onslaughts. He is usually said to be the ultimate winner in this battle of wits. However, Bohr’s writing was terribly obscure. He was known for saying “never express yourself more clearly than you are able to think,” a motto which he adhered to very closely. His arguments, like Einstein’s, were qualitative, verging on highly philosophical. The Einstein-Bohr dispute, although historically important, could not be settled experimentally—and the experiment is the ultimate judge of validity of any theoretical ideas in physics. For decades, the phenomenon was all but ignored.

All that changed with John Bell. In 1964 he understood how to convert the complaints about “dice-playing” and “spooky action at a distance” into a simple inequality involving measurements on two particles. The inequality is satisfied in a world where God does not play dice and there is no spooky action. The inequality is violated if the fates of the two particles are intertwined, so that if we measure a property of one of them, we immediately know the same property of the other one—no matter how far apart the particles are from each other. This state where particles behave like twin brothers is said to be entangled, a term introduced by Erwin Schrödinger.
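Bell’s result is usually stated today in its CHSH form (a later, equivalent refinement; the notation here is the standard textbook one, not drawn from the article). For two measurement settings a, a′ on one particle and b, b′ on the other, any local, non-spooky theory requires

\[
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
\]

whereas quantum mechanics predicts, and experiments on entangled particles confirm, values of \(|S|\) up to \(2\sqrt{2}\).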

Read the whole article here.

Science: A Contest of Ideas

From Project Syndicate:

It was recently discovered that the universe’s expansion is accelerating, not slowing, as was previously thought. Light from distant exploding stars revealed that an unknown force (dubbed “dark energy”) more than outweighs gravity on cosmological scales.

Unexpected by researchers, such a force had nevertheless been predicted in 1915 by a modification that Albert Einstein proposed to his own theory of gravity, the general theory of relativity. But he later dropped the modification, known as the “cosmological term,” calling it the “biggest blunder” of his life.

So the headlines proclaim: “Einstein was right after all,” as though scientists should be compared as one would clairvoyants: Who is distinguished from the common herd by knowing the unknowable – such as the outcome of experiments that have yet to be conceived, let alone conducted? Who, with hindsight, has prophesied correctly?

But science is not a competition between scientists; it is a contest of ideas – namely, explanations of what is out there in reality, how it behaves, and why. These explanations are initially tested not by experiment but by criteria of reason, logic, applicability, and uniqueness at solving the mysteries of nature that they address. Predictions are used to test only the tiny minority of explanations that survive these criteria.

The story of why Einstein proposed the cosmological term, why he dropped it, and why cosmologists today have reintroduced it illustrates this process. Einstein sought to avoid the implication of unmodified general relativity that the universe cannot be static – that it can expand (slowing down, against its own gravity), collapse, or be instantaneously at rest, but that it cannot hang unsupported.

This particular prediction cannot be tested (no observation could establish that the universe is at rest, even if it were), but it is impossible to change the equations of general relativity arbitrarily. They are tightly constrained by the explanatory substance of Einstein’s theory, which holds that gravity is due to the curvature of spacetime, that light has the same speed for all observers, and so on.

But Einstein realized that it is possible to add one particular term – the cosmological term – and adjust its magnitude to predict a static universe, without spoiling any other explanation. All other predictions based on the previous theory of gravity – that of Isaac Newton – that were testable at the time were good approximations to those of unmodified general relativity, with that single exception: Newton’s space was an unmoving background against which objects move. There was no evidence yet contradicting Newton’s view – no mystery of expansion to explain. Moreover, anything beyond that traditional conception of space required a considerable conceptual leap, while the cosmological term made no measurable difference to other predictions. So Einstein added it.
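In modern notation, the modification amounts to a single extra term, \(\Lambda\), in the field equations (the standard textbook form, shown here only as a reference point):

\[
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\]

where setting \(\Lambda = 0\) recovers unmodified general relativity, and a small positive \(\Lambda\) supplies the repulsive effect now attributed to dark energy.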

More from theSource here.

Image courtesy of Wikipedia / Creative Commons.

Quantum Trickery: Testing Einstein’s Strangest Theory

From the New York Times:

Einstein said there would be days like this.

This fall scientists announced that they had put a half dozen beryllium atoms into a “cat state.”

No, they were not sprawled along a sunny windowsill. To a physicist, a “cat state” is the condition of being two diametrically opposed conditions at once, like black and white, up and down, or dead and alive.

These atoms were each spinning clockwise and counterclockwise at the same time. Moreover, like miniature Rockettes they were all doing whatever it was they were doing together, in perfect synchrony. Should one of them realize, like the cartoon character who runs off a cliff and doesn’t fall until he looks down, that it is in a metaphysically untenable situation and decide to spin only one way, the rest would instantly fall in line, whether they were across a test tube or across the galaxy.
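In quantum notation, such a six-atom “cat state” is written as an equal superposition of all clockwise and all counterclockwise (a schematic form, not taken from the article):

\[
|\psi\rangle = \tfrac{1}{\sqrt{2}}\Big(|\!\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\rangle + |\!\downarrow\downarrow\downarrow\downarrow\downarrow\downarrow\rangle\Big),
\]

so measuring one atom’s spin instantly settles the spins of the other five.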

The idea that measuring the properties of one particle could instantaneously change the properties of another one (or a whole bunch) far away is strange to say the least – almost as strange as the notion of particles spinning in two directions at once. The team that pulled off the beryllium feat, led by Dietrich Leibfried at the National Institute of Standards and Technology, in Boulder, Colo., hailed it as another step toward computers that would use quantum magic to perform calculations.

But it also served as another demonstration of how weird the world really is according to the rules, known as quantum mechanics.

The joke is on Albert Einstein, who, back in 1935, dreamed up this trick of synchronized atoms – “spooky action at a distance,” as he called it – as an example of the absurdity of quantum mechanics.

“No reasonable definition of reality could be expected to permit this,” he, Boris Podolsky and Nathan Rosen wrote in a paper in 1935.

Today that paper, written when Einstein was a relatively ancient 56 years old, is the most cited of Einstein’s papers. But far from demolishing quantum theory, that paper wound up as the cornerstone for the new field of quantum information.

Nary a week goes by that does not bring news of another feat of quantum trickery once only dreamed of in thought experiments: particles (or at least all their properties) being teleported across the room in a microscopic version of Star Trek beaming; electrical “cat” currents that circle a loop in opposite directions at the same time; more and more particles farther and farther apart bound together in Einstein’s spooky embrace now known as “entanglement.” At the University of California, Santa Barbara, researchers are planning an experiment in which a small mirror will be in two places at once.

More from theSource here.