Category Archives: BigBang

The Universe and Determinism

General scientific consensus suggests that our universe has no pre-defined destiny. While current theories propose anything from a final Big Crunch to an accelerating expansion into cold nothingness, the future of the universe is not pre-determined. Unfortunately, our increasingly sophisticated scientific tools are still too meager to test and answer these questions definitively. So, theorists currently seem to have the upper hand. And now yet another theory turns current cosmological thinking on its head by proposing that the future is pre-destined and that it may even reach back into the past to shape the present. Confused? Read on!

[div class=attrib]From FQXi:[end-div]

The universe has a destiny—and this set fate could be reaching backwards in time and combining with influences from the past to shape the present. It’s a mind-bending claim, but some cosmologists now believe that a radical reformulation of quantum mechanics in which the future can affect the past could solve some of the universe’s biggest mysteries, including how life arose. What’s more, the researchers claim that recent lab experiments are dramatically confirming the concepts underpinning this reformulation.

Cosmologist Paul Davies, at Arizona State University in Tempe, is embarking on a project to investigate the future’s reach into the present, with the help of a $70,000 grant from the Foundational Questions Institute. It is a project that has been brewing for more than 30 years, since Davies first heard of attempts by physicist Yakir Aharonov to get to the root of some of the paradoxes of quantum mechanics. One of these is the theory’s apparent indeterminism: You cannot predict the outcome of experiments on a quantum particle precisely; perform exactly the same experiment on two identical particles and you will get two different results.

While most physicists faced with this have concluded that reality is fundamentally, deeply random, Aharonov argues that there is order hidden within the uncertainty. But to understand its source requires a leap of imagination that takes us beyond our traditional view of time and causality. In his radical reinterpretation of quantum mechanics, Aharonov argues that two seemingly identical particles behave differently under the same conditions because they are fundamentally different. We just do not appreciate this difference in the present because it can only be revealed by experiments carried out in the future.

“It’s a very, very profound idea,” says Davies. Aharonov’s take on quantum mechanics can explain all the usual results that the conventional interpretations can, but with the added bonus that it also explains away nature’s apparent indeterminism. What’s more, a theory in which the future can influence the past may have huge—and much needed—repercussions for our understanding of the universe, says Davies.

[div class=attrib]More from theSource here.[end-div]

Once Not So Crazy Ideas About Our Sun

Some wacky ideas about our sun from not so long ago help us realize the importance of a healthy dose of skepticism combined with good science. In fact, as you’ll see from the timestamp on the image from NASA’s Solar and Heliospheric Observatory (SOHO), science can now bring us – the public – near-realtime images of our nearest star.

[div class=attrib]From Slate:[end-div]

The sun is hell.

The 18th-century English clergyman Tobias Swinden argued that hell couldn’t lie below Earth’s surface: The fires would soon go out, he reasoned, due to lack of air. Not to mention that the Earth’s interior would be too small to accommodate all the damned, especially after making allowances for future generations of the damned-to-be. Instead, wrote Swinden, it’s obvious that hell stares us in the face every day: It’s the sun.

The sun is made of ice.

In 1798, Charles Palmer—who was not an astronomer, but an accountant—argued that the sun can’t be a source of heat, since Genesis says that light already existed before the day that God created the sun. Therefore, he reasoned, the sun must merely focus light upon Earth—light that exists elsewhere in the universe. Isn’t the sun even shaped like a giant lens? The only natural, transparent substance that it could be made of, Palmer figured, is ice. Palmer’s theory was published in a widely read treatise that, its title crowed, “overturn[ed] all the received systems of the universe hitherto extant, proving the celebrated and indefatigable Sir Isaac Newton, in his theory of the solar system, to be as far distant from the truth, as any of the heathen authors of Greece or Rome.”

Earth is a sunspot.

Sunspots are magnetic regions on the sun’s surface. But in 1775, mathematician and theologian J. Wiedeberg said that the sun’s spots are created by the clumping together of countless solid “heat particles,” which he speculated were constantly being emitted by the sun. Sometimes, he theorized, these heat particles stick together even at vast distances from the sun—and this is how planets form. In other words, he believed that Earth is a sunspot.

The sun’s surface is liquid.

Throughout the 18th and 19th centuries, textbooks and astronomers were torn between two competing ideas about the sun’s nature. Some believed that its dazzling brightness was caused by luminous clouds and that small holes in the clouds, which revealed the cool, dark solar surface below, were the sunspots. But the majority view was that the sun’s body was a hot, glowing liquid, and that the sunspots were solar mountains sticking up through this lava-like substance.

The sun is inhabited.

No less distinguished an astronomer than William Herschel, who discovered the planet Uranus in 1781, often stated that the sun has a cool, solid surface on which human-like creatures live and play. According to him, these solar citizens are shielded from the heat given off by the sun’s “dazzling outer clouds” by an inner protective cloud layer—like a layer of haz-mat material—that perfectly blocks the solar emissions and allows for pleasant grassy solar meadows and idyllic lakes.

Sleep: Defragmenting the Brain

[div class=attrib]From Neuroskeptic:[end-div]

After a period of heavy use, hard disks tend to get ‘fragmented’. Data gets written all over random parts of the disk, and it gets inefficient to keep track of it all.

That’s why you need to run a defragmentation program occasionally. Ideally, you do this overnight, while you’re asleep, so it doesn’t stop you from using the computer.

A new paper from some Stanford neuroscientists argues that the function of sleep is to reorganize neural connections – a bit like a disk defrag for the brain – although it’s also a bit like compressing files to make more room, and a bit like a system reset: Synaptic plasticity in sleep: learning, homeostasis and disease

The basic idea is simple. While you’re awake, you’re having experiences, and your brain is forming memories. Memory formation involves a process called long-term potentiation (LTP) which is essentially the strengthening of synaptic connections between nerve cells.

Yet if LTP is strengthening synapses, and we’re learning all our lives, wouldn’t the synapses eventually hit a limit? Couldn’t they max out, so that they could never get any stronger?

Worse, the synapses that strengthen during memory formation are primarily glutamate synapses – and these are dangerous. Glutamate is a common neurotransmitter, and it’s even a flavouring, but it’s also a toxin.

Too much glutamate damages the very cells that receive the messages. Rather like how sound is useful for communication, but stand next to a pneumatic drill for an hour, and you’ll go deaf.

So, if our brains were constantly forming stronger glutamate synapses, we might eventually run into serious problems. This is why we sleep, according to the new paper. Indeed, sleep deprivation is harmful to health, and this theory would explain why.

[div class=attrib]More from theSource here.[end-div]

Science: A Contest of Ideas

[div class=attrib]From Project Syndicate:[end-div]

It was recently discovered that the universe’s expansion is accelerating, not slowing, as was previously thought. Light from distant exploding stars revealed that an unknown force (dubbed “dark energy”) more than outweighs gravity on cosmological scales.

Unexpected by researchers, such a force had nevertheless been predicted in 1915 by a modification that Albert Einstein proposed to his own theory of gravity, the general theory of relativity. But he later dropped the modification, known as the “cosmological term,” calling it the “biggest blunder” of his life.

So the headlines proclaim: “Einstein was right after all,” as though scientists should be compared as one would clairvoyants: Who is distinguished from the common herd by knowing the unknowable – such as the outcome of experiments that have yet to be conceived, let alone conducted? Who, with hindsight, has prophesied correctly?

But science is not a competition between scientists; it is a contest of ideas – namely, explanations of what is out there in reality, how it behaves, and why. These explanations are initially tested not by experiment but by criteria of reason, logic, applicability, and uniqueness at solving the mysteries of nature that they address. Predictions are used to test only the tiny minority of explanations that survive these criteria.

The story of why Einstein proposed the cosmological term, why he dropped it, and why cosmologists today have reintroduced it illustrates this process. Einstein sought to avoid the implication of unmodified general relativity that the universe cannot be static – that it can expand (slowing down, against its own gravity), collapse, or be instantaneously at rest, but that it cannot hang unsupported.

This particular prediction cannot be tested (no observation could establish that the universe is at rest, even if it were), but it is impossible to change the equations of general relativity arbitrarily. They are tightly constrained by the explanatory substance of Einstein’s theory, which holds that gravity is due to the curvature of spacetime, that light has the same speed for all observers, and so on.

But Einstein realized that it is possible to add one particular term – the cosmological term – and adjust its magnitude to predict a static universe, without spoiling any other explanation. All other predictions based on the previous theory of gravity – that of Isaac Newton – that were testable at the time were good approximations to those of unmodified general relativity, with that single exception: Newton’s space was an unmoving background against which objects move. There was no evidence yet contradicting Newton’s view – no mystery of expansion to explain. Moreover, anything beyond that traditional conception of space required a considerable conceptual leap, while the cosmological term made no measurable difference to other predictions. So Einstein added it.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Wikipedia / Creative Commons.[end-div]

A Better Way to Board An Airplane

Frequent fliers the world over may soon find themselves thanking a physicist named Jason Steffen. Back in 2008 he ran some computer simulations to find a more efficient way for travelers to board an airplane. Recent tests inside a mock cabin interior confirmed Steffen’s model to be faster for the airline and easier for passengers – and, best of all, it means less time spent waiting in the aisle and jostling for overhead bin space.

[div class=attrib]From the New Scientist:[end-div]

The simulations showed that the best way was to board every other row of window seats on one side of the plane, starting from the back, then do the mirror image on the other side. The remaining window seats on the first side would follow, again starting from the back; then their counterparts on the second side; followed by the same procedure with middle seats and lastly aisles (see illustration).

In Steffen’s computer models, the strategy minimized traffic jams in the aisle and allowed multiple people to stow their luggage simultaneously. “It spread people out along the length of the aisle,” Steffen says. “They’d all put their stuff away and get out of the way at the same time.”
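
In case the sequence is easier to read as code than as prose, here is a minimal Python sketch of the boarding order described above. It is our illustration, not from Steffen’s paper: the function name is invented, and it assumes a hypothetical single-aisle cabin with rows numbered 1 (front) to n (back) and seats lettered A–F, where A/F are the windows, B/E the middles, and C/D the aisle seats.

```python
# A minimal sketch of the Steffen boarding order described above.
# Assumptions (ours, not from the paper): single-aisle cabin, rows
# numbered 1 (front) to n_rows (back), seats lettered A-F with A/F
# the windows, B/E the middles, and C/D the aisle seats.

def steffen_order(n_rows):
    """Return the boarding order as a list of (row, seat) tuples."""
    order = []
    # Window seats board first, then middles, then aisle seats.
    for left, right in (("A", "F"), ("B", "E"), ("C", "D")):
        # Alternating rows from the back on one side, then the mirror
        # image on the other side; then the remaining rows likewise.
        for offset in (0, 1):
            for seat in (left, right):
                for row in range(n_rows - offset, 0, -2):
                    order.append((row, seat))
    return order

if __name__ == "__main__":
    for position, (row, seat) in enumerate(steffen_order(12), start=1):
        print(f"{position:2d}: {row}{seat}")
```

(Coincidentally or not, the 12-row, six-abreast cabin sketched here holds exactly 72 passengers, the number of volunteers mentioned below.)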

Steffen published his model in the Journal of Air Transport Management in 2008, then went back to his “day job” searching for extrasolar planets. He mostly forgot about the plane study until this May, when he received an email from Jon Hotchkiss, the producer of a new TV show called “This vs That.”

“It’s a show that answers the kinds of scientific questions that come up in people’s everyday life,” Hotchkiss says. He wanted to film an episode addressing the question of the best way to board a plane, and wanted Steffen on board as an expert commentator. Steffen jumped at the chance: “I said, hey, someone wants to test my theory? Sure!”

They, along with 72 volunteers and Hollywood extras, spent a day on a mock plane that has been used in movies such as Kill Bill and Miss Congeniality 2.

[div class=attrib]More from theSource here.[end-div]

Dark Matter: An Illusion?

Cosmologists and particle physicists have over the last several decades proposed the existence of Dark Matter. It is so called because it cannot be seen or sensed directly; it is inferred from gravitational effects on visible matter. Together with its theoretical cousin, Dark Energy, it is hypothesized to make up most of the universe. In fact, the regular star-stuff — matter and energy — of which we, our planet, the solar system and the visible universe are made, accounts for only a paltry 4 percent.

Dark Matter and Dark Energy were originally proposed to account for discrepancies between calculations of the mass of large objects such as galaxies and galaxy clusters, and calculations based on the mass of the smaller visible objects they contain, such as stars, nebulae and interstellar gas.

The problem with Dark Matter is that it remains elusive and, for the most part, a theoretical construct. And now a new group of theories suggests that the dark stuff may in fact be an illusion.

[div class=attrib]From National Geographic:[end-div]

The mysterious substance known as dark matter may actually be an illusion created by gravitational interactions between short-lived particles of matter and antimatter, a new study says.

Dark matter is thought to be an invisible substance that makes up almost a quarter of the mass in the universe. The concept was first proposed in 1933 to explain why the outer galaxies in galaxy clusters orbit faster than they should, based on the galaxies’ visible mass.

(Related: “Dark-Matter Galaxy Detected: Hidden Dwarf Lurks Nearby?”)

At the observed speeds, the outer galaxies should be flung out into space, since the clusters don’t appear to have enough mass to keep the galaxies at their edges gravitationally bound.

So physicists proposed that the galaxies are surrounded by halos of invisible matter. This dark matter provides the extra mass, which in turn creates gravitational fields strong enough to hold the clusters together.

In the new study, physicist Dragan Hajdukovic at the European Organization for Nuclear Research (CERN) in Switzerland proposes an alternative explanation, based on something he calls the “gravitational polarization of the quantum vacuum.”

(Also see “Einstein’s Gravity Confirmed on a Cosmic Scale.”)

Empty Space Filled With “Virtual” Particles

The quantum vacuum is the name physicists give to what we see as empty space.

According to quantum physics, empty space is not actually barren but is a boiling sea of so-called virtual particles and antiparticles constantly popping in and out of existence.

Antimatter particles are mirror opposites of normal matter particles. For example, an antiproton is a negatively charged version of the positively charged proton, one of the basic constituents of the atom.

When matter and antimatter collide, they annihilate in a flash of energy. The virtual particles spontaneously created in the quantum vacuum appear and then disappear so quickly that they can’t be directly observed.

In his new mathematical model, Hajdukovic investigates what would happen if virtual matter and virtual antimatter were not only electrical opposites but also gravitational opposites—an idea some physicists previously proposed.

“Mainstream physics assumes that there is only one gravitational charge, while I have assumed that there are two gravitational charges,” Hajdukovic said.

According to his idea, outlined in the current issue of the journal Astrophysics and Space Science, matter has a positive gravitational charge and antimatter a negative one.

That would mean matter and antimatter are gravitationally repulsive, so that an object made of antimatter would “fall up” in the gravitational field of Earth, which is composed of normal matter.

Particles and antiparticles could still collide, however, since gravitational repulsion is much weaker than electrical attraction.

How Galaxies Could Get Gravity Boost

While the idea of particle antigravity might seem exotic, Hajdukovic says his theory is based on well-established tenets in quantum physics.

For example, it’s long been known that particles can team up to create a so-called electric dipole, with positively charged particles at one end and negatively charged particles at the other. (See “Universe’s Existence May Be Explained by New Material.”)

According to theory, there are countless electric dipoles created by virtual particles in any given volume of the quantum vacuum.

All of these electric dipoles are randomly oriented—like countless compass needles pointing every which way. But if the dipoles form in the presence of an existing electric field, they immediately align along the same direction as the field.

According to quantum field theory, this sudden snapping to order of electric dipoles, called polarization, generates a secondary electric field that combines with and strengthens the first field.

Hajdukovic suggests that a similar phenomenon happens with gravity. If virtual matter and antimatter particles have different gravitational charges, then randomly oriented gravitational dipoles would be generated in space.

[div class=attrib]More from theSource here.[end-div]

Cities Might Influence Not Just Our Civilizations, but Our Evolution

[div class=attrib]From Scientific American:[end-div]

Cities reverberate through history as centers of civilization. Ur. Babylon. Rome. Baghdad. Tenochtitlan. Beijing. Paris. London. New York. As pivotal as cities have been for our art and culture, our commerce and trade, our science and technology, our wars and peace, it turns out that cities might have been even more important than we had suspected, influencing our very genes and evolution.

Cities have been painted as hives of scum and villainy, dens of filth and squalor, with unsafe water, bad sanitation, industrial pollution and overcrowded neighborhoods. It turns out that by bringing people closer together and spreading disease, cities might increase the chance that, over time, the descendants of survivors could resist infections.

Evolutionary biologist Ian Barnes at the University of London and his colleagues focused on a genetic variant with the alphabet-soup name of SLC11A1 1729+55del4. This variant is linked with natural resistance to germs that dwell within cells, such as tuberculosis and leprosy.

The scientists analyzed DNA samples from 17 modern populations that had occupied their cities for various lengths of time. The cities ranged from Çatalhöyük in Turkey, settled in roughly 6000 B.C., to Juba in Sudan, settled in the 20th century.

The researchers discovered an apparently highly significant link between the occurrence of this genetic variant and the duration of urban settlement. People from a long-populated urban area often seemed better adapted to resisting these specific types of infections — for instance, those in areas settled for more than 5,200 years, such as Susa in Iran, were almost certain to possess this variant, while in cities settled for only a few hundred years, such as Yakutsk in Siberia, only 70 percent to 80 percent of people would have it.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Scientific American.[end-div]

So the Universe is Flat?


Having just posted an article that described the universe in terms of holographic principles – a 3-D projection on a two-dimensional surface – it’s timely to put the theory in the context of other theories, of course. There’s a theory that posits that the universe is a bubble wrought from the collision of high-dimensional branes (membranes, that is). There’s a theory that suggests that our universe is one of many in a soup of multiverses. Other theories suggest that the universe is made up of 9, 10 or 11 dimensions.

There’s also the theory that the universe is flat, and that’s where Davide Castelvecchi (mathematician, science editor at Scientific American and blogger) comes in, describing the current thinking over at Degrees of Freedom.

[div class=attrib]What Do You Mean, The Universe Is Flat? (Part I), from Degrees of Freedom:[end-div]

In the last decade—you may have read this news countless times—cosmologists have found what they say is rather convincing evidence that the universe (meaning 3-D space) is flat, or at least very close to being flat.

The exact meaning of flat, versus curved, space deserves a post of its own, and that is what Part II of this series will be about. For the time being, it is convenient to just visualize a plane as our archetype of flat object, and the surface of the Earth as our archetype of a curved one. Both are two-dimensional, but as I will describe in the next installment, flatness and curviness make sense in any number of dimensions.

What I do want to talk about here is what it is that is supposed to be flat.

When cosmologists say that the universe is flat they are referring to space—the nowverse and its parallel siblings of time past. Spacetime is not flat. It can’t be: Einstein’s general theory of relativity says that matter and energy curve spacetime, and there is enough matter and energy lying around to provide for curvature. Besides, if spacetime were flat I wouldn’t be sitting here because there would be no gravity to keep me on the chair. To put it succinctly: space can be flat even if spacetime isn’t.
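
To make the distinction concrete with the standard textbook example (our addition, not from the quoted post): the spatially flat Friedmann–Robertson–Walker metric of cosmology reads

```latex
% Spatially flat FRW metric: each constant-t slice is Euclidean (flat
% space), yet the 4-D spacetime is curved whenever a(t) is not constant.
ds^2 = -c^2\,dt^2 + a(t)^2\left(dx^2 + dy^2 + dz^2\right)
```

Each slice of constant time carries an ordinary Euclidean metric scaled by a(t), so space is flat; but the changing scale factor a(t) is precisely what curves the four-dimensional spacetime.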

Moreover, when they talk about the flatness of space cosmologists are referring to the large-scale appearance of the universe. When you “zoom in” and look at something of less-than-cosmic scale, such as the solar system, space—not just spacetime—is definitely not flat. Remarkable fresh evidence for this fact was obtained recently by the longest-running experiment in NASA history, Gravity Probe B, which took a direct measurement of the curvature of space around Earth. (And the most extreme case of non-flatness of space is thought to occur inside the event horizon of a black hole, but that’s another story.)

On a cosmic scale, the curvature created in space by the countless stars, black holes, dust clouds, galaxies, and so on constitutes just a bunch of little bumps on a space that is, overall, boringly flat.

Thus the seeming contradiction:

Matter curves spacetime. The universe is flat

is easily explained, too: spacetime is curved, and so is space; but on a large scale, space is overall flat.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image of Cosmic Microwave Background temperature fluctuations from the 7-year Wilkinson Microwave Anisotropy Probe data seen over the full sky. Courtesy of NASA.[end-div]

Using An Antimagnet to Build an Invisibility Cloak

The invisibility cloak of science fiction takes another step further into science fact this week. Researchers report, via the physics arXiv, a practical method for building a device that cloaks magnetic fields. Alvaro Sanchez and colleagues at Spain’s Universitat Autonoma de Barcelona describe the design of such a device, exploiting the bizarre properties of metamaterials.

[div class=attrib]From Technology Review:[end-div]

A metamaterial is a bizarre substance with properties that physicists can fine tune as they wish. Tuned in a certain way, a metamaterial can make light perform all kinds of gymnastics, steering it round objects to make them seem invisible.

This phenomenon, known as cloaking, is set to revolutionise various areas of electromagnetic science.

But metamaterials can do more. One idea is that as well as electromagnetic fields, metamaterials ought to be able to manipulate plain old magnetic fields too. After all, a static magnetic field is merely an electromagnetic wave with a frequency of zero.

So creating a magnetic invisibility cloak isn’t such a crazy idea.

Today, Alvaro Sanchez and friends at Universitat Autonoma de Barcelona in Spain reveal the design of a cloak that can do just this.

The basic ingredients are two materials: one with a permeability smaller than 1 in one direction, and one with a permeability greater than 1 in the perpendicular direction.

Materials with these permeabilities are easy to find. Superconductors have a permeability of 0 and ordinary ferromagnets have a permeability greater than 1.

The difficulty is creating a material with both these properties at the same time. Sanchez and co solve the problem with a design consisting of ferromagnetic shells coated with a superconducting layer.

The result is a device that can completely shield the outside world from a magnet inside it.

[div class=attrib]More from theSource here.[end-div]

Nuclear Fission in the Kitchen

theDiagonal usually does not report on the news, though we do make a few worthy exceptions based on the import or surreal nature of an event. A case in point below.

Humans do have a curious way of repeating history. In a less meticulous attempt to re-enact the late-90s true story, which eventually led to the book “The Radioactive Boy Scout“, a Swedish man was recently arrested for trying to set up a nuclear reactor in his kitchen.

[div class=attrib]From the AP:[end-div]

A Swedish man who was arrested after trying to split atoms in his kitchen said Wednesday he was only doing it as a hobby.

Richard Handl told The Associated Press that he had the radioactive elements radium, americium and uranium in his apartment in southern Sweden when police showed up and arrested him on charges of unauthorized possession of nuclear material.

The 31-year-old Handl said he had tried for months to set up a nuclear reactor at home and kept a blog about his experiments, describing how he created a small meltdown on his stove.

Only later did he realize it might not be legal and sent a question to Sweden’s Radiation Authority, which answered by sending the police.

“I have always been interested in physics and chemistry,” Handl said, adding he just wanted to “see if it’s possible to split atoms at home.”

[div class=attrib]More from theSource here.[end-div]

Are You Real, Or Are You a Hologram?

The principle of a holographic universe, not to be confused with the Holographic Universe, an album by Swedish death metal band Scar Symmetry, continues to hold serious sway among a not insignificant group of even more serious cosmologists.

Originally proposed by noted physicists Gerard ’t Hooft and Leonard Susskind in the mid-1990s, the holographic theory of the universe suggests that our entire universe can be described as an informational 3-D projection painted in two dimensions on a cosmological boundary. This is analogous to the flat hologram printed on a credit card creating the illusion of a 3-D object.

While current mathematical theory and experimental verification are lagging, the theory has garnered much interest and forward momentum — so this area warrants a brief status check, courtesy of the New Scientist.

[div class=attrib]From the New Scientist:[end-div]

TAKE a look around you. The walls, the chair you’re sitting in, your own body – they all seem real and solid. Yet there is a possibility that everything we see in the universe – including you and me – may be nothing more than a hologram.

It sounds preposterous, yet there is already some evidence that it may be true, and we could know for sure within a couple of years. If it does turn out to be the case, it would turn our common-sense conception of reality inside out.

The idea has a long history, stemming from an apparent paradox posed by Stephen Hawking’s work in the 1970s. He discovered that black holes slowly radiate their mass away. This Hawking radiation appears to carry no information, however, raising the question of what happens to the information that described the original star once the black hole evaporates. It is a cornerstone of physics that information cannot be destroyed.

In 1972 Jacob Bekenstein at the Hebrew University of Jerusalem, Israel, showed that the information content of a black hole is proportional to the two-dimensional surface area of its event horizon – the point-of-no-return for in-falling light or matter. Later, string theorists managed to show how the original star’s information could be encoded in tiny lumps and bumps on the event horizon, which would then imprint it on the Hawking radiation departing the black hole.

This solved the paradox, but theoretical physicists Leonard Susskind and Gerard ‘t Hooft decided to take the idea a step further: if a three-dimensional star could be encoded on a black hole’s 2D event horizon, maybe the same could be true of the whole universe. The universe does, after all, have a horizon 42 billion light years away, beyond which point light would not have had time to reach us since the big bang. Susskind and ‘t Hooft suggested that this 2D “surface” may encode the entire 3D universe that we experience – much like the 3D hologram that is projected from your credit card.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Computerarts.[end-div]

Flowing Water on Mars?

NASA’s latest spacecraft to visit Mars, the Mars Reconnaissance Orbiter, has made some stunning observations that show the possibility of flowing water on the red planet. Intriguingly,  repeated observations of the same regions over several Martian seasons show visible changes attributable to some kind of dynamic flow.

[div class=attrib]From NASA / JPL:[end-div]

Observations from NASA’s Mars Reconnaissance Orbiter have revealed possible flowing water during the warmest months on Mars.

“NASA’s Mars Exploration Program keeps bringing us closer to determining whether the Red Planet could harbor life in some form,” NASA Administrator Charles Bolden said, “and it reaffirms Mars as an important future destination for human exploration.”

Dark, finger-like features appear and extend down some Martian slopes during late spring through summer, fade in winter, and return during the next spring. Repeated observations have tracked the seasonal changes in these recurring features on several steep slopes in the middle latitudes of Mars’ southern hemisphere.

“The best explanation for these observations so far is the flow of briny water,” said Alfred McEwen of the University of Arizona, Tucson. McEwen is the principal investigator for the orbiter’s High Resolution Imaging Science Experiment (HiRISE) and lead author of a report about the recurring flows published in Thursday’s edition of the journal Science.

Some aspects of the observations still puzzle researchers, but flows of liquid brine fit the features’ characteristics better than alternate hypotheses. Saltiness lowers the freezing temperature of water. Sites with active flows get warm enough, even in the shallow subsurface, to sustain liquid water that is about as salty as Earth’s oceans, while pure water would freeze at the observed temperatures.

[div class=attrib]More from theSource here.[end-div]

The Science Behind Dreaming

[div class=attrib]From Scientific American:[end-div]

For centuries people have pondered the meaning of dreams. Early civilizations thought of dreams as a medium between our earthly world and that of the gods. In fact, the Greeks and Romans were convinced that dreams had certain prophetic powers. While there has always been a great interest in the interpretation of human dreams, it wasn’t until the end of the nineteenth century that Sigmund Freud and Carl Jung put forth some of the most widely-known modern theories of dreaming. Freud’s theory centred around the notion of repressed longing — the idea that dreaming allows us to sort through unresolved, repressed wishes. Carl Jung (who studied under Freud) also believed that dreams had psychological importance, but proposed different theories about their meaning.

Since then, technological advancements have allowed for the development of other theories. One prominent neurobiological theory of dreaming is the “activation-synthesis hypothesis,” which states that dreams don’t actually mean anything: they are merely electrical brain impulses that pull random thoughts and imagery from our memories. Humans, the theory goes, construct dream stories after they wake up, in a natural attempt to make sense of it all. Yet, given the vast documentation of realistic aspects to human dreaming as well as indirect experimental evidence that other mammals such as cats also dream, evolutionary psychologists have theorized that dreaming really does serve a purpose. In particular, the “threat simulation theory” suggests that dreaming should be seen as an ancient biological defence mechanism that provided an evolutionary advantage because of  its capacity to repeatedly simulate potential threatening events – enhancing the neuro-cognitive mechanisms required for efficient threat perception and avoidance.

So, over the years, numerous theories have been put forth in an attempt to illuminate the mystery behind human dreams, but, until recently, strong tangible evidence has remained largely elusive.

Yet, new research published in the Journal of Neuroscience provides compelling insights into the mechanisms that underlie dreaming and the strong relationship our dreams have with our memories. Cristina Marzano and her colleagues at the University of Rome have succeeded, for the first time, in explaining how humans remember their dreams. The scientists predicted the likelihood of successful dream recall based on a signature pattern of brain waves. In order to do this, the Italian research team invited 65 students to spend two consecutive nights in their research laboratory.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image: The Knight’s Dream by Antonio de Pereda. Courtesy of Wikipedia / Creative Commons.[end-div]

Dawn Over Vesta

More precisely, NASA’s Dawn spacecraft entered orbit around the asteroid Vesta on July 15, 2011. Vesta is the second-largest of our solar system’s asteroids and is located in the asteroid belt between Mars and Jupiter.

Now that Dawn is safely in orbit, the spacecraft will circle about 10,000 miles above Vesta’s surface for a year and use two different cameras, a gamma-ray detector and a neutron detector, to study the asteroid.

Then in July 2012, Dawn will depart to visit Vesta’s close neighbor and the largest object in the asteroid belt, Ceres.

The image of Vesta above was taken from a distance of about 9,500 miles (15,000 kilometers) away.

[div class=attrib]Image courtesy of NASA/JPL-Caltech/UCLA/MPS/DLR/IDA.[end-div]

Seven Sisters Star Cluster

The Seven Sisters star cluster, also known as the Pleiades, consists of many young, bright, hot stars. While the cluster contains hundreds of stars, it is so named because only seven are typically visible to the naked eye. The Seven Sisters is visible from the northern hemisphere and resides in the constellation Taurus.

[div class=attrib]Image and supporting text courtesy of Davide De Martin over at Skyfactory.[end-div]

This image is a composite of black-and-white images taken with the Palomar Observatory’s 48-inch (1.2-meter) Samuel Oschin Telescope as part of the second National Geographic Palomar Observatory Sky Survey (POSS II). The images were recorded on two types of glass photographic plates – one sensitive to red light, the other to blue – and later digitized. Credit: Caltech, Palomar Observatory, Digitized Sky Survey.

In order to produce the color image seen here, I worked with data coming from two different photographic plates taken in 1986 and 1989. The original file is 10,252 x 9,735 pixels with a resolution of about 1 arcsec per pixel. The image shows an area of sky measuring 2.7° x 2.7° (for comparison, the full Moon is about 0.5° in diameter).

[div class=attrib]More from theSource here.[end-div]

And You Thought Being Direct and Precise Was Good

A new psychological study upends our understanding of the benefits of direct and precise information as a motivational tool. Results from the study by Himanshu Mishra and Baba Shiv describe the cognitive benefits of vague and inarticulate feedback over precise information. At first glance this seems counter-intuitive. After all, fuzzy math, blurred reasoning and unclear directives would seem to be the banes of current societal norms that value data in as precise a form as possible. We measure, calibrate, verify, re-measure and report information to the nth degree.

[div class=attrib]Stanford Business:[end-div]

Want to lose weight in 2011? You’ve got a better chance of pulling it off if you tell yourself, “I’d like to slim down and maybe lose somewhere between 5 and 15 pounds this year” instead of, “I’d like to lose 12 pounds by July 4.”

In a paper to be published in an upcoming issue of the journal Psychological Science, business school Professor Baba Shiv concludes that people are more likely to stay motivated and achieve a goal if it’s sketched out in vague terms than if it’s set in stone as a rigid or precise plan.

“For one to be successful, one needs to be motivated,” says Shiv, the Stanford Graduate School of Business Sanwa Bank, Limited, Professor of Marketing. He is coauthor of the paper “In Praise of Vagueness: Malleability of Vague Information as a Performance Booster” with Himanshu Mishra and Arul Mishra, both of the University of Utah. Presenting information in a vague way — for instance using numerical ranges or qualitative descriptions — “allows you to sample from the information that’s in your favor,” says Shiv, whose research includes studying people’s responses to incentives. “You’re sampling and can pick the part you want,” the part that seems achievable or encourages you to keep your expectations upbeat to stay on track, says Shiv.

By comparison, information presented in a more-precise form doesn’t let you view it in a rosy light and so can be discouraging. For instance, Shiv says, a coach could try to motivate a sprinter by reviewing all her past times, recorded down to the thousandths of a second. That would remind her of her good times but also the poor ones, potentially de-motivating her. Or, the coach could give the athlete less-precise but still-accurate qualitative information. “Good coaches get people not to focus on the times but on a dimension that is malleable,” says Shiv. “They’ll say, ‘You’re mentally tough.’ You can’t measure that.” The runner can then zero in on her mental strength to help her concentrate on her best past performances, boosting her motivation and ultimately improving her times. “She’s cherry-picking her memories, and that’s okay, because that’s allowing her to get motivated,” says Shiv.

Of course, Shiv isn’t saying there’s no place for precise information. A pilot needs exact data to monitor a plane’s location, direction, and fuel levels, for instance. But information meant to motivate is different, and people seeking motivation need the chance to focus on just the positive. When it comes to motivation, Shiv said, “negative information outweighs positive. If I give you five pieces of negative information and five pieces of positive information, the brain weighs the negative far more than the positive … It’s a survival mechanism. The brain weighs the negative to keep us secure.”

[div class=attrib]More from theSource here.[end-div]

Just Another Week at Fermilab

Another day, another particle, courtesy of scientists at Fermilab. The CDF group working with data from Fermilab’s Tevatron particle collider announced the finding of a new, neutron-like particle last week. The particle known as a neutral Xi-sub-b is a heavy relative of the neutron and is made up of a strange quark, an up quark and a bottom quark, hence the “s-u-b” moniker.

[div class=attrib]Here’s more from Symmetry Breaking:[end-div]

While its existence was predicted by the Standard Model, the observation of the neutral Xi-sub-b is significant because it strengthens our understanding of how quarks form matter. Fermilab physicist Pat Lukens, a member of the CDF collaboration, presented the discovery at Fermilab on Wednesday, July 20.

The neutral Xi-sub-b is the latest entry in the periodic table of baryons. Baryons are particles formed of three quarks, the most common examples being the proton (two up quarks and a down quark) and the neutron (two down quarks and an up quark). The neutral Xi-sub-b belongs to the family of bottom baryons, which are about six times heavier than the proton and neutron because they all contain a heavy bottom quark. The particles are produced only in high-energy collisions, and are rare and very difficult to observe.

Although Fermilab’s Tevatron particle collider is not a dedicated bottom quark factory, sophisticated particle detectors and trillions of proton-antiproton collisions have made it a haven for discovering and studying almost all of the known bottom baryons. Experiments at the Tevatron discovered the Sigma-sub-b baryons (Σb and Σb*) in 2006, observed the Xi-b-minus baryon (Ξb−) in 2007, and found the Omega-sub-b (Ωb−) in 2009.

[div class=attrib]Image courtesy of Fermilab/CDF Collaboration.[end-div]

Higgs Particle Collides with Modern Art

Jonathan Jones over at the Guardian puts a creative spin (pun intended) on the latest developments in the world of particle physics. He suggests that we might borrow from the world of modern and contemporary art to help us take the vast imaginative leaps necessary to understand our physical world and its underlying quantum mechanical nature, bound up in uncertainty and paradox.

Jones makes a good point that many leading artists of recent times broke new ground by presenting us with an alternate reality that demanded a fresh perspective of the world and what lies beneath. Think Picasso and Dali and Miro and Twombly.

[div class=attrib]From Jonathan Jones for the Guardian:[end-div]

The experiments currently being performed in the LHC are enigmatic, mind-boggling and imaginative. But are they science – or art? In his renowned television series The Ascent of Man, the polymath Jacob Bronowski called the discovery of the invisible world within the atom the great collective achievement of science in the 20th century. Then he went further. “No – it is a great, collective work of art.”

Niels Bohr, who was at the heart of the new sub-atomic physics in the early 20th century, put the mystery of what he and others were finding into provocative sayings. He was very quotable, and every quote stresses the ambiguity of the new realm he was opening up, the realm of the smallest conceivable things in the universe. “If quantum mechanics hasn’t profoundly shocked you, you haven’t understood it yet,” ran one of his remarks. According to Bronowski, Bohr also said that to think about the paradoxical truths of quantum mechanics is to think in images, because the only way to know anything about the invisible is to create an image of it that is by definition a human construct, a model, a half-truth trying to hint at the real truth.

. . .

We won’t understand what those guys at Cern are up to until our idea of science catches up with the greatest minds of the 20th century who blew apart all previous conventions of thought. One guide offers itself to those of us who are not physicists: modern art. Bohr, explained Bronowski, collected Cubist paintings. Cubism was invented by Pablo Picasso and Georges Braque at the same time modern physics was being created: its crystalline structures and opaque surfaces suggest the astonishment of a reality whose every microcosmic particle is sublimely complex.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Wikipedia / CERN / Creative Commons.[end-div]

Tour de France and the Higgs Particle

Two exciting races passed through Grenoble, France this past week. First, the Tour de France held one of the definitive stages of the 2011 race in Grenoble, the individual time trial. Second, Grenoble hosted the European Physical Society conference on High-Energy Physics. Fans of professional cycling and high-energy physics would not be disappointed.

In cycling, Cadel Evans set a blistering pace in his solo effort on stage 20 to ensure the Yellow Jersey and an overall win in this year’s Tour.

In the world of high energy physics, physicists from Fermilab and CERN presented updates on their competing searches to discover (or not) the Higgs boson. The two main experiments at Fermilab, CDF and DZero, are looking for traces of the Higgs particle in the debris of Tevatron collider’s proton-antiproton collisions. At CERN’s Large Hadron Collider scientists working at the two massive detectors, Atlas and CMS, are sifting through vast mountains of data accumulated from proton-proton collisions.

Both colliders have been smashing particles together in their ongoing quest to refine our understanding of the building blocks of matter, and to determine the existence of the Higgs particle. The Higgs is believed to convey mass to other particles, and is one of the last undiscovered components of the Standard Model of physics.

The latest results presented in Grenoble show excess particle events, above a chance distribution, across the search range where the Higgs particle is predicted to be found. There is a surplus of unusual events at a mass of 140-145 GeV (gigaelectronvolts), which is at the low end of the range allowed for the particle. Tantalizingly, physicists’ theories predict that this is the most likely region where the Higgs is to be found.

[div class=attrib]Further details from Symmetry Breaking:[end-div]

Physicists could be on their way to discovering the Higgs boson, if it exists, by next year. Scientists in two experiments at the Large Hadron Collider pleasantly surprised attendees at the European Physical Society conference this afternoon by both showing small hints of what could be the prized particle in the same area.

“This is what we expect to find on the road to the Higgs,” said Gigi Rolandi, physics coordinator for the CMS experiment.

Both experiments found excesses in the 130-150 GeV mass region. But the excesses did not have enough statistical significance to count as evidence of the Higgs.

If the Higgs really is lurking in this region, it is still in reach of experiments at Fermilab’s Tevatron. Although the accelerator will shut down for good at the end of September, Fermilab’s CDF and DZero experiments will continue to collect data up until that point and to improve their analyses.

“This should give us the sensitivity to make a new statement about the 114-180 mass range,” said Rob Roser, CDF spokesperson. Read more about the differences between Higgs searches at the Tevatron and at the LHC here.

The CDF and DZero experiments announced expanded exclusions in the search for their specialty, the low-mass Higgs, this morning. On Wednesday, the two experiments will announce their combined Higgs results.

Scientists measure statistical significance in units called sigma, written as the Greek letter σ. These high-energy experiments usually require a 3σ level of confidence, about 99.7 percent certainty, to claim they’ve seen evidence of something. They need 5σ to claim a discovery. The ATLAS experiment reported excesses at confidence levels between 2 and 2.8σ, and the CMS experiment found similar excesses at close to 3σ.
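
The sigma-to-confidence conversion quoted here is a standard Gaussian calculation, which the short sketch below reproduces (our illustration, not from the article), using the usual two-sided convention:

```python
# Probability that a Gaussian variable falls within n_sigma standard
# deviations of its mean (two-sided): 3 sigma is roughly 99.7 percent.
from math import erf, sqrt

def confidence(n_sigma):
    return erf(n_sigma / sqrt(2))

for n in (2.0, 2.8, 3.0, 5.0):
    print(f"{n} sigma -> {confidence(n) * 100:.5f}%")
```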

After the two experiments combine their results — a mathematical process much more arduous than simple addition — they could find themselves on new ground. They hope to do this in the next few months, at the latest by the winter conferences, said Kyle Cranmer, an assistant professor at New York University who presented the results for the ATLAS collaboration.

“The fact that these two experiments with different issues, different approaches and different modeling found similar results leads you to believe it might not be just a fluke,” Cranmer said. “This is what it would look like if it were real.”

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]CERN photograph courtesy Fabrice Coffrini/AFP/Getty Images. Tour de France image courtesy of NBCSports.[end-div]

First Ever Demonstration of Time Cloaking

[div class=attrib]From the Physics arXiv for Technology Review:[end-div]

Physicists have created a “hole in time” using the temporal equivalent of an invisibility cloak.

Invisibility cloaks are the result of physicists’ newfound ability to distort electromagnetic fields in extreme ways. The idea is to steer light around a volume of space so that anything inside this region is essentially invisible.

The effect has generated huge interest. The first invisibility cloaks worked only at microwave frequencies but in only a few years, physicists have found ways to create cloaks that work for visible light, for sound and for ocean waves. They’ve even designed illusion cloaks that can make one object look like another.

Today, Moti Fridman and buddies, at Cornell University in Ithaca, go a step further. These guys have designed and built a cloak that hides events in time.

Time cloaking is possible because of a kind of duality between space and time in electromagnetic theory. In particular, the diffraction of a beam of light in space is mathematically equivalent to the temporal propagation of light through a dispersive medium. In other words, diffraction and dispersion are symmetric in spacetime.
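
Written out explicitly (in our notation, under the usual paraxial and slowly-varying-envelope approximations; this is the standard form of the space-time duality, not taken from the paper itself), the two equations are formally identical:

```latex
% Diffraction of a beam envelope A(x,z) in space:
\frac{\partial A}{\partial z} = \frac{i}{2k}\,\frac{\partial^2 A}{\partial x^2}
% Dispersion of a pulse envelope A(\tau,z) in a medium with
% group-velocity dispersion \beta_2:
\frac{\partial A}{\partial z} = -\frac{i\beta_2}{2}\,\frac{\partial^2 A}{\partial \tau^2}
```

Swapping the transverse coordinate x for the retarded time τ (and 1/k for −β2) maps one equation onto the other, which is why a dispersive element can act on a pulse in time the way a lens acts on a beam in space.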

That immediately leads to an interesting idea. Just as it’s easy to make a lens that focuses light in space using diffraction, so it is possible to use dispersion to make a lens that focuses in time.

Such a time-lens can be made using an electro-optic modulator, for example, and has a variety of familiar properties. “This time-lens can, for example, magnify or compress in time,” say Fridman and co.

This magnifying and compressing in time is important.

The trick to building a temporal cloak is to place two time-lenses in series and then send a beam of light through them. The first compresses the light in time while the second decompresses it again.

But this leaves a gap. For a short period, there is a kind of hole in time in which any event is unrecorded.

So to an observer, the light coming out of the second time-lens appears undistorted, as if no event has occurred.

In effect, the space between the two lenses is a kind of spatio-temporal cloak that deletes changes that occur in short periods of time.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Original paper from arXiv.org here.[end-div]

Why Does Time Fly?

[div class=attrib]From Scientific American:[end-div]

Everybody knows that the passage of time is not constant. Moments of terror or elation can stretch a clock tick to what seems like a lifetime. Yet, we do not know how the brain “constructs” the experience of subjective time. Would it not be important to know, so we can find ways to make moments last longer, or pass by more quickly?

A recent study by van Wassenhove and colleagues is beginning to shed some light on this problem. This group used a simple experimental set up to measure the “subjective” experience of time. They found that people accurately judge whether a dot appears on the screen for shorter, longer or the same amount of time as another dot. However, when the dot increases in size so as to appear to be moving toward the individual — i.e. the dot is “looming” — something strange happens. People overestimate the time that the dot lasted on the screen.  This overestimation does not happen when the dot seems to move away.  Thus, the overestimation is not simply a function of motion. Van Wassenhove and colleagues conducted this experiment during functional magnetic resonance imaging, which enabled them to examine how the brain reacted differently to looming and receding.

The brain imaging data revealed two main findings. First, structures in the middle of the brain were more active during the looming condition. These brain areas are also known to activate in experiments that involve the comparison of self-judgments to the judgments of others, or when an experimenter does not tell the subject what to do. In both cases, the prevailing idea is that the brain is busy wondering about itself, its ongoing plans and activities, and relating oneself to the rest of the world.

Read more from the original study here.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Sawayasu Tsuji.[end-div]

When the multiverse and many-worlds collide

[div class=attrib]From the New Scientist:[end-div]

TWO of the strangest ideas in modern physics – that the cosmos constantly splits into parallel universes in which every conceivable outcome of every event happens, and the notion that our universe is part of a larger multiverse – have been unified into a single theory. This solves a bizarre but fundamental problem in cosmology and has set physics circles buzzing with excitement, as well as some bewilderment.

The problem is the observability of our universe. While most of us simply take it for granted that we should be able to observe our universe, it is a different story for cosmologists. When they apply quantum mechanics – which successfully describes the behaviour of very small objects like atoms – to the entire cosmos, the equations imply that it must exist in many different states simultaneously, a phenomenon called a superposition. Yet that is clearly not what we observe.

Cosmologists reconcile this seeming contradiction by assuming that the superposition eventually “collapses” to a single state. But they tend to ignore the problem of how or why such a collapse might occur, says cosmologist Raphael Bousso at the University of California, Berkeley. “We’ve no right to assume that it collapses. We’ve been lying to ourselves about this,” he says.

In an attempt to find a more satisfying way to explain the universe’s observability, Bousso, together with Leonard Susskind at Stanford University in California, turned to the work of physicists who have puzzled over the same problem but on a much smaller scale: why tiny objects such as electrons and photons exist in a superposition of states but larger objects like footballs and planets apparently do not.

This problem is captured in the famous thought experiment of Schrödinger’s cat. This unhappy feline is inside a sealed box containing a vial of poison that will break open when a radioactive atom decays. Being a quantum object, the atom exists in a superposition of states – so it has both decayed and not decayed at the same time. This implies that the vial must be in a superposition of states too – both broken and unbroken. And if that’s the case, then the cat must be both dead and alive as well.

[div class=attrib]More from theSource here.[end-div]

Dark energy spotted in the cosmic microwave background

[div class=attrib]From Institute of Physics:[end-div]

Astronomers studying the cosmic microwave background (CMB) have uncovered new direct evidence for dark energy – the mysterious substance that appears to be accelerating the expansion of the universe. Their findings could also help map the structure of dark matter on the universe’s largest length scales.

The CMB is the faint afterglow of the universe’s birth in the Big Bang. Around 400,000 years after its creation, the universe had cooled sufficiently to allow electrons to bind to atomic nuclei. This “recombination” set the CMB radiation free from the dense fog of plasma that was containing it. Space telescopes such as WMAP and Planck have charted the CMB and found its presence in all parts of the sky, with a temperature of 2.7 K. However, measurements also show tiny fluctuations in this temperature on the scale of one part in a million. These fluctuations follow a Gaussian distribution.

In the first of two papers, a team of astronomers including Sudeep Das at the University of California, Berkeley, has uncovered fluctuations in the CMB that deviate from this Gaussian distribution. The deviations, observed with the Atacama Cosmology Telescope in Chile, are caused by interactions with large-scale structures in the universe, such as galaxy clusters. “On average, a CMB photon will have encountered around 50 large-scale structures before it reaches our telescope,” Das told physicsworld.com. “The gravitational influence of these structures, which are dominated by massive clumps of dark matter, will each deflect the path of the photon,” he adds. This process, called “lensing”, eventually adds up to a total deflection of around 3 arc minutes – one-20th of a degree.
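
A back-of-the-envelope sketch in Python (our own illustration, not the paper’s calculation) shows how the quoted numbers hang together: roughly 50 small, randomly oriented deflections accumulate like a random walk to a total of about 3 arc minutes.

    # If each of ~50 structures nudges a CMB photon by a small random angle,
    # the deflections add in quadrature: total ~ sqrt(N) * per-structure rms.
    import math

    n_structures = 50
    total_arcmin = 3.0
    per_structure = total_arcmin / math.sqrt(n_structures)
    print(f"implied rms deflection per structure: {per_structure:.2f} arcmin")
    # ~0.42 arc minutes per encounter, random-walking up to ~3 arc minutes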

Dark energy versus structure

In the second paper Das, along with Blake Sherwin of Princeton University and Joanna Dunkley of Oxford University, looks at how lensing could reveal dark energy. Dark energy acts to counter the emergence of structures within the universe. A universe with no dark energy would have a lot of structure. As a result, the CMB photons would undergo greater lensing and the fluctuations would deviate more from the original Gaussian distribution.

[div class=attrib]More from theSource here.[end-div]

The Good, the Bad and the Ugly – 40 years on

One of the most fascinating and (in)famous experiments in social psychology began in the bowels of Stanford University 40 years ago next month. The experiment was intended to evaluate how people react to being powerless; by its conclusion, however, it had become a broader study of role assignment and reactions to authority.

The Stanford Prison Experiment incarcerated male college student volunteers in a mock prison for 6 fateful days. Some of the students were selected to be prison guards; the remainder would be prisoners. The researchers, led by psychology professor Philip Zimbardo, encouraged the guards to think of themselves as actual guards in a real prison. What happened during these 6 days in “prison” is the stuff of social science legend. The results continue to shock psychologists to this day; many were not prepared for the outcome, which saw guards take their roles to the extreme, becoming authoritarian and mentally abusive, while prisoners grew downtrodden and eventually rebellious. A whistle-blower eventually brought the experiment to an abrupt end (it was to have continued for 2 weeks).

Forty years on, researchers went back to interview professor Zimbardo and some of the participating guards and prisoners to probe their feelings now. Recollections from one of the guards are below.

[div class=attrib]From Stanford Magazine:[end-div]

I was just looking for some summer work. I had a choice of doing this or working at a pizza parlor. I thought this would be an interesting and different way of finding summer employment.

The only person I knew going in was John Mark. He was another guard and wasn’t even on my shift. That was critical. If there were prisoners in there who knew me before they encountered me, then I never would have been able to pull off anything I did. The act that I put on—they would have seen through it immediately.

What came over me was not an accident. It was planned. I set out with a definite plan in mind, to try to force the action, force something to happen, so that the researchers would have something to work with. After all, what could they possibly learn from guys sitting around like it was a country club? So I consciously created this persona. I was in all kinds of drama productions in high school and college. It was something I was very familiar with: to take on another personality before you step out on the stage. I was kind of running my own experiment in there, by saying, “How far can I push these things and how much abuse will these people take before they say, ‘knock it off?'” But the other guards didn’t stop me. They seemed to join in. They were taking my lead. Not a single guard said, “I don’t think we should do this.”

The fact that I ramped up the intimidation and the mental abuse without any real sense as to whether I was hurting anybody— I definitely regret that. But in the long run, no one suffered any lasting damage. When the Abu Ghraib scandal broke, my first reaction was, this is so familiar to me. I knew exactly what was going on. I could picture myself in the middle of that and watching it spin out of control. When you have little or no supervision as to what you’re doing, and no one steps in and says, “Hey, you can’t do this”—things just keep escalating. You think, how can we top what we did yesterday? How do we do something even more outrageous? I felt a deep sense of familiarity with that whole situation.

Sometimes when people know about the experiment and then meet me, it’s like, My God, this guy’s a psycho! But everyone who knows me would just laugh at that.

[div class=attrib]More from theSource here.[end-div]

Happy Birthday Neptune

One hundred and sixty-four years ago, or one Neptunian year, Neptune was first observed by telescope. Significantly, it was the first planet to be discovered deliberately: the existence and location of the gas giant were calculated mathematically, and it was subsequently located by telescope, on 24 September 1846, within one degree of the predicted position. Astronomers had hypothesized Neptune’s existence from perturbations in the orbit of its planetary neighbor, Uranus, around the sun, which could only be explained by the presence of another massive object in a nearby orbit. A triumph for the scientific method, and besides, it’s beautiful too.
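
The “one Neptunian year” figure is easy to verify with Kepler’s third law (a textbook check, not part of the original post). With the orbital period T in years and the semi-major axis a in astronomical units:

    T^2 = a^3, \qquad a \approx 30.07\ \text{AU} \;\Rightarrow\; T \approx 30.07^{3/2} \approx 165\ \text{years}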

[div class=attrib]Image courtesy of NASA.[end-div]

Undiscovered

[div class=attrib]From Eurozine:[end-div]

Neurological and Darwinistic strands in the philosophy of consciousness see human beings as no more than our evolved brains. Avoiding naturalistic explanations of human beings’ fundamental difference from other animals requires openness to more expansive approaches, argues Raymond Tallis.

For several decades I have been arguing against what I call biologism. This is the idea, currently dominant within secular humanist circles, that humans are essentially animals (or at least much more beastly than has been hitherto thought) and that we need therefore to look to the biological sciences, and only there, to advance our understanding of human nature. As a result of my criticism of this position I have been accused of being a Cartesian dualist, who thinks that the mind is some kind of a ghost in the machinery of the brain. Worse, it has been suggested that I am opposed to Darwinism, to neuroscience or to science itself. Worst of all, some have suggested that I have a hidden religious agenda. For the record, I regard neuroscience (which was my own area of research) as one of the greatest monuments of the human intellect; I think Cartesian dualism is a lost cause; and I believe that Darwin’s theory is supported by overwhelming evidence. Nor do I have a hidden religious agenda: I am an atheist humanist. And this is in fact the reason why I have watched the rise of biologism with such dismay: it is a consequence of the widespread assumption that the only alternative to a supernatural understanding of human beings is a strictly naturalistic one that sees us as just another kind of beast and, ultimately, as being less conscious agents than pieces of matter stitched into the material world.

This is to do humanity a gross disservice, as I think we are so much more than gifted chimps. Unpacking the most “ordinary” moment of human life reveals knowledge, skills, emotions, intuitions, a sense of past and future and of an infinitely elaborated world, that are not to be found elsewhere in the living world.

Biologism has two strands: “Neuromania” and “Darwinitis”. Neuromania arises out of the belief that human consciousness is identical with neural activity in certain parts of the brain. It follows from this that the best way to investigate what we humans truly are, to understand the origins of our beliefs, our predispositions, our morality and even our aesthetic pleasures, will be to peer into the brains of human subjects using the latest scanning technology. This way we shall know what is really going on when we are having experiences, thinking thoughts, feeling emotions, remembering memories, making decisions, being wise or silly, breaking the law, falling in love and so on.

The other strand is Darwinitis, rooted in the belief that evolutionary theory not only explains the origin of the species H. sapiens – which it does, of course – but also explains humans as they are today; that people are at bottom the organisms forged by the processes of natural selection and nothing more.

[div class=attrib]More from theSource here.[end-div]

Brilliant, but Distant: Most Far-Flung Known Quasar Offers Glimpse into Early Universe

[div class=attrib]From Scientific American:[end-div]

Peering far across space and time, astronomers have located a luminous beacon aglow when the universe was still in its infancy. That beacon, a bright astrophysical object known as a quasar, shines with the luminosity of 63 trillion suns as gas falling into a supermassive black hole compresses, heats up and radiates brightly. It is farther from Earth than any other known quasar—so distant that its light, emitted 13 billion years ago, is only now reaching Earth. Because of its extreme luminosity and record-setting distance, the quasar offers a unique opportunity to study the conditions of the universe as it underwent an important transition early in cosmic history.

By the time the universe was one billion years old, the once-neutral hydrogen gas atoms in between galaxies had been almost completely stripped of their electrons (ionized) by the glow of the first massive stars. But the full timeline of that process, known as re-ionization because it returned protons and electrons to the separated state they occupied in the first 380,000 years post–big bang, is somewhat uncertain. Quasars, with their tremendous intrinsic brightness, should make for excellent markers of the re-ionization process, acting as flashlights to illuminate the intergalactic medium. But quasar hunters working with optical telescopes had only been able to see back as far as 870 million years after the big bang, when the intergalactic medium’s transition from neutral to ionized was almost complete. (The universe is now 13.75 billion years old.) Beyond that point, a quasar’s light has been so stretched, or redshifted, by cosmic expansion that it no longer falls in the visible portion of the electromagnetic spectrum but rather in the longer-wavelength infrared.

Daniel Mortlock, an astrophysicist at Imperial College London, and his colleagues used that fact to their advantage. The researchers looked for objects that showed up in a large-area infrared sky survey but not in a visible-light survey covering the same area of sky, essentially isolating the high-redshift objects. They could thus discover a quasar, known as ULAS J1120+0641, at redshift 7.085, corresponding to a time just 770 million years after the big bang. That places the newfound quasar about 100 million years earlier in cosmic history than the previous record holder, which was at redshift 6.44. Mortlock and his colleagues report their finding in the June 30 issue of Nature. (Scientific American is part of Nature Publishing Group.)
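
As a rough illustration of how a redshift translates into a cosmic age, here is a minimal Python sketch using astropy’s standard cosmology tools (the H0 and Om0 values are assumed fiducial parameters, not necessarily those adopted in the Nature paper):

    # Age of the universe when each quasar's light was emitted,
    # under an assumed flat Lambda-CDM cosmology.
    from astropy.cosmology import FlatLambdaCDM

    cosmo = FlatLambdaCDM(H0=70.4, Om0=0.27)  # assumed fiducial parameters

    for z in (7.085, 6.44):  # new record holder, previous record holder
        age = cosmo.age(z).to("Myr")
        print(f"z = {z}: universe was about {age:.0f} old")
    # roughly 770 Myr and 870 Myr, matching the figures quoted above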

[div class=attrib]More from theSource here.[end-div]

New Tevatron collider result may help explain the matter-antimatter asymmetry in the universe

[div class=attrib]From Symmetry Breaking:[end-div]

About a year ago, the DZero collaboration at Fermilab published a tantalizing result in which the universe unexpectedly showed a preference for matter over antimatter. Now the collaboration has more data, and the evidence for this effect has grown stronger.

The result is extremely exciting: the question of why our universe consists solely of matter is one of the burning scientific questions of our time. Theory predicts that matter and antimatter were made in equal quantities. If something hadn’t slightly favored matter over antimatter, our universe would consist of a bath of photons and little else. Matter wouldn’t exist.

[div class=attrib]Chart: The Standard Model predicts a value near zero for the parameter associated with the difference between the production of muons and antimuons in B meson decays. The DZero results from 2010 and 2011 both differ from zero and are consistent with each other; the vertical bars indicate measurement uncertainty.[end-div]

The 2010 measurement looked at muons and antimuons emerging from the decays of neutral mesons containing bottom quarks, a source that scientists have long expected to be a fruitful place to study the behavior of matter and antimatter under high-energy conditions. DZero scientists found a 1 percent difference between the production of pairs of muons and pairs of antimuons in B meson decays at Fermilab’s Tevatron collider. Like all measurements, this one had an uncertainty associated with it: specifically, there was about a 0.07 percent chance that the result could have come from a random fluctuation in the recorded data. That’s a tiny probability, but since DZero makes thousands of measurements, scientists expect to see the occasional rare fluctuation that turns out to be nothing.

During the last year, the DZero collaboration has taken more data and refined its analysis techniques. In addition, other scientists have raised questions and requested additional cross-checks. One concern was whether the muons and antimuons are actually coming from the decay of B mesons, rather than some other source.

Now, after incorporating almost 50 percent more data and dozens of cross-checks, DZero scientists are even more confident in the strength of their result. The probability that the observed effect is from a random fluctuation has dropped quite a bit and now is only 0.005 percent. DZero scientists will present the details of their analysis in a seminar geared toward particle physicists later today.

Scientists are a cautious bunch and require a high level of certainty to claim a discovery. At the level of certainty achieved in the summer of 2010, particle physicists can claim evidence for an unexpected phenomenon; a claim of discovery requires a higher level of certainty still.

If the earlier measurement were a fluctuation, scientists would expect the significance of the new result to weaken, not strengthen. Instead, the improvement is exactly what scientists expect if the effect is real. But the probability associated with the new result is still too high to claim a discovery: particle physicists require the chance of a random fluctuation to be less than 0.00005 percent (the field’s “five sigma” standard).
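
To connect the quoted probabilities to the “sigma” language physicists actually use, here is a minimal sketch (assuming the quoted numbers are one-sided Gaussian tail probabilities, a common convention):

    # Convert tail probabilities into Gaussian standard deviations (sigma).
    from scipy.stats import norm

    for label, p in [("DZero 2010 result", 0.0007),        # 0.07 percent
                     ("DZero 2011 result", 0.00005),       # 0.005 percent
                     ("discovery threshold", 0.0000005)]:  # 0.00005 percent
        print(f"{label}: p = {p:.1e} -> {norm.isf(p):.1f} sigma")
    # roughly 3.2, 3.9 and 4.9 sigma: "evidence" sits near 3 sigma,
    # while a "discovery" claim requires about 5 sigma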

The new result suggests that DZero is hot on the trail of a crucial clue in one of the defining questions of all time: Why are we here at all?

[div class=attrib]More from theSource here.[end-div]

More subatomic spot changing

[div class=attrib]From the Economist:[end-div]

IN THIS week’s print edition we report a recent result from the T2K collaboration in Japan which has found strong hints that neutrinos, the elusive particles theorists believe to be as abundant in the universe as photons, but which almost never interact with anything, are as fickle as they are coy.

It has been known for some time that neutrinos switch between three types, or flavours, as they zip through space at a smidgen below the speed of light. The flavours are distinguished by the particles which emerge on the rare occasion a neutrino does bump into something. And so, an electron-neutrino conjures up an electron, a muon-neutrino, a muon, and a tau-neutrino, a tau particle (muons and taus are a lot like electrons, but heavier and less stable). Researchers at T2K observed, for the first time, muon-neutrinos transmuting into the electron variety—the one sort of spot-changing that had not been seen before. But their result, with a 0.7% chance of being a fluke, was, by the elevated standards of particle physics, tenuous.

Now, T2K’s rival across the Pacific has made it less so. MINOS beams muon-neutrinos from Fermilab, America’s biggest particle-physics lab, located near Chicago, to a 5,000-tonne detector sitting in the Soudan mine in Minnesota, 735km (450 miles) to the north-west. On June 24th its researchers announced that they, too, had witnessed some of their muon-neutrinos change to the electron variety along the way. To be precise, the experiment recorded 62 events which could have been caused by electron-neutrinos. If the proposed transmutation does not occur in nature, it ought to have seen no more than 49 (the result of electron-neutrinos streaming in from space or radioactive rocks on Earth). Were the T2K figures spot on, as it were, it should have seen 71.
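
A crude way to gauge how surprising that count is involves a Poisson tail probability (a back-of-the-envelope sketch of our own, ignoring the experiment’s systematic uncertainties):

    # Chance of counting 62 or more events if the true background mean is 49.
    from scipy.stats import poisson

    p = poisson.sf(61, 49)  # P(N >= 62) given an expected background of 49
    print(f"p = {p:.3f}")   # roughly 0.04, a few-percent fluke probability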

As such, the result from MINOS, which uses different methods to study the same phenomenon, puts the transmutation hypothesis on a firmer footing. This advances the search for a number known as delta (δ), one of the parameters of the formula which physicists think describes neutrinos’ spot-changing antics. Physicists are keen to pin it down, since it also governs the description of the putative asymmetry between matter and antimatter that left matter as the dominant feature of the universe after the Big Bang.

In light of the latest result, it remains unclear whether either the American or the Japanese experiment is precise enough to measure delta. In 2013, however, MINOS will be supplanted by NOvA, a fancier device in northern Minnesota, 810km from Fermilab’s muon-neutrino cannon. That ought to do the trick. Then again, nature has the habit of springing surprises.

And in more ways than one. Days after T2K’s run was cut short by the earthquake that shook Japan in March, devastating the muon-neutrino source at J-PARC, the country’s main particle-accelerator complex, MINOS had its own share of woe when the Soudan mine sustained significant flooding. Fortunately, the experiment itself escaped relatively unscathed. But the eerie coincidence spurred some boffins, not a particularly superstitious bunch, to speak of a neutrino curse. Fingers crossed that isn’t the case.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Fermilab.[end-div]

Largest cosmic structures ‘too big’ for theories

[div class=attrib]From New Scientist:[end-div]

Space is festooned with vast “hyperclusters” of galaxies, a new cosmic map suggests. It could mean that gravity or dark energy – or perhaps something completely unknown – is behaving very strangely indeed.

We know that the universe was smooth just after its birth. Measurements of the cosmic microwave background radiation (CMB), the light emitted 370,000 years after the big bang, reveal only very slight variations in density from place to place. Gravity then took hold and amplified these variations into today’s galaxies and galaxy clusters, which in turn are arranged into big strings and knots called superclusters, with relatively empty voids in between.

On even larger scales, though, cosmological models say that the expansion of the universe should trump the clumping effect of gravity. That means there should be very little structure on scales larger than a few hundred million light years.

But the universe, it seems, did not get the memo. Shaun Thomas of University College London (UCL), and colleagues have found aggregations of galaxies stretching for more than 3 billion light years. The hyperclusters are not very sharply defined, with only a couple of per cent variation in density from place to place, but even that density contrast is twice what theory predicts.

“This is a challenging result for the standard cosmological models,” says Francesco Sylos Labini of the University of Rome, Italy, who was not involved in the work.

Colour guide

The clumpiness emerges from an enormous catalogue of galaxies called the Sloan Digital Sky Survey, compiled with a telescope at Apache Point, New Mexico. The survey plots the 2D positions of galaxies across a quarter of the sky. “Before this survey people were looking at smaller areas,” says Thomas. “As you look at more of the sky, you start to see larger structures.”

A 2D picture of the sky cannot reveal the true large-scale structure in the universe. To get the full picture, Thomas and his colleagues also used the colour of galaxies recorded in the survey.

More distant galaxies look redder than nearby ones because their light has been stretched to longer wavelengths while travelling through an expanding universe. By selecting a variety of bright, old elliptical galaxies whose natural colour is well known, the team calculated approximate distances to more than 700,000 objects. The upshot is a rough 3D map of one quadrant of the universe, showing the hazy outlines of some enormous structures.
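
The logic of turning colour into distance can be sketched in a few lines of Python (a simplified illustration with hypothetical numbers; the team’s actual photometric-redshift analysis is far more sophisticated):

    # Redshift from how far a known spectral feature has been stretched,
    # then a distance from that redshift under an assumed flat Lambda-CDM
    # cosmology, as needed for a rough 3D map.
    from astropy.cosmology import FlatLambdaCDM

    def redshift_from_stretch(lam_observed, lam_emitted):
        return lam_observed / lam_emitted - 1.0

    # e.g. the 4000-angstrom break of an old elliptical galaxy observed
    # at 6000 angstroms (hypothetical numbers)
    z = redshift_from_stretch(6000.0, 4000.0)  # z = 0.5

    cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # assumed fiducial parameters
    print(cosmo.comoving_distance(z))      # approximate distance, in Mpc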

[div class=attrib]More from theSource here.[end-div]