Category Archives: BigBang

Pale Blue 2Dot0

Thanks to the prodding of Carl Sagan, just over 25 years ago, on February 14, 1990, to be exact, the Voyager 1 spacecraft turned its camera towards Earth and snapped what has since become an iconic image. It showed our home planet as a very small, very pale blue dot — much as you’d expect from a distance of around 3.7 billion miles.

Though much closer to Earth, the Cassini spacecraft captured a similar shot of our planet in 2013. Cassini began its seemingly never-ending orbits of discovery around the Saturnian system more than 10 years ago. In the image, Earth in the distance at center right is dwarfed by Saturn’s rings in the foreground. A rare, beautiful and remarkable image!

Image: Saturn’s rings and Earth in the same frame. Taken on July 19, 2013, via the wide-angle camera on NASA’s Cassini spacecraft. Courtesy: NASA/JPL-Caltech/Space Science Institute.

Multitasking: A Powerful and Diabolical Illusion

Our increasingly ubiquitous technology makes possible all manner of things that would have been insurmountable just decades ago. We carry smartphones that pack more computational power than the mainframes of a generation ago. Yet for all this power at our fingertips we seem to forget that we are still very much human animals with limitations. One such “shortcoming” [your friendly editor believes it’s a boon] is our inability to multitask like our phones. I’ve written about this before, and am compelled to do so again after reading this thoughtful essay by Daniel J. Levitin, extracted from his book The Organized Mind: Thinking Straight in the Age of Information Overload. I even had to use his phrasing for the title of this post.

From the Guardian:

Our brains are busier than ever before. We’re assaulted with facts, pseudo facts, jibber-jabber, and rumour, all posing as information. Trying to figure out what you need to know and what you can ignore is exhausting. At the same time, we are all doing more. Thirty years ago, travel agents made our airline and rail reservations, salespeople helped us find what we were looking for in shops, and professional typists or secretaries helped busy people with their correspondence. Now we do most of those things ourselves. We are doing the jobs of 10 different people while still trying to keep up with our lives, our children and parents, our friends, our careers, our hobbies, and our favourite TV shows.

Our smartphones have become Swiss army knife–like appliances that include a dictionary, calculator, web browser, email, Game Boy, appointment calendar, voice recorder, guitar tuner, weather forecaster, GPS, texter, tweeter, Facebook updater, and flashlight. They’re more powerful and do more things than the most advanced computer at IBM corporate headquarters 30 years ago. And we use them all the time, part of a 21st-century mania for cramming everything we do into every single spare moment of downtime. We text while we’re walking across the street, catch up on email while standing in a queue – and while having lunch with friends, we surreptitiously check to see what our other friends are doing. At the kitchen counter, cosy and secure in our domicile, we write our shopping lists on smartphones while we are listening to that wonderfully informative podcast on urban beekeeping.

But there’s a fly in the ointment. Although we think we’re doing several things at once, multitasking, this is a powerful and diabolical illusion. Earl Miller, a neuroscientist at MIT and one of the world experts on divided attention, says that our brains are “not wired to multitask well… When people think they’re multitasking, they’re actually just switching from one task to another very rapidly. And every time they do, there’s a cognitive cost in doing so.” So we’re not actually keeping a lot of balls in the air like an expert juggler; we’re more like a bad amateur plate spinner, frantically switching from one task to another, ignoring the one that is not right in front of us but worried it will come crashing down any minute. Even though we think we’re getting a lot done, ironically, multitasking makes us demonstrably less efficient.

Multitasking has been found to increase the production of the stress hormone cortisol as well as the fight-or-flight hormone adrenaline, which can overstimulate your brain and cause mental fog or scrambled thinking. Multitasking creates a dopamine-addiction feedback loop, effectively rewarding the brain for losing focus and for constantly searching for external stimulation. To make matters worse, the prefrontal cortex has a novelty bias, meaning that its attention can be easily hijacked by something new – the proverbial shiny objects we use to entice infants, puppies, and kittens. The irony here for those of us who are trying to focus amid competing activities is clear: the very brain region we need to rely on for staying on task is easily distracted. We answer the phone, look up something on the internet, check our email, send an SMS, and each of these things tweaks the novelty-seeking, reward-seeking centres of the brain, causing a burst of endogenous opioids (no wonder it feels so good!), all to the detriment of our staying on task. It is the ultimate empty-caloried brain candy. Instead of reaping the big rewards that come from sustained, focused effort, we instead reap empty rewards from completing a thousand little sugar-coated tasks.

In the old days, if the phone rang and we were busy, we either didn’t answer or we turned the ringer off. When all phones were wired to a wall, there was no expectation of being able to reach us at all times – one might have gone out for a walk or been between places – and so if someone couldn’t reach you (or you didn’t feel like being reached), it was considered normal. Now more people have mobile phones than have toilets. This has created an implicit expectation that you should be able to reach someone when it is convenient for you, regardless of whether it is convenient for them. This expectation is so ingrained that people in meetings routinely answer their mobile phones to say, “I’m sorry, I can’t talk now, I’m in a meeting.” Just a decade or two ago, those same people would have let a landline on their desk go unanswered during a meeting, so different were the expectations for reachability.

Just having the opportunity to multitask is detrimental to cognitive performance. Glenn Wilson, former visiting professor of psychology at Gresham College, London, calls it info-mania. His research found that being in a situation where you are trying to concentrate on a task, and an email is sitting unread in your inbox, can reduce your effective IQ by 10 points. And although people ascribe many benefits to marijuana, including enhanced creativity and reduced pain and stress, it is well documented that its chief ingredient, cannabinol, activates dedicated cannabinol receptors in the brain and interferes profoundly with memory and with our ability to concentrate on several things at once. Wilson showed that the cognitive losses from multitasking are even greater than the cognitive losses from pot-smoking.

Russ Poldrack, a neuroscientist at Stanford, found that learning information while multitasking causes the new information to go to the wrong part of the brain. If students study and watch TV at the same time, for example, the information from their schoolwork goes into the striatum, a region specialised for storing new procedures and skills, not facts and ideas. Without the distraction of TV, the information goes into the hippocampus, where it is organised and categorised in a variety of ways, making it easier to retrieve. MIT’s Earl Miller adds, “People can’t do [multitasking] very well, and when they say they can, they’re deluding themselves.” And it turns out the brain is very good at this deluding business.

Then there are the metabolic costs that I wrote about earlier. Asking the brain to shift attention from one activity to another causes the prefrontal cortex and striatum to burn up oxygenated glucose, the same fuel they need to stay on task. And the kind of rapid, continual shifting we do with multitasking causes the brain to burn through fuel so quickly that we feel exhausted and disoriented after even a short time. We’ve literally depleted the nutrients in our brain. This leads to compromises in both cognitive and physical performance. Among other things, repeated task switching leads to anxiety, which raises levels of the stress hormone cortisol in the brain, which in turn can lead to aggressive and impulsive behaviour. By contrast, staying on task is controlled by the anterior cingulate and the striatum, and once we engage the central executive mode, staying in that state uses less energy than multitasking and actually reduces the brain’s need for glucose.

To make matters worse, lots of multitasking requires decision-making: Do I answer this text message or ignore it? How do I respond to this? How do I file this email? Do I continue what I’m working on now or take a break? It turns out that decision-making is also very hard on your neural resources and that little decisions appear to take up as much energy as big ones. One of the first things we lose is impulse control. This rapidly spirals into a depleted state in which, after making lots of insignificant decisions, we can end up making truly bad decisions about something important. Why would anyone want to add to their daily weight of information processing by trying to multitask?

Read the entire article here.

Active SETI

google-search-aliens

More than half a century after the SETI (Search for Extra-Terrestrial Intelligence) experiment began, some astronomers are thinking of SETI 2.0, or active SETI. Rather than just passively listening for alien-made signals emanating from far distant exoplanets, these astronomers wish to take the work a bold step further. They’re planning to transmit messages in the hope that someone or something will be listening. And that has opponents of the plan rather worried. If someone or something does hear us, will they come looking, and if so, then what? Will the process result in a real-life The Day the Earth Stood Still or Alien? And, more importantly, will they all look astonishingly Hollywood-like?

From BBC:

Scientists at a US conference have said it is time to try actively to contact intelligent life on other worlds.

Researchers involved in the search for extra-terrestrial life are considering what the message from Earth should be.

The call was made by the Search for Extra Terrestrial Intelligence institute at a meeting of the American Association for the Advancement of Science in San Jose.

But others argued that making our presence known might be dangerous.

Researchers at the Seti institute have been listening for signals from outer space for more than 30 years using radio telescope facilities in the US. So far there has been no sign of ET.

The organisation’s director, Dr Seth Shostak, told attendees to the AAAS meeting that it was now time to step up the search.

“Some of us at the institute are interested in ‘active Seti’, not just listening but broadcasting something to some nearby stars because maybe there is some chance that if you wake somebody up you’ll get a response,” he told BBC News.

The concerns are obvious, but sitting in his office at the institute in Mountain View, California, in the heart of Silicon Valley, he expresses them with characteristic, impish glee.

Game over?

“A lot of people are against active Seti because it is dangerous. It is like shouting in the jungle. You don’t know what is out there; you better not do it. If you incite the aliens to obliterate the planet, you wouldn’t want that on your tombstone, right?”

I couldn’t argue with that. But initially, I could scarcely believe I was having this conversation at a serious research institute rather than at a science fiction convention. The sci-fi feel of our talk was underlined by the toy figures of bug-eyed aliens that cheerfully decorate the office.

But Dr Shostak is a credible and popular figure and has been invited to present his arguments.

Leading astronomers, anthropologists and social scientists will gather at his institute after the AAAS meeting for a symposium to flesh out plans for a proposal for active Seti to put to the public and politicians.

High on the agenda is whether such a move would, as he put it so starkly, lead to the “obliteration” of the planet.

“I don’t see why the aliens would have any incentive to do that,” Dr Shostak tells me.

“Beyond that, we have been telling them willy-nilly that we are here for 70 years now. They are not very interesting messages but the early TV broadcasts, the early radio, the radar from the Second World War – all that has leaked off the Earth.

“Any society that could come here and ruin our whole day by incinerating the planet already knows we are here.”

Read the entire article here.

Image courtesy of Google Search.

Death Explained

StillLifeWithASkull

Let’s leave the mysteries of the spiritual afterlife aside for our various religions to fight over, and concentrate on what really happens after death. It may not please many aesthetes, but the cyclic process is beautiful nonetheless.

From Raw Story:

“It might take a little bit of force to break this up,” says mortician Holly Williams, lifting John’s arm and gently bending it at the fingers, elbow and wrist. “Usually, the fresher a body is, the easier it is for me to work on.”

Williams speaks softly and has a happy-go-lucky demeanour that belies the nature of her work. Raised and now employed at a family-run funeral home in north Texas, she has seen and handled dead bodies on an almost daily basis since childhood. Now 28 years old, she estimates that she has worked on something like 1,000 bodies.

Her work involves collecting recently deceased bodies from the Dallas–Fort Worth area and preparing them for their funeral.

“Most of the people we pick up die in nursing homes,” says Williams, “but sometimes we get people who died of gunshot wounds or in a car wreck. We might get a call to pick up someone who died alone and wasn’t found for days or weeks, and they’ll already be decomposing, which makes my work much harder.”

John had been dead about four hours before his body was brought into the funeral home. He had been relatively healthy for most of his life. He had worked his whole life on the Texas oil fields, a job that kept him physically active and in pretty good shape. He had stopped smoking decades earlier and drank alcohol moderately. Then, one cold January morning, he suffered a massive heart attack at home (apparently triggered by other, unknown, complications), fell to the floor, and died almost immediately. He was just 57 years old.

Now, John lay on Williams’ metal table, his body wrapped in a white linen sheet, cold and stiff to the touch, his skin purplish-grey – telltale signs that the early stages of decomposition were well under way.

Self-digestion

Far from being ‘dead’, a rotting corpse is teeming with life. A growing number of scientists view a rotting corpse as the cornerstone of a vast and complex ecosystem, which emerges soon after death and flourishes and evolves as decomposition proceeds.

Decomposition begins several minutes after death with a process called autolysis, or self-digestion. Soon after the heart stops beating, cells become deprived of oxygen, and their acidity increases as the toxic by-products of chemical reactions begin to accumulate inside them. Enzymes start to digest cell membranes and then leak out as the cells break down. This usually begins in the liver, which is rich in enzymes, and in the brain, which has a high water content. Eventually, though, all other tissues and organs begin to break down in this way. Damaged blood cells begin to spill out of broken vessels and, aided by gravity, settle in the capillaries and small veins, discolouring the skin.

Body temperature also begins to drop, until it has acclimatised to its surroundings. Then, rigor mortis – “the stiffness of death” – sets in, starting in the eyelids, jaw and neck muscles, before working its way into the trunk and then the limbs. In life, muscle cells contract and relax due to the actions of two filamentous proteins (actin and myosin), which slide along each other. After death, the cells are depleted of their energy source and the protein filaments become locked in place. This causes the muscles to become rigid and locks the joints.

During these early stages, the cadaveric ecosystem consists mostly of the bacteria that live in and on the living human body. Our bodies host huge numbers of bacteria; every one of the body’s surfaces and corners provides a habitat for a specialised microbial community. By far the largest of these communities resides in the gut, which is home to trillions of bacteria of hundreds or perhaps thousands of different species.

The gut microbiome is one of the hottest research topics in biology; it’s been linked to roles in human health and a plethora of conditions and diseases, from autism and depression to irritable bowel syndrome and obesity. But we still know little about these microbial passengers. We know even less about what happens to them when we die.

Putrefaction

Scattered among the pine trees in Huntsville, Texas, lie around half a dozen human cadavers in various stages of decay. The two most recently placed bodies are spread-eagled near the centre of the small enclosure with much of their loose, grey-blue mottled skin still intact, their ribcages and pelvic bones visible between slowly putrefying flesh. A few metres away lies another, fully skeletonised, with its black, hardened skin clinging to the bones, as if it were wearing a shiny latex suit and skullcap. Further still, beyond other skeletal remains scattered by vultures, lies a third body within a wood and wire cage. It is nearing the end of the death cycle, partly mummified. Several large, brown mushrooms grow from where an abdomen once was.

For most of us the sight of a rotting corpse is at best unsettling and at worst repulsive and frightening, the stuff of nightmares. But this is everyday work for the folks at the Southeast Texas Applied Forensic Science Facility. Opened in 2009, the facility is located within a 247-acre area of National Forest owned by Sam Houston State University (SHSU). Within it, a nine-acre plot of densely wooded land has been sealed off from the wider area and further subdivided by 10-foot-high green wire fences topped with barbed wire.

In late 2011, SHSU researchers Sibyl Bucheli and Aaron Lynne and their colleagues placed two fresh cadavers here, and left them to decay under natural conditions.

Once self-digestion is under way and bacteria have started to escape from the gastrointestinal tract, putrefaction begins. This is molecular death – the breakdown of soft tissues even further, into gases, liquids and salts. It is already under way at the earlier stages of decomposition but really gets going when anaerobic bacteria get in on the act.

Putrefaction is associated with a marked shift from aerobic bacterial species, which require oxygen to grow, to anaerobic ones, which do not. These then feed on the body’s tissues, fermenting the sugars in them to produce gaseous by-products such as methane, hydrogen sulphide and ammonia, which accumulate within the body, inflating (or ‘bloating’) the abdomen and sometimes other body parts.

This causes further discolouration of the body. As damaged blood cells continue to leak from disintegrating vessels, anaerobic bacteria convert haemoglobin molecules, which once carried oxygen around the body, into sulfhaemoglobin. The presence of this molecule in settled blood gives skin the marbled, greenish-black appearance characteristic of a body undergoing active decomposition.

Colonisation

When a decomposing body starts to purge, it becomes fully exposed to its surroundings. At this stage, the cadaveric ecosystem really comes into its own: a ‘hub’ for microbes, insects and scavengers.

Two species closely linked with decomposition are blowflies and flesh flies (and their larvae). Cadavers give off a foul, sickly-sweet odour, made up of a complex cocktail of volatile compounds that changes as decomposition progresses. Blowflies detect the smell using specialised receptors on their antennae, then land on the cadaver and lay their eggs in orifices and open wounds.

Each fly deposits around 250 eggs that hatch within 24 hours, giving rise to small first-stage maggots. These feed on the rotting flesh and then moult into larger maggots, which feed for several hours before moulting again. After feeding some more, these yet larger, and now fattened, maggots wriggle away from the body. They then pupate and transform into adult flies, and the cycle repeats until there’s nothing left for them to feed on.

Under the right conditions, an actively decaying body will have large numbers of stage-three maggots feeding on it. This ‘maggot mass’ generates a lot of heat, raising the inside temperature by more than 10°C. Like penguins huddling at the South Pole, individual maggots within the mass are constantly on the move. But whereas penguins huddle to keep warm, maggots in the mass move around to stay cool.

“It’s a double-edged sword,” Bucheli explains, surrounded by large toy insects and a collection of Monster High dolls in her SHSU office. “If you’re always at the edge, you might get eaten by a bird, and if you’re always in the centre, you might get cooked. So they’re constantly moving from the centre to the edges and back.”

Purging

“We’re looking at the purging fluid that comes out of decomposing bodies,” says Daniel Wescott, director of the Forensic Anthropology Center at Texas State University in San Marcos.

Wescott, an anthropologist specialising in skull structure, is using a micro-CT scanner to analyse the microscopic structure of the bones brought back from the body farm. He also collaborates with entomologists and microbiologists – including Javan, who has been busy analysing samples of cadaver soil collected from the San Marcos facility – as well as computer engineers and a pilot, who operate a drone that takes aerial photographs of the facility.

“I was reading an article about drones flying over crop fields, looking at which ones would be best to plant in,” he says. “They were looking at near-infrared, and organically rich soils were a darker colour than the others. I thought if they can do that, then maybe we can pick up these little circles.”

Those “little circles” are cadaver decomposition islands. A decomposing body significantly alters the chemistry of the soil beneath it, causing changes that may persist for years. Purging – the seeping of broken-down materials out of what’s left of the body – releases nutrients into the underlying soil, and maggot migration transfers much of the energy in a body to the wider environment. Eventually, the whole process creates a ‘cadaver decomposition island’, a highly concentrated area of organically rich soil. As well as releasing nutrients into the wider ecosystem, this attracts other organic materials, such as dead insects and faecal matter from larger animals.

According to one estimate, an average human body consists of 50–75 per cent water, and every kilogram of dry body mass eventually releases 32 g of nitrogen, 10 g of phosphorus, 4 g of potassium and 1 g of magnesium into the soil. Initially, it kills off some of the underlying and surrounding vegetation, possibly because of nitrogen toxicity or because of antibiotics found in the body, which are secreted by insect larvae as they feed on the flesh. Ultimately, though, decomposition is beneficial for the surrounding ecosystem.
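
Those per-kilogram figures are easy to scale. Here is a rough back-of-the-envelope sketch in Python: the 70 kg body mass and 60 per cent water content are illustrative assumptions of mine, while the grams-per-kilogram release rates are the article's estimates.

# Rough sketch of the nutrient figures quoted above.
# Body mass and water fraction are illustrative assumptions;
# the per-kilogram release rates come from the article's estimate.
BODY_MASS_KG = 70.0      # hypothetical adult
WATER_FRACTION = 0.60    # assumed, within the quoted 50-75% range

RELEASE_G_PER_KG_DRY = {  # grams released per kg of dry body mass
    "nitrogen": 32.0,
    "phosphorus": 10.0,
    "potassium": 4.0,
    "magnesium": 1.0,
}

dry_mass_kg = BODY_MASS_KG * (1.0 - WATER_FRACTION)
print(f"Dry body mass: {dry_mass_kg:.0f} kg")
for element, grams_per_kg in RELEASE_G_PER_KG_DRY.items():
    print(f"{element}: roughly {grams_per_kg * dry_mass_kg:.0f} g returned to the soil")

On those assumptions a single body returns just under a kilogram of nitrogen to the soil, which helps explain both the initial die-off of vegetation and the enriched growth that follows.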

According to the laws of thermodynamics, energy cannot be created or destroyed, only converted from one form to another. In other words: things fall apart, converting their mass to energy while doing so. Decomposition is one final, morbid reminder that all matter in the universe must follow these fundamental laws. It breaks us down, equilibrating our bodily matter with its surroundings, and recycling it so that other living things can put it to use.

Ashes to ashes, dust to dust.

Read the entire article here.

Image: Still-Life with a Skull, 17th-century painting by Philippe de Champaigne. Public Domain.

Satan’s Copper

1935_Indian_Head_Buffalo_Nickel

Nickel is element 28 on the periodic table. Its name was bestowed by German copper miners and is derived from the word “Kupfernickel”, which translates roughly to “Old Nick’s copper”, the devil’s copper. Besides being a component of the eponymous US five-cent coin — nowadays it’s actually 75 percent copper — it has some rather surprising uses, from making margarine to forming critical components of modern jet engines.

From the BBC:

It made the age of cheap foreign holidays possible, and for years it was what made margarine spreadable. Nickel may not be the flashiest metal but modern life would be very different without it.

Deep in the bowels of University College London lies a machine workshop, where metals are cut, lathed and shaped into instruments and equipment for the various science departments.

Chemistry professor Andrea Sella stands before me holding a thick, two-metre-long pipe made of Monel, a nickel-copper alloy. Then he lets it fall to the ground with a deafening clang.

“That really speaks to the hardness and stiffness of this metal,” he explains, picking up the undamaged pipe.

But another reason Monel is a “fantastic alloy”, he says, is that it resists corrosion. Chemists need ways of handling highly reactive materials – powerful acids perhaps, or gases like fluorine and chlorine – so they need something that won’t itself react with them.

Gold, silver or platinum might do, but imagine the price of a two-metre-long pipe made of gold. Nickel by contrast is cheap and abundant, so it crops up wherever corrosion is a concern – from chemists’ spatulas to the protective coating on bicycle sprockets.

But nickel can produce other alloys far quirkier than Monel, Sella is eager to explain.

Take Invar, an alloy of nickel and iron. Uniquely, it hardly expands or contracts with changes in temperature – a property that comes in very handy in precision instruments and clocks, whose workings can be interfered with by the “thermal expansion” of other lowlier metals.

Then there is Nitinol.

Sella produces a wire in the shape of a paperclip – but it is far too easy to twist out of shape to be of use holding sheets of paper together. He mangles it in his fingers, then dips it into a cup of boiling water. It immediately writhes about… and turns back into a perfect paperclip.

Nitinol has a special memory for the shape in which it is first formed. And its composition can be tuned, so that at a particular temperature it will always return to that original shape. This means, for example, that a rolled-up Nitinol stent can be inserted into a blood vessel. As it warms to body temperature, the stent opens itself out, allowing blood to flow through it.

But all these alloys pale in significance compared to a special class of alloys – so special they are called “superalloys”. These are the alloys that made the jet age possible.

The first jet engines were developed simultaneously in the 1930s and 40s, by Frank Whittle in the UK and by Hans von Ohain in Germany, the two men on opposing sides of an accelerating arms race.

Those engines, made of steel, had serious shortcomings.

“They didn’t have the temperature capability to go above about 500C,” explains Mike Hicks, head of materials at Rolls-Royce, the UK’s biggest manufacturer of jet turbines. “Its strength falls off quite quickly and its corrosion resistance isn’t good.”

In response, the Rolls-Royce team that took up Whittle’s work in the 1940s went back to the drawing board – one with the periodic table pinned on to it.

Tungsten was too heavy. Copper melted at too low a temperature. But nickel – with a bit of chromium mixed in – was the Goldilocks recipe. It tolerated high temperatures, it was strong, corrosion-resistant, cheap and light.

Today, the descendants of these early superalloys still provide most of the back end of turbines – both those used on jet planes, and those used in power generation.

“The turbine blades have to operate in the hottest part of the engine, and it’s spinning at a very high speed,” says Hicks’s colleague Neil Glover, head of materials technology research at Rolls-Royce.

“Each one of these blades extracts the same power as a Formula 1 racing car engine, and there are 68 of these in the core of the modern gas turbine engine.”

Read the entire article here.

Image: 1935 Buffalo Nickel. Public Domain.

Belief and the Falling Light

[tube]dpmXyJrs7iU[/tube]

Many of us now accept that lights falling from the sky are rocky interlopers from within our solar system, rather than visiting angels or signs from an angry (or mysteriously benevolent) God. A new analysis prompted by the meteor that exploded over Chelyabinsk, Russia, in 2013 suggests that one of the key founders of Christianity may have witnessed a similar natural phenomenon around two thousand years ago. However, at the time, Saul (later to become Paul the evangelist) interpreted the dazzling light on the road to Damascus — Acts of the Apostles, New Testament — as a message from a Christian God. The rest, as they say, is history. Luckily, recent scientific progress now means that most of us no longer establish new religious movements based on fireballs in the sky. But we are awed nonetheless.

From the New Scientist:

Nearly two thousand years ago, a man named Saul had an experience that changed his life, and possibly yours as well. According to Acts of the Apostles, the fifth book of the biblical New Testament, Saul was on the road to Damascus, Syria, when he saw a bright light in the sky, was blinded and heard the voice of Jesus. Changing his name to Paul, he became a major figure in the spread of Christianity.

William Hartmann, co-founder of the Planetary Science Institute in Tucson, Arizona, has a different explanation for what happened to Paul. He says the biblical descriptions of Paul’s experience closely match accounts of the fireball meteor seen above Chelyabinsk, Russia, in 2013.

Hartmann has detailed his argument in the journal Meteoritics & Planetary Science (doi.org/3vn). He analyses three accounts of Paul’s journey, thought to have taken place around AD 35. The first is a third-person description of the event, thought to be the work of one of Jesus’s disciples, Luke. The other two quote what Paul is said to have subsequently told others.

“Everything they are describing in those three accounts in the book of Acts are exactly the sequence you see with a fireball,” Hartmann says. “If that first-century document had been anything other than part of the Bible, that would have been a straightforward story.”

But the Bible is not just any ancient text. Paul’s Damascene conversion and subsequent missionary journeys around the Mediterranean helped build Christianity into the religion it is today. If his conversion was indeed as Hartmann explains it, then a random space rock has played a major role in determining the course of history (see “Christianity minus Paul”).

That’s not as strange as it sounds. A large asteroid impact helped kill off the dinosaurs, paving the way for mammals to dominate the Earth. So why couldn’t a meteor influence the evolution of our beliefs?

“It’s well recorded that extraterrestrial impacts have helped to shape the evolution of life on this planet,” says Bill Cooke, head of NASA’s Meteoroid Environment Office in Huntsville, Alabama. “If it was a Chelyabinsk fireball that was responsible for Paul’s conversion, then obviously that had a great impact on the growth of Christianity.”

Hartmann’s argument is possible now because of the quality of observations of the Chelyabinsk incident. The 2013 meteor is the most well-documented example of larger impacts that occur perhaps only once in 100 years. Before 2013, the 1908 blast in Tunguska, also in Russia, was the best example, but it left just a scattering of seismic data, millions of flattened trees and some eyewitness accounts. With Chelyabinsk, there is a clear scientific argument to be made, says Hartmann. “We have observational data that match what we see in this first-century account.”

Read the entire article here.

Video: Meteor above Chelyabinsk, Russia in 2013. Courtesy of Tuvix72.

Dark Matter May Cause Cancer and Earthquakes

Abell 1689

Leave aside the fact that there is no direct evidence for the existence of dark matter. In fact, theories that indirectly point to its existence seem rather questionable as well. That said, cosmologists are increasingly convinced that dark matter’s gravitational effects can be derived from recent observations of gravitationally lensed galaxy clusters. Some researchers postulate that this eerily murky non-substance — it doesn’t interact with anything in our visible universe except, perhaps, through gravity — may be a cause of phenomena much closer to home. All very interesting.

From NYT:

Earlier this year, Dr. Sabine Hossenfelder, a theoretical physicist in Stockholm, made the jarring suggestion that dark matter might cause cancer. She was not talking about the “dark matter” of the genome (another term for junk DNA) but about the hypothetical, lightless particles that cosmologists believe pervade the universe and hold the galaxies together.

Though it has yet to be directly detected, dark matter is presumed to exist because we can see the effects of its gravity. As its invisible particles pass through our bodies, they could be mutating DNA, the theory goes, adding at an extremely low level to the overall rate of cancer.

It was unsettling to see two such seemingly different realms, cosmology and oncology, suddenly juxtaposed. But that was just the beginning. Shortly after Dr. Hossenfelder broached her idea in an online essay, Michael Rampino, a professor at New York University, added geology and paleontology to the picture.

Dark matter, he proposed in an article for the Royal Astronomical Society, is responsible for the mass extinctions that have periodically swept Earth, including the one that killed the dinosaurs.

His idea is based on speculations by other scientists that the Milky Way is sliced horizontally through its center by a thin disk of dark matter. As the sun, traveling around the galaxy, bobs up and down through this darkling plane, it generates gravitational ripples strong enough to dislodge distant comets from their orbits, sending them hurtling toward Earth.

An earlier version of this hypothesis was put forth last year by the Harvard physicists Lisa Randall and Matthew Reece. But Dr. Rampino has added another twist: During Earth’s galactic voyage, dark matter accumulates in its core. There the particles self-destruct, generating enough heat to cause deadly volcanic eruptions. Struck from above and below, the dinosaurs succumbed.

It is surprising to see something as abstract as dark matter take on so much solidity, at least in the human mind. The idea was invented in the early 1930s as a theoretical contrivance — a means of explaining observations that otherwise didn’t make sense.

Galaxies appear to be rotating so fast that they should have spun apart long ago, throwing off stars like sparks from a Fourth of July pinwheel. There just isn’t enough gravity to hold a galaxy together, unless you assume that it hides a huge amount of unseen matter — particles that neither emit nor absorb light.
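
As a minimal sketch of the standard reasoning (the symbols here are the usual textbook ones, not taken from the article): a star in a roughly circular orbit of radius r at speed v requires an enclosed gravitating mass of

$$M(r) = \frac{v^{2}\, r}{G},$$

so if the measured orbital speeds stay roughly flat well beyond the radius that contains most of the visible stars, the enclosed mass M(r) must keep growing with r, and the excess is attributed to matter we cannot see.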

Some mavericks propose alternatives, attempting to tweak the equations of gravity to account for what seems like missing mass. But for most cosmologists, the idea of unseeable matter has become so deeply ingrained that it has become almost impossible to do without it.

Said to be five times more abundant than the stuff we can see, dark matter is a crucial component of the theory behind gravitational lensing, in which large masses like galaxies can bend light beams and cause stars to appear in unexpected parts of the sky.

That was the explanation for the spectacular observation of an “Einstein Cross” reported last month. Acting like an enormous lens, a cluster of galaxies deflected the light of a supernova into four images — a cosmological mirage. The light for each reflection followed a different path, providing glimpses of four different moments of the explosion.

But not even a galactic cluster exerts enough gravity to bend light so severely unless you postulate that most of its mass consists of hypothetical dark matter. In fact, astronomers are so sure that dark matter exists that they have embraced gravitational lensing as a tool to map its extent.

Dark matter, in other words, is used to explain gravitational lensing, and gravitational lensing is taken as more evidence for dark matter.

Some skeptics have wondered if this is a modern-day version of what ancient astronomers called “saving the phenomena.” With enough elaborations, a theory can account for what we see without necessarily describing reality. The classic example is the geocentric model of the heavens that Ptolemy laid out in the Almagest, with the planets orbiting Earth along paths of complex curlicues.

Ptolemy apparently didn’t care whether his filigrees were real. What was important to him was that his model worked, predicting planetary movements with great precision.

Modern scientists are not ready to settle for such subterfuge. To show that dark matter resides in the world and not just in their equations, they are trying to detect it directly.

Though its identity remains unknown, most theorists are betting that dark matter consists of WIMPs — weakly interacting massive particles. If they really exist, it might be possible to glimpse them when they interact with ordinary matter.

Read the entire article here.

Image: Abell 1689 galaxy cluster. Courtesy of NASA, ESA, and D. Coe (NASA JPL/Caltech and STScI).

A New Mobile App or Genomic Understanding?

Eyjafjallajökull

Silicon Valley has been a tremendous incubator for some of the most important inventions of recent decades: the first integrated circuits, which led to Intel; the first true personal computer, which led to Apple. Yet this esteemed venture capital (VC) community now seems to need an injection of genuine innovation itself. Aren’t we all getting a little jaded by yet another “new, great mobile app” — valued in the tens of billions (but with no revenue model) — courtesy of a bright young group of 20-somethings?

It is indeed gratifying to see innovators, young and old, rewarded for their creativity and perseverance. Yet we should be encouraging more of our pioneers to look beyond the next cool smartphone invention. Perhaps our technological and industrial luminaries and their retinues of futurists could do us all a favor by channeling more of their speculative funds toward longer-term and more significant endeavors: cost-effective desalination; cheaper medications; understanding and curing our insidious diseases; antibiotic replacements; more effective recycling; cleaner power; cheaper and stronger infrastructure; more effective education. These are all difficult problems. But therein lies the reward.

Clearly some pioneering businesses are investing in these areas. But isn’t it time we insisted that the majority of our private and public intellectual (and financial) capital be invested in truly meaningful ways? Here’s an example from Iceland, with its national human genome project.

From ars technica:

An Icelandic genetics firm has sequenced the genomes of 2,636 of its countrymen and women, finding genetic markers for a variety of diseases, as well as a new timeline for the paternal ancestor of all humans.

Iceland is, in many ways, perfectly suited to being a genetic case study. It has a small population with limited genetic diversity, a result of the population descending from a small number of settlers—between 8 and 20 thousand, who arrived just 1100 years ago. It also has an unusually well-documented genealogical history, with information sometimes stretching all the way back to the initial settlement of the country. Combined with excellent medical records, it’s a veritable treasure trove for genetic researchers.

The researchers at genetics firm deCODE compared the complete genomes of participants with historical and medical records, publishing their findings in a series of four papers in Nature Genetics last Wednesday. The wealth of data allowed them to track down genetic mutations that are related to a number of diseases, some of them rare. Although few diseases are caused by a single genetic mutation, a combination of mutations can increase the risk for certain diseases. Having access to a large genetic sample with corresponding medical data can help to pinpoint certain risk-increasing mutations.

Among their headline findings was the identification of the gene ABCA7 as a risk factor for Alzheimer’s disease. Although previous research had established that a gene in this region was involved in Alzheimer’s, this result delivers a new level of precision. The researchers replicated their results in further groups in Europe and the United States.

Also identified was a genetic mutation that causes early-onset atrial fibrillation, a heart condition causing an irregular and often very fast heart rate. It’s the most common cardiac arrhythmia condition, and it’s considered early-onset if it’s diagnosed before the age of 60. The researchers found eight Icelanders diagnosed with the condition, all carrying a mutation in the same gene, MYL4.

The studies also turned up a gene with an unusual pattern of inheritance. It causes increased levels of thyroid stimulation when it’s passed down from the mother, but decreased levels when inherited from the father.

Genetic research in mice often involves “knocking out” or switching off a particular gene to explore the effects. However, mouse genetics aren’t a perfect approximation of human genetics. Obviously, doing this in humans presents all sorts of ethical problems, but a population such as Iceland provides the perfect natural laboratory to explore how knockouts affect human health.

The data showed that eight percent of people in Iceland have the equivalent of a knockout, one gene that isn’t working. This provides an opportunity to look at the data in a different way: rather than only looking for people with a particular diagnosis and finding out what they have in common genetically, the researchers can look for people who have genetic knockouts, and then examine their medical records to see how their missing genes affect their health. It’s then possible to start piecing together the story of how certain genes affect physiology.

Finally, the researchers used the data to explore human history, using Y chromosome data from 753 Icelandic males. Based on knowledge about mutation rates, Y chromosomes can be used to trace the male lineage of human groups, establishing dates of events like migrations. This technique has also been used to work out when the common ancestor of all humans was alive. The maternal ancestor, known as “Mitochondrial Eve,” is thought to have lived 170,000 to 180,000 years ago, while the paternal ancestor had previously been estimated to have lived around 338,000 years ago.

The Icelandic data allowed the researchers to calculate what they suggest is a more accurate mutation rate, placing the father of all humans at around 239,000 years ago. This is the estimate with the greatest likelihood, but the full range falls between 174,000 and 321,000 years ago. This estimate places the paternal ancestor closer in time to the maternal ancestor.
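
The arithmetic behind that shift is simple: the inferred date scales inversely with the assumed mutation rate. The toy Python sketch below uses the standard two-lineage divergence formula; every number in it is a hypothetical placeholder, not deCODE's actual data.

# Toy illustration of how an assumed Y-chromosome mutation rate maps to an
# age estimate. All numbers are hypothetical placeholders, not deCODE's data.
def tmrca_years(diffs_per_site: float, mutations_per_site_per_year: float) -> float:
    """Time to the most recent common ancestor of two lineages.
    Both lineages accumulate mutations independently, hence the factor of 2."""
    return diffs_per_site / (2.0 * mutations_per_site_per_year)

assumed_rate = 8.7e-10   # hypothetical mutations per site per year
observed_diffs = 4.2e-4  # hypothetical average pairwise differences per site

print(f"Estimated age: {tmrca_years(observed_diffs, assumed_rate):,.0f} years")
# A slower assumed rate pushes the common ancestor further back in time:
print(f"With a 25% slower rate: {tmrca_years(observed_diffs, assumed_rate * 0.75):,.0f} years")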

Read the entire story here.

Image: Gígjökull, an outlet glacier extending from Eyjafjallajökull, Iceland. Courtesy of Andreas Tille / Wikipedia.

The Big Crunch

cmb

It may just be possible that prophetic doomsayers have been right all along. The end is coming… well, in a few tens of billions of years. A group of physicists propose that the cosmos will soon begin collapsing in on itself. Keep in mind that soon in cosmological terms runs into the billions of years. So, it does appear that we still have some time to crunch down our breakfast cereal a few more times before the ultimate universal apocalypse. Clearly this may not please those who seek the end of days within their lifetimes, and for rather different — scientific — reasons, cosmologists seem to be unhappy too.

From Phys:

Physicists have proposed a mechanism for “cosmological collapse” that predicts that the universe will soon stop expanding and collapse in on itself, obliterating all matter as we know it. Their calculations suggest that the collapse is “imminent”—on the order of a few tens of billions of years or so—which may not keep most people up at night, but for the physicists it’s still much too soon.

In a paper published in Physical Review Letters, physicists Nemanja Kaloper at the University of California, Davis; and Antonio Padilla at the University of Nottingham have proposed the cosmological collapse mechanism and analyzed its implications, which include an explanation of dark energy.

“The fact that we are seeing dark energy now could be taken as an indication of impending doom, and we are trying to look at the data to put some figures on the end date,” Padilla told Phys.org. “Early indications suggest the collapse will kick in in a few tens of billions of years, but we have yet to properly verify this.”

The main point of the paper is not so much when exactly the universe will end, but that the mechanism may help resolve some of the unanswered questions in physics. In particular, why is the universe expanding at an accelerating rate, and what is the dark energy causing this acceleration? These questions are related to the cosmological constant problem, which is that the predicted vacuum energy density of the universe causing the expansion is much larger than what is observed.

“I think we have opened up a brand new approach to what some have described as ‘the mother of all physics problems,’ namely the cosmological constant problem,” Padilla said. “It’s way too early to say if it will stand the test of time, but so far it has stood up to scrutiny, and it does seem to address the issue of vacuum energy contributions from the standard model, and how they gravitate.”

The collapse mechanism builds on the physicists’ previous research on vacuum energy sequestering, which they proposed to address the cosmological constant problem. The dynamics of vacuum energy sequestering predict that the universe will collapse, but don’t provide a specific mechanism for how collapse will occur.

According to the new mechanism, the universe originated under a set of specific initial conditions so that it naturally evolved to its present state of acceleration and will continue on a path toward collapse. In this scenario, once the collapse trigger begins to dominate, it does so in a period of “slow roll” that brings about the accelerated expansion we see today. Eventually the universe will stop expanding and reach a turnaround point at which it begins to shrink, culminating in a “big crunch.”

Read the entire article here.

Image: The Cosmic Microwave Background (CMB) from nine years of WMAP data. The image reveals 13.77-billion-year-old temperature fluctuations (shown as color differences) that correspond to the seeds that grew to become the galaxies. Courtesy of NASA.

A Physics-Based Theory of Life

Carnot_heat_engine

Those who subscribe to a non-creationist theory of the origins of life tend to gravitate towards the idea of self-replicating organic molecules assembling in our primeval oceans — the so-called primordial soup theory. Recently, however, Professor Jeremy England of MIT has proposed a thermodynamic explanation, which posits that inorganic matter tends to organize — under the right conditions — in a way that enables it to dissipate increasing amounts of energy. This is one of the fundamental attributes of living organisms.

Could we be the product of the Second Law of Thermodynamics, nothing more than the expression of increasing entropy?

Read more of this fascinating new hypothesis below or check out England’s paper on the Statistical Physics of Self-replication.

From Quanta:

Why does life exist?

Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.
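
That counting argument has a compact textbook form in Boltzmann's entropy formula (a standard relation, not something quoted in the article):

$$S = k_{B} \ln \Omega,$$

where $\Omega$ is the number of microscopic configurations consistent with the system's observable state and $k_{B}$ is Boltzmann's constant. Because spread-out arrangements of energy correspond to vastly more configurations than concentrated ones, $\Omega$, and with it $S$, overwhelmingly tends to increase.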

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.

Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

Read the entire story here.

Image: Carnot engine diagram, where an amount of heat QH flows from a high-temperature furnace TH through the fluid of the “working body” (working substance) and the remaining heat QC flows into the cold sink TC, thus forcing the working substance to do mechanical work W on the surroundings, via cycles of contractions and expansions. Courtesy of Wikipedia.
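
For the cycle in that diagram, the standard textbook relations (general thermodynamics, not specific to England's work) tie the labelled quantities together:

$$W = Q_{H} - Q_{C}, \qquad \eta = \frac{W}{Q_{H}} = 1 - \frac{T_{C}}{T_{H}},$$

so even an ideal engine must reject some heat $Q_{C}$ into its surroundings; extracting work always comes with dissipation, the same bookkeeping that runs through the discussion above.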

 

The Religion of String Theory

Hyperboloid-of-one-sheet

Read anything about string theory and you’ll soon learn that it resembles a religion more than a scientific principle. String theory researchers and their supporters will be the first to tell you that this elegant, but extremely complex, integration of gravity and quantum field theory cannot be confirmed through experiment. Nor can it be dispelled through experiment.

So, while the promise of string theory — to bring us one unified understanding of the entire universe — is deliciously tantalizing, it nonetheless forces us to take a giant leap of faith. I suppose that would put string theory pioneers, physicists Michael Green and John Schwarz, somewhere in the same pantheon as Moses and Joseph Smith.

From Quanta:

Thirty years have passed since a pair of physicists, working together on a stormy summer night in Aspen, Colo., realized that string theory might have what it takes to be the “theory of everything.”

“We must be getting pretty close,” Michael Green recalls telling John Schwarz as the thunder raged and they hammered away at a proof of the theory’s internal consistency, “because the gods are trying to prevent us from completing this calculation.”

Their mathematics that night suggested that all phenomena in nature, including the seemingly irreconcilable forces of gravity and quantum mechanics, could arise from the harmonics of tiny, vibrating loops of energy, or “strings.” The work touched off a string theory revolution and spawned a generation of specialists who believed they were banging down the door of the ultimate theory of nature. But today, there’s still no answer. Because the strings that are said to quiver at the core of elementary particles are too small to detect — probably ever — the theory cannot be experimentally confirmed. Nor can it be disproven: Almost any observed feature of the universe jibes with the strings’ endless repertoire of tunes.

The publication of Green and Schwarz’s paper “was 30 years ago this month,” the string theorist and popular-science author Brian Greene wrote in Smithsonian Magazine in January, “making the moment ripe for taking stock: Is string theory revealing reality’s deep laws? Or, as some detractors have claimed, is it a mathematical mirage that has sidetracked a generation of physicists?” Greene had no answer, expressing doubt that string theory will “confront data” in his lifetime.

Recently, however, some string theorists have started developing a new tactic that gives them hope of someday answering these questions. Lacking traditional tests, they are seeking validation of string theory by a different route. Using a strange mathematical dictionary that translates between laws of gravity and those of quantum mechanics, the researchers have identified properties called “consistency conditions” that they say any theory combining quantum mechanics and gravity must meet. And in certain highly simplified imaginary worlds, they claim to have found evidence that the only consistent theories of “quantum gravity” involve strings.

According to many researchers, the work provides weak but concrete support for the decades-old suspicion that string theory may be the only mathematically consistent theory of quantum gravity capable of reproducing gravity’s known form on the scale of galaxies, stars and planets, as captured by Albert Einstein’s theory of general relativity. And if string theory is the only possible approach, then its proponents say it must be true — with or without physical evidence. String theory, by this account, is “the only game in town.”

“Proving that a big class of stringlike models are the only things consistent with general relativity and quantum mechanics would be a way, to some extent, of confirming it,” said Tom Hartman, a theoretical physicist at Cornell University who has been following the recent work.

If they are successful, the researchers acknowledge that such a proof will be seen as controversial evidence that string theory is correct. “‘Correct’ is a loaded word,” said Mukund Rangamani, a professor at Durham University in the United Kingdom and the co-author of a paper posted recently to the physics preprint site arXiv.org that finds evidence of “string universality” in a class of imaginary universes.

So far, the theorists have shown that string theory is the only “game” meeting certain conditions in “towns” wildly different from our universe, but they are optimistic that their techniques will generalize to somewhat more realistic physical worlds. “We will continue to accumulate evidence for the ‘string universality’ conjecture in different settings and for different classes of theories,” said Alex Maloney, a professor of physics at McGill University in Montreal and co-author of another recent paper touting evidence for the conjecture, “and eventually a larger picture will become clear.”

Meanwhile, outside experts caution against jumping to conclusions based on the findings to date. “It’s clear that these papers are an interesting attempt,” said Matt Strassler, a visiting professor at Harvard University who has worked on string theory and particle physics. “But these aren’t really proofs; these are arguments. They are calculations, but there are weasel words in certain places.”

Proponents of string theory’s rival, an underdog approach called “loop quantum gravity,” believe that the work has little to teach us about the real world. “They should try to solve the problems of their theory, which are many,” said Carlo Rovelli, a loop quantum gravity researcher at the Center for Theoretical Physics in Marseille, France, “instead of trying to score points by preaching around that they are ‘the only game in town.’”

Mystery Theory

Over the past century, physicists have traced three of the four forces of nature — strong, weak and electromagnetic — to their origins in the form of elementary particles. Only gravity remains at large. Albert Einstein, in his theory of general relativity, cast gravity as smooth curves in space and time: An apple falls toward the Earth because the space-time fabric warps under the planet’s weight. This picture perfectly captures gravity on macroscopic scales.

But in small enough increments, space and time lose meaning, and the laws of quantum mechanics — in which particles have no definite properties like “location,” only probabilities — take over. Physicists use a mathematical framework called quantum field theory to describe the probabilistic interactions between particles. A quantum theory of gravity would describe gravity’s origin in particles called “gravitons” and reveal how their behavior scales up to produce the space-time curves of general relativity. But unifying the laws of nature in this way has proven immensely difficult.

String theory first arose in the 1960s as a possible explanation for why elementary particles called quarks never exist in isolation but instead bind together to form protons, neutrons and other composite “hadrons.” The theory held that quarks are unable to pull apart because they form the ends of strings rather than being free-floating points. But the argument had a flaw: While some hadrons do consist of pairs of quarks and anti-quarks and plausibly resemble strings, protons and neutrons contain three quarks apiece, invoking the ugly and uncertain picture of a string with three ends. Soon, a different theory of quarks emerged. But ideas die hard, and some researchers, including Green, then at the University of London, and Schwarz, at the California Institute of Technology, continued to develop string theory.

Problems quickly stacked up. For the strings’ vibrations to make physical sense, the theory calls for many more spatial dimensions than the length, width and depth of everyday experience, forcing string theorists to postulate that six extra dimensions must be knotted up at every point in the fabric of reality, like the pile of a carpet. And because each of the innumerable ways of knotting up the extra dimensions corresponds to a different macroscopic pattern, almost any discovery made about our universe can seem compatible with string theory, crippling its predictive power. Moreover, as things stood in 1984, all known versions of string theory included a nonsensical mathematical term known as an “anomaly.”

On the plus side, researchers realized that a certain vibration mode of the string fit the profile of a graviton, the coveted quantum purveyor of gravity. And on that stormy night in Aspen in 1984, Green and Schwarz discovered that the graviton contributed a term to the equations that, for a particular version of string theory, exactly canceled out the problematic anomaly. The finding raised the possibility that this version was the one, true, mathematically consistent theory of quantum gravity, and it helped usher in a surge of activity known as the “first superstring revolution.”

 But only a year passed before another version of string theory was also certified anomaly-free. In all, five consistent string theories were discovered by the end of the decade. Some conceived of particles as closed strings, others described them as open strings with dangling ends, and still others generalized the concept of a string to higher-dimensional objects known as “D-branes,” which resemble quivering membranes in any number of dimensions. Five string theories seemed an embarrassment of riches.

Read the entire story here.

Image: (1 + 1)-dimensional anti-de Sitter space embedded in flat (1 + 2)-dimensional space. The embedded surface contains closed timelike curves circling the x1 axis. Courtesy of Wikipedia.

What’s Next For the LHC?

As CERN’s Large Hadron Collider gears up for a restart in March 2015 after a refit that doubled its particle smashing power, researchers are pondering what may come next. During its previous run scientists uncovered signals identifying the long-sought Higgs boson. Now, particle physicists have their eyes and minds on more exotic, but no less significant, particle discoveries. And — of course — these come with suitably exotic names: gluino, photino, selectron, squark, axion — the list goes on. But beyond these creative names lie possible answers to some very big questions: What is the composition of dark matter (and even dark energy)? How does gravity fit in with all the other identified forces? Do other fundamental particles exist?

From the Smithsonian:

The Large Hadron Collider, the world’s biggest and most famous particle accelerator, will reopen in March after a years-long upgrade. So what’s the first order of business for the rebooted collider? Nothing less than looking for a particle that forces physicists to reconsider everything they think they know about how the universe works.

Since the second half of the twentieth century, physicists have used the Standard Model of physics to describe how particles look and act. But though the model explains pretty much everything scientists have observed using particle accelerators, it doesn’t account for everything they can observe in the universe, including the existence of dark matter.

That’s where supersymmetry, or SUSY, comes in. Supersymmetry predicts that each particle has what physicists call a “superpartner”—a more massive sub-atomic partner particle that acts like a twin of the particle we can observe. Each observable particle would have its own kind of superpartner, pairing bosons with “fermions,” electrons with “selectrons,” quarks with “squarks,” photons with “photinos,” and gluons with “gluinos.”

If scientists could identify a single superparticle, they could be on track for a more complete theory of particle physics that accounts for strange inconsistencies between existing knowledge and observable phenomena. Scientists used the Large Hadron Collider to identify Higgs boson particles in 2012, but it didn’t behave quite as they expected. One surprise was that its mass was much lighter than predicted—an inconsistency that would be explained by the existence of a supersymmetric particle.

Scientists hope that the rebooted—and more powerful—LHC will reveal just such a particle. “Higher energies at the new LHC could boost the production of hypothetical supersymmetric particles called gluinos by a factor of 60, increasing the odds of finding it,” reports Emily Conover for Science.

If the LHC were to uncover a single superparticle, it wouldn’t just be a win for supersymmetry as a theory—it could be a step toward understanding the origins of our universe. But it could also create a lot of work for scientists—after all, a supersymmetric universe is one that would hold at least twice as many particles.

Read the entire article here.

 

A Higher Purpose

In a fascinating essay, excerpted below, Michael Malone wonders if the tech gurus of Silicon Valley should be solving bigger problems. We see venture capitalists scrambling over one another to find the next viral mobile app — perhaps one that automatically writes your tweets, or one that vibrates your smartphone if you say too many bad words. Should our capital markets — now with an attention span of 15 seconds — reward the so-called innovators of these so-called essential apps with millions or even billions in company valuations?

Shouldn’t Silicon Valley be tackling the hard problems? Wouldn’t humanity be better served, not by a new killer SnapChat-replacement app, but by more efficient reverse osmosis; mitigation for Alzheimer’s (and sundry other chronic ailments); progress with alternative energy sources and more efficient energy sinks; next-generation antibiotics; ridding the world of land-mines; growing and delivering nutritious food to those who need it most? Admittedly, these are some hard problems. But isn’t that the point?

From Technology Review:

The view from Mike Steep’s office on Palo Alto’s Coyote Hill is one of the greatest in Silicon Valley.

Beyond the black and rosewood office furniture, the two large computer monitors, and three Indonesian artifacts to ward off evil spirits, Steep looks out onto a panorama stretching from Redwood City to Santa Clara. This is the historic Silicon Valley, the birthplace of Hewlett-Packard and Fairchild Semiconductor, Intel and Atari, Netscape and Google. This is the home of innovations that have shaped the modern world. So is Steep’s employer: Xerox’s Palo Alto Research Center, or PARC, where personal computing and key computer-networking technologies were invented, and where he is senior vice president of global business operations.

And yet Mike Steep is disappointed at what he sees out the windows.

“I see a community that acts like it knows where it’s going, but that seems to have its head in the sand,” he says. He gestures towards the Hewlett-Packard headquarters a few blocks away and Hoover Tower at Stanford University. “This town used to think big—the integrated circuit, personal computers, the Internet. Are we really leveraging all that intellectual power and creativity creating Instagram and dating apps? Is this truly going to change the world?”

After spending years at Microsoft, HP, and Apple, Steep joined PARC in 2013 to help the legendary ideas factory better capitalize on its work. As part of the job, he travels around the world visiting R&D executives in dozens of big companies, and increasingly he worries that the Valley will become irrelevant to them. Steep is one of 22 tech executives on a board the mayor of London set up to promote a “smart city”; they advise officials on how to allocate hundreds of millions of pounds for projects that would combine physical infrastructure such as new high-speed rail with sensors, databases, and analytics. “I know for a fact that China and an array of other countries are chasing this project, which will be the template for scores of similar big-city infrastructure projects around the world in years to come,” Steep says. “From the U.S.? IBM. From Silicon Valley? Many in England ask if anyone here has even heard of the London subway project. That’s unbelievable. Why don’t we leverage opportunities like this here in the Valley?”

Steep isn’t alone in asking whether Silicon Valley is devoting far too many resources to easy opportunities in mobile apps and social media at the expense of attacking bigger problems in energy, medicine, and transportation (see Q&A: Peter Thiel). But if you put that argument to many investors and technologists here, you get a reasonable comeback: has Silicon Valley really ever set out to directly address big problems? In fact, the classic Valley approach has been to size up which technologies it can quickly and ambitiously advance, and then let the world make of them what it will. That is how we got Facebook and Google, and it’s why the Valley’s clean-tech affair was a short-lived mismatch. And as many people point out with classic Silicon Valley confidence, the kind of work that made the area great is still going on in abundance.

The next wave

A small group of executives, surrounded by hundreds of bottles of wine, sits in the private dining room at Bella Vita, an Italian restaurant in Los Altos’s picturesque downtown of expensive tiny shops. Within a few miles, one can find the site of the original Fairchild Semiconductor, Steve Jobs’s house, and the saloon where Nolan Bushnell set up the first Atari game. The host of this gathering is Carl Guardino, CEO of the Silicon Valley Leadership Group, an industry association dedicated to the economic health of the Valley. The 400 organizations that belong to the group are mostly companies that were founded long before the mobile-app craze; only 10 percent are startups. That is evident at this dinner, to which Guardino has invited three of his board members: Steve Berglund, CEO of Trimble, a maker of GPS equipment; Tom Werner, CEO of the solar provider SunPower; and Greg Becker, CEO of Silicon Valley Bank.

These are people who, like Steep, spend much of their time meeting with people in governments and other companies. Asked whether the Valley is falling out of touch with what the world really needs, each disagrees, vehemently. They are almost surprised by the question. “This is the most adaptive and flexible business community on the planet,” says Becker. “It is always about innovation—and going where the opportunity leads next. If you’re worried that the Valley is overpursuing one market or another, then just wait a while and it will change direction again. That’s what we are all about.”

“This is the center of world capitalism, and capitalism is always in flux,” Werner adds. “Are there too many social-networking and app companies out there right now? Probably. But what makes you think it’s going to stay that way for long? We have always undergone corrections. It’s the nature of who we are … But we’ll come out stronger than ever, and in a whole different set of markets and new technologies. This will still be the best place on the planet for innovation.”

Berglund contends that a generational change already under way will reduce the emphasis on apps. “Young people don’t seem to care as much about code as their generational elders,” he says. “They want to build things—stuff like robots and drones. Just go to the Maker Faire and watch them. They’re going to take this valley in a whole different direction.”

Berglund could be right. In the first half of 2014, according to CB Insights, Internet startups were the leading recipient of venture investment in San Francisco and Silicon Valley (the area got half of the U.S. total; New York was second at 10 percent). But investment in the Internet sector accounted for 59 percent of the total, down from a peak of 68 percent in 2011.

Doug Henton, who heads the consulting firm Collaborative Economics and oversaw an upcoming research report on the state of the Valley, argues that since 1950 the area has experienced five technological waves. Each has lasted about 10 to 20 years and encompassed a frenzy followed by a crash and shakeout and then a mature “deployment period.” Henton has identified these waves as defense (1950s and 1960s), integrated circuits (1960s and 1970s), personal computers (1970s and 1980s), Internet (1990s), and social media (2000s and 2010s). By these lights, the social-media wave, however dominant it is in the public eye, soon may be replaced by another wave. Henton suggests that it’s likely to involve the combination of software, hardware, and sensors in wearable devices and the “Internet of things.”

Read the entire essay here.

Universal Amniotic Fluid

Another day, another physics paper describing the origin of the universe. This is no wonder. Since the development of general relativity and quantum mechanics — two mutually incompatible descriptions of our reality — theoreticians have been scurrying to come up with a grand theory, a rapprochement of sorts. This one describes the universe as a quantum fluid, perhaps made up of hypothesized gravitons.

From Nature Asia:

The prevailing model of cosmology, based on Einstein’s theory of general relativity, puts the universe at around 13.8 billion years old and suggests it originated from a “singularity” – an infinitely small and dense point – at the Big Bang.

 To understand what happened inside that tiny singularity, physicists must marry general relativity with quantum mechanics – the laws that govern small objects. Applying both of these disciplines has challenged physicists for decades. “The Big Bang singularity is the most serious problem of general relativity, because the laws of physics appear to break down there,” says Ahmed Farag Ali, a physicist at Zewail City of Science and Technology, Egypt.

In an effort to bring together the laws of quantum mechanics and general relativity, and to solve the singularity puzzle, Ali and Saurya Das, a physicist at the University of Lethbridge in Alberta, Canada, employed an equation that predicts the development of singularities in general relativity. That equation had been developed by Das’s former professor, Amal Kumar Raychaudhuri, when Das was an undergraduate student at Presidency University in Kolkata, India, so Das was particularly familiar with, and fascinated by, it.
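For the mathematically curious, the classical Raychaudhuri equation for a congruence of timelike geodesics takes the standard form below (the small quantum corrections that Ali and Das add are not reproduced here):

\[
\frac{d\theta}{d\tau} = -\tfrac{1}{3}\theta^{2} - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab} - R_{ab}u^{a}u^{b} ,
\]

where \(\theta\) is the expansion of the congruence, \(\sigma_{ab}\) the shear, \(\omega_{ab}\) the rotation, \(R_{ab}\) the Ricci tensor and \(u^{a}\) the tangent to the geodesics. A right-hand side that stays negative drives \(\theta\) toward negative infinity, the focusing behaviour that signals a singularity.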

 When Ali and Das made small quantum corrections to the Raychaudhuri equation, they realised it described a fluid, made up of small particles, that pervades space. Physicists have long believed that a quantum version of gravity would include a hypothetical particle, called the graviton, which generates the force of gravity. In their new model — which will appear in Physics Letters B in February — Ali and Das propose that such gravitons could form this fluid.

To understand the origin of the universe, they used this corrected equation to trace the behaviour of the fluid back through time. Surprisingly, they found that it did not converge into a singularity. Instead, the universe appears to have existed forever. Although it was smaller in the past, it never quite crunched down to nothing, says Das.

“Our theory serves to complement Einstein’s general relativity, which is very successful at describing physics over large distances,” says Ali. “But physicists know that to describe short distances, quantum mechanics must be accommodated, and the quantum Raychaudhuri equation is a big step towards that.”

The model could also help solve two other cosmic mysteries. In the late 1990s, astronomers discovered that the expansion of the universe is accelerating due to the presence of a mysterious dark energy, the origin of which is not known. The model has the potential to explain it, since the fluid creates a minor but constant outward force that expands space. “This is a happy offshoot of our work,” says Das.

Astronomers also now know that most matter in the universe is in an invisible, mysterious form called dark matter, only perceptible through its gravitational effect on visible matter such as stars. When Das and a colleague set the mass of the graviton in the model to a small value, they could make the density of their fluid match the universe’s observed density of dark matter, while also providing the right value for dark energy’s push.

Read the entire article here.

 

The Great Unknown: Consciousness

Google-search-consciousness

Much has been written in the humanities and scientific journals about consciousness. Scholars continue to probe and pontificate and theorize. And yet we seem to know more of the ocean depths and our cosmos than we do of that interminable, self-aware inner voice that sits behind our eyes.

From the Guardian:

One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.

The scholars gathered at the University of Arizona – for what would later go down as a landmark conference on the subject – knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. “Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,” recalled Stuart Hameroff, the Arizona professor responsible for the event. “As the organiser, I’m looking around, and people are falling asleep, or getting restless.” He grew worried. “But then the third talk, right before the coffee break – that was Dave.” With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. “He comes on stage, hair down to his butt, he’s prancing around like Mick Jagger,” Hameroff said. “But then he speaks. And that’s when everyone wakes up.”

The brain, Chalmers began by pointing out, poses all sorts of problems to keep scientists busy. How do we learn, store memories, or perceive things? How do you know to jerk your hand away from scalding water, or hear your name spoken across the room at a noisy party? But these were all “easy problems”, in the scheme of things: given enough time and money, experts would figure them out. There was only one truly hard problem of consciousness, Chalmers said. It was a puzzle so bewildering that, in the months after his talk, people started dignifying it with capital letters – the Hard Problem of Consciousness – and it’s this: why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? And how does the brain manage it? How could the 1.4kg lump of moist, pinkish-beige tissue inside your skull give rise to something as mysterious as the experience of being that pinkish-beige lump, and the body to which it is attached?

What jolted Chalmers’s audience from their torpor was how he had framed the question. “At the coffee break, I went around like a playwright on opening night, eavesdropping,” Hameroff said. “And everyone was like: ‘Oh! The Hard Problem! The Hard Problem! That’s why we’re here!’” Philosophers had pondered the so-called “mind-body problem” for centuries. But Chalmers’s particular manner of reviving it “reached outside philosophy and galvanised everyone. It defined the field. It made us ask: what the hell is this that we’re dealing with here?”

Two decades later, we know an astonishing amount about the brain: you can’t follow the news for a week without encountering at least one more tale about scientists discovering the brain region associated with gambling, or laziness, or love at first sight, or regret – and that’s only the research that makes the headlines. Meanwhile, the field of artificial intelligence – which focuses on recreating the abilities of the human brain, rather than on what it feels like to be one – has advanced stupendously. But like an obnoxious relative who invites himself to stay for a week and then won’t leave, the Hard Problem remains. When I stubbed my toe on the leg of the dining table this morning, as any student of the brain could tell you, nerve fibres called “C-fibres” shot a message to my spinal cord, sending neurotransmitters to the part of my brain called the thalamus, which activated (among other things) my limbic system. Fine. But how come all that was accompanied by an agonising flash of pain? And what is pain, anyway?

Questions like these, which straddle the border between science and philosophy, make some experts openly angry. They have caused others to argue that conscious sensations, such as pain, don’t really exist, no matter what I felt as I hopped in anguish around the kitchen; or, alternatively, that plants and trees must also be conscious. The Hard Problem has prompted arguments in serious journals about what is going on in the mind of a zombie, or – to quote the title of a famous 1974 paper by the philosopher Thomas Nagel – the question “What is it like to be a bat?” Some argue that the problem marks the boundary not just of what we currently know, but of what science could ever explain. On the other hand, in recent years, a handful of neuroscientists have come to believe that it may finally be about to be solved – but only if we are willing to accept the profoundly unsettling conclusion that computers or the internet might soon become conscious, too.

Next week, the conundrum will move further into public awareness with the opening of Tom Stoppard’s new play, The Hard Problem, at the National Theatre – the first play Stoppard has written for the National since 2006, and the last that the theatre’s head, Nicholas Hytner, will direct before leaving his post in March. The 77-year-old playwright has revealed little about the play’s contents, except that it concerns the question of “what consciousness is and why it exists”, considered from the perspective of a young researcher played by Olivia Vinall. Speaking to the Daily Mail, Stoppard also clarified a potential misinterpretation of the title. “It’s not about erectile dysfunction,” he said.

Stoppard’s work has long focused on grand, existential themes, so the subject is fitting: when conversation turns to the Hard Problem, even the most stubborn rationalists lapse quickly into musings on the meaning of life. Christof Koch, the chief scientific officer at the Allen Institute for Brain Science, and a key player in the Obama administration’s multibillion-dollar initiative to map the human brain, is about as credible as neuroscientists get. But, he told me in December: “I think the earliest desire that drove me to study consciousness was that I wanted, secretly, to show myself that it couldn’t be explained scientifically. I was raised Roman Catholic, and I wanted to find a place where I could say: OK, here, God has intervened. God created souls, and put them into people.” Koch assured me that he had long ago abandoned such improbable notions. Then, not much later, and in all seriousness, he said that on the basis of his recent research he thought it wasn’t impossible that his iPhone might have feelings.

By the time Chalmers delivered his speech in Tucson, science had been vigorously attempting to ignore the problem of consciousness for a long time. The source of the animosity dates back to the 1600s, when René Descartes identified the dilemma that would tie scholars in knots for years to come. On the one hand, Descartes realised, nothing is more obvious and undeniable than the fact that you’re conscious. In theory, everything else you think you know about the world could be an elaborate illusion cooked up to deceive you – at this point, present-day writers invariably invoke The Matrix – but your consciousness itself can’t be illusory. On the other hand, this most certain and familiar of phenomena obeys none of the usual rules of science. It doesn’t seem to be physical. It can’t be observed, except from within, by the conscious person. It can’t even really be described. The mind, Descartes concluded, must be made of some special, immaterial stuff that didn’t abide by the laws of nature; it had been bequeathed to us by God.

This religious and rather hand-wavy position, known as Cartesian dualism, remained the governing assumption into the 18th century and the early days of modern brain study. But it was always bound to grow unacceptable to an increasingly secular scientific establishment that took physicalism – the position that only physical things exist – as its most basic principle. And yet, even as neuroscience gathered pace in the 20th century, no convincing alternative explanation was forthcoming. So little by little, the topic became taboo. Few people doubted that the brain and mind were very closely linked: if you question this, try stabbing your brain repeatedly with a kitchen knife, and see what happens to your consciousness. But how they were linked – or if they were somehow exactly the same thing – seemed a mystery best left to philosophers in their armchairs. As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

It was only in 1990 that Francis Crick, the joint discoverer of the double helix, used his position of eminence to break ranks. Neuroscience was far enough along by now, he declared in a slightly tetchy paper co-written with Christof Koch, that consciousness could no longer be ignored. “It is remarkable,” they began, “that most of the work in both cognitive science and the neurosciences makes no reference to consciousness” – partly, they suspected, “because most workers in these areas cannot see any useful way of approaching the problem”. They presented their own “sketch of a theory”, arguing that certain neurons, firing at certain frequencies, might somehow be the cause of our inner awareness – though it was not clear how.

Read the entire story here.

Image courtesy of Google Search.

Education And Reality

Recent studies show that having a higher level of education does not necessarily lead to greater acceptance of reality. This seems to fly in the face of oft-cited anecdotal evidence and prevailing beliefs suggesting that people with lower educational attainment are more likely to reject accepted scientific facts, such as evolutionary science and climate change.

From ars technica:

We like to think that education changes people for the better, helping them critically analyze information and providing a certain immunity from disinformation. But if that were really true, then you wouldn’t have low vaccination rates clustering in areas where parents are, on average, highly educated.

Vaccination isn’t generally a political issue. (Or, it is, but it’s rejected both by people who don’t trust pharmaceutical companies and by those who don’t trust government mandates; these tend to cluster on opposite ends of the political spectrum.) But some researchers decided to look at a number of issues that have become politicized, such as the Iraq War, evolution, and climate change. They find that, for these issues, education actually makes it harder for people to accept reality, an effect they ascribe to the fact that “highly educated partisans would be better equipped to challenge information inconsistent with predispositions.”

The researchers looked at two sets of questions about the Iraq War. The first involved the justifications for the war (weapons of mass destruction and links to Al Qaeda), as well as the perception of the war outside the US. The second focused on the role of the troop surge in reducing violence within Iraq. At the time the polls were taken, there was a clear reality: no evidence of an active weapons program or links to Al Qaeda; the war was frowned upon overseas; and the surge had successfully reduced violence in the country.

On the three issues that were most embarrassing to the Bush administration, Democrats were more likely to get things right, and their accuracy increased as their level of education rose. In contrast, the most and least educated Republicans were equally likely to have things wrong. When it came to the surge, the converse was true. Education increased the chances that Republicans would recognize reality, while the Democratic acceptance of the facts stayed flat even as education levels rose. In fact, among Democrats, the base level of recognition that the surge was a success was so low that it’s not even clear it would have been possible to detect a downward trend.

When it came to evolution, the poll question didn’t even ask whether people accepted the reality of evolution. Instead, it asked “Is there general agreement among scientists that humans have evolved over time, or not?” (This phrasing generally makes it easier for people to accept the reality of evolution, since it’s not asking about their personal beliefs.) Again, education increased the acceptance of this reality among both Democrats and Republicans, but the magnitude of the effect was much smaller among Republicans. In fact, the impact of ideology was stronger than education itself: “The effect of Republican identification on the likelihood of believing that there is a scientific consensus is roughly three times that of the effect of education.”

For climate change, the participants were asked “Do you believe that the earth is getting warmer because of human activity or natural patterns?” Overall, the beliefs of about 70 percent of those polled lined up with scientific conclusions on the matter. And, among the least educated, party affiliation made very little difference in terms of getting this right. But, as education rose, Democrats were more likely to get this right, while Republicans saw their accuracy drop. At the highest levels of education, Democrats got it right 90 percent of the time, while Republicans got it right less than half the time.

The results are in keeping with a number of other studies that have been published of late, which also show that partisan divides over things that could be considered factual sometimes increase with education. Typically, these issues are widely perceived as political. (With some exceptions; GMOs, for example.) In this case, the authors suspect that education simply allows people to deploy more sophisticated cognitive filters that end up rejecting information that could otherwise compel them to change their perceptions.

The authors conclude that’s somewhat mixed news for democracy itself. Education is intended to improve people’s ability to assimilate information upon which to base their political judgements. And, to a large extent, it does: people, on average, got 70 percent of the questions right, and there was only a single case where education made matters worse.

Read the entire article here.

The Impending AI Apocalypse

Robbie_the_Robot_2006

AI as in Artificial Intelligence, not American Idol — though some believe the latter to be somewhat of a cultural apocalypse.

AI is reaching a technological tipping point; advances in computation, especially in neural networks, are making machines more intelligent every day. These advances are likely to spawn machines — sooner rather than later — that will mimic and then surpass human cognition. This has an increasing number of philosophers, scientists and corporations raising alarms. The fear: what if super-intelligent AI machines one day decide that humans are far too inferior and superfluous?

From Wired:

On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.

That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

Google Gets on Board

Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011, however. That’s when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert. The two men started talking about AI and Tallinn soon invested in DeepMind, and last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world. Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings.

That worries Tallinn, somewhat. In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played it with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.

Read the entire story here.

Image: Robby the Robot (Forbidden Planet), Comic Con, San Diego, 2006. Courtesy of Pattymooney.

Exotic Exoplanets Await Your Arrival

NASA_kepler16b_poster

Vintage travel posters from the late 1890s through to the 1950s colorfully captured the public’s imagination. Now, not to be outdone by the classic works from the Art Nouveau and Art Deco periods, NASA has published a series of its own. But these posters go beyond illustrating alpine ski resorts, sumptuous hotels and luxurious cruises. Rather, NASA has its sights on exotic and very distant destinations — from tens to thousands of light-years away. One such spot is Kepler-16.

Kepler-16 is a binary star system in the constellation Cygnus that was targeted for analysis by the Kepler exoplanet-hunting spacecraft. The system, 196 light-years from Earth, is home to Kepler-16b, a Saturn-sized planet in a circumbinary orbit around both stars, the smaller of which is a red dwarf.

See more of NASA’s travel posters here.

 

Tormented… For Things Remote

Would that our troubled species could put aside its pettiness and look to the stars. We are meant to seek, to explore, to discover, to learn…

If you do nothing else today, watch this video and envision our future. It’s compelling, gorgeous and achievable.

[tube]HYwUG322nMw[/tube]

Visit the filmmaker’s website here.

Video: Wanderers, a short film. Courtesy of Erik Wernquist. Words by Carl Sagan.

Will the AIs Let Us Coexist?

At some point in the not-too-distant future, artificial intelligences will far exceed humans in most capacities (except shopping and beer drinking). The scripts of most Hollywood movies suggest that we humans would be (mostly) wiped out by AI machines, beings, robots or other non-human forms — we being the lesser organisms, superfluous to AI needs.

Perhaps we may find an alternate path to a more benign coexistence, much like that posited in The Culture novels by the dearly departed Iain M. Banks. I’ll go with Mr. Banks’s version. Though, just perhaps, evolution is supposed to leave us behind, replacing our simplistic, selfish intelligence with a much more advanced, non-human version.

From the Guardian:

From 2001: A Space Odyssey to Blade Runner and RoboCop to The Matrix, how humans deal with the artificial intelligence they have created has proved a fertile dystopian territory for film-makers. More recently Spike Jonze’s Her and Alex Garland’s forthcoming Ex Machina explore what it might be like to have AI creations living among us and, as Alan Turing’s famous test foregrounded, how tricky it might be to tell the flesh and blood from the chips and code.

These concerns are even troubling some of Silicon Valley’s biggest names: last month Tesla’s Elon Musk described AI as mankind’s “biggest existential threat… we need to be very careful”. What many of us don’t realise is that AI isn’t some far-off technology that only exists in film-makers’ imaginations and computer scientists’ labs. Many of our smartphones employ rudimentary AI techniques to translate languages or answer our queries, while video games employ AI to generate complex, ever-changing gaming scenarios. And so long as Silicon Valley companies such as Google and Facebook continue to acquire AI firms and hire AI experts, AI’s IQ will continue to rise…

Isn’t AI a Steven Spielberg movie?
No arguments there, but the term, which stands for “artificial intelligence”, has a more storied history than Spielberg and Kubrick’s 2001 film. The concept of artificial intelligence goes back to the birth of computing: in 1950, just 14 years after defining the concept of a general-purpose computer, Alan Turing asked “Can machines think?”

It’s something that is still at the front of our minds 64 years later, most recently becoming the core of Alex Garland’s new film, Ex Machina, which sees a young man asked to assess the humanity of a beautiful android. The concept is not a million miles removed from that set out in Turing’s 1950 paper, Computing Machinery and Intelligence, in which he laid out a proposal for the “imitation game” – what we now know as the Turing test. Hook a computer up to a text terminal and let it have conversations with a human interrogator, while a real person does the same. The heart of the test is whether, when you ask the interrogator to guess which is the human, “the interrogator [will] decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman”.

Turing said that asking whether machines could pass the imitation game is more useful than the vague and philosophically unclear question of whether or not they “think”. “The original question… I believe to be too meaningless to deserve discussion.” Nonetheless, he thought that by the year 2000, “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”.

In terms of natural language, he wasn’t far off. Today, it is not uncommon to hear people talking about their computers being “confused”, or taking a long time to do something because they’re “thinking about it”. But even if we are stricter about what counts as a thinking machine, it’s closer to reality than many people think.

So AI exists already?
It depends. We are still nowhere near to passing Turing’s imitation game, despite reports to the contrary. In June, a chatbot called Eugene Goostman successfully fooled a third of judges in a mock Turing test held in London into thinking it was human. But rather than being able to think, Eugene relied on a clever gimmick and a host of tricks. By pretending to be a 13-year-old boy who spoke English as a second language, the machine explained away its many incoherencies, and with a smattering of crude humour and offensive remarks, managed to redirect the conversation when unable to give a straight answer.

The most immediate use of AI tech is natural language processing: working out what we mean when we say or write a command in colloquial language. For something that babies begin to do before they can even walk, it’s an astonishingly hard task. Consider the phrase beloved of AI researchers – “time flies like an arrow, fruit flies like a banana”. Breaking the sentence down into its constituent parts confuses even native English speakers, let alone an algorithm.
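To see why that sentence gives algorithms trouble, here is a minimal sketch (our own toy example, not from the article, and assuming the NLTK library is installed) in which even a tiny grammar yields two legitimate parses:

import nltk

# A deliberately tiny toy grammar -- an illustrative assumption, not a real NLP system.
grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> N | N N | Det N
VP -> V NP | V PP
PP -> P NP
Det -> 'an'
N  -> 'time' | 'flies' | 'arrow'
V  -> 'flies' | 'like'
P  -> 'like'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("time flies like an arrow".split()):
    print(tree)

# Two distinct trees come back: one where "time" is the subject and "flies like an arrow"
# is the verb phrase, and one where "time flies" (the insects) are fond of an arrow.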

Read the entire article here.

Philae: The Little Lander That Could

Farewell_Philae_-_narrow-angle_view_large

What audacity! A ten-year journey, covering 4 billion miles.

On November 12, 2014 at 16:03 UTC, the Rosetta spacecraft delivered the Philae probe to land on a comet; a comet the size of New York’s Manhattan Island, speeding through our solar system at 34,000 miles per hour. What utter audacity!

The team of scientists, engineers, and theoreticians at the European Space Agency (ESA), and its partners, pulled off an awe-inspiring, remarkable and historic feat; a feat that ranks with the other pinnacles of human endeavor and exploration. It shows what our fledgling species can truly achieve.

Sadly, our species is flawed, capable of such terrible atrocities to ourselves and to our planet. And yet, triumphant stories like this one — the search for fundamental understanding through science —  must give us all some continued hope.

Exploration. Inspiration. Daring. Risk. Execution. Discovery. Audacity!

From the Guardian:

These could be the dying hours of Philae, the device the size of a washing machine which travelled 4bn miles to hitch a ride on a comet. Philae is the “lander” which on Wednesday sprung from the craft that had carried it into deep, dark space, bounced a couple of times on the comet’s surface, and eventually found itself lodged in the shadows, starved of the sunlight its solar batteries needed to live. Yesterday, the scientists who had been planning this voyage for the past quarter-century sat and waited for word from their little explorer, hoping against hope that it still had enough energy to reveal its discoveries.

If Philae expires on the hard, rocky surface of Comet 67P the sadness will be felt far beyond mission control in Darmstadt, Germany. Indeed, it may be felt there least of all: those who have dedicated their working lives to this project pronounced it a success, regardless of a landing that didn’t quite go to plan (Philae’s anchor harpoons didn’t fire, so with gravity feeble there was nothing to keep the machine anchored to the original, optimal landing site). They were delighted to have got there at all and thrilled at Philae’s early work. Up to 90% of the science they planned to carry out has been done. As one scientist put it, “We’ve already got fantastic data.”

Those who lacked their expertise couldn’t help feel a pang all the same. The human instinct to anthropomorphise does not confine itself to cute animals, as anyone who has seen the film Wall-E can testify. If Pixar could make us well up for a waste-disposing robot, it’s little wonder the European Space Agency has had us empathising with a lander ejected from its “mothership”, identifiable only by its “spindly leg”. In those nervous hours, many will have been rooting for Philae, imagining it on that cold, hard surface yearning for sunlight, its beeps of data slowly petering out as its strength faded.

But that barely accounts for the fascination this adventure has stirred. Part of it is simple, a break from the torments down here on earth. You don’t have to go as far as the Christopher Nolan film Interstellar – which fantasises about leaving our broken, ravaged planet and starting somewhere else – to enjoy a rare respite from our earthly woes. For a few merciful days, the news has featured a story remote from the bloodshed of Islamic State and Ukraine, from the pain of child abuse and poverty. Even those who don’t dream of escaping this planet can relish the escapism.

But the comet landing has provided more than a diversion: it’s been an antidote too. For this has been a story of human cooperation in a world of conflict. The narrow version of this point focuses on this as a European success story. When our daily news sees “Europe” only as the source of unwanted migrants or maddening regulation, Philae has offered an alternative vision; that Germany, Italy, France, Britain and others can achieve far more together than they could ever dream of alone. The geopolitical experts so often speak of the global pivot to Asia, the rise of the Bric nations and the like – but this extraordinary voyage has proved that Europe is not dead yet.

Even that, as I say, is to view it too narrowly. The US, through Nasa, is involved as well. And note the language attached to the hardware: the Rosetta satellite, the Ptolemy measuring instrument, the Osiris on-board camera, Philae itself – all imagery drawn from ancient Egypt. The spacecraft was named after the Rosetta stone, the discovery that unlocked hieroglyphics, as if to suggest a similar, if not greater, ambition: to decode the secrets of the universe. By evoking humankind’s ancient past, this is presented as a mission of the entire human race. There will be no flag planting on Comet 67P. As the Open University’s Jessica Hughes puts it, Philae, Rosetta and the rest “have become distant representatives of our shared, earthly heritage”.

That fits because this is how we experience such a moment: as a human triumph. When we marvel at the numbers – a probe has travelled for 10 years, crossed those 4bn miles, landed on a comet speeding at 34,000mph and done so within two minutes of its planned arrival – we marvel at what our species is capable of. I can barely get past the communication: that Darmstadt is able to contact an object 300 million miles away, sending instructions, receiving pictures. I can’t get phone reception in my kitchen, yet the ESA can be in touch with a robot that lies far beyond Mars. Like watching Usain Bolt run or hearing Maria Callas sing, we find joy and exhilaration in the outer limits of human excellence.

And of course we feel awe. What Interstellar prompts us to feel artificially – making us gasp at the confected scale and digitally assisted magnitude – Philae gives us for real. It is the stretch of time and place, glimpsing somewhere so far away it is as out of reach as ancient Egypt.

All that is before you reckon with the voyage’s scholarly purpose. “We are on the cutting edge of science,” they say, and of course they are. They are probing the deepest mysteries, including the riddle of how life began. (One theory suggests a comet brought water to a previously arid Earth.) What the authors of the Book of Genesis understood is that this question of origins is intimately bound up with the question of purpose. From the dawn of human time, to ask “How did we get here?” has been to ask “Why are we here?”

It’s why contemplation of the cosmic so soon reverts to the spiritual. Interstellar, like 2001: A Space Odyssey before it, is no different. It’s why one of the most powerful moments of Ronald Reagan’s presidency came when he paid tribute to the astronauts killed in the Challenger disaster. They had, he said, “slipped the surly bonds of Earth to touch the face of God”.

Not that you have to believe in such things to share the romance. Secularists, especially on the left, used to have a faith of their own. They believed that humanity was proceeding along an inexorable path of progress, that the world was getting better and better with each generation. The slaughter of the past century robbed them – us – of that once-certain conviction. Yet every now and again comes an unambiguous advance, what one ESA scientist called “A big step for human civilisation”. Even if we never hear from Philae again, we can delight in that.

Read the entire article here.

Image: Philae lander, detached from the Rosetta spacecraft, on its solitary journey towards the surface of Comet 67P. Courtesy of ESA.

Non-Adaptive Evolution of the Very Small

Is every feature that arises from evolution an adaptation? Some evolutionary biologists think not. That is, some traits arising from the process of natural selection may be due to random occurrences that natural selection failed to discard. And it seems that smaller organisms show this quite well. To many adaptationists this is heretical — but to some researchers it opens a new, fruitful avenue of inquiry, and may lead to a fine-tuning of our understanding of the evolutionary process.

From New Scientist:

I have spent my life working on slime moulds and they sent me a message that started me thinking. What puzzled me was that two different forms are found side-by-side in the soil everywhere from the tundra to the tropics. The obvious difference lies in the tiny stalks that disperse their spores. In one species this fruiting body is branched, in the other it is not.

I had assumed that the branched and the unbranched forms occupied separate ecological niches but I could not imagine what those niches might be. Perhaps there were none and neither shape had an advantage over the other, as far as natural selection was concerned.

I wrote this up and sent it to a wise and respected friend who responded with a furious letter saying that my conclusion was absurd: it was easy to imagine ways in which the two kinds of stalks might be separate adaptations and co-exist everywhere in the soil. This set me thinking again and I soon realised that both my position and his were guesses. They were hypotheses and neither could be proved.

There is no concept that is more central to evolution than natural selection, so adding this extra dimension of randomness was heresy. Because of the overwhelming success of Darwin’s natural selection, biologists – certainly all evolutionary biologists – find it hard to believe that a feature of any organism can have arisen (with minor exceptions) in any other way. Natural selection favours random genetic mutations that offer an advantage, therefore many people believe that all properties of an organism are an adaptation. If one cannot find the adaptive reason for a feature of an organism, one should just assume that there was once one, or that there is one that will be revealed in the future.

This matter has created some heated arguments. For example, the renowned biologists Stephen Jay Gould and Richard Lewontin wrote an inflammatory paper in 1979 attacking adaptionists for being like Dr Pangloss, the incurable optimist in Voltaire’s 1759 satire Candide. While their point was well taken, its aggressive tone produced counterattacks. Adaptionists assume that every feature of an organism arises as an adaption, but I assume that some features are the results of random mutations that escape being culled by natural selection. This is what I was suggesting for the branched and unbranched fruiting bodies of the slime moulds.

How can these organisms escape the stranglehold of selection? One explanation grabbed me and I have clung to it ever since; in fact it is the backbone of my new book. The reason that these organisms might have shapes that are not governed by natural selection is because they are so small. It turns out there are good reasons why this might be the case.

Development is a long, slow process for large organisms. Humans spend nine months in utero and keep growing in different ways for a long time after birth. An elephant’s gestation is even longer (about two years) and a mouse’s much shorter, but all are vastly longer than the development of a single-celled microorganism. Such small forms may divide every few hours; at most their development may span days, but whatever it is it will be a small fraction of that of a larger, more complex organism.

Large organisms develop in a series of steps usually beginning with the fertilisation of an egg that then goes through many cell divisions and an increase in size of the embryo, with many twists and turns as it progresses towards adulthood. These multitudinous steps involve the laying down of complex organs such as a heart or an eye.

Building a complex organism is an immense enterprise, and the steps are often interlocked in a sequence so that if an earlier step fails through a deleterious mutation, the result is very simple: the death of the embryo. I first came across this idea in a 1965 book by Lancelot Law Whyte called Internal Factors in Evolution and have been mystified ever since why the idea has been swallowed by oblivion. His thesis was straightforward. Not only is there selection of organisms in the environment – Darwinian natural selection, which is external – but there is also continuous internal selection during development. Maybe the idea was too simple and straightforward to have taken root.

This fits in neatly with my contention that the shape of microorganisms is more affected by randomness than for large, complex organisms. Being small means very few development steps, with little or no internal selection. The effect of a mutation is likely to be immediately evident in the external morphology, so adult variants are produced with large numbers of different shapes and there is an increased chance that some of these will be untouched by natural selection.

Compare this with what happens in a big, complex organism – a mammal, say. Only those mutations that occur at a late stage of development are likely to be viable – eye or hair colour in humans are obvious examples. Any unfavourable mutation that occurs earlier in development will likely be eliminated by internal selection.

Let us now examine the situation for microorganisms. What is the evidence that their shapes are less likely to be culled by natural selection? The best examples come from organisms that make mineral shells: Radiolaria and diatoms with their silica skeletons and Foraminifera with their calciferous shells. About 50,000 species of radiolarians have been described, 100,000 species of diatoms and some 270,000 species among the Foraminifera – all with vastly different shapes. For example, radiolarian skeletons can be shaped like spiny balls, bells, crosses and octagonal pyramids, to name but a few.

If you are a strict adaptionist, you have to find a separate explanation for each shape. If you favour my suggestion that their shapes arose through random mutation and there is little or no selection, the problem vanishes. It turns out that this very problem concerned Darwin. In the third (and subsequent) editions of On the Origin of Species he has a passage that almost takes the wind out of my sails:

“If it were no advantage, these forms would be left by natural selection unimproved or but little improved; and might remain for indefinite ages in their present little advanced condition. And geology tells us that some of the lowest forms, as the infusoria and rhizopods, have remained for an enormous period in nearly their present state.”

Read the entire article here.

MondayMap: Our New Address — Laniakea

laniakea_nrao

Once upon a time we humans sat smugly at the center of the universe. Now, many of us (though not yet all) know better. Over the last several centuries we learned and accepted that the Earth spun around the nearest star, our Sun, and not the converse. We then learned that the Sun formed part of an immense galaxy, the Milky Way, itself spinning in a vast cosmological dance. More recently, we learned that the Milky Way formed part of a larger cluster of galaxies, known as the Local Group.

Now we find that our Local Group is a mere speck within an immense supercluster containing around 100,000 galaxies spanning half a billion light years. Researchers have dubbed this galactic supercluster, rather aptly, Laniakea, Hawaiian for “immense heaven”. Laniakea is your new address. And, fascinatingly, Laniakea is moving towards an even larger grouping of galaxies named the Shapley supercluster.

From the Guardian:

In what amounts to a back-to-school gift for pupils with nerdier leanings, researchers have added a fresh line to the cosmic address of humanity. No longer will a standard home address followed by “the Earth, the solar system, the Milky Way, the universe” suffice for aficionados of the extended astronomical location system.

The extra line places the Milky Way in a vast network of neighbouring galaxies or “supercluster” that forms a spectacular web of stars and planets stretching across 520m light years of our local patch of universe. Named Laniakea, meaning “immeasurable heaven” in Hawaiian, the supercluster contains 100,000 large galaxies that together have the mass of 100 million billion suns.

Our home galaxy, the Milky Way, lies on the far outskirts of Laniakea near the border with another supercluster of galaxies named Perseus-Pisces. “When you look at it in three dimensions, it looks like a sphere that’s been badly beaten up and we are over near the edge, being pulled towards the centre,” said Brent Tully, an astronomer at the University of Hawaii in Honolulu.

Astronomers have long known that just as the solar system is part of the Milky Way, so the Milky Way belongs to a cosmic structure that is much larger still. But their attempts to define the larger structure had been thwarted because it was impossible to work out where one cluster of galaxies ended and another began.

Tully’s team gathered measurements on the positions and movement of more than 8,000 galaxies and, after discounting the expansion of the universe, worked out which were being pulled towards us and which were being pulled away. This allowed the scientists to define superclusters of galaxies that all moved in the same direction.
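
The subtraction the researchers performed is, at its core, simple: remove the velocity that cosmic expansion alone would give each galaxy and see what motion is left over. Below is a minimal sketch of that idea in Python; the Hubble constant and the galaxy entries are rough, invented values for illustration only, not the team's actual data or pipeline.

```python
# Minimal sketch: subtract the Hubble-flow velocity (H0 * distance) from each
# galaxy's observed velocity to get its "peculiar" velocity, i.e. the motion
# left over after discounting the expansion of the universe.
# H0 and the galaxy entries below are illustrative values only.

H0 = 70.0  # Hubble constant, km/s per megaparsec (approximate)

# (name, distance in megaparsecs, observed radial velocity in km/s)
galaxies = [
    ("A", 50.0, 3650.0),
    ("B", 120.0, 8250.0),
    ("C", 200.0, 13800.0),
]

for name, dist_mpc, v_obs in galaxies:
    v_hubble = H0 * dist_mpc      # velocity due to cosmic expansion alone
    v_pec = v_obs - v_hubble      # leftover "peculiar" velocity
    direction = "towards us" if v_pec < 0 else "away from us"
    print(f"Galaxy {name}: peculiar velocity {v_pec:+.0f} km/s ({direction})")
```

Galaxies whose leftover velocities point the same way can then be grouped into a common "basin" of flow, which is essentially how the supercluster boundary was drawn.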

The work published in Nature gives astronomers their first look at the vast group of galaxies to which the Milky Way belongs. A narrow arch of galaxies connects Laniakea to the neighbouring Perseus-Pisces supercluster, while two other superclusters called Shapley and Coma lie on the far side of our own.

Tully said the research will help scientists understand why the Milky Way is hurtling through space at 600km a second towards the constellation of Centaurus. Part of the reason is the gravitational pull of other galaxies in our supercluster.

“But our whole supercluster is being pulled in the direction of this other supercluster, Shapley, though it remains to be seen if that’s all that’s going on,” said Tully.

Read the entire article here or the nerdier paper here.

Image: Laniakea: Our Home Supercluster of Galaxies. The blue dot represents the location of the Milky Way. Courtesy: R. Brent Tully (U. Hawaii) et al., SDvision, DP, CEA/Saclay.

The Next (and Final) Doomsday Scenario

Personally, I love dystopian visions and apocalyptic nightmares. So, news that the famed Higgs boson may ultimately cause our demise, and incidentally the end of the entire cosmos, caught my attention.

Apparently theoreticians have calculated that the Higgs potential, of which the Higgs boson is a manifestation, has characteristics that make the universe unstable. (The Higgs was discovered in 2012 by teams at CERN’s Large Hadron Collider.) Luckily for those wishing to avoid the final catastrophe, this instability may keep the universe intact for several billion more years; and if the Higgs were suddenly to trigger the final apocalypse, it would arrive at the speed of light, so we would never see it coming.

From Popular Mechanics:

In July 2012, when scientists at CERN’s Large Hadron Collider culminated decades of work with their discovery of the Higgs boson, most physicists celebrated. Stephen Hawking did not. The famed theorist expressed his disappointment that nothing more unusual was found, calling the discovery “a pity in a way.” But did he ever say the Higgs could destroy the universe?

That’s what many reports in the media said earlier this week, quoting a preface Hawking wrote to a book called Starmus. According to The Australian, the preface reads in part: “The Higgs potential has the worrisome feature that it might become metastable at energies above 100 [billion] gigaelectronvolts (GeV). This could mean that the universe could undergo catastrophic vacuum decay, with a bubble of the true vacuum expanding at the speed of light. This could happen at any time and we wouldn’t see it coming.”

What Hawking is talking about here is not the Higgs boson but what’s called the Higgs potential, which are “totally different concepts,” says Katie Mack, a theoretical astrophysicist at Melbourne University. The Higgs field permeates the entire universe, and the Higgs boson is an excitation of that field, just as an electron is an excitation of the electron field. In this analogy, the Higgs potential is like the voltage, determining the value of the field.

Once physicists began to close in on the mass of the Higgs boson, they were able to work out the Higgs potential. That value seemed to reveal that the universe exists in what’s known as a meta-stable vacuum state, or false vacuum, a state that’s stable for now but could slip into the “true” vacuum at any time. This is the catastrophic vacuum decay in Hawking’s warning, though he is not the first to posit the idea.
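
The "false vacuum" idea is easier to picture with a toy potential that has two unequal minima: the field can sit in the shallower one for a very long time even though a deeper, "true" minimum exists. The sketch below invents such a potential purely for illustration; the functional form and numbers have nothing to do with the real Higgs potential.

```python
import numpy as np

# Toy potential with two unequal minima: the higher one plays the role of the
# metastable "false" vacuum, the lower one the "true" vacuum. Invented for
# illustration only; this is not the actual Higgs potential.
phi = np.linspace(-1.0, 3.0, 4001)
V = phi**4 - 4.0 * phi**3 + 4.2 * phi**2

# Locate the interior local minima numerically.
is_min = (V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])
for p, v in zip(phi[1:-1][is_min], V[1:-1][is_min]):
    print(f"minimum at phi = {p:.3f}, V = {v:.3f}")

# The higher-energy minimum is where a field could linger for a long time
# before dropping to the lower one: the "catastrophic vacuum decay" above.
```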

Is he right?

“There are a couple of really good reasons to think that’s not the end of the story,” Mack says. There are two ways for a meta-stable state to fall off into the true vacuum—one classical way, and one quantum way. The first would occur via a huge energy boost, the 100 billion GeVs Hawking mentions. But, Mack says, the universe already experienced such high energies during the period of inflation just after the big bang. Particles in cosmic rays from space also regularly collide with these kinds of high energies, and yet the vacuum hasn’t collapsed (otherwise, we wouldn’t be here).

“Imagine that somebody hands you a piece of paper and says, ‘This piece of paper has the potential to spontaneously combust,’ and so you might be worried,” Mack says. “But then they tell you 20 years ago it was in a furnace.” If it didn’t combust in the furnace, it’s not likely to combust sitting in your hand.

Of course, there’s always the quantum world to consider, and that’s where things always get weirder. In the quantum world, where the smallest of particles interact, it’s possible for a particle on one side of a barrier to suddenly appear on the other side of the barrier without actually going through it, a phenomenon known as quantum tunneling. If our universe was in fact in a meta-stable state, it could quantum tunnel through the barrier to the vacuum on the other side with no warning, destroying everything in an instant. And while that is theoretically possible, predictions show that if it were to happen, it’s not likely for billions of billions of years. By then, the sun and Earth and you and I and Stephen Hawking will be a distant memory, so it’s probably not worth losing sleep over it.

What’s more likely, Mack says, is that there is some new physics not yet understood that makes our vacuum stable. Physicists know there are parts of the model missing; mysteries like quantum gravity and dark matter that still defy explanation. When two physicists published a paper documenting the Higgs potential conundrum in March, their conclusion was that an explanation lies beyond the Standard Model, not that the universe may collapse at any time.

Read the article here.

The Original Rolling Stones

rocks-at-racetrack_arno_gourdol

Who or what has been moving these Death Valley boulders? Theories have persisted for quite some time: unknown inhabitants of the desert straddling California and Nevada; mischievous troglodytes from Middle Earth; aliens sending us cryptic, geologic messages; invisible demons; telepathic teenagers.

But now we know, and the mysterious forces at work are, unfortunately, rather mundane — the rocks are moved through a combination of rain, ice and wind. Oh well — time to focus on crop circles again!

From ars technica:

Mario is just a video game, and rocks don’t have legs. Both of these things are true. Yet, like the Mario ghosts that advance only when your back is turned, there are rocks that we know have been moving—even though no one has ever seen them do it.

The rocks in question occupy a spot called Racetrack Playa in Death Valley. Playas are desert mudflats that sometimes host shallow lakes when enough water is around. Racetrack Playa gets its name from long furrows extending from large rocks sitting on the playa bed—tracks that make it look as if the rocks had been dragged through the mud. The tracks of the various rocks run parallel to each other, sometimes suggesting that the rocks had made sharp turns in unison, like dehydrated synchronized swimmers.

Many potential explanations have been offered up (some going back to the 1940s) for this bizarre situation, as the rocks seem to only move occasionally and had never been caught in the act. One thing everyone could agree on was that it must occur when the playa is wet and the muddy bottom is slick. At first, suggestions revolved around especially strong winds. One geologist went as far as to bring out a propeller airplane to see how much wind it would take.

The other idea was that ice, which does occasionally form there, could be responsible. If the rocks were frozen into a sheet of ice, a little buoyancy might reduce the friction beneath them. And again, strong winds over the surface of the ice could drag the whole mess around, accounting for the synchronized nature of the tracks.

Over the years, a number of clever studies have attempted to test these possibilities. But to truly put the question to rest, the rocks were going to have to be observed while moving. A team led by Richard Norris and his engineer cousin James Norris set out to do just that. They set out 15 rocks with GPS loggers, a weather station, and some time-lapse cameras in 2011. Magnetic triggers were buried beneath the rocks so that the loggers would start recording when they began to move. And the Norrises waited.

They got what they were after last winter. A little rain and snow provided enough water to fill the lake to a depth of a few centimeters. At night, temperatures were low enough for ice to form. On a few sunny days, the rocks stirred.

By noon, the thin sheet of ice—just a few millimeters thick—would start breaking up. Light wind pushed the ice, and the water in the lake, to the northeast. The rocks, which weren’t frozen into the thin ice, went along for the ride. On one occasion, two rocks were recorded traveling 65 meters over 16 minutes, with a peak rate of 5 to 6 meters per minute.

These movements were detectable in the time-lapse images, but you might not actually notice them if you were standing there. The researchers note that the tracks carved in the mud aren’t immediately apparent due to the muddy water.

The total distances traveled by the instrumented rocks between November and February ranged from 15 to 225 meters. While all moving rocks travel in the direction of the prevailing wind, they didn’t all move together—motion depended on the way the ice broke up and the depth of the water around each rock.

While the proposed explanations weren’t far off, the thinness of the ice and the minimal wind speed that were needed were both surprises. There was no ice buoyancy lifting the rocks. They were just being pushed by loose sheets of thin ice that were themselves being pushed by wind and water.

In the end, there’s nothing extraordinary about the motion of these rocks, but the necessary conditions are rare enough that the results still shock us. Similar tracks have been found in a few playas elsewhere around the world, though, and ice-pushed rocks also leave marks in the shallows of Canada’s Great Slave Lake. There’s no need to worry about the rocks at Racetrack Playa coming to life and opening secretly ferocious jaws when you look away.

Read the entire story here.

Image: Rocks at Racetrack Playa, Death Valley. Courtesy of Arno Gourdol. Some Rights Reserved.

Measuring the Quantum Jitter

Some physicists are determined to find out if we are mere holograms. Perhaps not quite in the way of the dystopian yet romanticized version fictionalized in The Matrix, but a fascinating idea nonetheless. Armed with a very precise measuring tool, known as a Holometer or more precisely twin correlated Michelson holographic interferometers, researchers aim to find the scale at which the universe becomes jittery. In turn this will give a better picture of the fundamental units of space-time, well beyond the elementary particles themselves, and somewhat closer to the Planck length.

From the New Scientist:

The search for the fundamental units of space and time has officially begun. Physicists at the Fermi National Accelerator Laboratory near Chicago, Illinois, announced this week that the Holometer, a device designed to test whether we live in a giant hologram, has started taking data.

The experiment is testing the idea that the universe is actually made up of tiny “bits”, in a similar way to how a newspaper photo is actually made up of dots. These fundamental units of space and time would be unbelievably tiny: a hundred billion billion times smaller than a proton. And like the well-known quantum behaviour of matter and energy, these bits of space-time would behave more like waves than particles.
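
That size comparison is easy to sanity-check with rough numbers: the Planck length is about 1.6 x 10^-35 metres, while a proton's charge radius is roughly 0.84 x 10^-15 metres, a ratio on the order of 10^20, i.e. a hundred billion billion. A two-line check, using approximate values:

```python
# Rough check of the size comparison above, using approximate values.
planck_length = 1.6e-35   # metres
proton_radius = 0.84e-15  # metres (proton charge radius, approximate)

ratio = proton_radius / planck_length
print(f"A proton is roughly {ratio:.1e} Planck lengths across")
# about 5e19, i.e. on the order of a hundred billion billion, as quoted above
```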

“The theory is that space is made of waves instead of points, that everything is a little jittery, and never sits still,” says Craig Hogan at the University of Chicago, who dreamed up the experiment.

The Holometer is designed to measure this “jitter”. The surprisingly simple device is operated from a shed in a field near Chicago, and consists of two powerful laser beams that are directed through tubes 40 metres long. The lasers precisely measure the positions of mirrors along their paths at two points in time.

If space-time is smooth and shows no quantum behaviour, then the mirrors should remain perfectly still. But if both lasers measure an identical, small difference in the mirrors’ position over time, that could mean the mirrors are being jiggled about by fluctuations in the fabric of space itself.
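
The point of using two instruments is that each laser's own noise is independent, while a genuine space-time jitter would show up in both at once; cross-correlating the two outputs suppresses the former and keeps the latter. The toy example below illustrates only that statistical idea, with made-up noise and jitter amplitudes; it is not the Holometer's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Toy model: each interferometer records its own independent instrument noise
# plus a small shared "jitter". All amplitudes here are arbitrary.
shared_jitter = 0.2 * rng.standard_normal(n)
signal_a = rng.standard_normal(n) + shared_jitter
signal_b = rng.standard_normal(n) + shared_jitter

# Independent noise averages away in the cross-correlation, while the common
# jitter contributes an offset close to its variance (0.2**2 = 0.04).
cross = np.mean(signal_a * signal_b)
auto_a = np.mean(signal_a * signal_a)

print(f"cross-correlation: {cross:.4f}  (~ shared jitter variance, 0.0400)")
print(f"auto-correlation:  {auto_a:.4f}  (dominated by each instrument's own noise)")
```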

So what of the idea that the universe is a hologram? This stems from the notion that information cannot be destroyed, so for example the 2D event horizon of a black hole “records” everything that falls into it. If this is the case, then the boundary of the universe could also form a 2D representation of everything contained within the universe, like a hologram storing a 3D image in 2D.

Hogan cautions that the idea that the universe is a hologram is somewhat misleading because it suggests that our experience is some kind of illusion, a projection like a television screen. If the Holometer finds a fundamental unit of space, it won’t mean that our 3D world doesn’t exist. Rather it will change the way we understand its basic makeup. And so far, the machine appears to be working.

In a presentation given in Chicago on Monday at the International Conference on Particle Physics and Cosmology, Hogan said that the initial results show the Holometer is capable of measuring quantum fluctuations in space-time, if they are there.

“This was kind of an amazing moment,” says Hogan. “It’s just noise right now – we don’t know whether it’s space-time noise – but the machine is operating at that specification.”

Hogan expects that the Holometer will have gathered enough data to put together an answer to the quantum question within a year. If the space-time jitter is there, Hogan says it could underpin entirely new explanations for why the expansion of our universe is accelerating, something traditionally attributed to the little understood phenomenon of dark energy.

Read the entire article here.

Syndrome X

DNA_Structure

The quest for immortality or even great longevity has probably driven humans since they first became self-aware. Entire cultural movements and industries are founded on the desire to enhance and extend our lives. Genetic research, of course, may eventually unlock some or all of life and death’s mysteries. In the meantime, groups of dedicated scientists continue to look for the foundation of aging with a view to understanding the process and eventually slowing (and perhaps stopping) it. Richard Walker is one of these singularly focused researchers.

From the BBC:

Richard Walker has been trying to conquer ageing since he was a 26-year-old free-loving hippie. It was the 1960s, an era marked by youth: Vietnam War protests, psychedelic drugs, sexual revolutions. The young Walker relished the culture of exultation, of joie de vivre, and yet was also acutely aware of its passing. He was haunted by the knowledge that ageing would eventually steal away his vitality – that with each passing day his body was slightly less robust, slightly more decayed. One evening he went for a drive in his convertible and vowed that by his 40th birthday, he would find a cure for ageing.

Walker became a scientist to understand why he was mortal. “Certainly it wasn’t due to original sin and punishment by God, as I was taught by nuns in catechism,” he says. “No, it was the result of a biological process, and therefore is controlled by a mechanism that we can understand.”

Scientists have published several hundred theories of ageing, and have tied it to a wide variety of biological processes. But no one yet understands how to integrate all of this disparate information.

Walker, now 74, believes that the key to ending ageing may lie in a rare disease that doesn’t even have a real name, “Syndrome X”. He has identified four girls with this condition, marked by what seems to be a permanent state of infancy, a dramatic developmental arrest. He suspects that the disease is caused by a glitch somewhere in the girls’ DNA. His quest for immortality depends on finding it.

It’s the end of another busy week and MaryMargret Williams is shuttling her brood home from school. She drives an enormous SUV, but her six children and their coats and bags and snacks manage to fill every inch. The three big kids are bouncing in the very back. Sophia, 10, with a mouth of new braces, is complaining about a boy-crazy friend. She sits next to Anthony, seven, and Aleena, five, who are glued to something on their mother’s iPhone. The three little kids squirm in three car seats across the middle row. Myah, two, is mining a cherry slushy, and Luke, one, is pawing a bag of fresh crickets bought for the family gecko.

Finally there’s Gabrielle, who’s the smallest child, and the second oldest, at nine years old. She has long, skinny legs and a long, skinny ponytail, both of which spill out over the edges of her car seat. While her siblings giggle and squeal, Gabby’s dusty-blue eyes roll up towards the ceiling. By the calendar, she’s almost an adolescent. But she has the buttery skin, tightly clenched fingers and hazy awareness of a newborn.

Back in 2004, when MaryMargret and her husband, John, went to the hospital to deliver Gabby, they had no idea anything was wrong. They knew from an ultrasound that she would have clubbed feet, but so had their other daughter, Sophia, who was otherwise healthy. And because MaryMargret was a week early, they knew Gabby would be small, but not abnormally so. “So it was such a shock to us when she was born,” MaryMargret says.

Gabby came out purple and limp. Doctors stabilised her in the neonatal intensive care unit and then began a battery of tests. Within days the Williamses knew their new baby had lost the genetic lottery. Her brain’s frontal lobe was smooth, lacking the folds and grooves that allow neurons to pack in tightly. Her optic nerve, which runs between the eyes and the brain, was atrophied, which would probably leave her blind. She had two heart defects. Her tiny fists couldn’t be pried open. She had a cleft palate and an abnormal swallowing reflex, which meant she had to be fed through a tube in her nose. “They started trying to prepare us that she probably wouldn’t come home with us,” John says. Their family priest came by to baptise her.

Day after day, MaryMargret and John shuttled between Gabby in the hospital and 13-month-old Sophia at home. The doctors tested for a few known genetic syndromes, but they all came back negative. Nobody had a clue what was in store for her. Her strong Catholic family put their faith in God. “MaryMargret just kept saying, ‘She’s coming home, she’s coming home’,” recalls her sister, Jennie Hansen. And after 40 days, she did.

Gabby cried a lot, loved to be held, and ate every three hours, just like any other newborn. But of course she wasn’t. Her arms would stiffen and fly up to her ears, in a pose that the family nicknamed her “Harley-Davidson”. At four months old she started having seizures. Most puzzling and problematic, she still wasn’t growing. John and MaryMargret took her to specialist after specialist: a cardiologist, a gastroenterologist, a geneticist, a neurologist, an ophthalmologist and an orthopaedist. “You almost get your hopes up a little – ’This is exciting! We’re going to the gastro doctor, and maybe he’ll have some answers’,” MaryMargret says. But the experts always said the same thing: nothing could be done.

The first few years with Gabby were stressful. When she was one and Sophia two, the Williamses drove from their home in Billings, Montana, to MaryMargret’s brother’s home outside of St Paul, Minnesota. For nearly all of those 850 miles, Gabby cried and screamed. This continued for months until doctors realised she had a run-of-the-mill bladder infection. Around the same period, she acquired a severe respiratory infection that left her struggling to breathe. John and MaryMargret tried to prepare Sophia for the worst, and even planned which readings and songs to use at Gabby’s funeral. But the tiny toddler toughed it out.

While Gabby’s hair and nails grew, her body wasn’t getting bigger. She was developing in subtle ways, but at her own pace. MaryMargret vividly remembers a day at work when she was pushing Gabby’s stroller down a hallway with skylights in the ceiling. She looked down at Gabby and was shocked to see her eyes reacting to the sunlight. “I thought, ‘Well, you’re seeing that light!’” MaryMargret says. Gabby wasn’t blind, after all.

Despite the hardships, the couple decided they wanted more children. In 2007 MaryMargret had Anthony, and the following year she had Aleena. By this time, the Williamses had stopped trudging to specialists, accepting that Gabby was never going to be fixed. “At some point we just decided,” John recalls, “it’s time to make our peace.”

Mortal questions

When Walker began his scientific career, he focused on the female reproductive system as a model of “pure ageing”: a woman’s ovaries, even in the absence of any disease, slowly but inevitably slide into the throes of menopause. His studies investigated how food, light, hormones and brain chemicals influence fertility in rats. But academic science is slow. He hadn’t cured ageing by his 40th birthday, nor by his 50th or 60th. His life’s work was tangential, at best, to answering the question of why we’re mortal, and he wasn’t happy about it. He was running out of time.

So he went back to the drawing board. As he describes in his book, Why We Age, Walker began a series of thought experiments to reflect on what was known and not known about ageing.

Ageing is usually defined as the slow accumulation of damage in our cells, organs and tissues, ultimately causing the physical transformations that we all recognise in elderly people. Jaws shrink and gums recede. Skin slacks. Bones brittle, cartilage thins and joints swell. Arteries stiffen and clog. Hair greys. Vision dims. Memory fades. The notion that ageing is a natural, inevitable part of life is so fixed in our culture that we rarely question it. But biologists have been questioning it for a long time.

It’s a harsh world out there, and even young cells are vulnerable. It’s like buying a new car: the engine runs perfectly but is still at risk of getting smashed on the highway. Our young cells survive only because they have a slew of trusty mechanics on call. Take DNA, which provides the all-important instructions for making proteins. Every time a cell divides, it makes a near-perfect copy of its three-billion-letter code. Copying mistakes happen frequently along the way, but we have specialised repair enzymes to fix them, like an automatic spellcheck. Proteins, too, are ever vulnerable. If it gets too hot, they twist into deviant shapes that keep them from working. But here again, we have a fixer: so-called ‘heat shock proteins’ that rush to the aid of their misfolded brethren. Our bodies are also regularly exposed to environmental poisons, such as the reactive and unstable ‘free radical’ molecules that come from the oxidisation of the air we breathe. Happily, our tissues are stocked with antioxidants and vitamins that neutralise this chemical damage. Time and time again, our cellular mechanics come to the rescue.

Which leads to the biologists’ longstanding conundrum: if our bodies are so well tuned, why, then, does everything eventually go to hell?

One theory is that it all boils down to the pressures of evolution. Humans reproduce early in life, well before ageing rears its ugly head. All of the repair mechanisms that are important in youth – the DNA editors, the heat shock proteins, the antioxidants – help the young survive until reproduction, and are therefore passed down to future generations. But problems that show up after we’re done reproducing cannot be weeded out by evolution. Hence, ageing.

Most scientists say that ageing is not caused by any one culprit but by the breakdown of many systems at once. Our sturdy DNA mechanics become less effective with age, meaning that our genetic code sees a gradual increase in mutations. Telomeres, the sequences of DNA that act as protective caps on the ends of our chromosomes, get shorter every year. Epigenetic messages, which help turn genes on and off, get corrupted with time. Heat shock proteins run down, leading to tangled protein clumps that muck up the smooth workings of a cell. Faced with all of this damage, our cells try to adjust by changing the way they metabolise nutrients and store energy. To ward off cancer, they even know how to shut themselves down. But eventually cells stop dividing and stop communicating with each other, triggering the decline we see from the outside.

Scientists trying to slow the ageing process tend to focus on one of these interconnected pathways at a time. Some researchers have shown, for example, that mice on restricted-calorie diets live longer than normal. Other labs have reported that giving mice rapamycin, a drug that targets an important cell-growth pathway, boosts their lifespan. Still other groups are investigating substances that restore telomeres, DNA repair enzymes and heat shock proteins.

During his thought experiments, Walker wondered whether all of these scientists were fixating on the wrong thing. What if all of these various types of cellular damage were the consequences of ageing, but not the root cause of it? He came up with an alternative theory: that ageing is the unavoidable fallout of our development.

The idea sat on the back burner of Walker’s mind until the evening of 23 October 2005. He was working in his home office when his wife called out to him to join her in the family room. She knew he would want to see what was on TV: an episode of Dateline about a young girl who seemed to be “frozen in time”. Walker watched the show and couldn’t believe what he was seeing. Brooke Greenberg was 12 years old, but just 13 pounds (6kg) and 27 inches (69cm) long. Her doctors had never seen anything like her condition, and suspected the cause was a random genetic mutation. “She literally is the Fountain of Youth,” her father, Howard Greenberg, said.

Walker was immediately intrigued. He had heard of other genetic diseases, such as progeria and Werner syndrome, which cause premature ageing in children and adults respectively. But this girl seemed to be different. She had a genetic disease that stopped her development and with it, Walker suspected, the ageing process. Brooke Greenberg, in other words, could help him test his theory.

Uneven growth

Brooke was born a few weeks premature, with many birth defects. Her paediatrician labeled her with Syndrome X, not knowing what else to call it.

After watching the show, Walker tracked down Howard Greenberg’s address. Two weeks went by before Walker heard back, and after much discussion he was allowed to test Brooke. He was sent Brooke’s medical records as well as blood samples for genetic testing. In 2009, his team published a brief report describing her case.

Walker’s analysis found that Brooke’s organs and tissues were developing at different rates. Her mental age, according to standardised tests, was between one and eight months. Her teeth appeared to be eight years old; her bones, 10 years. She had lost all of her baby fat, and her hair and nails grew normally, but she had not reached puberty. Her telomeres were considerably shorter than those of healthy teenagers, suggesting that her cells were ageing at an accelerated rate.

All of this was evidence of what Walker dubbed “developmental disorganisation”. Brooke’s body seemed to be developing not as a coordinated unit, he wrote, but rather as a collection of individual, out-of-sync parts. “She is not simply ‘frozen in time’,” Walker wrote. “Her development is continuing, albeit in a disorganised fashion.”

The big question remained: why was Brooke developmentally disorganised? It wasn’t nutritional and it wasn’t hormonal. The answer had to be in her genes. Walker suspected that she carried a glitch in a gene (or a set of genes, or some kind of complex genetic programme) that directed healthy development. There must be some mechanism, after all, that allows us to develop from a single cell to a system of trillions of cells. This genetic programme, Walker reasoned, would have two main functions: it would initiate and drive dramatic changes throughout the organism, and it would also coordinate these changes into a cohesive unit.

Ageing, he thought, comes about because this developmental programme, this constant change, never turns off. From birth until puberty, change is crucial: we need it to grow and mature. After we’ve matured, however, our adult bodies don’t need change, but rather maintenance. “If you’ve built the perfect house, you would want to stop adding bricks at a certain point,” Walker says. “When you’ve built a perfect body, you’d want to stop screwing around with it. But that’s not how evolution works.” Because natural selection cannot influence traits that show up after we have passed on our genes, we never evolved a “stop switch” for development, Walker says. So we keep adding bricks to the house. At first this doesn’t cause much damage – a sagging roof here, a broken window there. But eventually the foundation can’t sustain the additions, and the house topples. This, Walker says, is ageing.

Brooke was special because she seemed to have been born with a stop switch. But finding the genetic culprit turned out to be difficult. Walker would need to sequence Brooke’s entire genome, letter by letter.

That never happened. Much to Walker’s chagrin, Howard Greenberg abruptly severed their relationship. The Greenbergs have not publicly explained why they ended their collaboration with Walker, and declined to comment for this article.

Second chance

In August 2009, MaryMargret Williams saw a photo of Brooke on the cover of People magazine, just below the headline “Heartbreaking mystery: The 16-year-old baby”. She thought Brooke sounded a lot like Gabby, so she contacted Walker.

After reviewing Gabby’s details, Walker filled her in on his theory. Testing Gabby’s genes, he said, could help him in his mission to end age-related disease – and maybe even ageing itself.

This didn’t sit well with the Williamses. John, who works for the Montana Department of Corrections, often interacts with people facing the reality of our finite time on Earth. “If you’re spending the rest of your life in prison, you know, it makes you think about the mortality of life,” he says. What’s important is not how long you live, but rather what you do with the life you’re given. MaryMargret feels the same way. For years she has worked in a local dermatology office. She knows all too well the cultural pressures to stay young, and wishes more people would embrace the inevitability of getting older. “You get wrinkles, you get old, that’s part of the process,” she says.

But Walker’s research also had its upside. First and foremost, it could reveal whether the other Williams children were at risk of passing on Gabby’s condition.

For several months, John and MaryMargret hashed out the pros and cons. They were under no illusion that the fruits of Walker’s research would change Gabby’s condition, nor would they want it to. But they did want to know why. “What happened, genetically, to make her who she is?” John says. And more importantly: “Is there a bigger meaning for it?”

John and MaryMargret firmly believe that God gave them Gabby for a reason. Walker’s research offered them a comforting one: to help treat Alzheimer’s and other age-related diseases. “Is there a small piece that Gabby could present to help people solve these awful diseases?” John asks. “Thinking about it, it’s like, no, that’s for other people, that’s not for us.” But then he thinks back to the day Gabby was born. “I was in that delivery room, thinking the same thing – this happens to other people, not us.”

Still not entirely certain, the Williamses went ahead with the research.

Amassing evidence

Walker published his theory in 2011, but he’s only the latest of many researchers to think along the same lines. “Theories relating developmental processes to ageing have been around for a very long time, but have been somewhat under the radar for most researchers,” says Joao Pedro de Magalhaes, a biologist at the University of Liverpool. In 1932, for example, English zoologist George Parker Bidder suggested that mammals have some kind of biological “regulator” that stops growth after the animal reaches a specific size. Ageing, Bidder thought, was the continued action of this regulator after growth was done.

Subsequent studies showed that Bidder wasn’t quite right; there are lots of marine organisms, for example, that never stop growing but age anyway. Still, his fundamental idea of a developmental programme leading to ageing has persisted.

For several years, Stuart Kim’s group at Stanford University has been comparing which genes are expressed in young and old nematode worms. It turns out that some genes involved in ageing also help drive development in youth.

Kim suggested that the root cause of ageing is the “drift”, or mistiming, of developmental pathways during the ageing process, rather than an accumulation of cellular damage.

Other groups have since found similar patterns in mice and primates. One study, for example, found that many genes turned on in the brains of old monkeys and humans are the same as those expressed in young brains, suggesting that ageing and development are controlled by some of the same gene networks.

Perhaps most provocative of all, some studies of worms have shown that shutting down essential development genes in adults significantly prolongs life. “We’ve found quite a lot of genes in which this happened – several dozen,” de Magalhaes says.

Nobody knows whether the same sort of developmental-programme genes exist in people. But say that they do exist. If someone was born with a mutation that completely destroyed this programme, Walker reasoned, that person would undoubtedly die. But if a mutation only partially destroyed it, it might lead to a condition like what he saw in Brooke Greenberg or Gabby Williams. So if Walker could identify the genetic cause of Syndrome X, then he might also have a driver of the ageing process in the rest of us.

And if he found that, then could it lead to treatments that slow – or even end – ageing? “There’s no doubt about it,” he says.

Public stage

After agreeing to participate in Walker’s research, the Williamses, just like the Greenbergs before them, became famous. In January 2011, when Gabby was six, the television channel TLC featured her on a one-hour documentary. The Williams family also appeared on Japanese television and in dozens of newspaper and magazine articles.

Other than becoming a local celebrity, though, Gabby’s everyday life hasn’t changed much since getting involved in Walker’s research. She spends her days surrounded by her large family. She’ll usually lie on the floor, or in one of several cushions designed to keep her spine from twisting into a C shape. She makes noises that would make an outsider worry: grunting, gasping for air, grinding her teeth. Her siblings think nothing of it. They play boisterously in the same room, somehow always careful not to crash into her. Once a week, a teacher comes to the house to work with Gabby. She uses sounds and shapes on an iPad to try to teach cause and effect. When Gabby turned nine, last October, the family made her a birthday cake and had a party, just as they always do. Most of her gifts were blankets, stuffed animals and clothes, just as they are every year. Her aunt Jennie gave her make-up.

Walker teamed up with geneticists at Duke University and screened the genomes of Gabby, John and MaryMargret. This test looked at the exome, the 2% of the genome that codes for proteins. From this comparison, the researchers could tell that Gabby did not inherit any exome mutations from her parents – meaning that it wasn’t likely that her siblings would be able to pass on the condition to their kids. “It was a huge relief – huge,” MaryMargret says.

Still, the exome screening didn’t give any clues as to what was behind Gabby’s disease. Gabby carries several mutations in her exome, but none in a gene that would make sense of her condition. All of us have mutations littering our genomes. So it’s impossible to know, in any single individual, whether a particular mutation is harmful or benign – unless you can compare two people with the same condition.
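
The comparison described above boils down to set logic: any variant present in Gabby's exome but absent from both parents is a candidate de novo change, while anything shared with a parent was inherited. Here is a minimal sketch of that logic; the variant labels are invented placeholders, not data from the study.

```python
# Toy illustration of the trio comparison: which of the child's exome variants
# appear in neither parent (candidate de novo changes)? The variant labels
# below are invented placeholders, not real data.

child_variants  = {"chr1:1042G>A", "chr7:5531C>T", "chrX:2208A>G"}
mother_variants = {"chr1:1042G>A", "chr12:889T>C"}
father_variants = {"chr7:5531C>T", "chr3:7710G>T"}

inherited = child_variants & (mother_variants | father_variants)
de_novo = child_variants - mother_variants - father_variants

print("Inherited from a parent:", sorted(inherited))
print("Candidate de novo:", sorted(de_novo))
```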

All girls

Luckily for him, Walker’s continued presence in the media has led him to two other young girls who he believes have the same syndrome. One of them, Mackenzee Wittke, of Alberta, Canada, is now five years old, with long and skinny limbs, just like Gabby. “We have basically been stuck in a time warp,” says her mother, Kim Wittke. The fact that all of these possible Syndrome X cases are girls is intriguing – it could mean that the crucial mutation is on their X chromosome. Or it could just be a coincidence.

Walker is working with a commercial outfit in California to compare all three girls’ entire genome sequences – the exome plus the other 98% of DNA code, which is thought to be responsible for regulating the expression of protein-coding genes.

For his theory, Walker says, “this is do or die – we’re going to do every single bit of DNA in these girls. If we find a mutation that’s common to them all, that would be very exciting.”

But that seems like a very big if.

Most researchers agree that finding out the genes behind Syndrome X is a worthwhile scientific endeavour, as these genes will no doubt be relevant to our understanding of development. They’re far less convinced, though, that the girls’ condition has anything to do with ageing. “It’s a tenuous interpretation to think that this is going to be relevant to ageing,” says David Gems, a geneticist at University College London. It’s not likely that these girls will even make it to adulthood, he says, let alone old age.

It’s also not at all clear that these girls have the same condition. Even if they do, and even if Walker and his collaborators discover the genetic cause, there would still be a steep hill to climb. The researchers would need to silence the same gene or genes in laboratory mice, which typically have a lifespan of two or three years. “If that animal lives to be 10, then we’ll know we’re on the right track,” Walker says. Then they’d have to find a way to achieve the same genetic silencing in people, whether with a drug or some kind of gene therapy. And then they’d have to begin long and expensive clinical trials to make sure that the treatment was safe and effective. Science is often too slow, and life too fast.

End of life

On 24 October 2013, Brooke passed away. She was 20 years old. MaryMargret heard about it when a friend called after reading it in a magazine. The news hit her hard. “Even though we’ve never met the family, they’ve just been such a part of our world,” she says.

MaryMargret doesn’t see Brooke as a template for Gabby – it’s not as if she now believes that she only has 11 years left with her daughter. But she can empathise with the pain the Greenbergs must be feeling. “It just makes me feel so sad for them, knowing that there’s a lot that goes into a child like that,” she says. “You’re prepared for them to die, but when it finally happens, you can just imagine the hurt.”

Today Gabby is doing well. MaryMargret and John are no longer planning her funeral. Instead, they’re beginning to think about what would happen if Gabby outlives them. (Sophia has offered to take care of her sister.) John turned 50 this year, and MaryMargret will be 41. If there were a pill to end ageing, they say they’d have no interest in it. Quite the contrary: they look forward to getting older, because it means experiencing the new joys, new pains and new ways to grow that come along with that stage of life.

Richard Walker, of course, has a fundamentally different view of growing old. When asked why he’s so tormented by it, he says it stems from childhood, when he watched his grandparents physically and psychologically deteriorate. “There was nothing charming to me about sedentary old people, rocking chairs, hot houses with Victorian trappings,” he says. At his grandparents’ funerals, he couldn’t help but notice that they didn’t look much different in death than they did at the end of life. And that was heartbreaking. “To say I love life is an understatement,” he says. “Life is the most beautiful and magic of all things.”

If his hypothesis is correct – who knows? – it might one day help prevent disease and modestly extend life for millions of people. Walker is all too aware, though, that it would come too late for him. As he writes in his book: “I feel a bit like Moses who, after wandering in the desert for most years of his life, was allowed to gaze upon the Promised Land but not granted entrance into it.”

Read the entire story here.

Story courtesy of BBC and Mosaic under Creative Commons License.

Image: DNA structure. Courtesy of Wikipedia.

The Cosmological Axis of Evil

WMAP_temp-anisotropy

The cosmos seems remarkably uniform — look in any direction with the naked eye or the most powerful telescopes and you’ll see much the same as in any other direction. Yet, on a grand scale, our universe shows some peculiar fluctuations that have cosmologists scratching their heads. The temperature of the universe, as described by the cosmic microwave background (CMB), shows some interesting fluctuations in specific, vast regions. It is the distribution of these temperature variations that shows what seem to be non-random patterns. Cosmologists have dubbed the pattern the “axis of evil”.

From ars technica:

The Universe is incredibly regular. The variation of the cosmos’ temperature across the entire sky is tiny: a few millionths of a degree, no matter which direction you look. Yet the same light from the very early cosmos that reveals the Universe’s evenness also tells astronomers a great deal about the conditions that gave rise to irregularities like stars, galaxies, and (incidentally) us.

That light is the cosmic microwave background, and it provides some of the best knowledge we have about the structure, content, and history of the Universe. But it also contains a few mysteries: on very large scales, the cosmos seems to have a certain lopsidedness. That slight asymmetry is reflected in temperature fluctuations much larger than any galaxy, aligned on the sky in a pattern facetiously dubbed “the axis of evil.”

The lopsidedness is real, but cosmologists are divided over whether it reveals anything meaningful about the fundamental laws of physics. The fluctuations are sufficiently small that they could arise from random chance. We have just one observable Universe, but nobody sensible believes we can see all of it. With a sufficiently large cosmos beyond the reach of our telescopes, the rest of the Universe may balance the oddity that we can see, making it a minor, local variation.

However, if the asymmetry can’t be explained away so simply, it could indicate that some new physical mechanisms were at work in the early history of the Universe. As Amanda Yoho, a graduate student in cosmology at Case Western Reserve University, told Ars, “I think the alignments, in conjunction with all of the other large angle anomalies, must point to something we don’t know, whether that be new fundamental physics, unknown astrophysical or cosmological sources, or something else.”

Over the centuries, astronomers have provided increasing evidence that Earth, the Solar System, and the Milky Way don’t occupy a special position in the cosmos. Not only are we not at the center of existence—much less the corrupt sinkhole surrounded by the pure crystal heavens, as in early geocentric Christian theology—the Universe has no center and no edge.

In cosmology, that’s elevated to a principle. The Universe is isotropic, meaning it’s (roughly) the same in every direction. The cosmic microwave background (CMB) is the strongest evidence for the isotropic principle: the spectrum of the light reaching Earth from every direction indicates that it was emitted by matter at almost exactly the same temperature.

The Big Bang model explains why. In the early years of the Universe’s history, matter was very dense and hot, forming an opaque plasma of electrons, protons, and helium nuclei. The expansion of space-time thinned the plasma out until it cooled enough that stable atoms could form. That event, which ended roughly 380,000 years after the Big Bang, is known as recombination. The immediate side effect was to make the Universe transparent and liberate vast numbers of photons, most of which have traveled through space unmolested ever since.

We observe the relics of recombination in the form of the CMB. The temperature of the Universe today is about 2.73 degrees above absolute zero in every part of the sky. The lack of variation makes the cosmos nearly as close to a perfect thermal body as possible. However, measurements show anisotropies—tiny fluctuations in temperature, roughly 10 millionths of a degree or less. These irregularities later gave rise to areas where mass gathered. A perfectly featureless, isotropic cosmos would have no stars, galaxies, or planets full of humans.

To measure the physical size of these anisotropies, researchers turn the whole-sky map of temperature fluctuations into something called a power spectrum. That’s akin to the process of taking light from a galaxy and finding the component wavelengths (colors) that make it up. The power spectrum encompasses fluctuations over the whole sky down to very small variations in temperature. (For those with some higher mathematics knowledge, this process involves decomposing the temperature fluctuations in spherical harmonics.)
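
For readers who want to see what that decomposition looks like in practice, here is a minimal sketch using the healpy library (a common tool for whole-sky maps; choosing it here is my assumption, not something the article specifies). It builds a random sky from a toy input spectrum and then recovers the angular power spectrum, the C_l values cosmologists plot.

```python
import numpy as np
import healpy as hp

# Build a random whole-sky map from an assumed toy spectrum, then decompose it
# back into spherical harmonics to recover the angular power spectrum C_l.
# The input spectrum is arbitrary; this only shows the machinery.
lmax = 64
input_cl = 1.0 / (np.arange(lmax + 1) + 1.0) ** 2

sky_map = hp.synfast(input_cl, nside=64)        # random temperature map with that spectrum
recovered_cl = hp.anafast(sky_map, lmax=lmax)   # spherical-harmonic decomposition

# Low multipoles (small l) describe fluctuations spanning large fractions of
# the sky, the regime where the alignments discussed below appear.
print(recovered_cl[:5])
```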

Smaller details in the fluctuations tell cosmologists the relative amounts of ordinary matter, dark matter, and dark energy. However, some of the largest fluctuations—covering one-fourth, one-eighth, and one-sixteenth of the sky—are bigger than any structure in the Universe, therefore representing temperature variations across the whole sky.

Those large-scale fluctuations in the power spectrum are where something weird happens. The temperature variations are both larger than expected and aligned with each other to a high degree. That’s at odds with theoretical expectations: the CMB anisotropies should be randomly oriented, not aligned. In fact, the smaller-scale variations are random, which makes the deviation at larger scales that much stranger.

Kate Land and Joao Magueijo jokingly dubbed the strange alignment “the axis of evil” in a 2005 paper (freely available on the ArXiv), riffing on an infamous statement by then-US President George W. Bush. Their findings were based on data from an earlier observatory, the Wilkinson Microwave Anisotropy Probe (WMAP), but the follow-up Planck mission found similar results. There’s no question that the “axis of evil” is there; cosmologists just have to figure out what to think about it.

The task of interpretation is complicated by what’s called “cosmic variance,” or the fact that our observable Universe is just one region in a larger Universe. Random chance dictates that some pockets of the whole Universe will have larger or smaller fluctuations than others, and those fluctuations might even be aligned entirely by coincidence.

In other words, the “axis of evil” could very well be an illusion, a pattern that wouldn’t seem amiss if we could see more of the Universe. However, cosmic variance also predicts how big those local, random deviations should be—and the fluctuations in the CMB data are larger. They’re not so large as to rule out the possibility of a local variation entirely—they’re above-average height—but cosmologists can’t easily dismiss the possibility that something else is going on.

Read the entire article here.

Image courtesy of Hinshaw et al., WMAP paper.

A Godless Universe: Mind or Mathematics

In his science column for the NYT, George Johnson reviews several recent books by noted thinkers who, for different reasons, believe science needs to expand its borders. Philosopher Thomas Nagel and physicist Max Tegmark both agree that our current understanding of the universe is rather limited and that science needs to turn to new or alternative explanations. Nagel, still an atheist, suggests in his book Mind and Cosmos that the mind somehow needs to be considered a fundamental structure of the universe, while Tegmark, in his book Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, suggests that mathematics is the core, irreducible framework of the cosmos. Two radically different ideas — yet both are correct in one respect: we still know so very little about ourselves and our surroundings.

From the NYT:

Though he probably didn’t intend anything so jarring, Nicolaus Copernicus, in a 16th-century treatise, gave rise to the idea that human beings do not occupy a special place in the heavens. Nearly 500 years after replacing the Earth with the sun as the center of the cosmic swirl, we’ve come to see ourselves as just another species on a planet orbiting a star in the boondocks of a galaxy in the universe we call home. And this may be just one of many universes — what cosmologists, some more skeptically than others, have named the multiverse.

Despite the long string of demotions, we remain confident, out here on the edge of nowhere, that our band of primates has what it takes to figure out the cosmos — what the writer Timothy Ferris called “the whole shebang.” New particles may yet be discovered, and even new laws. But it is almost taken for granted that everything from physics to biology, including the mind, ultimately comes down to four fundamental concepts: matter and energy interacting in an arena of space and time.

There are skeptics who suspect we may be missing a crucial piece of the puzzle. Recently, I’ve been struck by two books exploring that possibility in very different ways. There is no reason why, in this particular century, Homo sapiens should have gathered all the pieces needed for a theory of everything. In displacing humanity from a privileged position, the Copernican principle applies not just to where we are in space but to when we are in time.

Since it was published in 2012, “Mind and Cosmos,” by the philosopher Thomas Nagel, is the book that has caused the most consternation. With his taunting subtitle — “Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False” — Dr. Nagel was rejecting the idea that there was nothing more to the universe than matter and physical forces. He also doubted that the laws of evolution, as currently conceived, could have produced something as remarkable as sentient life. That idea borders on anathema, and the book quickly met with a blistering counterattack. Steven Pinker, a Harvard psychologist, denounced it as “the shoddy reasoning of a once-great thinker.”

What makes “Mind and Cosmos” worth reading is that Dr. Nagel is an atheist, who rejects the creationist idea of an intelligent designer. The answers, he believes, may still be found through science, but only by expanding it further than it may be willing to go.

“Humans are addicted to the hope for a final reckoning,” he wrote, “but intellectual humility requires that we resist the temptation to assume that the tools of the kind we now have are in principle sufficient to understand the universe as a whole.”

Dr. Nagel finds it astonishing that the human brain — this biological organ that evolved on the third rock from the sun — has developed a science and a mathematics so in tune with the cosmos that it can predict and explain so many things.

Neuroscientists assume that these mental powers somehow emerge from the electrical signaling of neurons — the circuitry of the brain. But no one has come close to explaining how that occurs.

That, Dr. Nagel proposes, might require another revolution: showing that mind, along with matter and energy, is “a fundamental principle of nature” — and that we live in a universe primed “to generate beings capable of comprehending it.” Rather than being a blind series of random mutations and adaptations, evolution would have a direction, maybe even a purpose.

“Above all,” he wrote, “I would like to extend the boundaries of what is not regarded as unthinkable, in light of how little we really understand about the world.”

Dr. Nagel is not alone in entertaining such ideas. While rejecting anything mystical, the biologist Stuart Kauffman has suggested that Darwinian theory must somehow be expanded to explain the emergence of complex, intelligent creatures. And David J. Chalmers, a philosopher, has called on scientists to seriously consider “panpsychism” — the idea that some kind of consciousness, however rudimentary, pervades the stuff of the universe.

Some of this is a matter of scientific taste. It can be just as exhilarating, as Stephen Jay Gould proposed in “Wonderful Life,” to consider the conscious mind as simply a fluke, no more inevitable than the human appendix or a starfish’s five legs. But it doesn’t seem so crazy to consider alternate explanations.

Heading off in another direction, a new book by the physicist Max Tegmark suggests that a different ingredient — mathematics — needs to be admitted into science as one of nature’s irreducible parts. In fact, he believes, it may be the most fundamental of all.

In a well-known 1960 essay, the physicist Eugene Wigner marveled at “the unreasonable effectiveness of mathematics” in explaining the world. It is “something bordering on the mysterious,” he wrote, for which “there is no rational explanation.”

The best he could offer was that mathematics is “a wonderful gift which we neither understand nor deserve.”

Dr. Tegmark, in his new book, “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality,” turns the idea on its head: The reason mathematics serves as such a forceful tool is that the universe is a mathematical structure. Going beyond Pythagoras and Plato, he sets out to show how matter, energy, space and time might emerge from numbers.

Read the entire article here.