Category Archives: BigBang

Fight or Flight (or Record?)


Psychologists, social scientists and researchers of the human brain have long maintained that we have three typical responses to an existential, usually physical, threat. First, we may stand our ground to tackle and fight the threat. Second, we may turn and run from danger. Third, we may simply freeze with indecision and inaction. These responses have been studied, documented and confirmed over the decades. Further, they tend to mirror those of other animals when faced with a life-threatening situation.

But now that humans have entered the smartphone age, it appears that there is a fourth response — to film or record the threat. This may seem foolhardy and hard to believe, but quite disturbingly it’s a growing trend, especially among younger people.

From the Telegraph:

If you witnessed a violent attack on an innocent victim, would you:

a) help
b) run
c) freeze

Until now, that was the hypothetical question we all asked ourselves when reading about horrific events such as terror attacks.

What survival instinct would come most naturally? Fight or flight?

No longer. Over the last couple of years it’s become very obvious that there’s a fourth option:

d) record it all on your smartphone.

This reaction of filming traumatic events has become more prevalent in recent weeks. Last month’s terror attacks in Paris saw mobile phone footage of people being shot, photos of bodies lying in the street, and perhaps most memorably, a pregnant woman clinging onto a window ledge.

Saturday [December 5, 2015] night saw another example when a terror suspect started attacking passengers on the Tube at Leytonstone Station. Most of the horrific incident was captured on video, as people stood filming him.

One brave man, 33-year-old engineer David Pethers, tried to fight the attacker. He ended up with a cut to the neck as he tried to protect passing children. But while he was intervening, others just held up their phones.

“There were so many opportunities where someone could have grabbed him,” he told the Daily Mail. “One guy came up to me afterwards and said ‘well done, I want to shake your hand, you are the only one who did anything, I got the whole thing on film.’

“I was so angry, I nearly turned on him but I walked away. I thought, ‘Are you crazy? You are standing there filming and did nothing.’ I was really angry afterwards.”

It’s hard to disagree. Most of us know heroism is rare and admirable. We can easily understand people trying to escape and save themselves, or even freezing in the face of terror.

But deliberately doing nothing and choosing to film the whole thing? That’s a lot harder to sympathise with.

Psychotherapist Richard Reid agrees – “the sensible option would be to think about your own safety and get out, or think about helping people” – but he says it’s important we understand this new reaction.

“Because events like terror attacks are so outside our experience, people don’t fully connect with it,” he explains.

“It’s like they’re watching a film. It doesn’t occur to them they could be in danger or they could be helping. The reality only sinks in after the event. It’s a natural phenomenon. It’s not necessarily the most useful response, but we have to accept it.”

Read the entire story here.

Image courtesy of Google Search.

A Googol Years From Now

If humanity makes it through the next few years and decades without destroying itself and the planet, we can ponder the broader fate of our universal home. Assuming humanity escapes the death of our beautiful local star (in 4-5 billion years or so) and the merging of our very own Milky Way and the Andromeda galaxy (around 7-10 billion years), we’ll be toast in a googol years. Actually, we and everything else in the cosmos will be more like a cold, dark particle soup. By the way, a googol is a rather large number — 10^100. That gives us plenty of time to fix ourselves.

From Space:

Yes, the universe is dying. Get over it.

 Well, let’s back up. The universe, as defined as “everything there is, in total summation,” isn’t going anywhere anytime soon. Or ever. If the universe changes into something else far into the future, well then, that’s just more universe, isn’t it?

But all the stuff in the universe? That’s a different story. When we’re talking all that stuff, then yes, everything in the universe is dying, one miserable day at a time.

You may not realize it by looking at the night sky, but the ultimate darkness is already settling in. Stars first appeared on the cosmic stage rather early — more than 13 billion years ago; just a few hundred million years into this Great Play. But there’s only so much stuff in the universe, and only so many opportunities to make balls of it dense enough to ignite nuclear fusion, creating the stars that fight against the relentless night.

The expansion of the universe dilutes everything in it, meaning there are fewer and fewer chances to make the nuclear magic happen. And around 10 billion years ago, the expansion reached a tipping point. The matter in the cosmos was spread too thin. The engines of creation shut off. The curtain was called: the epoch of peak star formation has already passed, and we are currently living in the wind-down stage. Stars are still born all the time, but the birth rate is dropping.

At the same time, that dastardly dark energy is causing the expansion of the universe to accelerate, ripping galaxies away from each other faster than the speed of light (go ahead, say that this violates some law of physics, I dare you), drawing them out of the range of any possible contact — and eventually, visibility — with their neighbors. With the exception of the Andromeda Galaxy and a few pathetic hangers-on, no other galaxies will be visible. We’ll become very lonely in our observable patch of the universe.

The infant universe was a creature of heat and light, but the cosmos of the ancient future will be a dim, cold animal.

The only consolation is the time scale involved. You thought 14 billion years was a long time? The numbers I’m going to present are ridiculous, even with exponential notation. You can’t wrap your head around it. They’re just … big.

For starters, we have at least 2 trillion years until the last sun is born, but the smallest stars will continue to burn slow and steady for another 100 trillion years in a cosmic Children of Men. Our own sun will be long gone by then, heaving off its atmosphere within the next 5 billion years and charcoaling the Earth. Around the same time, the Milky Way and Andromeda galaxies will collide, making a sorry mess of the local system.

At the end of this 100-trillion-year “stelliferous” era, the universe will only be left with the … well, leftovers: white dwarves (some cooled to black dwarves), neutron stars and black holes. Lots of black holes.

Welcome to the Degenerate Era, a state that is as sad as it sounds. But even that isn’t the end game. Oh no, it gets worse. After countless gravitational interactions, planets will get ejected from their decaying systems and galaxies themselves will dissolve. Losing cohesion, our local patch of the universe will be a disheveled wreck of a place, with dim, dead stars scattered about randomly and black holes haunting the depths.

The early universe was a very strange place, and the late universe will be equally bizarre. Given enough time, things that seem impossible become commonplace, and objects that appear immutable … uh, mutate. Through a process called quantum tunneling, any solid object will slowly “leak” atoms, dissolving. Because of this, gone will be the white dwarves, the planets, the asteroids, the solid.

Even fundamental particles are not immune: given 10^34 years, the neutrons in neutron stars will break apart into their constituent particles. We don’t yet know if the proton is stable, but if it isn’t, it’s only got 10^40 years before it meets its end.

With enough time (and trust me, we’ve got plenty of time), the universe will consist of nothing but light particles (electrons, neutrinos and their ilk), photons and black holes. The black holes themselves will probably dissolve via Hawking Radiation, briefly illuminating the impenetrable darkness as they decay.

After 10^100 years (but who’s keeping track at this point?), nothing macroscopic remains. Just a weak soup of particles and photons, spread so thin that they hardly ever interact.

Read the entire article here.

In case you’ve forgotten, a googol is 10^100 (10 to the power of 100), or 1 followed by 100 zeros. And, yes, that’s how the company Google derived its name.
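
If you want to see all those zeros for yourself, Python’s arbitrary-precision integers make the check a two-liner:

```python
# A googol is 1 followed by 100 zeros -- 101 digits in total.
googol = 10 ** 100
print(len(str(googol)))         # -> 101
print(str(googol).count("0"))   # -> 100
```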

See, Earth is at the Center of the Cosmos

A single image of the entire universe from 2012 has been attracting lots of attention recently. Not only is it beautiful, it shows the Earth and our solar system clearly in the correct location — at the rightful center!

Some seem to be using this to claim that the roughly 2,000-year-old, geocentric view of the cosmos must be right.


Well, sorry creationists, flat-earthers, and followers of Ptolemy, this gorgeous image is a logarithmic illustration.
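
For the curious, here is a minimal sketch of the mapping such logarithmic illustrations rely on: page radius grows with the logarithm of distance, so every factor of ten in distance occupies the same ring width, and the reference point (us) lands at the center. The scaling constants are my own illustrative choices, not Budassi’s.

```python
# How a logarithmic all-sky illustration squeezes the universe into one
# disk: radius grows with the log of distance, so every factor of ten
# in distance adds the same ring width.
import math

def radius_on_page(distance_m, inner_m=1e7, ring_width_cm=1.0):
    """Page radius (cm) for an object at distance_m meters from Earth."""
    return ring_width_cm * math.log10(distance_m / inner_m)

for name, d in [("Moon", 3.8e8), ("Sun", 1.5e11),
                ("Alpha Centauri", 4.1e16), ("Andromeda", 2.4e22),
                ("CMB", 4.4e26)]:
    print(f"{name:15s} {radius_on_page(d):5.1f} cm from center")
```

Roughly ten orders of magnitude separate the Moon from Alpha Centauri, yet on the page that difference is a few centimeters. That compression, not cosmology, is what puts Earth in the middle.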

Image: Artist’s logarithmic scale conception of the observable universe with the Solar System at the center, inner and outer planets, Kuiper belt, Oort cloud, Alpha Centauri, Perseus Arm, Milky Way galaxy, Andromeda galaxy, nearby galaxies, Cosmic Web, Cosmic microwave radiation and Big Bang’s invisible plasma on the edge. Courtesy: Pablo Carlos Budassi / Wikipedia.

To Another Year

Let me put aside humanity’s destructive failings for a moment, with the onset of a New Year, to celebrate one of our most fundamental positive traits: our need to know — how things work, how and why we’re here, and if we’re alone. We are destined to explore, discover and learn more about ourselves and our surroundings. I hope and trust that 2016 will bring us yet more knowledge (and more really cool images). We are fortunate indeed.


Image: New Horizons scientists’ false-color image of Pluto. Image data collected by the spacecraft’s Ralph/MVIC color camera on July 14, 2015 from a range of 22,000 miles. Courtesy: NASA/JHUAPL/SwRI.


Image: Highest-resolution image from NASA’s New Horizons spacecraft shows huge blocks of Pluto’s water-ice crust jammed together in the informally named al-Idrisi mountains. The mountains end abruptly at the shoreline of the informally named Sputnik Planum, where the soft, nitrogen-rich ices of the plain form a nearly level surface, broken only by the fine trace work of striking, cellular boundaries. Courtesy: NASA/JHUAPL/SwRI.


Titan Close Up

You could be forgiven for thinking the image below is of Earth. Rather, it is Saturn’s largest moon, Titan, as imaged in infra-red by NASA’s Cassini spacecraft on November 13, 2015. Gorgeous.


Image: Composite image shows an infrared view of Saturn’s moon Titan from NASA’s Cassini spacecraft, acquired during the mission’s “T-114” flyby on Nov. 13, 2015. Courtesy: NASA.

Neutrinos in the News

Something’s up. Perhaps there’s hope that we may be reversing the tide of “dumbeddownness” in the stories that the media pumps through its many tubes to reach us. So, it comes as a welcome surprise to see articles about the very, very small making big news in publications like the New Yorker. Stories about neutrinos no less. Thank you, New Yorker, for dumbing us up. And, kudos to the latest Nobel laureates — Takaaki Kajita and Arthur B. McDonald — for helping us understand just a little bit more about our world.

From the New Yorker:

This week the 2015 Nobel Prize in Physics was awarded jointly to Takaaki Kajita and Arthur B. McDonald for their discovery that elementary particles called neutrinos have mass. This is, remarkably, the fourth Nobel Prize associated with the experimental measurement of neutrinos. One might wonder why we should care so much about these ghostly particles, which barely interact with normal matter.

Even though the existence of neutrinos was predicted in 1930, by Wolfgang Pauli, none were experimentally observed until 1956. That’s because neutrinos almost always pass through matter without stopping. Every second of every day, more than six trillion neutrinos stream through your body, coming directly from the fiery core of the sun—but most of them go right through our bodies, and the Earth, without interacting with the particles out of which those objects are made. In fact, on average, those neutrinos would be able to traverse more than one thousand light-years of lead before interacting with it even once.

The very fact that we can detect these ephemeral particles is a testament to human ingenuity. Because the rules of quantum mechanics are probabilistic, we know that, even though almost all neutrinos will pass right through the Earth, a few will interact with it. A big enough detector can observe such an interaction. The first detector of neutrinos from the sun was built in the nineteen-sixties, deep within a mine in South Dakota. An area of the mine was filled with a hundred thousand gallons of cleaning fluid. On average, one neutrino each day would interact with an atom of chlorine in the fluid, turning it into an atom of argon. Almost unfathomably, the physicist in charge of the detector, Raymond Davis, Jr., figured out how to detect these few atoms of argon, and, four decades later, in 2002, he was awarded the Nobel Prize in Physics for this amazing technical feat.

Because neutrinos interact so weakly, they can travel immense distances. They provide us with a window into places we would never otherwise be able to see. The neutrinos that Davis detected were emitted by nuclear reactions at the very center of the sun, escaping this incredibly dense, hot place only because they so rarely interact with other matter. We have been able to detect neutrinos emerging from the center of an exploding star more than a hundred thousand light-years away.

But neutrinos also allow us to observe the universe at its very smallest scales—far smaller than those that can be probed even at the Large Hadron Collider, in Geneva, which, three years ago, discovered the Higgs boson. It is for this reason that the Nobel Committee decided to award this year’s Nobel Prize for yet another neutrino discovery.

Read the entire story here.

PhotoMash: Climate Skeptic and Climate Science

Aptly, today’s juxtaposition of stories comes from the Washington Post. One day into the COP21 UN climate change conference in Paris, France, US House of Representatives’ science committee chair Lamar Smith is still at it. He’s a leading climate change skeptic, an avid opponent of NOAA (the National Oceanic and Atmospheric Administration) and self-styled overlord of the National Science Foundation (NSF). While Representative Smith seeks to politicize and skewer science, intimidate scientists and trample on funding for climate science research (and other types of basic science funding), our planet continues to warm.


If you’re an open-minded scientist, or just concerned about our planet, this is not good.

So, it’s rather refreshing to see Representative Smith alongside a story showing that the month of December could be another temperature record breaker — the warmest on record for the northern tier of the continental US.

Images courtesy of the Washington Post, November 30, 2015.

Man-With-Beard and Negative Frequency-Dependent Sexual Selection

[tube]6i8IER7nTfc[/tube]

Culture watchers pronounced “peak beard” around the time of the US Academy Awards in 2013. Since then male celebrities of all stripes and colors have been ditching the hairy chin for a more clean-shaven look. While I have no interest in the amount or type of stubble on George Clooney’s face, the beard/no-beard debate does raise a more fascinating issue with profound evolutionary consequences. Research shows that certain physical characteristics, including facial hair, become more appealing when they are rare. The converse is also true: certain traits are less appealing when common. Furthermore, studies of social signalling and mating preference in various animals show the same bias. So, men, if you’re trying to attract the attention of a potential mate, it’s time to think more seriously about negative frequency-dependent sexual selection and ditch the conforming hirsute hipster look for something else. Here’s an idea: just be yourself instead of following the herd. Though, I do still like Manuel’s Gallic mustache.
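
The dynamic is easy to simulate. Below is a toy replicator model of my own devising (not the face-rating methodology of the UNSW study quoted below), in which a trait’s fitness rises as it becomes rarer:

```python
# Toy model of negative frequency-dependent selection: the rarer trait
# is the fitter one, so any trait that becomes too common gets pulled
# back toward equilibrium. Parameters are illustrative only.

def next_generation(p, strength=0.5):
    """p is the bearded fraction; payoff favors whichever trait is rarer."""
    fitness_beard = 1.0 + strength * (0.5 - p)
    fitness_shave = 1.0 + strength * (p - 0.5)
    mean_fitness = p * fitness_beard + (1.0 - p) * fitness_shave
    return p * fitness_beard / mean_fitness  # standard replicator update

p = 0.9  # start at "peak beard": 90% of faces are bearded
for generation in range(30):
    p = next_generation(p)
print(f"Bearded fraction after 30 generations: {p:.3f}")  # -> ~0.500
```

Start the population at 90 percent bearded and it relaxes toward a stable mix; overshoot in the other direction and it comes back. That pull toward the rarer trait is the “advantage to rare traits” described below.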

From the BBC:

The ebb and flow of men’s beard fashions may be guided by Darwinian selection, according to a new study.

The more beards there are, the less attractive they become – giving clean-shaven men a competitive advantage, say scientists in Sydney, Australia.

When “peak beard” frequency is reached, the pendulum swings back toward lesser-bristled chins – a trend we may be witnessing now, the scientists say.

Their study has been published in the Royal Society journal Biology Letters.

In the experiment, women and men were asked to rate different faces with “four standard levels of beardedness”.

Both beards and clean-shaven faces became more appealing when they were rare.

The pattern mirrors an evolutionary phenomenon – “negative frequency-dependent sexual selection”, or to put it more simply “an advantage to rare traits”.

The bright colours of male guppies vary by this force – which is driven by females’ changing preferences.

Scientists at the University of New South Wales decided to test this hypothesis for men’s facial hair – recruiting volunteers on their Facebook site, The Sex Lab.

“Big thick beards are back with an absolute vengeance and so we thought underlying this fashion, one of the dynamics that might be important is this idea of negative frequency dependence,” said Prof Rob Brooks, one of the study’s authors.

“The idea is that perhaps people start copying the George Clooneys and the Joaquin Phoenixs and start wearing those beards, but then when more and more people get onto the bandwagon the value of being on the bandwagon diminishes, so that might be why we’ve hit ‘peak beard’.”

“Peak beard” was the climax of the trend for beards in professions not naturally associated with a bristly chin – bankers, film stars, and even footballers began sporting facial hair.

Read the entire story here.

Video courtesy of Fawlty Towers / BBC Productions.

The Curious Case of the Strange Transit Signal

Something very strange is happening over at KIC 8462852. But, it may not be an alien intelligence.


NASA’s planet-hunting Kepler space telescope found some odd changes in the luminosity of a star — KIC 8462852 — located in the constellation of Cygnus, about 1,400 light-years from Earth. In a recent paper submitted to the Monthly Notices of the Royal Astronomical Society, astronomers reported that:

“Over the duration of the Kepler mission, KIC 8462852 was observed to undergo irregularly shaped, aperiodic dips in flux down to below the 20 percent level.”

But despite several years of monitoring, astronomers have yet to come up with a feasible, natural explanation. And this has conspiracy theorists, alien hunters and SciFi enthusiasts very excited. Could it be a massive alien structure shielding the star, or is there a simpler, natural, but less amazing possibility? Occam’s razor could well prevail again, but I certainly hope not in this case.
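
To make the quoted finding concrete, here is a minimal sketch (mine, not the Planet Hunters pipeline) of what “irregularly shaped, aperiodic dips in flux” look like in code: inject two deep dips into a normalized light curve, then flag any run of points below a brightness threshold.

```python
# Flag aperiodic dips in a normalized light curve (baseline flux = 1.0).
# Thresholds and dip depths are illustrative, not Kepler's actual data.
import numpy as np

def find_dips(flux, threshold=0.90):
    """Return (start, end) index pairs of runs where flux < threshold."""
    below = np.concatenate(([False], flux < threshold, [False]))
    edges = np.flatnonzero(np.diff(below.astype(int)))
    return [(int(s), int(e)) for s, e in zip(edges[::2], edges[1::2])]

rng = np.random.default_rng(0)
flux = 1.0 + rng.normal(0.0, 0.002, 2000)  # quiet star with 0.2% noise
flux[400:420] -= 0.16                      # injected 16% dip
flux[1300:1360] -= 0.21                    # injected 21% dip

print(find_dips(flux))  # -> [(400, 420), (1300, 1360)]
```

A transiting planet would produce shallow (typically well under one percent), strictly periodic dips; dips of 16-21 percent at irregular intervals are what made KIC 8462852 so strange.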

From Wired:

Last week, astronomers—amateur and pro—got excited about some strange results from the Kepler Space Telescope, the NASA observatory tasked with searching for Earth-like planets. As those planets orbit their own distant suns, periodically blocking light from Kepler’s view, the telescope documents the flickers. But over the last several years, it has picked up a strange pattern of blips from one star in particular, KIC 8462852.

Light from that star dramatically plunges in irregular intervals—not the consistent pattern you’d expect from an orbiting planet. But what could possibly cause such a thing? Gotta be aliens, right? Clearly someone—something—has assembled a megastructure around its sun, like that hollow Celestial head in Guardians of the Galaxy. Or maybe it’s a solar array, collecting energy-giving radiation and preventing light from reaching NASA’s telescope.

This, of course, is almost certainly poppycock. When you’re searching the vast expanse of space, lots of things look like they could be signs of extraterrestrial life. Astronomical observers are constantly looking for tiny glimmers of information in the mess of noise that streams through space toward Earth, and often, things that at first look like signals end up being mirages. This has all happened before; it will all happen again. For example:

Pulsars

In 1967, astronomer Jocelyn Bell was monitoring signals from the Mullard Radio Astronomy Observatory, trying to analyze the behavior of quasars, energy-spewing regions surrounding supermassive black holes within distant galaxies. What she found, though, was a series of regular pulses, always from the same part of the sky, that she labeled LGM-1: Little Green Men. Soon, though, she found similar signals coming from another part of the sky, and realized that she wasn’t seeing messages from two different alien civilizations: It was radiation from a spinning, magnetized neutron star—the first measured pulsar.

Sparks at Parkes

In 1998, astronomers at the 64-meter Parkes radio telescope in Australia started noticing mysterious radio signals called perytons—unexplained, millisecond-long bursts. The researchers there didn’t immediately cry alien, though; they could tell that the radio signals were terrestrial in origin, because they showed up across the entire spectrum monitored by the telescope. They didn’t know until this year, however, exactly where those emissions came from: a microwave oven on the observatory’s campus, which released a short, powerful radio signal when staffers opened its door in the middle of heating.

Read the entire story here.

Image: Flux time series for KIC 8462852 showing different portions of the 4-year Kepler observations. Courtesy: T. S. Boyajian et al., Planet Hunters X. KIC 8462852 – Where’s the flux?


MondayMap: Our Beautiful Blue Home

OK, OK, I cheated a little this week. I don’t have a map story.

But I couldn’t resist posting the geographic-related news of NASA’s new website. Each day, the agency will post a handful of images of our gorgeous home, as seen from the DSCOVR spacecraft. DSCOVR is parked at the L1 Lagrange point, about 1 million miles from Earth and 92 million miles from the Sun, where the gravitational pulls of the Sun and Earth combine to keep the spacecraft orbiting in lockstep with our planet. It’s a wonderful vantage point to peer at our beautiful blue planet.
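
If you want to convince yourself that the balance point really does sit about a million miles out, the textbook L1 condition can be solved numerically in a few lines. This is a back-of-envelope sketch of my own, not anything from NASA: a spacecraft at L1 circles the Sun with Earth’s angular velocity, solar gravity pulling it in, Earth’s gravity pulling it back, and the difference supplying exactly the needed centripetal force.

```python
# Solve for the Sun-Earth L1 distance r (measured from Earth toward the
# Sun) from the balance condition:
#   G*Ms/(R-r)^2 - G*Me/r^2 = (R-r)*w^2,   with w^2 = G*(Ms+Me)/R^3
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
Ms = 1.989e30   # mass of the Sun, kg
Me = 5.972e24   # mass of the Earth, kg
R = 1.496e11    # Earth-Sun distance (1 AU), m
w2 = G * (Ms + Me) / R**3

def residual(r):
    return G * Ms / (R - r)**2 - G * Me / r**2 - (R - r) * w2

lo, hi = 1e8, 1e10          # bracket: 0.1 to 10 million km from Earth
for _ in range(100):        # plain bisection
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(f"L1 sits ~{lo / 1609.34 / 1e6:.2f} million miles from Earth")
# -> about 0.93 million miles (~1.5 million km)
```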


You can check out NASA’s new website here.

Image: Earth as imaged from DSCOVR on October 19, 2015. Courtesy of NASA, NOAA and the U.S. Air Force.

The Emperor and/is the Butterfly

In an earlier post I touched on the notion proposed by some cosmologists that our entire universe is some kind of highly advanced simulation. The hypothesis is that perhaps we are merely information elements within a vast mathematical fabrication, playthings of a much superior consciousness. Some draw upon parallels to The Matrix movie franchise.

Follow some of the story and video interviews here to learn more of this fascinating and somewhat unsettling idea. More unsettling still: did our overlord programmers leave a backdoor?

[tube]NEokFnAmmFE[/tube]

Video: David Brin – Could Our Universe Be a Fake? Courtesy of Closer to Truth.

Goodbye Poppy. Hello Narco-Yeast

Bioengineers have been successfully encoding and implanting custom genes into viruses, bacteria and yeast for a while now. These new genes usually cause the organisms to do something different, such as digest industrial waste, kill malignant cells and manufacture useful chemicals.

So, it should come as no surprise to see the advent — only in the laboratory at the moment — of yeast capable of producing narcotics. There seems to be no end to our inventiveness.

Personally, I’m waiting for a bacterium that can synthesize Nutella and a fungus that can construct corporate PowerPoint presentations.

From the NYT:

In a widely expected advance that has opened a fierce debate about “home-brewed heroin,” scientists at Stanford have created strains of yeast that can produce narcotic drugs.

Until now, these drugs — known as opioids — have been derived only from the opium poppy. But the Stanford lab is one of several where researchers have been trying to find yeast-based alternatives. Their work is closely followed by pharmaceutical companies and by the Drug Enforcement Administration and Federal Bureau of Investigation.

Advocates of the rapidly advancing field of bioengineering say it promises to make the creation of important chemicals — in this case painkillers and cough suppressants — cheaper and more predictable than using poppies.

In one major advance more than a decade ago scientists in Berkeley added multiple genes to yeast until it produced a precursor to artemisinin, the most effective modern malaria drug, which previously had to be grown in sweet wormwood shrubs. Much of the world’s artemisinin is now produced in bioengineered yeast.

But some experts fear the technology will be more useful to drug traffickers than to pharmaceutical companies. Legitimate drug makers already have steady supplies of cheap raw materials from legal poppy fields in Turkey, India, Australia, France and elsewhere.

For now, both scientists and law-enforcement officials agree, it will be years before heroin can be grown in yeast. The new Stanford strain, described Thursday in the journal Science, would need to be 100,000 times as efficient in order to match the yield of poppies.

It would take 4,400 gallons of yeast to produce the amount of hydrocodone in a single Vicodin tablet, said Christina D. Smolke, the leader of the Stanford bioengineering team.

For now, she said, anyone looking for opioids “could buy poppy seeds from the grocery store and get higher concentrations.” But the technology is advancing so rapidly that it may match the efficiency of poppy farming within two to three years, Dr. Smolke added.
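
The arithmetic behind those quoted figures is worth a moment (my calculation, not the researchers’): improve the strain 100,000-fold and 4,400 gallons per tablet shrinks to about a coffee mug’s worth, which is why regulators are watching.

```python
# Back-of-envelope: culture volume per tablet if the Stanford strain
# improved 100,000-fold, using the figures quoted above.
GALLON_TO_ML = 3785.41
gallons_per_tablet_now = 4400
efficiency_gain = 100_000

ml_per_tablet_future = gallons_per_tablet_now * GALLON_TO_ML / efficiency_gain
print(f"~{ml_per_tablet_future:.0f} mL of yeast culture per tablet")  # ~167 mL
```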

Read the story here.

Image: Saccharomyces cerevisiae cells in DIC microscopy. Public Domain.

Crispr – Designer DNA

The world welcomed basic genetic engineering in the mid-1970s, when biotech pioneers Herbert Boyer and Stanley Cohen transferred DNA from one organism to another (bacteria). In so doing they created the first genetically modified organism (GMO). A mere forty years later we now have extremely powerful and accessible (cheap) biochemical tools for tinkering with the molecules of heredity. One of these tools, known as Crispr-Cas9, makes it easy and fast to move any genes around, within and across any species.

The technique promises immense progress in the fight against inherited illness, cancer and viral infection. It also opens the door to untold manipulation of DNA in lower organisms and plants to develop an infection-resistant and faster-growing food supply, and to reimagine a whole host of biochemical and industrial processes (such as ethanol production).

Yet as is the case with many technological advances that hold great promise, tremendous peril lies ahead from this next revolution. Our bioengineering prowess has yet to be matched with a sound and pervasive ethical framework. Can humans reach a consensus on how to shape, focus and limit the application of such techniques? And, equally importantly, can we enforce these bioethical constraints before it’s too late to “uninvent” designer babies and bioweapons?

From Wired:

Spiny grass and scraggly pines creep amid the arts-and-crafts buildings of the Asilomar Conference Grounds, 100 acres of dune where California’s Monterey Peninsula hammerheads into the Pacific. It’s a rugged landscape, designed to inspire people to contemplate their evolving place on Earth. So it was natural that 140 scientists gathered here in 1975 for an unprecedented conference.

They were worried about what people called “recombinant DNA,” the manipulation of the source code of life. It had been just 22 years since James Watson, Francis Crick, and Rosalind Franklin described what DNA was—deoxyribonucleic acid, four different structures called bases stuck to a backbone of sugar and phosphate, in sequences thousands of bases long. DNA is what genes are made of, and genes are the basis of heredity.

Preeminent genetic researchers like David Baltimore, then at MIT, went to Asilomar to grapple with the implications of being able to decrypt and reorder genes. It was a God-like power—to plug genes from one living thing into another. Used wisely, it had the potential to save millions of lives. But the scientists also knew their creations might slip out of their control. They wanted to consider what ought to be off-limits.

By 1975, other fields of science—like physics—were subject to broad restrictions. Hardly anyone was allowed to work on atomic bombs, say. But biology was different. Biologists still let the winding road of research guide their steps. On occasion, regulatory bodies had acted retrospectively—after Nuremberg, Tuskegee, and the human radiation experiments, external enforcement entities had told biologists they weren’t allowed to do that bad thing again. Asilomar, though, was about establishing prospective guidelines, a remarkably open and forward-thinking move.

At the end of the meeting, Baltimore and four other molecular biologists stayed up all night writing a consensus statement. They laid out ways to isolate potentially dangerous experiments and determined that cloning or otherwise messing with dangerous pathogens should be off-limits. A few attendees fretted about the idea of modifications of the human “germ line”—changes that would be passed on from one generation to the next—but most thought that was so far off as to be unrealistic. Engineering microbes was hard enough. The rules the Asilomar scientists hoped biology would follow didn’t look much further ahead than ideas and proposals already on their desks.

Earlier this year, Baltimore joined 17 other researchers for another California conference, this one at the Carneros Inn in Napa Valley. “It was a feeling of déjà vu,” Baltimore says. There he was again, gathered with some of the smartest scientists on earth to talk about the implications of genome engineering.

The stakes, however, have changed. Everyone at the Napa meeting had access to a gene-editing technique called Crispr-Cas9. The first term is an acronym for “clustered regularly interspaced short palindromic repeats,” a description of the genetic basis of the method; Cas9 is the name of a protein that makes it work. Technical details aside, Crispr-Cas9 makes it easy, cheap, and fast to move genes around—any genes, in any living thing, from bacteria to people. “These are monumental moments in the history of biomedical research,” Baltimore says. “They don’t happen every day.”

Using the three-year-old technique, researchers have already reversed mutations that cause blindness, stopped cancer cells from multiplying, and made cells impervious to the virus that causes AIDS. Agronomists have rendered wheat invulnerable to killer fungi like powdery mildew, hinting at engineered staple crops that can feed a population of 9 billion on an ever-warmer planet. Bioengineers have used Crispr to alter the DNA of yeast so that it consumes plant matter and excretes ethanol, promising an end to reliance on petrochemicals. Startups devoted to Crispr have launched. International pharmaceutical and agricultural companies have spun up Crispr R&D. Two of the most powerful universities in the US are engaged in a vicious war over the basic patent. Depending on what kind of person you are, Crispr makes you see a gleaming world of the future, a Nobel medallion, or dollar signs.

The technique is revolutionary, and like all revolutions, it’s perilous. Crispr goes well beyond anything the Asilomar conference discussed. It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.

In a way, humans were genetic engineers long before anyone knew what a gene was. They could give living things new traits—sweeter kernels of corn, flatter bulldog faces—through selective breeding. But it took time, and it didn’t always pan out. By the 1930s refining nature got faster. Scientists bombarded seeds and insect eggs with x-rays, causing mutations to scatter through genomes like shrapnel. If one of hundreds of irradiated plants or insects grew up with the traits scientists desired, they bred it and tossed the rest. That’s where red grapefruits came from, and most barley for modern beer.

Genome modification has become less of a crapshoot. In 2002, molecular biologists learned to delete or replace specific genes using enzymes called zinc-finger nucleases; the next-generation technique used enzymes named TALENs.

Yet the procedures were expensive and complicated. They only worked on organisms whose molecular innards had been thoroughly dissected—like mice or fruit flies. Genome engineers went on the hunt for something better.

As it happened, the people who found it weren’t genome engineers at all. They were basic researchers, trying to unravel the origin of life by sequencing the genomes of ancient bacteria and microbes called Archaea (as in archaic), descendants of the first life on Earth. Deep amid the bases, the As, Ts, Gs, and Cs that made up those DNA sequences, microbiologists noticed recurring segments that were the same back to front and front to back—palindromes. The researchers didn’t know what these segments did, but they knew they were weird. In a branding exercise only scientists could love, they named these clusters of repeating palindromes Crispr.
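
As an aside, palindrome-hunting of the kind those microbiologists did is easy to sketch. In the molecular-biology sense, a palindrome is a sequence equal to its own reverse complement (A pairs with T, C with G). A minimal example in Python, using an invented sequence:

```python
# Find reverse-complement palindromes (the DNA sense of "palindrome")
# in a sequence. The sample string is invented for illustration.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def is_dna_palindrome(seq):
    """True if seq equals its reverse complement, e.g. GAATTC."""
    return seq == seq.translate(COMPLEMENT)[::-1]

def find_palindromes(dna, length=6):
    """Yield (position, window) for each palindromic window of given length."""
    for i in range(len(dna) - length + 1):
        window = dna[i:i + length]
        if is_dna_palindrome(window):
            yield i, window

print(list(find_palindromes("TTGAATTCAGGCGCCTA")))
# -> [(2, 'GAATTC'), (9, 'GGCGCC')]
```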

Then, in 2005, a microbiologist named Rodolphe Barrangou, working at a Danish food company called Danisco, spotted some of those same palindromic repeats in Streptococcus thermophilus, the bacteria that the company uses to make yogurt and cheese. Barrangou and his colleagues discovered that the unidentified stretches of DNA between Crispr’s palindromes matched sequences from viruses that had infected their S. thermophilus colonies. Like most living things, bacteria get attacked by viruses—in this case they’re called bacteriophages, or phages for short. Barrangou’s team went on to show that the segments served an important role in the bacteria’s defense against the phages, a sort of immunological memory. If a phage infected a microbe whose Crispr carried its fingerprint, the bacteria could recognize the phage and fight back. Barrangou and his colleagues realized they could save their company some money by selecting S. thermophilus species with Crispr sequences that resisted common dairy viruses.

As more researchers sequenced more bacteria, they found Crisprs again and again—half of all bacteria had them. Most Archaea did too. And even stranger, some of Crispr’s sequences didn’t encode the eventual manufacture of a protein, as is typical of a gene, but instead led to RNA—single-stranded genetic material. (DNA, of course, is double-stranded.)

That pointed to a new hypothesis. Most present-day animals and plants defend themselves against viruses with structures made out of RNA. So a few researchers started to wonder if Crispr was a primordial immune system. Among the people working on that idea was Jill Banfield, a geomicrobiologist at UC Berkeley, who had found Crispr sequences in microbes she collected from acidic, 110-degree water from the defunct Iron Mountain Mine in Shasta County, California. But to figure out if she was right, she needed help.

Luckily, one of the country’s best-known RNA experts, a biochemist named Jennifer Doudna, worked on the other side of campus in an office with a view of the Bay and San Francisco’s skyline. It certainly wasn’t what Doudna had imagined for herself as a girl growing up on the Big Island of Hawaii. She simply liked math and chemistry—an affinity that took her to Harvard and then to a postdoc at the University of Colorado. That’s where she made her initial important discoveries, revealing the three-dimensional structure of complex RNA molecules that could, like enzymes, catalyze chemical reactions.

The mine bacteria piqued Doudna’s curiosity, but when Doudna pried Crispr apart, she didn’t see anything to suggest the bacterial immune system was related to the one plants and animals use. Still, she thought the system might be adapted for diagnostic tests.

Banfield wasn’t the only person to ask Doudna for help with a Crispr project. In 2011, Doudna was at an American Society for Microbiology meeting in San Juan, Puerto Rico, when an intense, dark-haired French scientist asked her if she wouldn’t mind stepping outside the conference hall for a chat. This was Emmanuelle Charpentier, a microbiologist at Umeå University in Sweden.

As they wandered through the alleyways of old San Juan, Charpentier explained that one of Crispr’s associated proteins, named Csn1, appeared to be extraordinary. It seemed to search for specific DNA sequences in viruses and cut them apart like a microscopic multitool. Charpentier asked Doudna to help her figure out how it worked. “Somehow the way she said it, I literally—I can almost feel it now—I had this chill down my back,” Doudna says. “When she said ‘the mysterious Csn1’ I just had this feeling, there is going to be something good here.”

Read the whole story here.

Deep Time, Nuclear Semiotics and Atomic Priests

Time seems to unfold over different — lengthier — scales in the desert southwest of the United States. Perhaps it’s the vastness of the eerie landscape that puts fleeting human moments into the context of deep geologic time. Or, perhaps it’s our monumental human structures that aim to encode our present for the distant future. Structures like the Hoover Dam, which regulates the mighty Colorado River, and the ill-fated Yucca Mountain project, once designed to store the nation’s nuclear waste, were conceived to last many centuries.

Yet these monuments to our impermanence raise an important issue beyond their construction — how are we to communicate their intent to humans living in a distant future, humans who will no longer be using any of our existing languages? Directions and warnings in English or contextual signs and images will not suffice. Consider Yucca Mountain. Now shuttered, Yucca Mountain was designed to be a repository for nuclear byproducts and waste from military and civilian programs. Keep in mind that some products of nuclear reactors, such as various isotopes of uranium, plutonium, technetium and neptunium, remain highly radioactive for tens of thousands to millions of years. So, how would we post warnings at Yucca Mountain about the entombed dangers to generations living 10,000 years and more from now? Those behind the Yucca Mountain project considered a number of fantastic (in the original sense of the word) schemes to carry dire warnings into the distant future, including hostile architecture, radioactive cats and a pseudo-religious order of atomic priests. This was the work of the Human Interference Task Force.

From Motherboard:

Building the Hoover Dam rerouted the most powerful river in North America. It claimed the lives of 96 workers, and the beloved site dog, Little Niggy, who is entombed by the walkway in the shade of the canyon wall. Diverting the Colorado destroyed the ecology of the region, threatening fragile native plant life and driving several species of fish nearly to extinction. The dam brought water to 8 million people and created more than 5000 jobs. It required 6.6 million metric tons of concrete, all made from the desert; enough, famously, to pave a two lane road coast to coast across the US. Inside the dam’s walls that concrete is still curing, and will be for another 60 years.

Erik, photojournalist, and I have come here to try and get the measure of this place. Nevada is the uncanny locus of disparate monuments all concerned with charting deep time, leaving messages for future generations of human beings to puzzle over the meaning of: a star map, a nuclear waste repository and a clock able to keep time for 10,000 years—all of them within a few hours drive of Las Vegas through the harsh desert.

Hoover Dam is theorized in some structural stress projections to stand for tens of thousands of years from now, and what could be its eventual undoing is mussels. The mollusks which grow in the dam’s grates will no longer be scraped away, and will multiply eventually to such density that the built up stress of the river will burst the dam’s wall. That is if the Colorado continues to flow. Otherwise erosion will take much longer to claim the structure, and possibly Oskar J.W. Hansen’s vision will be realized: future humans will find the dam 14,000 years from now, at the end of the current Platonic Year.

A Platonic Year lasts for roughly 26,000 years. It’s also known as the precession of the equinoxes, first written into the historical record in the second century BC by the Greek mathematician, Hipparchus, though there is evidence that earlier people also solved this complex equation. Earth rotates in three ways: 365 days around the sun, on its 24 hours axis and on its precessional axis. The duration of the last is the Platonic Year, where Earth is incrementally turning on a tilt pointing to its true north as the Sun’s gravity pulls on us, leaving our planet spinning like a very slow top along its orbit around the sun.
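
The quoted cycle length is easy to sanity-check (my arithmetic, not the article’s): Earth’s axis precesses at roughly 50.3 arcseconds per year, and a full 360-degree circuit at that rate takes about 26,000 years.

```python
# Cross-check the ~26,000-year Platonic Year from the precession rate.
ARCSEC_PER_DEGREE = 3600
precession_arcsec_per_year = 50.3  # approximate axial precession rate

years = 360 * ARCSEC_PER_DEGREE / precession_arcsec_per_year
print(f"{years:,.0f} years per precession cycle")  # -> ~25,765 years
```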

Now Earth’s true-north pole star is Polaris, in Ursa Minor, as it was at the completion of Hoover Dam. At the end of the current Platonic Year it will be Vega, in the constellation Lyra. Hansen included this information in an amazingly accurate astronomical clock, or celestial map, embedded in the terrazzo floor of the dam’s dedication monument. Hansen wanted any future humans who came across the dam to be able to know exactly when it was built.

He used the clock to mark major historical events of the last several thousand years including the birth of Christ and the building of the pyramids, events which he thought were equal to the engineering feat of men bringing water to a desert in the 1930s. He reasoned that though current languages could be dead in this future, any people who had survived that long would have advanced astronomy, math and physics in their arsenal of survival tactics. Despite this, the monument is written entirely in English, which is for the benefit of current visitors, not our descendents of millennia from now.

The Hoover Dam is staggering. It is frankly impossible, even standing right on top of it, squinting in the blinding sunlight down its vertiginous drop, to imagine how it was ever built by human beings; even as I watch old documentary footage on my laptop back in the hotel at night on Fremont Street, showing me that exact thing, I don’t believe it. I cannot square it in my mind. I cannot conceive of nearly dying every day laboring in the brutally dry 100 degree heat, in a time before air-conditioning, in a time before being able to ever get even the slightest relief from the elements.

Hansen was more than aware of our propensity to build great monuments to ourselves and felt the weight of history as he submitted his bid for the job to design the dedication monument, writing, “Mankind itself is the subject of the sculptures at Hoover Dam.” Joan Didion described it as the most existentially terrifying place in America: “Since the afternoon in 1967 when I first saw Hoover Dam, its image has never been entirely absent from my inner eye.” Thirty-two people have chosen the dam as their place of suicide. It has no fences.

The reservoir is now the lowest it has ever been and California is living through the worst drought in 1200 years. You can swim in Lake Mead, so we did, sort of. It did provide some cool respite for a moment from the unrelenting heat of the desert. We waded around only up to our ankles because it smelled pretty terrible, the shoreline dirty with garbage.

Radioactive waste from spent nuclear fuel has a shelf life of hundreds of thousands of years. Maybe even more than a million, it’s not possible to precisely predict. Nuclear power plants around the US have produced 150 million metric tons of highly active nuclear waste that sits at dozens of sites around the country, awaiting a place to where it can all be carted and buried thousands of feet underground to be quarantined for the rest of time. For now a lot of it sits not far from major cities.

Yucca Mountain, 120 miles from Hoover Dam, is not that place. The site is one of the most intensely geologically surveyed and politically controversial pieces of land on Earth. Since 1987 it has been, at the cost of billions of dollars, the highly contested resting place for the majority of America’s high-risk nuclear waste. Those plans were officially shuttered in 2012, after states sued each other, states sued the federal Government, the Government sued contractors, and the people living near Yucca Mountain didn’t want, it turned out, for thousands of tons of nuclear waste to be carted through their counties and sacred lands via rail. President Obama cancelled its funding and officially ended the project.

It was said that there was a fault line running directly under the mountain; that the salt rock was not as absorbent as it was initially thought to be and that it posed the threat of leaking radiation into the water table; that more recently the possibility of fracking in the area would beget an ecological disaster. That a 10,000 year storage solution was nowhere near long enough to inculcate the Earth from the true shelf-life of the waste, which is realistically thought to be dangerous for many times that length of time. The site is now permanently closed, visible only from a distance through a cacophony of government warning signs blockading a security checkpoint.

We ask around the community of Amargosa Valley about the mountain. Sitting on 95 it’s the closest place to the site and consists only of a gas station, which trades in a huge amount of Area 51 themed merchandise, a boldly advertised sex shop, an alien motel and a firework store where you can let off rockets in the car park. Across the road is the vacant lot of what was once an RV park, with a couple of badly busted up vehicles looted beyond recognition and a small aquamarine boat lying on its side in the dirt.

At the gas station register a woman explains that no one really liked the idea of having waste so close to their homes (she repeats the story of the fault line), but they did like the idea of jobs, hundreds of which disappeared along with the project, leaving the surrounding areas, mainly long-tapped out mining communities, even more severely depressed.

We ask what would happen if we tried to actually get to the mountain itself, on government land.

“Plenty of people do try,” she says. “They’re trying to get to Area 51. They have sensors though, they’ll come get you real quick in their truck.”

Would we get shot?

“Shot? No. But they would throw you on the ground, break all your cameras and interrogate you for a long time.”

We decide just to take the road that used to go to the mountain as far as we can to the checkpoint, where in the distance beyond the electric fences at the other end of a stretch of desert land we see buildings and cars parked and most definitely some G-men who would see us before we even had the chance to try and sneak anywhere.

Before it was shut for good, Yucca Mountain had kilometers of tunnels bored into it and dozens of experiments undertaken within it, all of it now sealed behind an enormous vault door. It was also the focus of a branch of linguistics established specifically to warn future humans of the dangers of radioactive waste: nuclear semiotics. The Human Interference Task Force—a consortium of archeologists, architects, linguists, philosophers, engineers, designers—faced the opposite problem to Oskar Hansen at Hoover Dam; the Yucca Mountain repository was not hoping to attract the attentions of future humans to tell them of the glory of their forebears; it was to tell them that this place would kill them if they trod too near.

To create a universally readable warning system for humans living thirty generations from now, the signs will have to be instantly recognizable as expressing an immediate and lethal danger, as well as a deep sense of shunning: these were impulses that came up against each other; how to adequately express that the place was deadly while not at the same time enticing people to explore it, thinking it must contain something of great value if so much trouble had been gone to in order to keep people away? How to express this when all known written languages could very easily be dead? Signs as we know them now would almost certainly be completely unintelligible free of their social contexts which give them current meaning; a nuclear waste sign is just a dot with three rounded triangles sticking out of it to anyone not taught over a lifetime to know its warning.

Read the entire story here.

Image: United Nations radioactive symbol, 2007.

Forget Broccoli. It’s All About the Blue Zones

You should know how to live to be 100 years old by now. Tip number one: inherit good genes. Tip number two: forget uploading your consciousness to an AI, for now. Tip number three: live and eat in a so-called Blue Zone. Tip number four: walk fast, eat slowly.

From the NYT:

Dan Buettner and I were off to a good start. He approved of coffee.

“It’s one of the biggest sources of antioxidants in the American diet,” he said with chipper confidence, folding up his black Brompton bike.

As we walked through Greenwich Village, looking for a decent shot of joe to fuel an afternoon of shopping and cooking and talking about the enigma of longevity, he pointed out that the men and women of Icaria, a Greek island in the middle of the Aegean Sea, regularly slurp down two or three muddy cups a day.

This came as delightful news to me. Icaria has a key role in Mr. Buettner’s latest book, “The Blue Zones Solution,” which takes a deep dive into five places around the world where people have a beguiling habit of forgetting to die. In Icaria they stand a decent chance of living to see 100. Without coffee, I don’t see much point in making it to 50.

The purpose of our rendezvous was to see whether the insights of a longevity specialist like Mr. Buettner could be applied to the life of a food-obsessed writer in New York, a man whose occupational hazards happen to include chicken wings, cheeseburgers, martinis and marathon tasting menus.

Covering the world of gastronomy and mixology during the era of David Chang (career-defining dish: those Momofuku pork-belly buns) and April Bloomfield (career-defining dish: the lamb burger at the Breslin Bar and Dining Room) does not exactly feel like an enterprise that’s adding extra years to my life — or to my liver.

And the recent deaths (even if accidental) of men in my exact demographic — the food writer Joshua Ozersky, the tech entrepreneur Dave Goldberg — had put me in a mortality-anxious frame of mind.

With my own half-century mark eerily visible on the horizon, could Mr. Buettner, who has spent the last 10 years unlocking the mysteries of longevity, offer me a midcourse correction?

To that end, he had decided to cook me something of a longevity feast. Visiting from his home in Minnesota and camped out at the townhouse of his friends Andrew Solomon and John Habich in the Village, this trim, tanned, 55-year-old guru of the golden years was geared up to show me that living a long time was not about subsisting on a thin gruel of, well, gruel.

After that blast of coffee, which I dutifully diluted with soy milk (as instructed) at O Cafe on Avenue of the Americas, Mr. Buettner and I set forth on our quest at the aptly named LifeThyme market, where signs in the window trumpeted the wonders of wheatgrass. He reassured me, again, by letting me know that penitent hedge clippings had no place in our Blue Zones repast.

“People think, ‘If I eat more of this, then it’s O.K. to eat more burgers or candy,’ ” he said. Instead, as he ambled through the market dropping herbs and vegetables into his basket, he insisted that our life-extending banquet would hinge on normal affordable items that almost anyone can pick up at the grocery store. He grabbed fennel and broccoli, celery and carrots, tofu and coconut milk, a bag of frozen berries and a can of chickpeas and a jar of local honey.

The five communities spotlighted in “The Blue Zones Solution” (published by National Geographic) depend on simple methods of cooking that have evolved over centuries, and Mr. Buettner has developed a matter-of-fact disregard for gastro-trends of all stripes. At LifeThyme, he passed by refrigerated shelves full of vogue-ish juices in hues of green, orange and purple. He shook his head and said, “Bad!”

“The glycemic index on that is as bad as Coke,” he went on, snatching a bottle of carrot juice to scan the label. “For eight ounces, there’s 14 grams of sugar. People get suckered into thinking, ‘Oh, I’m drinking this juice.’ Skip the juicing. Eat the fruit. Or eat the vegetable.” (How about a protein shake? “No,” he said.)

So far, I was feeling pretty good about my chances of making it to 100. I love coffee, I’m not much of a juicer and I’ve never had a protein shake in my life. Bingo. I figured that pretty soon Mr. Buettner would throw me a dietary curveball (I noticed with vague concern that he was not putting any meat or cheese into his basket), but by this point I was already thinking about how fun it would be to meet my great-grandchildren.

I felt even better when he and I started talking about strenuous exercise, which for me falls somewhere between “root canal” and “Justin Bieber concert” on the personal aversion scale.

I like to go for long walks, and … well, that’s about it.

“That’s when I knew you’d be O.K.,” Mr. Buettner told me.

It turns out that walking is a popular mode of transport in the Blue Zones, too — particularly on the sun-splattered slopes of Sardinia, Italy, where many of those who make it to 100 are shepherds who devote the bulk of each day to wandering the hills and treating themselves to sips of red wine.

“A glass of wine is better than a glass of water with a Mediterranean meal,” Mr. Buettner told me.

Red wine and long walks? If that’s all it takes, people, you’re looking at Methuselah.

O.K., yes, Mr. Buettner moves his muscles a lot more than I do. He likes to go everywhere on that fold-up bike, which he hauls along with him on trips, and sometimes he does yoga and goes in-line skating. But he generally believes that the high-impact exercise mania as practiced in the major cities of the United States winds up doing as much harm as good.

“You can’t be pounding your joints with marathons and pumping iron,” he said. “You’ll never see me doing CrossFit.”

For that evening’s meal, Mr. Buettner planned to cook dishes that would make reference to the quintet of places that he focuses on in “The Blue Zones Solution”: along with Icaria and Sardinia, they are Okinawa, Japan; the Nicoya Peninsula in Costa Rica; and Loma Linda, Calif., where Seventh-day Adventists have a tendency to outlive their fellow Americans, thanks to a mostly vegetarian diet that is heavy on nuts, beans, oatmeal, 100 percent whole-grain bread and avocados.

We walked from the market to the townhouse. And it was here, as Mr. Buettner laid out his cooking ingredients on a table in Mr. Solomon’s and Mr. Habich’s commodious, state-of-the-art kitchen, that I noticed the first real disconnect between the lives of the Blue Zones sages and the life of a food writer who has enjoyed many a lunch hour scarfing down charcuterie, tapas and pork-belly-topped ramen at the Gotham West Market food court.

Where was the butter? Hadn’t some nice scientists determined that butter’s not so lethal for us, after all? (“My view is that butter, lard and other animal fats are a bit like radiation: a dollop a couple of times a week probably isn’t going to hurt you, but we don’t know the safe level,” Mr. Buettner later wrote in an email. “At any rate, I can send along a paper that largely refutes the whole ‘Butter is Back’ craze.” No, thanks, I’m good.)

Where was the meat? Where was the cheese? (No cheese? And here I thought we’d be friends for another 50 years, Mr. Buettner.)

Read the entire article here.

From a Million Miles

epicearthmoonstill

The Deep Space Climate Observatory (DSCOVR) spacecraft is now firmly in place about one million miles from Earth at the L1 (Lagrange) point, a position of gravitational balance between the sun and our planet. Jointly operated by NASA, NOAA (National Oceanic and Atmospheric Administration) and the U.S. Air Force, the spacecraft uses its digital optics to observe the Earth from sunrise to sunset. Researchers use its observations to measure a number of climate variables including ozone, aerosols, cloud heights, dust, and volcanic ash. The spacecraft also monitors the sun’s solar wind. Luckily, it also captures gorgeous images like the one above from July 16, 2015, of the moon, its normally hidden far side in full sunlight, as it transits over the Pacific Ocean.
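
That million-mile figure drops out of the standard approximation for the distance to the first Sun–Earth Lagrange point, a textbook result rather than anything from the mission literature. With R the Earth–sun distance:

$$ r \approx R\left(\frac{M_\oplus}{3M_\odot}\right)^{1/3} = 1.496\times10^{8}\ \mathrm{km}\times\left(\frac{3.0\times10^{-6}}{3}\right)^{1/3} \approx 1.5\times10^{6}\ \mathrm{km} \approx 930{,}000\ \mathrm{miles} $$

Parked there, DSCOVR circles the sun in lockstep with Earth, keeping the planet’s sunlit hemisphere permanently in view.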

Learn more about DSCOVR here.

Image: This image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft’s Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth. Courtesy: NASA, NOAA.

Cause and Effect

One of the most fundamental tenets of our macroscopic world is the notion that every effect has a cause. Throw a pebble (cause) into a still pond and the ripples (effect) will be visible for all to see. Yet down at the microscopic level, physicists have determined that the mathematics recognizes no such thing — nothing in the fundamental laws precludes physics from running in reverse. Still, we never witness the ripples in a pond converging and ejecting a pebble, which then flies back into the thrower’s hand.

Of course, this quandary has kept many a philosopher’s pencil well sharpened while physicists continue to scratch their heads. So, is cause and effect merely a coincidental illusion? Or does our physics operate in only one direction, determined by a yet-to-be-discovered fundamental law?

Philosopher Mathias Frisch, author of Causal Reasoning in Physics, offers a fine summary of current thinking, but no fundamental breakthrough.

From Aeon:

Do early childhood vaccinations cause autism, as the American model Jenny McCarthy maintains? Are human carbon emissions at the root of global warming? Come to that, if I flick this switch, will it make the light on the porch come on? Presumably I don’t need to persuade you that these would be incredibly useful things to know.

Since anthropogenic greenhouse gas emissions do cause climate change, cutting our emissions would make a difference to future warming. By contrast, autism cannot be prevented by leaving children unvaccinated. Now, there’s a subtlety here. For our judgments to be much use to us, we have to distinguish between causal relations and mere correlations. From 1999 to 2009, the number of people in the US who fell into a swimming pool and drowned varies with the number of films in which Nicolas Cage appeared – but it seems unlikely that we could reduce the number of pool drownings by keeping Cage off the screen, desirable as the remedy might be for other reasons.

In short, a working knowledge of the way in which causes and effects relate to one another seems indispensable to our ability to make our way in the world. Yet there is a long and venerable tradition in philosophy, dating back at least to David Hume in the 18th century, that finds the notion of causality to be dubious. And that might be putting it kindly.

Hume argued that when we seek causal relations, we can never discover the real power; the, as it were, metaphysical glue that binds events together. All we are able to see are regularities – the ‘constant conjunction’ of certain sorts of observation. He concluded from this that any talk of causal powers is illegitimate. Which is not to say that he was ignorant of the central importance of causal reasoning; indeed, he said that it was only by means of such inferences that we can ‘go beyond the evidence of our memory and senses’. Causal reasoning was somehow both indispensable and illegitimate. We appear to have a dilemma.

Hume’s remedy for such metaphysical quandaries was arguably quite sensible, as far as it went: have a good meal, play backgammon with friends, and try to put it out of your mind. But in the late 19th and 20th centuries, his causal anxieties were reinforced by another problem, arguably harder to ignore. According to this new line of thought, causal notions seemed peculiarly out of place in our most fundamental science – physics.

There were two reasons for this. First, causes seemed too vague for a mathematically precise science. If you can’t observe them, how can you measure them? If you can’t measure them, how can you put them in your equations? Second, causality has a definite direction in time: causes have to happen before their effects. Yet the basic laws of physics (as distinct from such higher-level statistical generalisations as the laws of thermodynamics) appear to be time-symmetric: if a certain process is allowed under the basic laws of physics, a video of the same process played backwards will also depict a process that is allowed by the laws.

The 20th-century English philosopher Bertrand Russell concluded from these considerations that, since cause and effect play no fundamental role in physics, they should be removed from the philosophical vocabulary altogether. ‘The law of causality,’ he said with a flourish, ‘like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed not to do harm.’

Neo-Russellians in the 21st century express their rejection of causes with no less rhetorical vigour. The philosopher of science John Earman of the University of Pittsburgh maintains that the wooliness of causal notions makes them inappropriate for physics: ‘A putative fundamental law of physics must be stated as a mathematical relation without the use of escape clauses or words that require a PhD in philosophy to apply (and two other PhDs to referee the application, and a third referee to break the tie of the inevitable disagreement of the first two).’

This is all very puzzling. Is it OK to think in terms of causes or not? If so, why, given the apparent hostility to causes in the underlying laws? And if not, why does it seem to work so well?

A clearer look at the physics might help us to find our way. Even though (most of) the basic laws are symmetrical in time, there are many arguably non-thermodynamic physical phenomena that can happen only one way. Imagine a stone thrown into a still pond: after the stone breaks the surface, waves spread concentrically from the point of impact. A common enough sight.

Now, imagine a video clip of the spreading waves played backwards. What we would see are concentrically converging waves. For some reason this second process, which is the time-reverse of the first, does not seem to occur in nature. The process of waves spreading from a source looks irreversible. And yet the underlying physical law describing the behaviour of waves – the wave equation – is as time-symmetric as any law in physics. It allows for both diverging and converging waves. So, given that the physical laws equally allow phenomena of both types, why do we frequently observe organised waves diverging from a source but never coherently converging waves?
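
[To make the symmetry concrete, a gloss of my own rather than the article’s: the classical wave equation for a disturbance u,

$$ \frac{\partial^{2} u}{\partial t^{2}} = c^{2}\,\nabla^{2} u, $$

involves time only through a second derivative, so the substitution t → −t leaves it unchanged. If u(x, t) is a solution, the reversed film u(x, −t) is a solution too; diverging and converging waves are equally legal.]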

Physicists and philosophers disagree on the correct answer to this question – which might be fine if it applied only to stones in ponds. But the problem also crops up with electromagnetic waves and the emission of light or radio waves: anywhere, in fact, that we find radiating waves. What to say about it?

On the one hand, many physicists (and some philosophers) invoke a causal principle to explain the asymmetry. Consider an antenna transmitting a radio signal. Since the source causes the signal, and since causes precede their effects, the radio waves diverge from the antenna after it is switched on simply because they are the repercussions of an initial disturbance, namely the switching on of the antenna. Imagine the time-reverse process: a radio wave steadily collapses into an antenna before the latter has been turned on. On the face of it, this conflicts with the idea of causality, because the wave would be present before its cause (the antenna) had done anything. David Griffiths, Emeritus Professor of Physics at Reed College in Oregon and the author of a widely used textbook on classical electrodynamics, favours this explanation, going so far as to call a time-asymmetric principle of causality ‘the most sacred tenet in all of physics’.

On the other hand, some physicists (and many philosophers) reject appeals to causal notions and maintain that the asymmetry ought to be explained statistically. The reason why we find coherently diverging waves but never coherently converging ones, they maintain, is not that wave sources cause waves, but that a converging wave would require the co-ordinated behaviour of ‘wavelets’ coming in from multiple different directions of space – delicately co-ordinated behaviour so improbable that it would strike us as nearly miraculous.

It so happens that this wave controversy has quite a distinguished history. In 1909, a few years before Russell’s pointed criticism of the notion of cause, Albert Einstein took part in a published debate concerning the radiation asymmetry. His opponent was the Swiss physicist Walther Ritz, a name you might not recognise.

It is in fact rather tragic that Ritz did not make larger waves in his own career, because his early reputation surpassed Einstein’s. The physicist Hermann Minkowski, who taught both Ritz and Einstein in Zurich, called Einstein a ‘lazy dog’ but had high praise for Ritz.  When the University of Zurich was looking to appoint its first professor of theoretical physics in 1909, Ritz was the top candidate for the position. According to one member of the hiring committee, he possessed ‘an exceptional talent, bordering on genius’. But he suffered from tuberculosis, and so, due to his failing health, he was passed over for the position, which went to Einstein instead. Ritz died that very year at age 31.

Months before his death, however, Ritz published a joint letter with Einstein summarising their disagreement. While Einstein thought that the irreversibility of radiation processes could be explained probabilistically, Ritz proposed what amounted to a causal explanation. He maintained that the reason for the asymmetry is that an elementary source of radiation has an influence on other sources in the future and not in the past.

This joint letter is something of a classic text, widely cited in the literature. What is less well-known is that, in the very same year, Einstein demonstrated a striking reversibility of his own. In a second published letter, he appears to take a position very close to Ritz’s – the very view he had dismissed just months earlier. According to the wave theory of light, Einstein now asserted, a wave source ‘produces a spherical wave that propagates outward. The inverse process does not exist as elementary process’. The only way in which converging waves can be produced, Einstein claimed, was by combining a very large number of coherently operating sources. He appears to have changed his mind.

Given Einstein’s titanic reputation, you might think that such a momentous shift would occasion a few ripples in the history of science. But I know of only one significant reference to his later statement: a letter from the philosopher Karl Popper to the journal Nature in 1956. In this letter, Popper describes the wave asymmetry in terms very similar to Einstein’s. And he also makes one particularly interesting remark, one that might help us to unpick the riddle. Coherently converging waves, Popper insisted, ‘would demand a vast number of distant coherent generators of waves the co-ordination of which, to be explicable, would have to be shown as originating from the centre’ (my italics).

This is, in fact, a particular instance of a much broader phenomenon. Consider two events that are spatially distant yet correlated with one another. If they are not related as cause and effect, they tend to be joint effects of a common cause. If, for example, two lamps in a room go out suddenly, it is unlikely that both bulbs just happened to burn out simultaneously. So we look for a common cause – perhaps a circuit breaker that tripped.

Common-cause inferences are so pervasive that it is difficult to imagine what we could know about the world beyond our immediate surroundings without them. Hume was right: judgments about causality are absolutely essential in going ‘beyond the evidence of the senses’. In his book The Direction of Time (1956), the philosopher Hans Reichenbach formulated a principle underlying such inferences: ‘If an improbable coincidence has occurred, there must exist a common cause.’ To the extent that we are bound to apply Reichenbach’s rule, we are all like the hard-boiled detective who doesn’t believe in coincidences.
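
[Reichenbach’s principle has a standard probabilistic formulation, worth a quick sketch here, my gloss rather than the article’s: if two events A and B are correlated, so that P(A ∧ B) > P(A)P(B), and neither causes the other, there should exist a common cause C that “screens off” the correlation:

$$ P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C) $$

Conditional on the tripped circuit breaker, learning that one lamp is dark tells you nothing further about the other.]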

Read the entire article here.

Passion, Persistence and Pluto

New Horizons Pluto Flyby

Alliterations aside, this is a great story of how passion, persistence and persuasiveness can make a real impact. This is especially clear in the triumphant climax of NASA’s unlikely New Horizons mission to Pluto. Over 20 years in the making and fraught with budget cuts and political infighting — NASA is known for its bureaucracy — the mission reached its zenith last week. While thanks go to the many hundreds of engineers and scientists involved since its inception, the mission would not have succeeded without the vision and determination of one person — Alan Stern.

In a music track called “Over the Sea” by the 1980s (and 90s) band Information Society, there is a sample of Star Trek’s Captain Kirk saying,

“In every revolution there is one man with a vision.”

How appropriate.

From Smithsonian:

On July 14 at approximately 8 a.m. Eastern time, a half-ton NASA spacecraft that has been racing across the solar system for nine and a half years will finally catch up with tiny Pluto, at three billion miles from the Sun the most distant object that anyone or anything from Earth has ever visited. Invisible to the naked eye, Pluto wasn’t even discovered until 1930, and has been regarded as our solar system’s oddball ever since, completely different from the rocky planets close to the Sun, Earth included, and equally unlike the outer gas giants. This quirky and mysterious little world will swing into dramatic view as the New Horizons spacecraft makes its closest approach, just 6,000 miles away, and onboard cameras snap thousands of photographs. Other instruments will gauge Pluto’s topography, surface and atmospheric chemistry, temperature, magnetic field and more. New Horizons will also take a hard look at Pluto’s five known moons, including Charon, the largest. It might even find other moons, and maybe a ring or two.

It was barely 20 years ago when scientists first learned that Pluto, far from alone at the edge of the solar system, was just one in a vast swarm of small frozen bodies in wide, wide orbit around the Sun, like a ring of debris left at the outskirts of a construction zone. That insight, among others, has propelled the New Horizons mission. Understand Pluto and how it fits in with those remnant bodies, scientists say, and you can better understand the formation and evolution of the solar system itself.

If all goes well, “encounter day,” as the New Horizons team calls it, will be a cork-popping celebration of tremendous scientific and engineering prowess—it’s no small feat to fling a collection of precision instruments through the frigid void at speeds up to 47,000 miles an hour to rendezvous nearly a decade later with an icy sphere about half as wide as the United States is broad. The day will also be a sweet vindication for the leader of the mission, Alan Stern. A 57-year-old astronomer, aeronautical engineer, would-be astronaut and self-described “rabble-rouser,” Stern has spent the better part of his career fighting to get Pluto the attention he thinks it deserves. He began pushing NASA to approve a Pluto mission nearly a quarter of a century ago, then watched in frustration as the agency gave the green light to one Pluto probe after another, only to later cancel them. “It was incredibly frustrating,” he says, “like watching Lucy yank the football away from Charlie Brown, over and over.” Finally, Stern recruited other scientists and influential senators to join his lobbying effort, and because underdog Pluto has long been a favorite of children, proponents of the mission savvily enlisted kids to write to Congress, urging that funding for the spacecraft be approved.

New Horizons mission control is headquartered at Johns Hopkins University’s Applied Physics Laboratory near Baltimore, where Stern and several dozen other Plutonians will be installed for weeks around the big July event, but I caught up with Stern late last year in Boulder at the Southwest Research Institute, where he is an associate vice president for research and development. A picture window in his impressive office looks out onto the Rockies, where he often goes to hike and unwind. Trim and athletic at 5-foot-4, he’s also a runner, a sport he pursues with the exactitude of, well, a rocket scientist. He has calculated his stride rate, and says (only half-joking) that he’d be world-class if only his legs were longer. It wouldn’t be an overstatement to say that he is a polarizing figure in the planetary science community; his single-minded pursuit of Pluto has annoyed some colleagues. So has his passionate defense of Pluto in the years since astronomy officials famously demoted it to a “dwarf planet,” giving it the bum’s rush out of the exclusive solar system club, now limited to the eight biggies.

The timing of that insult, which is how Stern and other jilted Pluto-lovers see it, could not have been more dramatic, coming in August 2006, just months after New Horizons had rocketed into space from Cape Canaveral. What makes Pluto’s demotion even more painfully ironic to Stern is that some of the groundbreaking scientific discoveries that he had predicted greatly strengthened his opponents’ arguments, all while opening the door to a new age of planetary science. In fact, Stern himself used the term “dwarf planet” as early as the 1990s.

The wealthy astronomer Percival Lowell, widely known for insisting there were artificial canals on Mars, first started searching for Pluto at his private observatory in Arizona in 1905. Careful study of planetary orbits had suggested that Neptune was not the only object out there exerting a gravitational tug on Uranus, and Lowell set out to find what he dubbed “Planet X.” He died without success, but a young man named Clyde Tombaugh, who had a passion for astronomy though no college education, arrived at the observatory and picked up the search in 1929. After 7,000 hours staring at some 90 million star images, he caught sight of a new planet on his photographic plates in February 1930. The name Pluto, after the Roman god of the underworld, was suggested by an 11-year-old British girl named Venetia Burney, who had been discussing the discovery with her grandfather. The name was unanimously adopted by the Lowell Observatory staff in part because the first two letters are Percival Lowell’s initials.

Pluto’s solitary nature baffled scientists for decades. Shouldn’t there be other, similar objects out beyond Neptune? Why did the solar system appear to run out of material so abruptly? “It seemed just weird that the outer solar system would be so empty, while the inner solar system was filled with planets and asteroids,” recalls David Jewitt, a planetary scientist at UCLA. Throughout the decades various astronomers proposed that there were smaller bodies out there, yet unseen. Comets that periodically sweep in to light up the night sky, they speculated, probably hailed from a belt or disk of debris at the solar system’s outer reaches.

Stern, in a paper published in 1991 in the journal Icarus, argued not only that the belt existed, but also that it contained things as big as Pluto. They were simply too far away, and too dim, to be easily seen. His reasoning: Neptune’s moon Triton is a near-twin of Pluto, and probably orbited the Sun before it was captured by Neptune’s gravity. Uranus has a drastically tilted axis of rotation, probably due to a collision eons ago with a Pluto-size object. That made three Pluto-like objects at least, which suggested to Stern there had to be more. The number of planets in the solar system would someday need to be revised upward, he thought. There were probably hundreds, with the majority, including Pluto, best assigned to a subcategory of “dwarf planets.”

Just a year later, the first object (other than Pluto and Charon) was discovered in that faraway region, called the Kuiper Belt after the Dutch-born astronomer Gerard Kuiper. Found by Jewitt and his colleague, Jane Luu, it’s only about 100 miles across, while Pluto spans 1,430 miles. A decade later, Caltech astronomers Mike Brown and Chad Trujillo discovered an object about half the size of Pluto, large enough to be spherical, which they named Quaoar (pronounced “kwa-war” and named for the creator god in the mythology of the pre-Columbian Tongva people native to the Los Angeles basin). It was followed in quick succession by Haumea, and in 2005, Brown’s group found Eris, about the same size as Pluto and also spherical.

Planetary scientists have spotted many hundreds of smaller Kuiper Belt Objects; there could be as many as ten billion that are a mile across or more. Stern will take a more accurate census of their sizes with the cameras on New Horizons. His simple idea is to map and measure Pluto’s and Charon’s craters, which are signs of collisions with other Kuiper Belt Objects and thus serve as a representative sample. When Pluto is closest to the Sun, frozen surface material evaporates into a temporary atmosphere, some of which escapes into space. This “escape erosion” can erase older craters, so Pluto will provide a recent census. Charon, without this erosion, will offer a record that spans cosmic history. In one leading theory, the original, much denser Kuiper Belt would have formed dozens of planets as big as or bigger than Earth, but the orbital changes of Jupiter and Saturn flung most of the building blocks away before that could happen, nipping planet formation in the bud.

By the time New Horizons launched at Cape Canaveral on January 19, 2006, it had become difficult to argue that Pluto was materially different from many of its Kuiper Belt neighbors. Curiously, no strict definition of “planet” existed at the time, so some scientists argued that there should be a size cutoff, to avoid making the list of planets too long. If you called Pluto and the other relatively small bodies something else, you’d be left with a nice tidy eight planets—Mercury through Neptune. In 2000, Neil deGrasse Tyson, director of the Hayden Planetarium in New York City, had famously chosen the latter option, leaving Pluto out of a solar system exhibit.

Then, with New Horizons less than 15 percent of the way to Pluto, members of the International Astronomical Union, responsible for naming and classifying celestial objects, voted at a meeting in Prague to make that arrangement official. Pluto and the others were now to be known as dwarf planets, which, in contrast to Stern’s original meaning, were not planets. They were an entirely different sort of beast. Because he discovered Eris, Caltech’s Brown is sometimes blamed for the demotion. He has said he would have been fine with either outcome, but he did title his 2010 memoir How I Killed Pluto and Why It Had It Coming.

“It’s embarrassing,” recalls Stern, who wasn’t in Prague for the vote. “It’s wrong scientifically and it’s wrong pedagogically.” He said the same sort of things publicly at the time, in language that’s unusually blunt in the world of science. Among the dumbest arguments for demoting Pluto and the others, Stern noted, was the idea that having 20 or more planets would be somehow inconvenient. Also ridiculous, he says, is the notion that a dwarf planet isn’t really a planet. “Is a dwarf evergreen not an evergreen?” he asks.

Stern’s barely concealed contempt for what he considers foolishness of the bureaucratic and scientific varieties hasn’t always endeared him to colleagues. One astronomer I asked about Stern replied, “My mother taught me that if you can’t say anything nice about someone, don’t say anything.” Another said, “His last name is ‘Stern.’ That tells you all you need to know.”

DeGrasse Tyson, for his part, offers measured praise: “When it comes to everything from rousing public sentiment in support of astronomy to advocating space science missions to defending Pluto, Alan Stern is always there.”

Stern also inspires less reserved admiration. “Alan is incredibly creative and incredibly energetic,” says Richard Binzel, an MIT planetary scientist who has known Stern since their graduate-school days. “I don’t know where he gets it.”

Read the entire article here.

Image: New Horizons Principal Investigator Alan Stern of Southwest Research Institute (SwRI), Boulder, CO, celebrates with New Horizons Flight Controllers after they received confirmation from the spacecraft that it had successfully completed the flyby of Pluto, Tuesday, July 14, 2015 in the Mission Operations Center (MOC) of the Johns Hopkins University Applied Physics Laboratory (APL), Laurel, Maryland. Public domain.

The Big Breakthrough Listen

If you were a Russian billionaire with money to burn and a penchant for astronomy and physics, what would you do? Well, rather than spend it on a 1,000 ft long super-yacht, you might want to spend it on the search for extraterrestrial intelligence. That’s what Yuri Milner is doing. So, hats off to him and his colleagues.

Though, I do hope any far-distant aliens have similar, or greater, sums of cash to throw at equipment to transmit a signal so that we may receive it. Also, I have to wonder what alien oligarchs spend their excess millions and billions on — and what type of monetary system they use (hopefully not Euros).

From the Guardian:

Astronomers are to embark on the most intensive search for alien life yet by listening out for potential radio signals coming from advanced civilisations far beyond the solar system.

Leading researchers have secured time on two of the world’s most powerful telescopes in the US and Australia to scan the Milky Way and neighbouring galaxies for radio emissions that betray the existence of life elsewhere. The search will be 50 times more sensitive, and cover 10 times more sky, than previous hunts for alien life.

The Green Bank Telescope in West Virginia, the largest steerable telescope on the planet, and the Parkes Observatory in New South Wales, are contracted to lead the unprecedented search that will start in January 2016. In tandem, the Lick Observatory in California will perform the most comprehensive search for optical laser transmissions beamed from other planets.

Operators have signed agreements that hand the scientists thousands of hours of telescope time per year to eavesdrop on planets that orbit the million stars closest to Earth and the 100 nearest galaxies. The telescopes will scan the centre of the Milky Way and the entire length of the galactic plane.

Launched on Monday at the Royal Society in London, with the Cambridge cosmologist Stephen Hawking, the Breakthrough Listen project has some of the world’s leading experts at the helm. Among them are Lord Martin Rees, the astronomer royal, Geoff Marcy, who has discovered more planets beyond the solar system than anyone, and the veteran US astronomer Frank Drake, a pioneer in the search for extraterrestrial intelligence (Seti).

Stephen Hawking said the effort was “critically important” and raised hopes for answering the question of whether humanity has company in the universe. “It’s time to commit to finding the answer, to search for life beyond Earth,” he said. “Mankind has a deep need to explore, to learn, to know. We also happen to be sociable creatures. It is important for us to know if we are alone in the dark.”

The project will not broadcast signals into space, because scientists on the project believe humans have more to gain from simply listening out for others. Hawking, however, warned against shouting into the cosmos, because some advanced alien civilisations might possess the same violent, aggressive and genocidal traits found among humans.

“A civilisation reading one of our messages could be billions of years ahead of us. If so they will be vastly more powerful and may not see us as any more valuable than we see bacteria,” he said.

The alien hunters are the latest scientists to benefit from the hefty bank balance of Yuri Milner, a Russian internet billionaire, who quit a PhD in physics to make his fortune. In the past five years, Milner has handed out prizes worth tens of millions of dollars to physicists, biologists and mathematicians, to raise the public profile of scientists. He is the sole funder of the $100m Breakthrough Listen project.

“It is our responsibility as human beings to use the best equipment we have to try to answer one of the biggest questions: are we alone?” Milner told the Guardian. “We cannot afford not to do this.”

Milner was named after Yuri Gagarin, who became the first person to fly in space in 1961, the year Milner was born.

The Green Bank and Parkes observatories are sensitive enough to pick up radio signals as strong as common aircraft radar from planets around the nearest 1,000 stars. Civilisations as far away as the centre of the Milky Way could be detected if they emit radio signals more than 10 times the power of the Arecibo planetary radar on Earth. The Lick Observatory can pick up laser signals as weak as 100W from nearby stars 25tn miles away.

Read the entire story here.

A Patent to End All Patents

You’ve seen the “we’ll help you file your patent application” infomercials on late night cable. The underlying promise is simple: your unique invention will find its way into every household on Earth and consequently will thrust you into the financial stratosphere, making you the planet’s first gazillionaire. Of course, this will happen only after you part with your hard-earned cash for help in filing the patent. Incidentally, filing a patent with the US Patent and Trademark Office (USPTO) usually starts at around $10,000 to $15,000 once professional fees are included.

Some patents are truly extraordinary in their optimistic silliness: wind harnessing bicycle, apparatus for simulating a high-five, flatulence deodorizer, jet-powered surfboard, thong diaper, life-size interactive bowl of soup, nicotine infused coffee, edible business cards, magnetic rings to promote immortality, and so it goes. Remember, though, this is the United States, and most crazy things are possible and profitable. So, you could well find yourself becoming addicted to those 20oz nicotine infused lattes each time you pull up at the local coffee shop on your jet-powered surfboard.

But perhaps the most recent thoroughly earnest and whacky patent filing comes from Boeing no less. It’s for a laser-powered fusion-fission jet engine. The engine uses ultra-high powered lasers to fuse pellets of hydrogen, causing uranium to fission, which generates heat and subsequently electricity. All of this powering your next flight to Seattle. So, the next time you fly on a Boeing aircraft, keep in mind what some of the company’s engineers have in store for you 100 or 1,000 years from now. I think I’d prefer to be disassembled and beamed up.

From ars technica:

Assume the brace position: Boeing has received a patent for, I kid you not, a laser-powered fusion-fission jet propulsion system. Boeing envisions that this system could replace both rocket and turbofan engines, powering everything from spacecraft to missiles to airplanes.

The patent, US 9,068,562, combines inertial confinement fusion, fission, and a turbine that generates electricity. It sounds completely crazy because it is. Currently, this kind of engine is completely unrealistic given our mastery of fusion, or rather our lack thereof. Perhaps in the future (the distant, distant future that is), this could be a rather ingenious solution. For now, it’s yet another patent head-scratcher.

To begin with, imagine the silhouette of a big turbofan engine, like you’d see on a commercial jetliner. Somewhere in the middle of the engine there is a fusion chamber, with a number of very strong lasers focused on a single point. A hohlraum (pellet) containing a mix of deuterium and tritium (hydrogen isotopes) is placed at this focal point. The lasers are all turned on at the same instant, creating massive pressure on the pellet, which implodes and causes the hydrogen atoms to fuse. (This is called inertial confinement fusion, as opposed to the magnetic confinement fusion that is carried out in a tokamak.)

According to the patent, the hot gases produced by the fusion are pushed out of a nozzle at the back of the engine, creating thrust—but that’s not all! One of the by-products of hydrogen fusion is lots of fast neutrons. In Boeing’s patented design, there is a shield around the fusion chamber that’s coated with a fissionable material (uranium-238 is one example given). The neutrons hit the fissionable material, causing a fission reaction that generates lots of heat.
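
[For reference, the reaction at the heart of the scheme is standard deuterium–tritium fusion, which releases most of its energy as a fast neutron, exactly what the uranium-coated shield is positioned to catch:

$$ {}^{2}_{1}\mathrm{H} + {}^{3}_{1}\mathrm{H} \rightarrow {}^{4}_{2}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV}) $$

Neutrons that energetic can fission uranium-238, which ordinary slow-neutron reactors cannot use as a primary fuel.]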

Finally, there’s some kind of heat exchanger system that takes the heat from the fission reaction and uses that heat (via a heated liquid or gas) to drive a turbine. This turbine generates the electricity that powers the lasers. Voilà: a fusion-fission rocket engine thing.

Let’s talk a little bit about why this is such an outlandish idea. To begin with, this patented design involves placing a lump of material that’s made radioactive in an airplane engine—and these vehicles are known to sometimes crash. Today, the only way we know of efficiently harvesting radioactive decay is a giant power plant, and we cannot get inertial fusion to fire more than once in a reasonable amount of time (much less on the short timescales needed to maintain thrust). This process requires building-sized lasers, like those found at the National Ignition Facility in California. Currently, the technique only works poorly. Those two traits are not conducive to air travel.

But this is the USA we’re talking about, where patents can be issued on firewalls (“being wielded in one of most outrageous trolling campaigns we have ever seen,” according to the EFF) and universities can claim such rights on “agent-based collaborative recognition-primed decision-making” (EFF: “The patent reads a little like what might result if you ate a dictionary filled with buzzwords and drank a bottle of tequila”). As far as patented products go, it is pretty hard to imagine this one actually being built in the real world. Putting aside the difficulties of inertial confinement fusion (we’re nowhere near hitting the break-even point), it’s also a bit far-fetched to shoehorn all of these disparate and rather difficult-to-work-with technologies into a small chassis that hangs from the wing of a commercial airplane.

Read the entire story here.


Europa Here We Come

NASA-Europa

With the European Space Agency’s (ESA) Philae lander firmly rooted to a comet, NASA’s Dawn probe orbiting dwarf planet Ceres and its New Horizons spacecraft hurtling towards Pluto and Charon, it would seem that we are doing lots of extraterrestrial exploration lately. Well, this is exciting, but for arm-chair explorers like myself this is still not enough. So, three cheers to NASA for giving a recent thumbs up to their next great mission — Europa Multi Flyby — to Jupiter’s moon, Europa.

Development is a go! But we’ll have to wait until the mid-2020s for lift-off. And, better yet, ESA has a mission to Europa planned for launch in 2022. Can’t wait — it looks spectacular.

From ars technica:

Get ready, we’re going to Europa! NASA’s plan to send a spacecraft to explore Jupiter’s moon just passed a major hurdle. The mission, planned for the 2020s, now has NASA’s official stamp of approval and was given the green light to move from concept phase to development phase.

Formerly known as Europa Clipper, the mission will temporarily be referred to as the Europa Multi Flyby Mission until it is given an official name. The current mission plan would include 45 separate flybys around the moon while orbiting Jupiter every two weeks. “We are taking an exciting step from concept to mission in our quest to find signs of life beyond Earth,” John Grunsfeld, associate administrator for NASA’s Science Mission Directorate, said in a press release.

Since Galileo first turned a spyglass up to the skies and discovered the Jovian moon, Europa has been a world of intrigue. In the 1970s, we received our first look at Europa through the eyes of Pioneer 10 and 11, followed closely by the twin Voyager probes in 1979. Their images provided the first detailed view of the Solar System’s smoothest body. These photos also delivered evidence that the moon might be harboring a subsurface ocean. In the mid 1990s, the Galileo spacecraft gave us the best view to date of Europa’s surface.

“Observations of Europa have provided us with tantalizing clues over the last two decades, and the time has come to seek answers to one of humanity’s most profound questions,” Grunsfeld said. “Mainly, is there life beyond Earth?”

Sending a probe to explore Jupiter’s icy companion will help scientists in the search for this life. If Europa can support microbial life, other glacial moons such as Enceladus might as well.

Water, chemistry, and energy are three components essential to the presence of life. Liquid water is present throughout the Solar System, but so far the only world known to support life is Earth. Scientists think that if we follow the water, we may find evidence of life beyond Earth.

However, water alone will not support life; the right combination of ingredients is key. This mission to Europa will explore the moon’s potential habitability as opposed to outright looking for life.

When we set out to explore new worlds, we do it in phases. First we flyby, then we send robotic landers, and then we send people. This three-step process is how we, as humans, have explored the Moon and how we are partly through the process of exploring Mars.

The flyby of Europa will be a preliminary mission with four objectives: explore the ice shell and subsurface ocean; determine the composition, distribution, and chemistry of various compounds and how they relate to the ocean composition; map surface features and determine if there is current geologic activity; characterize sites to determine where a future lander might safely touch down.

Europa, at 3,100 kilometers wide (1,900 miles), is the sixth largest moon in the Solar System. It has a 15 to 30 kilometer (9 to 18 mile) thick icy outer crust that covers a salty subsurface ocean. If that ocean is in contact with Europa’s rocky mantle, a number of complex chemical reactions are possible. Scientists think that hydrothermal vents lurk on the seafloor, and, just like the vents here on Earth, they could support life.

The Galileo orbiter taught us most of what we know about Europa through 12 flybys of the icy moon. The new mission is scheduled to conduct approximately 45 flybys over a 2.5-year period, providing even more insight into the moon’s habitability.

Read the article here.

Image: Europa, Jupiter’s sixth-closest moon and the sixth-largest moon in the Solar System. Courtesy of NASA.

An Eleven Year Marathon

While 11 years is about how long my kids suggest it would take me to run a marathon, this marathon is entirely other-worldly. It’s taken NASA’s Opportunity rover this length of time to cover just over 26 miles. It may seem like an awfully long time to cover that short distance, but think of all the rest stops — for incredible scientific discovery — along the way.
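
For scale, a quick bit of arithmetic on that pace, using the full marathon distance of 26.2 miles:

$$ \frac{26.2\ \mathrm{miles}}{11\ \mathrm{years}\times 8{,}766\ \mathrm{hours/year}} \approx 2.7\times 10^{-4}\ \mathrm{mph} $$

That is roughly a foot and a half per hour. Those rest stops were long ones indeed.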

Check out a time-lapse that compresses Opportunity’s incredible Martian journey into a mere 8 minutes.

[tube]3b1DxICZbGc[/tube]

Video courtesy of NASA / JPL.

Thirty Going on Sixty or Sixty Going on Thirty?

By now you probably realize that I’m a glutton for human research studies. I’m particularly fond of studies that highlight a particular finding one week, only to be contradicted by the results of another study the following week.

However, despite the lack of contradictions (so far), this one, published in the Proceedings of the National Academy of Sciences, caught my eye. It suggests that we age at remarkably different rates. While most subjects showed a biological age within a handful of years of their actual, chronological age, there were some surprises. Some 38-year-olds showed a biological age approaching 60, while others appeared ten years younger.

From the BBC:

A study of people born within a year of each other has uncovered a huge gulf in the speed at which their bodies age.

The report, in Proceedings of the National Academy of Sciences, tracked traits such as weight, kidney function and gum health.

Some of the 38-year-olds were ageing so badly that their “biological age” was on the cusp of retirement.

The team said the next step was to discover what was affecting the pace of ageing.

The international research group followed 954 people from the same town in New Zealand who were all born in 1972-73.

The scientists looked at 18 different ageing-related traits when the group turned 26, 32 and 38 years old.

The analysis showed that at the age of 38, the people’s biological ages ranged from the late-20s to those who were nearly 60.

“They look rough, they look lacking in vitality,” said Prof Terrie Moffitt from Duke University in the US.

The study said some people had almost stopped ageing during the period of the study, while others were gaining nearly three years of biological age for every twelve months that passed.

People with older biological ages tended to do worse in tests of brain function and had a weaker grip.

Most people’s biological age was within a few years of their chronological age. It is unclear how the pace of biological ageing changes through life with these measures.

Read the entire story here.

Earth 2.0: Kepler 452b

452b_artistconcept_beautyshot

On July 23, 2015, NASA announced the discovery of Kepler 452b, an Earth-like exoplanet quickly dubbed Earth 2.0. Found following a four-year trawl through data from the Kepler exoplanet-hunting space telescope, Kepler 452b is the closest exoplanet yet in its resemblance to Earth. It revolves around its sun-like home star every 385 days at a distance similar to that between Earth and our sun (93 million miles).
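
Those two numbers, a 385-day year at roughly the Earth-sun distance, hang together nicely under Kepler’s third law. Here is a minimal sanity check in Python, assuming a host star of about one solar mass (the real star is thought to be slightly heavier):

```python
# Kepler's third law: T^2 = a^3 / M, with T in years, a in AU, M in solar masses.
T = 385 / 365.25             # Kepler 452b's orbital period, in years
M = 1.0                      # assumed host-star mass, in solar masses (sun-like)
a = (M * T ** 2) ** (1 / 3)  # implied orbital distance, in AU
print(f"a = {a:.2f} AU")     # ~1.04 AU -- just outside Earth's orbit
```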

Unfortunately, Kepler 452b is a “mere” 1,400 light years away — so you can forget trying to strike up a real-time conversation with any of its intelligent inhabitants. If it does harbor life, I have to hope that any sentient lifeforms have taken better care of their home than we earthlings do of our own. Then again, it may be better that the exoplanet hosts only non-intelligent life!

Here’s NASA’s technical paper.

Check out NASA’s briefing here.

Image: Artist rendition of Kepler 452b. Courtesy of NASA. Public Domain.

Where Are They?

Astrophysics professor Adam Frank reminds us to ponder Enrico Fermi’s insightful question, posed in the middle of the last century. Fermi’s question spawned his famous, eponymous paradox, and goes something like this:

Why is there no evidence of extraterrestrial civilizations in our Milky Way galaxy given the age of the universe and vast number of stars within it?

Based on simple assumptions and fairly accurate estimates of the universe’s age, the number of galaxies and stars within it, the probability of Earth-like planets and the development of intelligent life on these planets, it should be highly likely that some civilizations have already developed the capability for interstellar travel. In fact, even a slow pace of intra-galactic travel should have led to the colonization of our entire galaxy within just a few tens of millions of years, which is a blink of an eye on a cosmological timescale. Yet we see no evidence on Earth or anywhere beyond. And therein lies the conundrum.
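
Here is a minimal back-of-the-envelope sketch of that colonization-wave estimate, using my own round numbers rather than anything from Frank’s piece:

```python
# Colonization-wave timescale, in the spirit of Fermi's original estimate.
# Every figure below is an assumed round number for illustration only.
GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way, in light-years
STAR_SPACING_LY = 5           # typical distance between neighboring stars
SHIP_SPEED_C = 0.01           # ship speed as a fraction of the speed of light
PAUSE_YEARS = 500             # settling time per system before launching anew

years_per_hop = STAR_SPACING_LY / SHIP_SPEED_C + PAUSE_YEARS  # 1,000 years
hops_to_cross = GALAXY_DIAMETER_LY / STAR_SPACING_LY          # 20,000 hops
millions = years_per_hop * hops_to_cross / 1e6
print(f"Galaxy crossed in ~{millions:.0f} million years")     # ~20 million
```

Even with five centuries of dawdling at every stop, the wave sweeps the galaxy in a couple of eye-blinks of cosmological time.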

The doomsayers might have us believe that extraterrestrial civilizations have indeed developed numerous times throughout our galaxy, but that none has made the crucial leap past ecological catastrophe and technological self-destruction long enough to slip the bonds of its home planet. Do we have the power to avoid the same fate? I hope so.

From 13.7:

The story begins like this: In 1950, a group of high-powered physicists were lunching together near the Los Alamos National Laboratory.

Among those in attendance were Edward Teller (father of the hydrogen bomb) and the Nobel Prize-winning Enrico Fermi. The discussion turned to a spate of recent UFO sightings and, then, on to the possibility of seeing an object (made by aliens) move faster than light. The conversation eventually turned to other topics when, out of the blue, Fermi suddenly asked: “Where is everybody?”

While he’d startled his colleagues, they all quickly understood what he was referring to: Where are all the aliens?

What Fermi realized in his burst of insight was simple: If the universe was teeming with intelligent technological civilizations, why hadn’t they already made it to Earth? Indeed, why hadn’t they made it everywhere?

This question, known as “Fermi’s paradox,” is now a staple of astrobiological/SETI thinking. And while it might seem pretty abstract and inconsequential to our day-to-day existence, within Fermi’s paradox there lies a terrible possibility that haunts the fate of humanity.

Enough issues are packed into Fermi’s paradox for more than one post and — since Caleb Scharf and I are just starting a research project related to the question — I am sure to return to it. Today, however, I just want to unpack the basics of Fermi’s paradox and its consequences.

The most important thing to understand about Fermi’s paradox is that you don’t need faster-than-light travel, a warp drive or other exotic technology to take it seriously. Even if a technological civilization built ships that reached only a fraction of the speed of light, we might still expect all the stars (and the planets) to be “colonized.”

For example, let’s imagine that just one high-tech alien species emerges and starts sending ships out at one-hundredth of the speed of light. With that technology, they’d cross the typical distance between stars in “just” a few centuries to a millennium. If, once they got to a new solar system, they began using its resources to build more ships, then we can imagine how a wave of colonization begins propagating across the galaxy.

But how long does it take this colonization wave to spread?

Remarkably, it would only take a fraction of our galaxy’s lifetime before all the stars are inhabited. Depending on what you assume, the propagating wave of colonization could make it from one end of our Milky Way to the other in just 10 million years. While that might seem very long to you, it’s really just a blink of the eye to the 10-billion-year-old Milky Way (in other words, the colonization wave crosses in 0.001 times the age of the galaxy). That means if an alien civilization began at some random moment in the Milky Way’s history, odds are it has had time to colonize the entire galaxy.

You can choose your favorite sci-fi trope for what’s going on with these alien “slow ships.” Maybe they use cryogenic suspension. Maybe they’re using generation ships — mobile worlds whose inhabitants live out entire lives during the millennia-long crossing. Maybe the aliens don’t go themselves but send fully autonomous machines. Whatever scenario you choose, simple calculations, like the one above, tend to imply the aliens should be here already.

Of course, you can also come up with lots of resolutions to Fermi’s paradox. Maybe the aliens don’t want to colonize other worlds. Maybe none of the technologies for the ships described above really work. Maybe, maybe, maybe. We can take up some of those solutions in later 13.7 posts.

For today, however, let’s just consider the one answer that really matters for us, the existential one that is very, very freaky indeed: The aliens aren’t here because they don’t exist. We are the only sentient, technological species that exists in the entire galaxy.

It’s hard to overstate how profound this conclusion would be.

The consequences cut both ways. On the one hand, it’s possible that no other species has ever reached our state of development. Our galaxy with its 300 billion stars — meaning 300 billion chances for self-consciousness — has never awakened anywhere else. We would be the only ones looking into the night sky and asking questions. How impossibly lonely that would be.

Read the entire article here.


Hello Pluto

Pluto-New-Horizons-14Jul2015

Today NASA’s New Horizons spacecraft flew past the (dwarf) planet Pluto and its five moons. After a 9.5-year voyage covering around 3 billion miles, the refrigerator-sized probe has finally reached its icy target. Unfortunately, New Horizons is traveling so quickly that it will not enter orbit around Pluto but will continue its 30,000 mph trek onward, eventually into interstellar space. The images, and science, that the craft will stream back to Earth over the coming months should be spectacular.

Check out more on the New Horizons mission here.

Image: Pluto as imaged by New Horizons; the last image taken prior to its closest approach on July 14, 2015. Courtesy of NASA.

The Devout Atheist

Dawkins_aaconf

Evolutionary biologist Richard Dawkins sprang to the public’s attention via his immensely popular book The Selfish Gene. Since its publication almost 40 years ago, its author has assumed the unofficial mantle of Atheist-In-Chief. His passionate and impatient defense — some would call it crusading offense — of all things godless has rubbed many the wrong way, including numerous unbelievers. That said, his reasoning remains crystal clear and his focus laser-like. I just wish he would stay away from Twitter.

Check out his foundation here.

From the Guardian:

In Dublin, not long ago, Richard Dawkins visited a steakhouse called Darwin’s. He was in town to give a talk on the origins of life at Trinity College with the American physicist Lawrence Krauss. In the restaurant, a large model gorilla squatted in a corner and a series of sepia paintings of early man hung in the dining room – though, Dawkins pointed out, not quite in the right chronological order. A space by the bar had been refitted to resemble the interior of the Beagle, the vessel on which Charles Darwin sailed to South America in 1831 and conceived his theory of natural selection. “Oh look at this!” Dawkins said, examining the decor. “It’s terrific! Oh, wonderful.”

Over the years, Dawkins, a zoologist by training, has expressed admiration for Darwin in the way a schoolboy might worship a sporting giant. In his first memoir, Dawkins noted the “serendipitous realisation” that his full name – Clinton Richard Dawkins – shared the same initials as Charles Robert Darwin. He owns a prized first edition of On The Origin of Species, which he can quote from memory. For Dawkins, the book is totemic, the founding text of his career. “It’s such a thorough, unanswerable case,” he said one afternoon. “[Darwin] called it one long argument.” As a description of Dawkins’s own life, particularly its late phase, “one long argument” serves fairly well. As the global face of atheism over the last decade, Dawkins has ratcheted up the rhetoric in his self-declared war against religion. He is the general who chooses to fight on the front line – whose scorched-earth tactics have won him fervent admirers, and ferocious enemies. What is less clear, however, is whether he is winning.

Over dinner – chicken for Dawkins, steak for everyone else – he spoke little. He was anxious to leave early in order to discuss the format of the event with Krauss. Though Dawkins gives a talk roughly once a fortnight, he still obsessively overprepares. On this occasion, there was no need – he and Krauss had put on a similar show the night before at the University of Ulster in Belfast. They had also appeared on a radio talkshow, during which they had attempted to debate a creationist (an “idiot”, in Dawkins’s terminology). “She simply tried to shout down everything Lawrence and I said. So she was in effect going la la la la la.” Dawkins stuck his fingers in his ears as he sang.

Krauss and Dawkins have toured frequently as a double act, partners in a global quest to broadcast the wonder of science and the nonexistence of God. Dawkins has been on this mission ever since 1976, when he published The Selfish Gene, the book that made him famous, which has now sold over a million copies. Since then, he has written another 10 influential books on science and evolution, plus The God Delusion, his atheist blockbuster, and become the most prominent of the so-called New Atheists – a group of writers, including Christopher Hitchens and Sam Harris, who published anti-religion polemics in the years after 9/11.

An hour or so after dinner, the Burke Theatre in Trinity College, a large modern lecture hall with banked seating, was full. After separate presentations, Krauss and Dawkins conversed freely, swapping ideas on the origins of life. As he spoke, Dawkins took on a grandfatherly air, as though passing on hard-earned wisdom. He has always sought to inject beauty into biology, and his voice wavered with emotion as he shifted from dry fact to lyrical metaphor.

Dawkins has the stately confidence of one who has spent half a life behind a lectern. He has aged well, thanks to the determined jaw and carved cheekbones of a 1950s matinee idol. His hair remains in the style that has served him for 70 years, a lopsided sweep. A prominent brow and hawkish stare give him a look of constant urgency, as though he is waiting for everyone to catch up. In Dublin, his outfit was academic-on-tour: jacket, woolly jumper and tie, one of a collection hand-painted by his wife, Lalla Ward, which depict penguins, fish, birds of prey.

At the end of the Trinity event, a crowd of about 40 audience members descended on to the stage, clutching books to be signed. Dawkins eventually retreated into the wings to avoid a crush. One young schoolteacher lingered in the hallway long after the rest of the audience had left, in the hope of shaking Dawkins’s hand. Earlier that day, Dawkins had expressed bewilderment at his own celebrity. “I find the epidemic of selfies disconcerting,” he said. “It’s always, ‘one quick photo.’ One quick. But it never is.” Though he is used to receiving a steady flow of letters from fans of The God Delusion and new converts to atheism, he does not perceive himself as a figurehead. “I don’t need to say if I think of myself as a leader,” he said a few weeks later. “I simply need to say the book has sold three million copies.”

Dawkins turned 74 in March this year. To celebrate, he had dinner with Ward at Cherwell Boathouse, a smart restaurant overlooking the river in Oxford; the occasion was marred only slightly by a loud-voiced fellow diner, Dawkins recalled, “who quacked like Donald Duck”. An academic of his eminence could, by now, have eased into a distinguished late period: more books, the odd speech, master of an Oxford college, a gentle tending to his legacy. Though he is in a retrospective phase – one memoir published, a second on its way later this year – peaceful retreat from public life has not been the Dawkins way. “Some people might say why don’t you just get on with gardening,” he said. “I think [there’s a] passion for truth and a passion for justice that doesn’t allow me to do that.”

Instead, Dawkins remains indefatigably active. He rarely takes a holiday, but travels frequently to give talks – in the last four months he has been to Ireland, the Czech Republic, Bulgaria and Brazil. Though he says he prefers to speak about science, God inevitably looms. “I suppose some of what I do is an attempt to change people’s minds about religion,” he said, with some understatement, between events in Ireland. “And I do think that’s a politically important thing to be doing.” For Dawkins, who describes his own politics as “vaguely left”, this means a concern for the state of the world, and a desire, ultimately, to eradicate religion from society. In his mission, Dawkins is still, at heart, a teacher. “I would like to leave the world a better place,” he said. “I like to think my science books have had a positive educational effect, but I also want to leave the world a better place in influencing opinion in other fields where there is illogic, obscurantism, pretension.” Religious faith, for Dawkins, is above all a sign of faulty thinking, of ignorance; he wants to educate the ill-informed out of their mistakes. He sees religion, as he once put it on Twitter, as “an organised licence to be acceptably stupid”.

The two strands of Dawkins’s mission – promoting science, demolishing religion – are intended to be complementary. “If they are antagonistic to each other, that would be regrettable,” he said, “but I don’t see why they should be.” But antagonism is part of Dawkins’s daily life. “I suppose some of the passions that I show are more appropriate to a young man than somebody of my age.” Since his arrival on Twitter in 2008, his public pronouncements have become more combative – and, at times, flamboyantly irritable: “How dare you force your dopey unsubstantiated superstitions on innocent children too young to resist?” he tweeted last June. “How DARE you?”

Read the entire story here.

Image: Richard Dawkins, 34th annual conference of American Atheists (2008). Public domain.

Emmy Noether, Mathematician

Most non-mathematicians have probably heard of Euclid, Pythagoras, Poincaré, Gauss, Lagrange, de Fermat, and Hilbert, to name but a few. All giants in their various mathematical specialties. But I would wager that even most mathematicians have never heard of Noether. Probably because Emmy Noether was a woman.

Yet learning of her exploits in the early 20th century, I can see how far we still have to travel to truly recognize the contributions of women in academia and science — and everywhere else for that matter — as on a par with those of men. Women like Noether succeeded despite tremendous (male) pressure against them, which makes their achievements even more astonishing.

From ars technica:

By 1915, any list of the world’s greatest living mathematicians included the name David Hilbert. And though Hilbert had previously devoted his career to logic and pure mathematics, he, like many other critical thinkers at the time, eventually became obsessed with a bit of theoretical physics.

With World War I raging on throughout Europe, Hilbert could be found sitting in his office at the great university at Göttingen trying and trying again to understand one idea—Einstein’s new theory of gravity.

Göttingen served as the center of mathematics for the Western world by this point, and Hilbert stood as one of its most renowned thinkers. He was a prominent leader of the minority of mathematicians who preferred a symbolic, axiomatic development in contrast to a more concrete style that emphasized the construction of particular solutions. Many of his peers recoiled from these modern methods, one even calling them “theology.” But Hilbert eventually won over most critics through the power and fruitfulness of his research.

For Hilbert, his rigorous approach to mathematics stood out quite a bit from the common practice of scientists, causing him some consternation. “Physics is much too hard for physicists,” he famously quipped. So wanting to know more, he invited Einstein to Göttingen to lecture about gravity for a week.

Before the year ended, both men would submit papers deriving the complete equations of general relativity. But naturally, the papers differed entirely in their methods. When it came to Einstein’s theory, Hilbert and his Göttingen colleagues simply couldn’t wrap their minds around a peculiarity having to do with energy. All other physical theories—including electromagnetism, hydrodynamics, and the classical theory of gravity—obeyed local energy conservation; Einstein’s theory, apparently, did not. One of the many paradoxical consequences of this failure of energy conservation was that an object could speed up as it lost energy by emitting gravity waves, whereas clearly it should slow down.
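
(A brief aside of my own, not part of the ars technica article, for readers who want the difficulty in symbols. In special relativity, local energy–momentum conservation is the statement $\partial_\mu T^{\mu\nu} = 0$, which can be integrated over space to give a conserved total energy. In general relativity it becomes the covariant statement
$$\nabla_\mu T^{\mu\nu} = \partial_\mu T^{\mu\nu} + \Gamma^{\mu}{}_{\mu\lambda}T^{\lambda\nu} + \Gamma^{\nu}{}_{\mu\lambda}T^{\mu\lambda} = 0,$$
and the extra connection terms, which represent energy exchanged with the gravitational field itself, spoil that integration. No obvious globally conserved total energy survives, which is the peculiarity that stumped Göttingen.)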

Unable to make progress, Hilbert turned to the only person he believed might have the specialized knowledge and insight to help. This would-be savior wasn’t even allowed to be a student at Göttingen once upon a time, but Hilbert had long been a fan of this mathematician’s highly “abstract” approach (which Hilbert considered similar to his own style). He managed to recruit this soon-to-be partner to Göttingen at about the same time Einstein showed up.

And that’s when a woman—one Emmy Noether—created what may be the most important single theoretical result in modern physics.

 …

During Noether’s stay at Göttingen, Hilbert contrived a way to allow her to lecture unofficially. He repeatedly attempted to get her hired as a Privatdozent, or an officially recognized lecturer. The science and mathematics faculty was generally in favor of this, but Hilbert could not overcome the resistance of the humanities professors, who simply could not stomach the idea of a female teacher. At one meeting of the faculty senate, frustrated again in his attempts to get Noether a job, he famously remarked, “I do not see that the sex of a candidate is an argument against her admission as Privatdozent. After all, we are a university, not a bathing establishment.”

Social barriers aside, Noether immediately grasped the problem with Einstein’s theory. Over the course of three years, she not only solved it, but in doing so she proved a theorem that simultaneously reached back to the dawn of physics and pushed forward to the physics of today. Noether’s Theorem, as it is now called, lies at the heart of modern physics, unifying everything from the orbits of planets to the theories of elementary particles.
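
For readers who want the gist in symbols, here is a minimal sketch of the theorem’s simplest, classical-mechanics form (my summary, not part of the ars technica piece). If the action $S = \int L(q, \dot{q}, t)\,dt$ is unchanged by a continuous transformation $q \to q + \epsilon\,\delta q$, then along any solution of the equations of motion the quantity
$$J = \frac{\partial L}{\partial \dot{q}}\,\delta q$$
is conserved, i.e. $\mathrm{d}J/\mathrm{d}t = 0$. Each continuous symmetry pairs with a conservation law: invariance under translation in time gives conservation of energy, translation in space gives conservation of momentum, and rotation gives conservation of angular momentum. Seen this way, the puzzle at Göttingen dissolves: in a dynamical, curved spacetime the relevant symmetry is generally absent, so ordinary energy conservation need not hold.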

Read the entire story here.

Image: Emmy Noether (1882-1935). Public domain.

 

It’s Official — Big Rip Coming!

The UK’s Daily Telegraph newspaper just published this article, so it must be true. After all, the broadsheet has been a stalwart of conservative British journalism since, well, the dawn of time, some 6,000 years ago.

Apparently our universe will end in a so-called Big Rip, and not in a Big Freeze. Nor will it end in a Big Crunch, which is like the Big Bang in reverse. The Big Rip sounds like a rather dramatic version of the impending cosmological apocalypse, but at least it is a comfortably distant one. So, I’m all for it. I can’t wait… 22 billion years and counting.

From the Daily Telegraph:

A group of scientists claim to have evidence supporting the Big Rip theory, explaining how the universe will end – in 22 billion years.

Researchers at Vanderbilt University in Nashville, Tennessee, have discovered a new mathematical formulation that supports the Big Rip theory – that as the universe expands, it will eventually be ripped apart.

“The idea of the Big Rip is that eventually even the constituents of matter would start separating from each other. You’d be seeing all the atoms being ripped apart … it’s fair to say that it’s a dramatic scenario,” Dr Marcelo Disconzi told the Guardian.

Scientists observed distant supernovae to examine whether the Big Rip theory, which was first suggested in 2003, was possible.

The theory relies on the assumption that the universe continues to expand faster and faster, eventually causing the Big Rip.
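
The article does not show where the 22-billion-year figure comes from, but the standard back-of-the-envelope estimate, from the 2003 paper by Caldwell, Kamionkowski and Weinberg that first proposed the Big Rip, runs roughly as follows (my illustration; the Telegraph does not give the parameter values). For a flat universe dominated by dark energy with a constant equation-of-state parameter $w < -1$, the scale factor blows up at a finite time
$$t_{\mathrm{rip}} - t_0 \approx \frac{2}{3\,|1+w|\,H_0\sqrt{1-\Omega_m}},$$
where $H_0$ is the Hubble constant and $\Omega_m$ the matter fraction. With $w \approx -1.5$, $H_0^{-1} \approx 14$ billion years and $\Omega_m \approx 0.3$, this gives $\tfrac{4}{3} \times 1.2 \times 14 \approx 22$ billion years.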

“Mathematically we know what this means. But what it actually means in physical terms is hard to fathom,” said Dr Disconzi.

Conflicting theories for how the universe will end include the Big Crunch, whereby the Big Bang reverses and everything contracts, and the Big Freeze, in which the universe slowly expands until it eventually becomes too cold to sustain life.

Previous objections to the Big Rip theory included the fact that existing mathematical treatments of sticky fluids – those with high levels of viscosity – predicted signals travelling faster than the speed of light, defying the laws of physics.

However, the Vanderbilt team combined a series of equations, including some dating back to 1955, to show that viscosity may not be a barrier to a rapidly expanding universe.

“My result by no means settles the question of what the correct formulation of relativistic viscous fluids is. What it shows is that, under some assumptions, the equations put forward by Lichnerowicz have solutions and the solutions do not predict faster-than-light signals. But we still don’t know if these results remain valid under the most general situations relevant to physics,” Dr Disconzi told the New Statesman.

Read the entire story here.

Image: Cementerio de Polloe, in Donostia-San Sebastián, 2014. Courtesy of Zarateman. Public domain.