Crispr – Designer DNA

The world welcomed basic genetic engineering in the mid-1970s, when biotech pioneers Herbert Boyer and Stanley Cohen transferred DNA from one organism into another (a bacterium). In so doing they created the first genetically modified organism (GMO). A mere forty years later we now have extremely powerful and accessible (cheap) biochemical tools for tinkering with the molecules of heredity. One of these tools, known as Crispr-Cas9, makes it easy and fast to move any genes around, within and across any species.

The technique promises immense progress in the fight against inherited illness, cancer and viral infection. It also opens the door to untold manipulation of DNA in lower organisms and plants to develop an infection-resistant and faster-growing food supply, and to reimagine a whole host of biochemical and industrial processes (such as ethanol production).

Yet, as is the case with many technological advances that hold great promise, this next revolution also carries tremendous peril. Our bioengineering prowess has yet to be matched with a sound and pervasive ethical framework. Can humans reach a consensus on how to shape, focus and limit the application of such techniques? And, equally importantly, can we enforce these bioethical constraints before designer babies and bioweapons arrive and prove impossible to “uninvent”?

From Wired:

Spiny grass and scraggly pines creep amid the arts-and-crafts buildings of the Asilomar Conference Grounds, 100 acres of dune where California’s Monterey Peninsula hammerheads into the Pacific. It’s a rugged landscape, designed to inspire people to contemplate their evolving place on Earth. So it was natural that 140 scientists gathered here in 1975 for an unprecedented conference.

They were worried about what people called “recombinant DNA,” the manipulation of the source code of life. It had been just 22 years since James Watson, Francis Crick, and Rosalind Franklin described what DNA was—deoxyribonucleic acid, four different structures called bases stuck to a backbone of sugar and phosphate, in sequences thousands of bases long. DNA is what genes are made of, and genes are the basis of heredity.

Preeminent genetic researchers like David Baltimore, then at MIT, went to Asilomar to grapple with the implications of being able to decrypt and reorder genes. It was a God-like power—to plug genes from one living thing into another. Used wisely, it had the potential to save millions of lives. But the scientists also knew their creations might slip out of their control. They wanted to consider what ought to be off-limits.

By 1975, other fields of science—like physics—were subject to broad restrictions. Hardly anyone was allowed to work on atomic bombs, say. But biology was different. Biologists still let the winding road of research guide their steps. On occasion, regulatory bodies had acted retrospectively—after Nuremberg, Tuskegee, and the human radiation experiments, external enforcement entities had told biologists they weren’t allowed to do that bad thing again. Asilomar, though, was about establishing prospective guidelines, a remarkably open and forward-thinking move.

At the end of the meeting, Baltimore and four other molecular biologists stayed up all night writing a consensus statement. They laid out ways to isolate potentially dangerous experiments and determined that cloning or otherwise messing with dangerous pathogens should be off-limits. A few attendees fretted about the idea of modifications of the human “germ line”—changes that would be passed on from one generation to the next—but most thought that was so far off as to be unrealistic. Engineering microbes was hard enough. The rules the Asilomar scientists hoped biology would follow didn’t look much further ahead than ideas and proposals already on their desks.

Earlier this year, Baltimore joined 17 other researchers for another California conference, this one at the Carneros Inn in Napa Valley. “It was a feeling of déjà vu,” Baltimore says. There he was again, gathered with some of the smartest scientists on earth to talk about the implications of genome engineering.

The stakes, however, have changed. Everyone at the Napa meeting had access to a gene-editing technique called Crispr-Cas9. The first term is an acronym for “clustered regularly interspaced short palindromic repeats,” a description of the genetic basis of the method; Cas9 is the name of a protein that makes it work. Technical details aside, Crispr-Cas9 makes it easy, cheap, and fast to move genes around—any genes, in any living thing, from bacteria to people. “These are monumental moments in the history of biomedical research,” Baltimore says. “They don’t happen every day.”

Using the three-year-old technique, researchers have already reversed mutations that cause blindness, stopped cancer cells from multiplying, and made cells impervious to the virus that causes AIDS. Agronomists have rendered wheat invulnerable to killer fungi like powdery mildew, hinting at engineered staple crops that can feed a population of 9 billion on an ever-warmer planet. Bioengineers have used Crispr to alter the DNA of yeast so that it consumes plant matter and excretes ethanol, promising an end to reliance on petrochemicals. Startups devoted to Crispr have launched. International pharmaceutical and agricultural companies have spun up Crispr R&D. Two of the most powerful universities in the US are engaged in a vicious war over the basic patent. Depending on what kind of person you are, Crispr makes you see a gleaming world of the future, a Nobel medallion, or dollar signs.

The technique is revolutionary, and like all revolutions, it’s perilous. Crispr goes well beyond anything the Asilomar conference discussed. It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.

In a way, humans were genetic engineers long before anyone knew what a gene was. They could give living things new traits—sweeter kernels of corn, flatter bulldog faces—through selective breeding. But it took time, and it didn’t always pan out. By the 1930s refining nature got faster. Scientists bombarded seeds and insect eggs with x-rays, causing mutations to scatter through genomes like shrapnel. If one of hundreds of irradiated plants or insects grew up with the traits scientists desired, they bred it and tossed the rest. That’s where red grapefruits came from, and most barley for modern beer.

Genome modification has become less of a crapshoot. In 2002, molecular biologists learned to delete or replace specific genes using enzymes called zinc-finger nucleases; the next-generation technique used enzymes named TALENs.

Yet the procedures were expensive and complicated. They only worked on organisms whose molecular innards had been thoroughly dissected—like mice or fruit flies. Genome engineers went on the hunt for something better.

As it happened, the people who found it weren’t genome engineers at all. They were basic researchers, trying to unravel the origin of life by sequencing the genomes of ancient bacteria and microbes called Archaea (as in archaic), descendants of the first life on Earth. Deep amid the bases, the As, Ts, Gs, and Cs that made up those DNA sequences, microbiologists noticed recurring segments that were the same back to front and front to back—palindromes. The researchers didn’t know what these segments did, but they knew they were weird. In a branding exercise only scientists could love, they named these clusters of repeating palindromes Crispr.

Then, in 2005, a microbiologist named Rodolphe Barrangou, working at a Danish food company called Danisco, spotted some of those same palindromic repeats in Streptococcus thermophilus, the bacteria that the company uses to make yogurt and cheese. Barrangou and his colleagues discovered that the unidentified stretches of DNA between Crispr’s palindromes matched sequences from viruses that had infected their S. thermophilus colonies. Like most living things, bacteria get attacked by viruses—in this case they’re called bacteriophages, or phages for short. Barrangou’s team went on to show that the segments served an important role in the bacteria’s defense against the phages, a sort of immunological memory. If a phage infected a microbe whose Crispr carried its fingerprint, the bacteria could recognize the phage and fight back. Barrangou and his colleagues realized they could save their company some money by selecting S. thermophilus species with Crispr sequences that resisted common dairy viruses.

As more researchers sequenced more bacteria, they found Crisprs again and again—half of all bacteria had them. Most Archaea did too. And even stranger, some of Crispr’s sequences didn’t encode the eventual manufacture of a protein, as is typical of a gene, but instead led to RNA—single-stranded genetic material. (DNA, of course, is double-stranded.)

That pointed to a new hypothesis. Most present-day animals and plants defend themselves against viruses with structures made out of RNA. So a few researchers started to wonder if Crispr was a primordial immune system. Among the people working on that idea was Jill Banfield, a geomicrobiologist at UC Berkeley, who had found Crispr sequences in microbes she collected from acidic, 110-degree water from the defunct Iron Mountain Mine in Shasta County, California. But to figure out if she was right, she needed help.

Luckily, one of the country’s best-known RNA experts, a biochemist named Jennifer Doudna, worked on the other side of campus in an office with a view of the Bay and San Francisco’s skyline. It certainly wasn’t what Doudna had imagined for herself as a girl growing up on the Big Island of Hawaii. She simply liked math and chemistry—an affinity that took her to Harvard and then to a postdoc at the University of Colorado. That’s where she made her initial important discoveries, revealing the three-dimensional structure of complex RNA molecules that could, like enzymes, catalyze chemical reactions.

The mine bacteria piqued Doudna’s curiosity, but when Doudna pried Crispr apart, she didn’t see anything to suggest the bacterial immune system was related to the one plants and animals use. Still, she thought the system might be adapted for diagnostic tests.

Banfield wasn’t the only person to ask Doudna for help with a Crispr project. In 2011, Doudna was at an American Society for Microbiology meeting in San Juan, Puerto Rico, when an intense, dark-haired French scientist asked her if she wouldn’t mind stepping outside the conference hall for a chat. This was Emmanuelle Charpentier, a microbiologist at Umeå University in Sweden.

As they wandered through the alleyways of old San Juan, Charpentier explained that one of Crispr’s associated proteins, named Csn1, appeared to be extraordinary. It seemed to search for specific DNA sequences in viruses and cut them apart like a microscopic multitool. Charpentier asked Doudna to help her figure out how it worked. “Somehow the way she said it, I literally—I can almost feel it now—I had this chill down my back,” Doudna says. “When she said ‘the mysterious Csn1’ I just had this feeling, there is going to be something good here.”

Read the whole story here.


Deep Time, Nuclear Semiotics and Atomic Priests

Time seems to unfold over different — lengthier — scales in the desert southwest of the United States. Perhaps it’s the vastness of the eerie landscape that puts fleeting human moments into the context of deep geologic time. Or, perhaps it’s our monumental human structures that aim to encode our present for the distant future. Structures like the Hoover Dam, which regulates the mighty Colorado River, and the ill-fated Yucca Mountain project, once designed to store the nation’s nuclear waste, were conceived to last many centuries.

Yet these monuments to our impermanence raise an important issue beyond their construction — how are we to communicate their intent to humans living in a distant future, humans who will no longer be using any of our existing languages? Directions and warnings in English or contextual signs and images will not suffice. Consider Yucca Mountain. Now shuttered, Yucca Mountain was designed to be a repository for nuclear byproducts and waste from military and civilian programs. Keep in mind that some products of nuclear reactors, such as various isotopes of uranium, plutonium, technetium and neptunium, remain highly radioactive for tens of thousands to millions of years. So, how would we post warnings at Yucca Mountain about the entombed dangers to generations living 10,000 years and more from now? Those behind the Yucca Mountain project considered a number of fantastic (in the original sense of the word) programs to carry dire warnings into the distant future, including hostile architecture, radioactive cats and a pseudo-religious order. This was the work of the Human Interference Task Force.
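To put those timescales in perspective, radioactive decay follows brutally simple arithmetic. A rough worked example (using plutonium-239, whose half-life is roughly 24,100 years; these are ballpark figures, not a safety calculation):

\[
  \frac{N(t)}{N_0} = \left(\tfrac{1}{2}\right)^{t/t_{1/2}}
  \quad\Longrightarrow\quad
  \frac{N(10{,}000\ \text{yr})}{N_0} \approx \left(\tfrac{1}{2}\right)^{10{,}000/24{,}100} \approx 0.75
\]

In other words, a 10,000-year warning horizon covers less than half of a single half-life of the plutonium alone, never mind neptunium-237 at roughly two million years.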

From Motherboard:

Building the Hoover Dam rerouted the most powerful river in North America. It claimed the lives of 96 workers, and the beloved site dog, Little Niggy, who is entombed by the walkway in the shade of the canyon wall. Diverting the Colorado destroyed the ecology of the region, threatening fragile native plant life and driving several species of fish nearly to extinction. The dam brought water to 8 million people and created more than 5000 jobs. It required 6.6 million metric tons of concrete, all made from the desert; enough, famously, to pave a two lane road coast to coast across the US. Inside the dam’s walls that concrete is still curing, and will be for another 60 years.

Erik, photojournalist, and I have come here to try and get the measure of this place. Nevada is the uncanny locus of disparate monuments all concerned with charting deep time, leaving messages for future generations of human beings to puzzle over the meaning of: a star map, a nuclear waste repository and a clock able to keep time for 10,000 years—all of them within a few hours drive of Las Vegas through the harsh desert.

Hoover Dam is theorized in some structural stress projections to stand for tens of thousands of years from now, and what could be its eventual undoing is mussels. The mollusks which grow in the dam’s grates will no longer be scraped away, and will multiply eventually to such density that the built up stress of the river will burst the dam’s wall. That is if the Colorado continues to flow. Otherwise erosion will take much longer to claim the structure, and possibly Oskar J.W. Hansen’s vision will be realized: future humans will find the dam 14,000 years from now, at the end of the current Platonic Year.

A Platonic Year lasts for roughly 26,000 years. It’s also known as the precession of the equinoxes, first written into the historical record in the second century BC by the Greek mathematician, Hipparchus, though there is evidence that earlier people also solved this complex equation. Earth rotates in three ways: 365 days around the sun, on its 24-hour axis and on its precessional axis. The duration of the last is the Platonic Year, where Earth is incrementally turning on a tilt pointing to its true north as the Sun’s gravity pulls on us, leaving our planet spinning like a very slow top along its orbit around the sun.

Now Earth’s true-north pole star is Polaris, in Ursa Minor, as it was at the completion of Hoover Dam. At the end of the current Platonic Year it will be Vega, in the constellation Lyra. Hansen included this information in an amazingly accurate astronomical clock, or celestial map, embedded in the terrazzo floor of the dam’s dedication monument. Hansen wanted any future humans who came across the dam to be able to know exactly when it was built.

He used the clock to mark major historical events of the last several thousand years including the birth of Christ and the building of the pyramids, events which he thought were equal to the engineering feat of men bringing water to a desert in the 1930s. He reasoned that though current languages could be dead in this future, any people who had survived that long would have advanced astronomy, math and physics in their arsenal of survival tactics. Despite this, the monument is written entirely in English, which is for the benefit of current visitors, not our descendants of millennia from now.

The Hoover Dam is staggering. It is frankly impossible, even standing right on top of it, squinting in the blinding sunlight down its vertiginous drop, to imagine how it was ever built by human beings; even as I watch old documentary footage on my laptop back in the hotel at night on Fremont Street, showing me that exact thing, I don’t believe it. I cannot square it in my mind. I cannot conceive of nearly dying every day laboring in the brutally dry 100 degree heat, in a time before air-conditioning, in a time before being able to ever get even the slightest relief from the elements.

Hansen was more than aware of our propensity to build great monuments to ourselves and felt the weight of history as he submitted his bid for the job to design the dedication monument, writing, “Mankind itself is the subject of the sculptures at Hoover Dam.” Joan Didion described it as the most existentially terrifying place in America: “Since the afternoon in 1967 when I first saw Hoover Dam, its image has never been entirely absent from my inner eye.” Thirty-two people have chosen the dam as their place of suicide. It has no fences.

The reservoir is now the lowest it has ever been and California is living through the worst drought in 1200 years. You can swim in Lake Mead, so we did, sort of. It did provide some cool respite for a moment from the unrelenting heat of the desert. We waded around only up to our ankles because it smelled pretty terrible, the shoreline dirty with garbage.

Radioactive waste from spent nuclear fuel has a shelf life of hundreds of thousands of years. Maybe even more than a million, it’s not possible to precisely predict. Nuclear power plants around the US have produced 150 million metric tons of highly active nuclear waste that sits at dozens of sites around the country, awaiting a place to where it can all be carted and buried thousands of feet underground to be quarantined for the rest of time. For now a lot of it sits not far from major cities.

Yucca Mountain, 120 miles from Hoover Dam, is not that place. The site is one of the most intensely geologically surveyed and politically controversial pieces of land on Earth. Since 1987 it has been, at the cost of billions of dollars, the highly contested resting place for the majority of America’s high-risk nuclear waste. Those plans were officially shuttered in 2012, after states sued each other, states sued the federal Government, the Government sued contractors, and the people living near Yucca Mountain didn’t want, it turned out, for thousands of tons of nuclear waste to be carted through their counties and sacred lands via rail. President Obama cancelled its funding and officially ended the project.

It was said that there was a fault line running directly under the mountain; that the salt rock was not as absorbent as it was initially thought to be and that it posed the threat of leaking radiation into the water table; that more recently the possibility of fracking in the area would beget an ecological disaster. That a 10,000 year storage solution was nowhere near long enough to insulate the Earth from the true shelf-life of the waste, which is realistically thought to be dangerous for many times that length of time. The site is now permanently closed, visible only from a distance through a cacophony of government warning signs blockading a security checkpoint.

We ask around the community of Amargosa Valley about the mountain. Sitting on 95 it’s the closest place to the site and consists only of a gas station, which trades in a huge amount of Area 51 themed merchandise, a boldly advertised sex shop, an alien motel and a firework store where you can let off rockets in the car park. Across the road is the vacant lot of what was once an RV park, with a couple of badly busted up vehicles looted beyond recognition and a small aquamarine boat lying on its side in the dirt.

At the gas station register a woman explains that no one really liked the idea of having waste so close to their homes (she repeats the story of the fault line), but they did like the idea of jobs, hundreds of which disappeared along with the project, leaving the surrounding areas, mainly long-tapped out mining communities, even more severely depressed.

We ask what would happen if we tried to actually get to the mountain itself, on government land.

“Plenty of people do try,” she says. “They’re trying to get to Area 51. They have sensors though, they’ll come get you real quick in their truck.”

Would we get shot?

“Shot? No. But they would throw you on the ground, break all your cameras and interrogate you for a long time.”

We decide just to take the road that used to go to the mountain as far as we can to the checkpoint, where in the distance beyond the electric fences at the other end of a stretch of desert land we see buildings and cars parked and most definitely some G-men who would see us before we even had the chance to try and sneak anywhere.

Before it was shut for good, Yucca Mountain had kilometers of tunnels bored into it and dozens of experiments undertaken within it, all of it now sealed behind an enormous vault door. It was also the focus of a branch of linguistics established specifically to warn future humans of the dangers of radioactive waste: nuclear semiotics. The Human Interference Task Force—a consortium of archeologists, architects, linguists, philosophers, engineers, designers—faced the opposite problem to Oskar Hansen at Hoover Dam; the Yucca Mountain repository was not hoping to attract the attentions of future humans to tell them of the glory of their forebears; it was to tell them that this place would kill them if they trod too near.

To create a universally readable warning system for humans living thirty generations from now, the signs will have to be instantly recognizable as expressing an immediate and lethal danger, as well as a deep sense of shunning: these were impulses that came up against each other; how to adequately express that the place was deadly while not at the same time enticing people to explore it, thinking it must contain something of great value if so much trouble had been gone to in order to keep people away? How to express this when all known written languages could very easily be dead? Signs as we know them now would almost certainly be completely unintelligible free of their social contexts which give them current meaning; a nuclear waste sign is just a dot with three rounded triangles sticking out of it to anyone not taught over a lifetime to know its warning.

Read the entire story here.

Image: United Nations radioactive symbol, 2007.


The Absurdly Insane Exceptionalism of the American Political System

Some examples of recent American political exceptionalism: Dan Quayle, SuperPACs, Sarah Palin, Iran-Contra, Watergate, Michele Bachmann. But, just when you thought the United States’ political system could not possibly sink any lower, along comes someone so truly exceptional that it becomes our duty to listen and watch… and gasp.

You see, contained solely within this one person we now have an unrivaled collection of inspirational leadership traits: racist, sexist, misogynist, demagogue, bigot, bully, narcissist, buffoon and crass loudmouth. A demonstration of all that is exceptional about the United States, and an exceptional next commander-in-chief for our modern age.

Image courtesy of someone with a much-needed sense of humor during these dark times.


When 8 Equals 16


I’m sure that most, if not all, mathematicians would tell you that their calling is at the heart of our understanding of the universe. Mathematics describes our world precisely and logically. But mix it with the world of women’s fashion and this rigorous discipline becomes rather squishy, and far from absolute. A case in point: a women’s size 8 today is equivalent to a women’s size 16 from 1958.

This makes me wonder what the fundamental measurements and equations describing our universe would look like if they were controlled by advertisers and marketers. Then again, Einstein’s work on special and general relativity may seem to fit the fashion industry quite well: one of the central tenets of relativity holds that measurements of various quantities (read: dress size) are relative to the velocities (market size) of observers (retailers). In particular, space (dress size) contracts and time (waist size) dilates.
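For the record, the actual relativistic bookkeeping is rather less negotiable than a dress label. The standard special-relativity relations, with v the relative velocity and c the speed of light, are:

\[
  \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
  L = \frac{L_0}{\gamma}\ \ \text{(length contraction)}, \qquad
  \Delta t = \gamma\,\Delta t_0\ \ \text{(time dilation)}
\]

No marketing department gets a vote on gamma.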

From the Washington Post:

Here are some numbers that illustrate the insanity of women’s clothing sizes: A size 8 dress today is nearly the equivalent of a size 16 dress in 1958. And a size 8 dress of 1958 doesn’t even have a modern-day equivalent — the waist and bust measurements of a Mad Men-era 8 come in smaller than today’s size 00.

These measurements come from official sizing standards once maintained by the National Bureau of Standards (now the National Institute of Standards and Technology) and taken over in recent years by the American Society for Testing and Materials. Data visualizer Max Galka recently unearthed them for a blog post on America’s obesity epidemic.

Centers for Disease Control and Prevention data show that the average American woman today weighs about as much as the average 1960s man. And while the weight story is pretty straightforward — Americans got heavier — the story behind the dress sizes is a little more complicated, as any woman who’s ever shopped for clothes could probably tell you.

As Julia Felsenthal detailed over at Slate, today’s women’s clothing sizes have their roots in a depression-era government project to define the “Average American Woman” by sending a pair of statisticians to survey and measure nearly 15,000 women. They “hoped to determine whether any proportional relationships existed among measurements that could be broadly applied to create a simple, standardized system of sizing,” Felsenthal writes.

Sadly, they failed. Not surprisingly, women’s bodies defied standardization. The project did yield one lasting contribution to women’s clothing: The statisticians were the first to propose the notion of arbitrary numerical sizes that weren’t based on any specific measurement — similar to shoe sizes.

The government didn’t return to the project until the late 1950s, when the National Bureau of Standards published “Body Measurements for the Sizing of Women’s Patterns and Apparel” in 1958. The standard was based on the 15,000 women interviewed previously, with the addition of a group of women who had been in the Army during World War II. The document’s purpose? “To provide the consumer with a means of identifying her body type and size from the wide range of body types covered, and enable her to be fitted properly by the same size regardless of price, type of apparel, or manufacturer of the garment.”

Read the entire article here.

Image: Diagram from “Body Measurements for the Sizing of Women’s Patterns and Apparel”, 1958. Courtesy of National Bureau of Standards /  National Institute of Standards and Technology (NIST).


Forget Broccoli. It’s All About the Blue Zones

By now you should know how to live to be 100 years old. Tip number one: inherit good genes. Tip number two: forget uploading your consciousness to an AI, for now. Tip number three: live and eat in a so-called Blue Zone. Tip number four: walk fast, eat slowly.

From the NYT:

Dan Buettner and I were off to a good start. He approved of coffee.

“It’s one of the biggest sources of antioxidants in the American diet,” he said with chipper confidence, folding up his black Brompton bike.

As we walked through Greenwich Village, looking for a decent shot of joe to fuel an afternoon of shopping and cooking and talking about the enigma of longevity, he pointed out that the men and women of Icaria, a Greek island in the middle of the Aegean Sea, regularly slurp down two or three muddy cups a day.

This came as delightful news to me. Icaria has a key role in Mr. Buettner’s latest book, “The Blue Zones Solution,” which takes a deep dive into five places around the world where people have a beguiling habit of forgetting to die. In Icaria they stand a decent chance of living to see 100. Without coffee, I don’t see much point in making it to 50.

The purpose of our rendezvous was to see whether the insights of a longevity specialist like Mr. Buettner could be applied to the life of a food-obsessed writer in New York, a man whose occupational hazards happen to include chicken wings, cheeseburgers, martinis and marathon tasting menus.

Covering the world of gastronomy and mixology during the era of David Chang (career-defining dish: those Momofuku pork-belly buns) and April Bloomfield (career-defining dish: the lamb burger at the Breslin Bar and Dining Room) does not exactly feel like an enterprise that’s adding extra years to my life — or to my liver.

And the recent deaths (even if accidental) of men in my exact demographic — the food writer Joshua Ozersky, the tech entrepreneur Dave Goldberg — had put me in a mortality-anxious frame of mind.

With my own half-century mark eerily visible on the horizon, could Mr. Buettner, who has spent the last 10 years unlocking the mysteries of longevity, offer me a midcourse correction?

To that end, he had decided to cook me something of a longevity feast. Visiting from his home in Minnesota and camped out at the townhouse of his friends Andrew Solomon and John Habich in the Village, this trim, tanned, 55-year-old guru of the golden years was geared up to show me that living a long time was not about subsisting on a thin gruel of, well, gruel.

After that blast of coffee, which I dutifully diluted with soy milk (as instructed) at O Cafe on Avenue of the Americas, Mr. Buettner and I set forth on our quest at the aptly named LifeThyme market, where signs in the window trumpeted the wonders of wheatgrass. He reassured me, again, by letting me know that penitent hedge clippings had no place in our Blue Zones repast.

“People think, ‘If I eat more of this, then it’s O.K. to eat more burgers or candy,’ ” he said. Instead, as he ambled through the market dropping herbs and vegetables into his basket, he insisted that our life-extending banquet would hinge on normal affordable items that almost anyone can pick up at the grocery store. He grabbed fennel and broccoli, celery and carrots, tofu and coconut milk, a bag of frozen berries and a can of chickpeas and a jar of local honey.

The five communities spotlighted in “The Blue Zones Solution” (published by National Geographic) depend on simple methods of cooking that have evolved over centuries, and Mr. Buettner has developed a matter-of-fact disregard for gastro-trends of all stripes. At LifeThyme, he passed by refrigerated shelves full of vogue-ish juices in hues of green, orange and purple. He shook his head and said, “Bad!”

“The glycemic index on that is as bad as Coke,” he went on, snatching a bottle of carrot juice to scan the label. “For eight ounces, there’s 14 grams of sugar. People get suckered into thinking, ‘Oh, I’m drinking this juice.’ Skip the juicing. Eat the fruit. Or eat the vegetable.” (How about a protein shake? “No,” he said.)

So far, I was feeling pretty good about my chances of making it to 100. I love coffee, I’m not much of a juicer and I’ve never had a protein shake in my life. Bingo. I figured that pretty soon Mr. Buettner would throw me a dietary curveball (I noticed with vague concern that he was not putting any meat or cheese into his basket), but by this point I was already thinking about how fun it would be to meet my great-grandchildren.

I felt even better when he and I started talking about strenuous exercise, which for me falls somewhere between “root canal” and “Justin Bieber concert” on the personal aversion scale.

I like to go for long walks, and … well, that’s about it.

“That’s when I knew you’d be O.K.,” Mr. Buettner told me.

It turns out that walking is a popular mode of transport in the Blue Zones, too — particularly on the sun-splattered slopes of Sardinia, Italy, where many of those who make it to 100 are shepherds who devote the bulk of each day to wandering the hills and treating themselves to sips of red wine.

“A glass of wine is better than a glass of water with a Mediterranean meal,” Mr. Buettner told me.

Red wine and long walks? If that’s all it takes, people, you’re looking at Methuselah.

O.K., yes, Mr. Buettner moves his muscles a lot more than I do. He likes to go everywhere on that fold-up bike, which he hauls along with him on trips, and sometimes he does yoga and goes in-line skating. But he generally believes that the high-impact exercise mania as practiced in the major cities of the United States winds up doing as much harm as good.

“You can’t be pounding your joints with marathons and pumping iron,” he said. “You’ll never see me doing CrossFit.”

For that evening’s meal, Mr. Buettner planned to cook dishes that would make reference to the quintet of places that he focuses on in “The Blue Zones Solution”: along with Icaria and Sardinia, they are Okinawa, Japan; the Nicoya Peninsula in Costa Rica; and Loma Linda, Calif., where Seventh-day Adventists have a tendency to outlive their fellow Americans, thanks to a mostly vegetarian diet that is heavy on nuts, beans, oatmeal, 100 percent whole-grain bread and avocados.

We walked from the market to the townhouse. And it was here, as Mr. Buettner laid out his cooking ingredients on a table in Mr. Solomon’s and Mr. Habich’s commodious, state-of-the-art kitchen, that I noticed the first real disconnect between the lives of the Blue Zones sages and the life of a food writer who has enjoyed many a lunch hour scarfing down charcuterie, tapas and pork-belly-topped ramen at the Gotham West Market food court.

Where was the butter? Hadn’t some nice scientists determined that butter’s not so lethal for us, after all? (“My view is that butter, lard and other animal fats are a bit like radiation: a dollop a couple of times a week probably isn’t going to hurt you, but we don’t know the safe level,” Mr. Buettner later wrote in an email. “At any rate, I can send along a paper that largely refutes the whole ‘Butter is Back’ craze.” No, thanks, I’m good.)

Where was the meat? Where was the cheese? (No cheese? And here I thought we’d be friends for another 50 years, Mr. Buettner.)

Read the entire article here.


Digital Forensics and the Wayback Machine


Many of us see history — the school subject — as rather dull and boring. After all, how can the topic be made interesting when it’s usually taught by a coach who has other things on his or her mind [no joke, I have evidence of this from both sides of the Atlantic!].

Yet we also know that history’s lessons are essential to shaping our current world view and our vision for the future, in a myriad of ways. Ever since humans could speak, and later write, our ancestors have recorded and transmitted their histories through oral storytelling, and then through books and assorted media.

Then came the internet. The explosion of content, media formats and related technologies over the last quarter-century has led to an immense challenge for archivists and historians intent on cataloging our digital stories. One facet of this challenge is the tremendous volume of information and its accelerating growth. Another is the dynamic nature of the content — much of it being constantly replaced and refreshed.

But, all is not lost. The Internet Archive, founded in 1996, has been quietly archiving text, pages, images, audio and, more recently, entire web sites from the Tubes of the vast Internets. Currently the non-profit has archived around half a trillion web pages. It’s our modern-day equivalent of the Library of Alexandria.

Please say hello to the Internet Archive Wayback Machine, and give it a try. The Wayback Machine took the screenshot above of Amazon.com in 1999, in case you’ve ever wondered what Amazon looked like before it swallowed or destroyed entire retail sectors.
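If you would rather query the archive programmatically than click around, the Internet Archive exposes a small public “availability” JSON endpoint. Here is a minimal Python sketch; the endpoint and field names reflect the archive.org documentation as I understand it, so treat the details as assumptions rather than gospel:

import json
import urllib.parse
import urllib.request

def closest_snapshot(url, timestamp="19990801"):
    """Return the archived snapshot of `url` closest to `timestamp` (YYYYMMDD), if any."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    with urllib.request.urlopen("https://archive.org/wayback/available?" + query) as resp:
        data = json.load(resp)
    # When a capture exists, the response looks like {"archived_snapshots": {"closest": {...}}}
    return data.get("archived_snapshots", {}).get("closest", {})

snap = closest_snapshot("amazon.com")
print(snap.get("timestamp"), snap.get("url"))  # e.g. a 1999 capture of Amazon's home page

Run it and you should get back the address of a capture close to August 1999, much like the screenshot above.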

From the New Yorker:

Malaysia Airlines Flight 17 took off from Amsterdam at 10:31 A.M. G.M.T. on July 17, 2014, for a twelve-hour flight to Kuala Lumpur. Not much more than three hours later, the plane, a Boeing 777, crashed in a field outside Donetsk, Ukraine. All two hundred and ninety-eight people on board were killed. The plane’s last radio contact was at 1:20 P.M. G.M.T. At 2:50 P.M. G.M.T., Igor Girkin, a Ukrainian separatist leader also known as Strelkov, or someone acting on his behalf, posted a message on VKontakte, a Russian social-media site: “We just downed a plane, an AN-26.” (An Antonov 26 is a Soviet-built military cargo plane.) The post includes links to video of the wreckage of a plane; it appears to be a Boeing 777.

Two weeks before the crash, Anatol Shmelev, the curator of the Russia and Eurasia collection at the Hoover Institution, at Stanford, had submitted to the Internet Archive, a nonprofit library in California, a list of Ukrainian and Russian Web sites and blogs that ought to be recorded as part of the archive’s Ukraine Conflict collection. Shmelev is one of about a thousand librarians and archivists around the world who identify possible acquisitions for the Internet Archive’s subject collections, which are stored in its Wayback Machine, in San Francisco. Strelkov’s VKontakte page was on Shmelev’s list. “Strelkov is the field commander in Slaviansk and one of the most important figures in the conflict,” Shmelev had written in an e-mail to the Internet Archive on July 1st, and his page “deserves to be recorded twice a day.”

On July 17th, at 3:22 P.M. G.M.T., the Wayback Machine saved a screenshot of Strelkov’s VKontakte post about downing a plane. Two hours and twenty-two minutes later, Arthur Bright, the Europe editor of the Christian Science Monitor, tweeted a picture of the screenshot, along with the message “Grab of Donetsk militant Strelkov’s claim of downing what appears to have been MH17.” By then, Strelkov’s VKontakte page had already been edited: the claim about shooting down a plane was deleted. The only real evidence of the original claim lies in the Wayback Machine.

The average life of a Web page is about a hundred days. Strelkov’s “We just downed a plane” post lasted barely two hours. It might seem, and it often feels, as though stuff on the Web lasts forever, for better and frequently for worse: the embarrassing photograph, the regretted blog (more usually regrettable not in the way the slaughter of civilians is regrettable but in the way that bad hair is regrettable). No one believes any longer, if anyone ever did, that “if it’s on the Web it must be true,” but a lot of people do believe that if it’s on the Web it will stay on the Web. Chances are, though, that it actually won’t. In 2006, David Cameron gave a speech in which he said that Google was democratizing the world, because “making more information available to more people” was providing “the power for anyone to hold to account those who in the past might have had a monopoly of power.” Seven years later, Britain’s Conservative Party scrubbed from its Web site ten years’ worth of Tory speeches, including that one. Last year, BuzzFeed deleted more than four thousand of its staff writers’ early posts, apparently because, as time passed, they looked stupider and stupider. Social media, public records, junk: in the end, everything goes.

Web pages don’t have to be deliberately deleted to disappear. Sites hosted by corporations tend to die with their hosts. When MySpace, GeoCities, and Friendster were reconfigured or sold, millions of accounts vanished. (Some of those companies may have notified users, but Jason Scott, who started an outfit called Archive Team—its motto is “We are going to rescue your shit”—says that such notification is usually purely notional: “They were sending e-mail to dead e-mail addresses, saying, ‘Hello, Arthur Dent, your house is going to be crushed.’ ”) Facebook has been around for only a decade; it won’t be around forever. Twitter is a rare case: it has arranged to archive all of its tweets at the Library of Congress. In 2010, after the announcement, Andy Borowitz tweeted, “Library of Congress to acquire entire Twitter archive—will rename itself Museum of Crap.” Not long after that, Borowitz abandoned that Twitter account. You might, one day, be able to find his old tweets at the Library of Congress, but not anytime soon: the Twitter Archive is not yet open for research. Meanwhile, on the Web, if you click on a link to Borowitz’s tweet about the Museum of Crap, you get this message: “Sorry, that page doesn’t exist!”

The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: “Page Not Found.” This is known as “link rot,” and it’s a drag, but it’s better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.) Or maybe the page has been moved and something else is where it used to be. This is known as “content drift,” and it’s more pernicious than an error message, because it’s impossible to tell that what you’re seeing isn’t what you went to look for: the overwriting, erasure, or moving of the original is invisible. For the law and for the courts, link rot and content drift, which are collectively known as “reference rot,” have been disastrous. In providing evidence, legal scholars, lawyers, and judges often cite Web pages in their footnotes; they expect that evidence to remain where they found it as their proof, the way that evidence on paper—in court records and books and law journals—remains where they found it, in libraries and courthouses. But a 2013 survey of law- and policy-related publications found that, at the end of six years, nearly fifty per cent of the URLs cited in those publications no longer worked. According to a 2014 study conducted at Harvard Law School, “more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information.” The overwriting, drifting, and rotting of the Web is no less catastrophic for engineers, scientists, and doctors. Last month, a team of digital library researchers based at Los Alamos National Laboratory reported the results of an exacting study of three and a half million scholarly articles published in science, technology, and medical journals between 1997 and 2012: one in five links provided in the notes suffers from reference rot. It’s like trying to stand on quicksand.

The footnote, a landmark in the history of civilization, took centuries to invent and to spread. It has taken mere years nearly to destroy. A footnote used to say, “Here is how I know this and where I found it.” A footnote that’s a link says, “Here is what I used to know and where I once found it, but chances are it’s not there anymore.” It doesn’t matter whether footnotes are your stock-in-trade. Everybody’s in a pinch. Citing a Web page as the source for something you know—using a URL as evidence—is ubiquitous. Many people find themselves doing it three or four times before breakfast and five times more before lunch. What happens when your evidence vanishes by dinnertime?

The day after Strelkov’s “We just downed a plane” post was deposited into the Wayback Machine, Samantha Power, the U.S. Ambassador to the United Nations, told the U.N. Security Council, in New York, that Ukrainian separatist leaders had “boasted on social media about shooting down a plane, but later deleted these messages.” In San Francisco, the people who run the Wayback Machine posted on the Internet Archive’s Facebook page, “Here’s why we exist.”

Read the entire story here.

Image: Wayback Machine’s screenshot of Amazon.com’s home page, August 1999.


From a Million Miles


The Deep Space Climate Observatory (DSCOVR) spacecraft is now firmly in place about one million miles from Earth at the L1 (Lagrange) point, a point of gravitational balance between the sun and our planet. Jointly operated by NASA, NOAA (National Oceanic and Atmospheric Administration) and the U.S. Air Force, the spacecraft uses its digital optics to observe the Earth from sunrise to sunset. Researchers use its observations to measure a number of climate variables including ozone, aerosols, cloud heights, dust, and volcanic ash. The spacecraft also monitors the solar wind. Luckily, it also captures gorgeous images like the one above from July 16, 2015, of the moon, with its far side visible, as it transits the Earth over the Pacific Ocean.
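As a back-of-the-envelope check on that “one million miles”: for a small planet orbiting a much heavier star, the first Lagrange point sits sunward of the planet at roughly

\[
  r \;\approx\; R\left(\frac{M_\oplus}{3M_\odot}\right)^{1/3}
  \;\approx\; 1.5\times10^{8}\,\text{km}\times\left(\frac{3\times10^{-6}}{3}\right)^{1/3}
  \;\approx\; 1.5\times10^{6}\,\text{km} \;\approx\; 0.93\ \text{million miles,}
\]

where R is the Earth-sun distance and the Earth-to-sun mass ratio is about 3 × 10⁻⁶. This uses the usual restricted three-body approximation, so read it as an estimate of where DSCOVR loiters rather than the mission’s exact halo orbit.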

Learn more about DSCOVR here.

Image: The far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft’s Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth. Courtesy: NASA, NOAA.


Aspirational or Inspirational?

Both of my parents came from a background of chronic poverty and limited educational opportunity. They eventually overcame these constraints through a combination of hard work, persistence and passion. They instilled these traits in me, and somehow they did so in a way that fostered a belief in a well-balanced life containing both work and leisure.

But for many, especially in the United States, the live-to-work ethic thrives. This condition is so acute and prevalent that most Americans caught in corporate jobs never take their full — and yet meager by global standards — allotment of annual vacation. Our culture is replete with tales of driven, aspirational parents — think tiger mom — who seem to have their kids’ lives mapped out from the crib.

I have to agree with columnist George Monbiot: while naked ambition may gain our children monetary riches and a higher rung on the corporate ladder, it does not a life make.

From the Guardian:

Perhaps because the alternative is too hideous to contemplate, we persuade ourselves that those who wield power know what they are doing. The belief in a guiding intelligence is hard to shake.

We know that our conditions of life are deteriorating. Most young people have little prospect of owning a home, or even of renting a decent one. Interesting jobs are sliced up, through digital Taylorism, into portions of meaningless drudgery. The natural world, whose wonders enhance our lives, and upon which our survival depends, is being rubbed out with horrible speed. Those to whom we look for guardianship, in government and among the economic elite, do not arrest this decline, they accelerate it.

The political system that delivers these outcomes is sustained by aspiration: the faith that if we try hard enough we could join the elite, even as living standards decline and social immobility becomes set almost in stone. But to what are we aspiring? A life that is better than our own, or worse?

Last week a note from an analyst at Barclays’ Global Power and Utilities group in New York was leaked. It addressed students about to begin a summer internship, and offered a glimpse of the toxic culture into which they are inducted.

“I wanted to introduce you to the 10 Power Commandments … For nine weeks you will live and die by these … We expect you to be the last ones to leave every night, no matter what … I recommend bringing a pillow to the office. It makes sleeping under your desk a lot more comfortable … the internship really is a nine-week commitment at the desk … an intern asked our staffer for a weekend off for a family reunion – he was told he could go. He was also asked to hand in his BlackBerry and pack up his desk … Play time is over and it’s time to buckle up.”

Play time is over, but did it ever begin? If these students have the kind of parents featured in the Financial Times last month, perhaps not. The article marked a new form of employment: the nursery consultant. These people, who charge from £290 an hour, must find a nursery that will put their clients’ toddlers on the right track to an elite university.

They spoke of parents who had already decided that their six-month-old son would go to Cambridge then Deutsche Bank, or whose two-year-old daughter “had a tutor for two afternoons a week (to keep on top of maths and literacy) as well as weekly phonics and reading classes, drama, piano, beginner French and swimming. They were considering adding Mandarin and Spanish. ‘The little girl was so exhausted and on edge she was terrified of opening her mouth.’”

In New York, playdate coaches charging $450 an hour train small children in the social skills that might help secure their admission to the most prestigious private schools. They are taught to hide traits that could suggest they’re on the autistic spectrum, which might reduce their chances of selection.

From infancy to employment, this is a life-denying, love-denying mindset, informed not by joy or contentment, but by an ambition that is both desperate and pointless, for it cannot compensate for what it displaces: childhood, family life, the joys of summer, meaningful and productive work, a sense of arrival, living in the moment. For the sake of this toxic culture, the economy is repurposed, the social contract is rewritten, the elite is released from tax, regulation and the other restraints imposed by democracy.

Where the elite goes, we are induced to follow. As if the assessment regimes were too lax in UK primary schools, last year the education secretary announced a new test for four-year-olds. A primary school in Cambridge has just taken the obvious next step: it is now streaming four-year-olds into classes according to perceived ability. The education and adoption bill, announced in the Queen’s speech, will turn the screw even tighter. Will this help children, or hurt them?

Read the entire column here.


Girlfriend or Nuclear Reactor?

Ask a typical 14-year-old boy whether he’d prefer to have a girlfriend or a home-made nuclear fusion reactor and he’s highly likely to gravitate towards the former. Not so Taylor Wilson; he seems to prefer the company of Geiger counters, particle accelerators, vacuum tubes and radioactive materials.

From the Guardian:

Taylor Wilson has a Geiger counter watch on his wrist, a sleek, sporty-looking thing that sounds an alert in response to radiation. As we enter his parents’ garage and approach his precious jumble of electrical equipment, it emits an ominous beep. Wilson is in full flow, explaining the old-fashioned control panel in the corner, and ignores it. “This is one of the original atom smashers,” he says with pride. “It would accelerate particles up to, um, 2.5m volts – so kind of up there, for early nuclear physics work.” He pats the knobs.

It was in this garage that, at the age of 14, Wilson built a working nuclear fusion reactor, bringing the temperature of its plasma core to 580 million °C – 40 times as hot as the core of the sun. This skinny kid from Arkansas, the son of a Coca-Cola bottler and a yoga instructor, experimented for years, painstakingly acquiring materials, instruments and expertise until he was able to join the elite club of scientists who have created a miniature sun on Earth.

Not long after, Wilson won $50,000 at a science fair, for a device that can detect nuclear materials in cargo containers – a counter-terrorism innovation he later showed to a wowed Barack Obama at a White House-sponsored science fair.

Wilson’s two TED talks (Yup, I Built A Nuclear Fusion Reactor and My Radical Plan For Small Nuclear Fission Reactors) have been viewed almost 4m times. A Hollywood biopic is planned, based on an imminent biography. Meanwhile, corporations have wooed him and the government has offered to buy some of his inventions. Former US under-secretary for energy, Kristina Johnson, told his biographer, Tom Clynes: “I would say someone like him comes along maybe once in a generation. He’s not just smart – he’s cool and articulate. I think he may be the most amazing kid I’ve ever met.”

Seven years on from fusing the atom, the gangly teen with a mop of blond hair is now a gangly 21-year-old with a mop of blond hair, who shuttles between his garage-cum-lab in the family’s home in Reno, Nevada, and other more conventional labs. In addition to figuring out how to intercept dirty bombs, he looks at ways of improving cancer treatment and lowering energy prices – while plotting a hi-tech business empire around the patents.

As we tour his parents’ garage, Wilson shows me what appears to be a collection of nuggets. His watch sounds another alert, but he continues lovingly to detail his inventory. “The first thing I got for my fusion project was a mass spectrometer from an ex-astronaut in Houston, Texas,” he explains. This was a treasure he obtained simply by writing a letter asking for it. He ambles over to a large steel safe, with a yellow and black nuclear hazard sticker on the front. He spins the handle, opens the door and extracts a vial with pale powder in it.

“That’s some yellowcake I made – the famous stuff that Saddam Hussein was supposedly buying from Niger. This is basically the starting point for nuclear, whether it’s a weapons programme or civilian energy production.” He gives the vial a shake. A vision of dodgy dossiers, atomic intrigue and mushroom clouds swims before me, a reverie broken by fresh beeping. “That’ll be the allanite. It’s a rare earth mineral,” Wilson explains. He picks up a dark, knobbly little rock streaked with silver. “It has thorium, a potential nuclear fuel.”

I think now may be a good moment to exit the garage, but the tour is not over. “One of the things people are surprised by is how ubiquitous radiation and radioactivity is,” Wilson says, giving me a reassuring look. “I’m very cautious. I’m actually a bit of a hypochondriac. It’s all about relative risk.”

He paces over to a plump steel tube, elevated to chest level – an object that resembles an industrial vacuum cleaner, and gleams in the gloom. This is the jewel in Wilson’s crown, the reactor he built at 14, and he gives it a tender caress. “This is safer than many things,” he says, gesturing to his Aladdin’s cave of atomic accessories. “For instance, horse riding. People fear radioactivity because it is very mysterious. You want to have respect for it, but not be paralysed by fear.”

The Wilson family home is a handsome, hacienda-style house tucked into foothills outside Reno. Unusually for the high desert at this time of year, grey clouds with bellies of rain rumble overhead. Wilson, by contrast, is all sunny smiles. He is still the slightly ethereal figure you see in the TED talks (I have to stop myself from offering him a sandwich), but the handshake is firm, the eye contact good and the energy enviable – even though Wilson has just flown back from a weekend visiting friends in Los Angeles. “I had an hour’s sleep last night. Three hours the night before that,” he says, with a hint of pride.

He does not drink or smoke, is a natty dresser (in suede jacket, skinny tie, jeans and Converse-style trainers) and he is a talker. From the moment we meet until we part hours later, he talks and talks, great billows of words about the origin of his gift and the responsibility it brings; about trying to be normal when he knows he’s special; about Fukushima, nuclear power and climate change; about fame and ego, and seeing his entire life chronicled in a book for all the world to see when he’s barely an adult and still wrestling with how to ask a girl out on a date.

The future feels urgent and mysterious. “My life has been this series of events that I didn’t see coming. It’s both exciting and daunting to know you’re going to be constantly trying to one-up yourself,” he says. “People can have their opinions about what I should do next, but my biggest pressure is internal. I hate resting on laurels. If I burn out, I burn out – but I don’t see that happening. I’ve more ideas than I have time to execute.”

Wilson credits his parents with huge influence, but wavers on the nature versus nurture debate: was he born brilliant or educated into it? “I don’t have an answer. I go back and forth.” The pace of technological change makes predicting his future a fool’s errand, he says. “It’s amazing – amazing – what I can do today that I couldn’t have done if I was born 10 years earlier.” And his ambitions are sky-high: he mentions, among many other plans, bringing electricity and state-of-the-art healthcare to the developing world.

Read the entire fascinating story here.

Image: Yellowcake, a type of uranium concentrate powder, an intermediate step in the processing of uranium ores. Courtesy of United States Department of Energy. Public Domain.


Creativity and Mental Illness


The creative genius: oft misunderstood, outcast, tortured, misanthropic, fueled by demon spirits. Yet the same description seems equally apt for many of those unfortunate enough to suffer from mental illness. So, could creativity and mental illness be high-level symptoms of a broader underlying spectrum “disorder”? After all, a not insignificant number of people and businesses regard creativity as a behavioral problem, best left outside the front door of the office. Time to check out the results of the latest psychological study.

From the Guardian:

The ancient Greeks were first to make the point. Shakespeare raised the prospect too. But Lord Byron was, perhaps, the most direct of them all: “We of the craft are all crazy,” he told the Countess of Blessington, casting a wary eye over his fellow poets.

The notion of the tortured artist is a stubborn meme. Creativity, it states, is fuelled by the demons that artists wrestle in their darkest hours. The idea is fanciful to many scientists. But a new study claims the link may be well-founded after all, and written into the twisted molecules of our DNA.

In a large study published on Monday, scientists in Iceland report that genetic factors that raise the risk of bipolar disorder and schizophrenia are found more often in people in creative professions. Painters, musicians, writers and dancers were, on average, 25% more likely to carry the gene variants than those in professions the scientists judged to be less creative, such as farmers, manual labourers and salespeople.

Kari Stefansson, founder and CEO of deCODE, a genetics company based in Reykjavik, said the findings, described in the journal Nature Neuroscience, point to a common biology for some mental disorders and creativity. “To be creative, you have to think differently,” he told the Guardian. “And when we are different, we have a tendency to be labelled strange, crazy and even insane.”

The scientists drew on genetic and medical information from 86,000 Icelanders to find genetic variants that doubled the average risk of schizophrenia, and raised the risk of bipolar disorder by more than a third. When they looked at how common these variants were in members of national arts societies, they found a 17% increase compared with non-members.

The researchers went on to check their findings in large medical databases held in the Netherlands and Sweden. Among these 35,000 people, those deemed to be creative (by profession or through answers to a questionnaire) were nearly 25% more likely to carry the mental disorder variants.

Stefansson believes that scores of genes increase the risk of schizophrenia and bipolar disorder. These may alter the ways in which many people think, but in most people do nothing very harmful. But for 1% of the population, genetic factors, life experiences and other influences can culminate in problems, and a diagnosis of mental illness.

“Often, when people are creating something new, they end up straddling between sanity and insanity,” said Stefansson. “I think these results support the old concept of the mad genius. Creativity is a quality that has given us Mozart, Bach, Van Gogh. It’s a quality that is very important for our society. But it comes at a risk to the individual, and 1% of the population pays the price for it.”

Stefansson concedes that his study found only a weak link between the genetic variants for mental illness and creativity. And it is this that other scientists pick up on. The genetic factors that raise the risk of mental problems explained only about 0.25% of the variation in people’s artistic ability, the study found. David Cutler, a geneticist at Emory University in Atlanta, puts that number in perspective: “If the distance between me, the least artistic person you are going to meet, and an actual artist is one mile, these variants appear to collectively explain 13 feet of the distance,” he said.
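A quick back-of-the-envelope check makes Cutler’s analogy concrete (my arithmetic, assuming the usual 5,280 feet to a mile, not a figure taken from the study itself):

0.25\% \times 5280~\text{ft} = 0.0025 \times 5280~\text{ft} \approx 13.2~\text{ft}

So 0.25% of the variation does indeed map, on his mile-long scale, to roughly the 13 feet he quotes.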

Most of the artist’s creative flair, then, is down to different genetic factors, or to other influences altogether, such as life experiences, that set them on their creative journey.

For Stefansson, even a small overlap between the biology of mental illness and creativity is fascinating. “It means that a lot of the good things we get in life, through creativity, come at a price. It tells me that when it comes to our biology, we have to understand that everything is in some way good and in some way bad,” he said.

Read the entire article here.

Image: Vincent van Gogh, self-portrait, 1889. Courtesy of Courtauld Institute Galleries, London. Wikipaintings.org. Public Domain.


Monsters of Our Own Making

For parents: a few brief tips on how to deal with young adult children — that most pampered of generations. Tip number 1: turn off junior’s access to the family Netflix account.

From WSJ:

Congratulations. Two months ago, your kid graduated from college, bravely finishing his degree rather than dropping out to make millions on his idea for a dating app for people who throw up during Cross Fit training. If he’s like a great many of his peers, he’s moved back home, where he’s figuring out how to become an adult in the same room that still has his orthodontic headgear strapped to an Iron Man helmet.

Now we’re deep into summer, and the logistical challenges of your grad really being home are sinking in. You’re constantly juggling cars, cleaning more dishes and dealing with your daughter’s boyfriend, who not only slept over but also drank your last can of Pure Protein Frosty Chocolate shake.

But the real challenge here is a problem of your own making. You see, these children are members of the Most-Loved Generation: They’ve grown up with their lives stage-managed by us, their college-acceptance-obsessed parents. Remember when Eva, at age 7, was obsessed with gymnastics…for exactly 10 months, which is why the TV in your guest room sits on top of a $2,500 pommel horse?

Now that they’re out of college, you realize what wasn’t included in that $240,000 education: classes in life skills and decision-making.

With your kid at home, you find that he’s incapable of making a single choice on his own. Like when you’re working and he interrupts to ask how many blades is the best number for a multi-blade razor. Or when you’ve just crawled into bed and hear the familiar refrain of, “Mom, what can we eat?” All those years being your kid’s concierge and coach have created a monster.

So the time has come for you to cut the cord. And by that I mean: Take your kid off your Netflix account. He will be confused and upset at first, not understanding why this is happening to him, but it’s a great opportunity for him to sign up for something all by himself.

Which brings us to money. It’s finally time to channel your Angela Merkel and get tough with your young Alexis Tsipras. Put him on a consistent allowance and make him pay the extra fees incurred when he uses the ATM at the weird little deli rather than the one at his bank, a half-block away.

Next, nudge your kid to read books about self-motivation. Begin with baby steps: Don’t just hand her “Lean In” and “I Am Malala.” Your daughter’s great, but she’s no Malala. And the only thing she’s leaning in to is a bag of kettle corn while binge-watching “Orange Is the New Black.”

Instead, over dinner, casually drop a few pearls of wisdom from “Coach Wooden’s Pyramid of Success,” such as, “Make each day your masterpiece.” Let your kid decide whether getting a high score on her “Panda Pop Bubble Shooter” iPhone game qualifies. Then hope that John Wooden has piqued her curiosity and leave his book out with a packet of Sour Patch Xploderz on top. With luck, she’ll take the bait (candy and book).

Now it’s time to work on your kid’s inability to make a decision, which, let’s be honest, you’ve instilled over the years by jumping to answer all of her texts, even that time you were at the opera. “But,” you object, “it could have been an emergency!” It wasn’t. She couldn’t remember whether she liked Dijon mustard or mayo on her turkey wrap.

Set up some outings that nurture independence. Send your kid to the grocery store with orders to buy a week of dinner supplies. She’ll ask a hundred questions about what to get, but just respond with, “Whatever looks good to you” or, “Have fun with it.” She will look at you with panic, but don’t lose your resolve. Send her out and turn your phone off to avoid a barrage of texts, such as, “They’re out of bacterial wipes to clean off the shopping cart handle. What should I do?”

Rest assured, in a couple of hours, she’ll return with “dinner”—frozen waffles and a bag of Skinny Pop popcorn. Tough it out and serve it for dinner: The name of the game is positive reinforcement.

Once she’s back you’ll inevitably get hit with more questions, like, “It’s not lost, but how expensive is that remote key for the car?” Take a deep breath and just say, “Um, I’m not sure. Why don’t you Google it?”

Read the entire story here.


The Literal Word


I’ve been following the recent story of a county clerk in Kentucky who is refusing to grant marriage licenses to same-sex couples. The clerk cites her profound Christian beliefs for contravening the new law of the land. I’m reminded that most people who ardently follow a faith, as prescribed by the literal word of their God, tend to interpret, cherry-pick and obey what they wish. And those same individuals will fervently ignore many less palatable demands from their God. So, let’s review a few biblical pronouncements, lest we forget what all believers in the Christian bible should be doing.

From the Independent:

Social conservatives who object to marriage licenses for gay couples claim to defend “Christian marriage,” meaning one man paired with one woman for life, which they say is prescribed by God in the Bible.

But in fact, Bible writers give the divine thumbs-up to many kinds of sexual union or marriage. They also use several literary devices to signal God’s approval for one or another sexual liaison: The law or a prophet might prescribe it, Jesus might endorse it, or God might reward it with the greatest of all blessings: boy babies who go on to become powerful men.

While the approved list does include one man coupled with one woman, the Bible explicitly endorses polygamy and sexual slavery, providing detailed regulations for each; and at times it also rewards rape and incest.

Polygamy. Polygamy is the norm in the Old Testament and accepted without reproof by Jesus (Matthew 22:23-32). Biblicalpolygamy.com contains pages dedicated to 40 biblical figures, each of whom had multiple wives.

Sex slaves. The Bible provides instructions on how to acquire several types of sex slaves. For example, if a man buys a Hebrew girl and “she please not her master” he can’t sell her to a foreigner; and he must allow her to go free if he doesn’t provide for her (Exodus 21:8).

War booty. Virgin females are counted, literally, among the booty of war. In the book of Numbers (31:18) God’s servant commands the Israelites to kill all of the used Midianite women along with all boy children, but to keep the virgin girls for themselves. The Law of Moses spells out a ritual to purify a captive virgin before sex. (Deuteronomy 21:10-14).

Incest. Incest is mostly forbidden in the Bible, but God makes exceptions. Abraham and Sarah, much favoured by God, are said to be half-siblings. Lot’s daughters get him drunk and mount him, and God rewards them with male babies who become patriarchs of great nations (Genesis 19).

Brother’s widow. If a brother dies with no children, it becomes a man’s duty to impregnate the brother’s widow. Onan is struck dead by God because he prefers to spill his seed on the ground rather than providing offspring for his brother (Genesis 38:8-10). A New Testament story (Matthew 22:24-28) shows that the tradition has survived.

Wife’s handmaid. After seven childless decades, Abraham’s frustrated wife Sarah says, “Go, sleep with my slave; perhaps I can build a family through her.”  Her slave, Hagar, becomes pregnant. Two generations later, the sister-wives of Jacob repeatedly send their slaves to him, each trying to produce more sons than the other (Genesis 30:1-22).

Read the entire story here.

Image: Biblical engraving: Sarah Offering Hagar to Her Husband, Abraham, c1897. Courtesy of Wikipedia.


The Post-Capitalism Dream


I’m not sure that I fully agree with the premises and conclusions that author Paul Mason outlines in his essay below, excerpted from his new book, Postcapitalism (published on 30 July 2015). However, I’d like to believe that we could all very soon thrive in a much more equitable and socially just future society. While the sharing economy has gone some way toward democratizing work effort, Mason points out other, and growing, areas of society that are marching to the beat of a different, non-capitalist drum: volunteerism, alternative currencies, cooperatives, the gig economy, self-managed spaces, social sharing, time banks. This is all good.

It will undoubtedly take generations for society to grapple with the consequences of these shifts and, more importantly, to deal with the ongoing and accelerating upheaval wrought by ubiquitous automation. Meanwhile, the vested interests (the capitalist heads of state, the oligarchs, the monopolists, the aging plutocrats and their assorted political sycophants) will most certainly fight until the very bitter end to maintain an iron grip on the invisible hand of the market.

From the Guardian:

The red flags and marching songs of Syriza during the Greek crisis, plus the expectation that the banks would be nationalised, revived briefly a 20th-century dream: the forced destruction of the market from above. For much of the 20th century this was how the left conceived the first stage of an economy beyond capitalism. The force would be applied by the working class, either at the ballot box or on the barricades. The lever would be the state. The opportunity would come through frequent episodes of economic collapse.

Instead over the past 25 years it has been the left’s project that has collapsed. The market destroyed the plan; individualism replaced collectivism and solidarity; the hugely expanded workforce of the world looks like a “proletariat”, but no longer thinks or behaves as it once did.

If you lived through all this, and disliked capitalism, it was traumatic. But in the process technology has created a new route out, which the remnants of the old left – and all other forces influenced by it – have either to embrace or die. Capitalism, it turns out, will not be abolished by forced-march techniques. It will be abolished by creating something more dynamic that exists, at first, almost unseen within the old system, but which will break through, reshaping the economy around new values and behaviours. I call this postcapitalism.

As with the end of feudalism 500 years ago, capitalism’s replacement by postcapitalism will be accelerated by external shocks and shaped by the emergence of a new kind of human being. And it has started.

Postcapitalism is possible because of three major changes information technology has brought about in the past 25 years. First, it has reduced the need for work, blurred the edges between work and free time and loosened the relationship between work and wages. The coming wave of automation, currently stalled because our social infrastructure cannot bear the consequences, will hugely diminish the amount of work needed – not just to subsist but to provide a decent life for all.

Second, information is corroding the market’s ability to form prices correctly. That is because markets are based on scarcity while information is abundant. The system’s defence mechanism is to form monopolies – the giant tech companies – on a scale not seen in the past 200 years, yet they cannot last. By building business models and share valuations based on the capture and privatisation of all socially produced information, such firms are constructing a fragile corporate edifice at odds with the most basic need of humanity, which is to use ideas freely.

Third, we’re seeing the spontaneous rise of collaborative production: goods, services and organisations are appearing that no longer respond to the dictates of the market and the managerial hierarchy. The biggest information product in the world – Wikipedia – is made by volunteers for free, abolishing the encyclopedia business and depriving the advertising industry of an estimated $3bn a year in revenue.

Almost unnoticed, in the niches and hollows of the market system, whole swaths of economic life are beginning to move to a different rhythm. Parallel currencies, time banks, cooperatives and self-managed spaces have proliferated, barely noticed by the economics profession, and often as a direct result of the shattering of the old structures in the post-2008 crisis.

You only find this new economy if you look hard for it. In Greece, when a grassroots NGO mapped the country’s food co-ops, alternative producers, parallel currencies and local exchange systems they found more than 70 substantive projects and hundreds of smaller initiatives ranging from squats to carpools to free kindergartens. To mainstream economics such things seem barely to qualify as economic activity – but that’s the point. They exist because they trade, however haltingly and inefficiently, in the currency of postcapitalism: free time, networked activity and free stuff. It seems a meagre and unofficial and even dangerous thing from which to craft an entire alternative to a global system, but so did money and credit in the age of Edward III.

New forms of ownership, new forms of lending, new legal contracts: a whole business subculture has emerged over the past 10 years, which the media has dubbed the “sharing economy”. Buzzwords such as the “commons” and “peer-production” are thrown around, but few have bothered to ask what this development means for capitalism itself.

I believe it offers an escape route – but only if these micro-level projects are nurtured, promoted and protected by a fundamental change in what governments do. And this must be driven by a change in our thinking – about technology, ownership and work. So that, when we create the elements of the new system, we can say to ourselves, and to others: “This is no longer simply my survival mechanism, my bolt hole from the neoliberal world; this is a new way of living in the process of formation.”

The power of imagination will become critical. In an information society, no thought, debate or dream is wasted – whether conceived in a tent camp, prison cell or the table football space of a startup company.

As with virtual manufacturing, in the transition to postcapitalism the work done at the design stage can reduce mistakes in the implementation stage. And the design of the postcapitalist world, as with software, can be modular. Different people can work on it in different places, at different speeds, with relative autonomy from each other. If I could summon one thing into existence for free it would be a global institution that modelled capitalism correctly: an open source model of the whole economy; official, grey and black. Every experiment run through it would enrich it; it would be open source and with as many datapoints as the most complex climate models.

The main contradiction today is between the possibility of free, abundant goods and information; and a system of monopolies, banks and governments trying to keep things private, scarce and commercial. Everything comes down to the struggle between the network and the hierarchy: between old forms of society moulded around capitalism and new forms of society that prefigure what comes next.

Is it utopian to believe we’re on the verge of an evolution beyond capitalism? We live in a world in which gay men and women can marry, and in which contraception has, within the space of 50 years, made the average working-class woman freer than the craziest libertine of the Bloomsbury era. Why do we, then, find it so hard to imagine economic freedom?

It is the elites – cut off in their dark-limo world – whose project looks as forlorn as that of the millennial sects of the 19th century. The democracy of riot squads, corrupt politicians, magnate-controlled newspapers and the surveillance state looks as phoney and fragile as East Germany did 30 years ago.

All readings of human history have to allow for the possibility of a negative outcome. It haunts us in the zombie movie, the disaster movie, in the post-apocalyptic wasteland of films such as The Road or Elysium. But why should we not form a picture of the ideal life, built out of abundant information, non-hierarchical work and the dissociation of work from wages?

Millions of people are beginning to realise they have been sold a dream at odds with what reality can deliver. Their response is anger – and retreat towards national forms of capitalism that can only tear the world apart. Watching these emerge, from the pro-Grexit left factions in Syriza to the Front National and the isolationism of the American right, has been like watching the nightmares we had during the Lehman Brothers crisis come true.

We need more than just a bunch of utopian dreams and small-scale horizontal projects. We need a project based on reason, evidence and testable designs, that cuts with the grain of history and is sustainable by the planet. And we need to get on with it.

Read the excerpt here.

Image: The Industrial Workers of the World poster “Pyramid of Capitalist System” (1911). Courtesy of Wikipedia. Public Domain.


Cause and Effect

One of the most fundamental tenets of our macroscopic world is the notion that an effect has a cause. Throw a pebble (cause) into a still pond and the ripples (effect) will be visible for all to see. Down at the microscopic level, however, physicists have determined through their mathematical convolutions that no such directionality exists: nothing in the fundamental laws of physics precludes them from running in reverse. Yet we never witness ripples in a pond converging and ejecting a pebble, which then flies back into the thrower’s hand.

Of course, this quandary has kept many a philosopher’s pencil well sharpened while physicists continue to scratch their heads. So, is cause and effect merely a coincidental illusion? Or does our physics operate in only one direction, determined by a yet-to-be-discovered fundamental law?

Philosopher Mathias Frisch, author of Causal Reasoning in Physics, offers a great summary of current thinking, but no fundamental breakthrough.

From Aeon:

Do early childhood vaccinations cause autism, as the American model Jenny McCarthy maintains? Are human carbon emissions at the root of global warming? Come to that, if I flick this switch, will it make the light on the porch come on? Presumably I don’t need to persuade you that these would be incredibly useful things to know.

Since anthropogenic greenhouse gas emissions do cause climate change, cutting our emissions would make a difference to future warming. By contrast, autism cannot be prevented by leaving children unvaccinated. Now, there’s a subtlety here. For our judgments to be much use to us, we have to distinguish between causal relations and mere correlations. From 1999 to 2009, the number of people in the US who fell into a swimming pool and drowned varied with the number of films in which Nicolas Cage appeared – but it seems unlikely that we could reduce the number of pool drownings by keeping Cage off the screen, desirable as the remedy might be for other reasons.

In short, a working knowledge of the way in which causes and effects relate to one another seems indispensable to our ability to make our way in the world. Yet there is a long and venerable tradition in philosophy, dating back at least to David Hume in the 18th century, that finds the notion of causality to be dubious. And that might be putting it kindly.

Hume argued that when we seek causal relations, we can never discover the real power; the, as it were, metaphysical glue that binds events together. All we are able to see are regularities – the ‘constant conjunction’ of certain sorts of observation. He concluded from this that any talk of causal powers is illegitimate. Which is not to say that he was ignorant of the central importance of causal reasoning; indeed, he said that it was only by means of such inferences that we can ‘go beyond the evidence of our memory and senses’. Causal reasoning was somehow both indispensable and illegitimate. We appear to have a dilemma.

Hume’s remedy for such metaphysical quandaries was arguably quite sensible, as far as it went: have a good meal, play backgammon with friends, and try to put it out of your mind. But in the late 19th and 20th centuries, his causal anxieties were reinforced by another problem, arguably harder to ignore. According to this new line of thought, causal notions seemed peculiarly out of place in our most fundamental science – physics.

There were two reasons for this. First, causes seemed too vague for a mathematically precise science. If you can’t observe them, how can you measure them? If you can’t measure them, how can you put them in your equations? Second, causality has a definite direction in time: causes have to happen before their effects. Yet the basic laws of physics (as distinct from such higher-level statistical generalisations as the laws of thermodynamics) appear to be time-symmetric: if a certain process is allowed under the basic laws of physics, a video of the same process played backwards will also depict a process that is allowed by the laws.

The 20th-century English philosopher Bertrand Russell concluded from these considerations that, since cause and effect play no fundamental role in physics, they should be removed from the philosophical vocabulary altogether. ‘The law of causality,’ he said with a flourish, ‘like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed not to do harm.’

Neo-Russellians in the 21st century express their rejection of causes with no less rhetorical vigour. The philosopher of science John Earman of the University of Pittsburgh maintains that the wooliness of causal notions makes them inappropriate for physics: ‘A putative fundamental law of physics must be stated as a mathematical relation without the use of escape clauses or words that require a PhD in philosophy to apply (and two other PhDs to referee the application, and a third referee to break the tie of the inevitable disagreement of the first two).’

This is all very puzzling. Is it OK to think in terms of causes or not? If so, why, given the apparent hostility to causes in the underlying laws? And if not, why does it seem to work so well?

A clearer look at the physics might help us to find our way. Even though (most of) the basic laws are symmetrical in time, there are many arguably non-thermodynamic physical phenomena that can happen only one way. Imagine a stone thrown into a still pond: after the stone breaks the surface, waves spread concentrically from the point of impact. A common enough sight.

Now, imagine a video clip of the spreading waves played backwards. What we would see are concentrically converging waves. For some reason this second process, which is the time-reverse of the first, does not seem to occur in nature. The process of waves spreading from a source looks irreversible. And yet the underlying physical law describing the behaviour of waves – the wave equation – is as time-symmetric as any law in physics. It allows for both diverging and converging waves. So, given that the physical laws equally allow phenomena of both types, why do we frequently observe organised waves diverging from a source but never coherently converging waves?
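To see why the wave equation treats both directions of time alike, here is a minimal sketch (my illustration, not part of the Aeon article), using the one-dimensional form:

\frac{\partial^2 \psi}{\partial t^2} = c^2 \, \frac{\partial^2 \psi}{\partial x^2}

Time enters only through a second derivative, so substituting t \to -t leaves the equation unchanged: if \psi(x, t) describes waves diverging from a point, then \psi(x, -t), the coherently converging waves of the backwards video, satisfies exactly the same law. Whatever forbids the converging case, it is not the equation itself.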

Physicists and philosophers disagree on the correct answer to this question – which might be fine if it applied only to stones in ponds. But the problem also crops up with electromagnetic waves and the emission of light or radio waves: anywhere, in fact, that we find radiating waves. What to say about it?

On the one hand, many physicists (and some philosophers) invoke a causal principle to explain the asymmetry. Consider an antenna transmitting a radio signal. Since the source causes the signal, and since causes precede their effects, the radio waves diverge from the antenna after it is switched on simply because they are the repercussions of an initial disturbance, namely the switching on of the antenna. Imagine the time-reverse process: a radio wave steadily collapses into an antenna before the latter has been turned on. On the face of it, this conflicts with the idea of causality, because the wave would be present before its cause (the antenna) had done anything. David Griffiths, Emeritus Professor of Physics at Reed College in Oregon and the author of a widely used textbook on classical electrodynamics, favours this explanation, going so far as to call a time-asymmetric principle of causality ‘the most sacred tenet in all of physics’.

On the other hand, some physicists (and many philosophers) reject appeals to causal notions and maintain that the asymmetry ought to be explained statistically. The reason why we find coherently diverging waves but never coherently converging ones, they maintain, is not that wave sources cause waves, but that a converging wave would require the co-ordinated behaviour of ‘wavelets’ coming in from multiple different directions of space – delicately co-ordinated behaviour so improbable that it would strike us as nearly miraculous.

It so happens that this wave controversy has quite a distinguished history. In 1909, a few years before Russell’s pointed criticism of the notion of cause, Albert Einstein took part in a published debate concerning the radiation asymmetry. His opponent was the Swiss physicist Walther Ritz, a name you might not recognise.

It is in fact rather tragic that Ritz did not make larger waves in his own career, because his early reputation surpassed Einstein’s. The physicist Hermann Minkowski, who taught both Ritz and Einstein in Zurich, called Einstein a ‘lazy dog’ but had high praise for Ritz.  When the University of Zurich was looking to appoint its first professor of theoretical physics in 1909, Ritz was the top candidate for the position. According to one member of the hiring committee, he possessed ‘an exceptional talent, bordering on genius’. But he suffered from tuberculosis, and so, due to his failing health, he was passed over for the position, which went to Einstein instead. Ritz died that very year at age 31.

Months before his death, however, Ritz published a joint letter with Einstein summarising their disagreement. While Einstein thought that the irreversibility of radiation processes could be explained probabilistically, Ritz proposed what amounted to a causal explanation. He maintained that the reason for the asymmetry is that an elementary source of radiation has an influence on other sources in the future and not in the past.

This joint letter is something of a classic text, widely cited in the literature. What is less well-known is that, in the very same year, Einstein demonstrated a striking reversibility of his own. In a second published letter, he appears to take a position very close to Ritz’s – the very view he had dismissed just months earlier. According to the wave theory of light, Einstein now asserted, a wave source ‘produces a spherical wave that propagates outward. The inverse process does not exist as elementary process’. The only way in which converging waves can be produced, Einstein claimed, was by combining a very large number of coherently operating sources. He appears to have changed his mind.

Given Einstein’s titanic reputation, you might think that such a momentous shift would occasion a few ripples in the history of science. But I know of only one significant reference to his later statement: a letter from the philosopher Karl Popper to the journal Nature in 1956. In this letter, Popper describes the wave asymmetry in terms very similar to Einstein’s. And he also makes one particularly interesting remark, one that might help us to unpick the riddle. Coherently converging waves, Popper insisted, ‘would demand a vast number of distant coherent generators of waves the co-ordination of which, to be explicable, would have to be shown as originating from the centre’ (my italics).

This is, in fact, a particular instance of a much broader phenomenon. Consider two events that are spatially distant yet correlated with one another. If they are not related as cause and effect, they tend to be joint effects of a common cause. If, for example, two lamps in a room go out suddenly, it is unlikely that both bulbs just happened to burn out simultaneously. So we look for a common cause – perhaps a circuit breaker that tripped.

Common-cause inferences are so pervasive that it is difficult to imagine what we could know about the world beyond our immediate surroundings without them. Hume was right: judgments about causality are absolutely essential in going ‘beyond the evidence of the senses’. In his book The Direction of Time (1956), the philosopher Hans Reichenbach formulated a principle underlying such inferences: ‘If an improbable coincidence has occurred, there must exist a common cause.’ To the extent that we are bound to apply Reichenbach’s rule, we are all like the hard-boiled detective who doesn’t believe in coincidences.
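Reichenbach’s principle is often given a probabilistic formulation; the following is a standard textbook rendering rather than a quotation from the article. If two events A and B are correlated,

P(A \wedge B) > P(A)\,P(B),

and neither causes the other, the principle says there exists a common cause C that ‘screens off’ the correlation:

P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C)

Conditional on the tripped circuit breaker, the two darkened lamps carry no further information about each other.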

Read the entire article here.


Dismaland


A dreary, sardonic, anti-establishment theme park could only happen in the UK. Let’s face it, the corporate optimists running the US would never allow such a pessimistic and apocalyptic vision to unfold in the land of Disney and Nickelodeon.

Thus, residents of the UK are the sole, fortunate recipients of a sarcastic visual nightmare curated by Banksy and a posse of fellow pop-culture-skewering artists. Dismaland, billed as a Bemusement Park, is hosted in the appropriately grey seafront venue of Weston-super-Mare. But grab your tickets soon: the un-theme park is only open from August 22 to September 27, 2015.

Visit Dismaland online, here.

Image courtesy of Google Search.
