The Tech Emperor Has No Clothes

Bill Hewlett. David Packard. Bill Gates. Paul Allen. Steve Jobs. Larry Ellison. Gordon Moore. Tech titans. Moguls of the microprocessor. Their names hold a key place in the founding and shaping of our technological evolution. That they catalyzed and helped create entire economic sectors is beyond doubt. Yet a deeper, objective analysis of market innovation shows that the view of the lone great man (or two) — combating and succeeding against all comers — may be more self-perpetuating myth than reality. The idea that a single, visionary individual drives history and shapes the future is a long and enduring invention.

From Technology Review:

Since Steve Jobs’s death, in 2011, Elon Musk has emerged as the leading celebrity of Silicon Valley. Musk is the CEO of Tesla Motors, which produces electric cars; the CEO of SpaceX, which makes rockets; and the chairman of SolarCity, which provides solar power systems. A self-made billionaire, programmer, and engineer—as well as an inspiration for Robert Downey Jr.’s Tony Stark in the Iron Man movies—he has been on the cover of Fortune and Time. In 2013, he was first on the Atlantic’s list of “today’s greatest inventors,” nominated by leaders at Yahoo, Oracle, and Google. To believers, Musk is steering the history of technology. As one profile described his mystique, his “brilliance, his vision, and the breadth of his ambition make him the one-man embodiment of the future.”

Musk’s companies have the potential to change their sectors in fundamental ways. Still, the stories around these advances—and around Musk’s role, in particular—can feel strangely outmoded.

The idea of “great men” as engines of change grew popular in the 19th century. In 1840, the Scottish philosopher Thomas Carlyle wrote that “the history of what man has accomplished in this world is at bottom the history of the Great Men who have worked here.” It wasn’t long, however, before critics questioned this one–dimensional view, arguing that historical change is driven by a complex mix of trends and not by any one person’s achievements. “All of those changes of which he is the proximate initiator have their chief causes in the generations he descended from,” Herbert Spencer wrote in 1873. And today, most historians of science and technology do not believe that major innovation is driven by “a lone inventor who relies only on his own imagination, drive, and intellect,” says Daniel Kevles, a historian at Yale. Scholars are “eager to identify and give due credit to significant people but also recognize that they are operating in a context which enables the work.” In other words, great leaders rely on the resources and opportunities available to them, which means they do not shape history as much as they are molded by the moments in which they live.

Musk’s success would not have been possible without, among other things, government funding for basic research and subsidies for electric cars and solar panels. Above all, he has benefited from a long series of innovations in batteries, solar cells, and space travel. He no more produced the technological landscape in which he operates than the Russians created the harsh winter that allowed them to vanquish Napoleon. Yet in the press and among venture capitalists, the great-man model of Musk persists, with headlines citing, for instance, “His Plan to Change the Way the World Uses Energy” and his own claim of “changing history.”

The problem with such portrayals is not merely that they are inaccurate and unfair to the many contributors to new technologies. By warping the popular understanding of how technologies develop, great-man myths threaten to undermine the structure that is actually necessary for future innovations.

Space cowboy

Elon Musk, the best-selling biography by business writer Ashlee Vance, describes Musk’s personal and professional trajectory—and seeks to explain how, exactly, the man’s repeated “willingness to tackle impossible things” has “turned him into a deity in Silicon Valley.”

Born in South Africa in 1971, Musk moved to Canada at age 17; he took a job cleaning the boiler room of a lumber mill and then talked his way into an internship at a bank by cold-calling a top executive. After studying physics and economics in Canada and at the Wharton School of the University of Pennsylvania, he enrolled in a PhD program at Stanford but opted out after a couple of days. Instead, in 1995, he cofounded a company called Zip2, which provided an online map of businesses—“a primitive Google maps meets Yelp,” as Vance puts it. Although he was not the most polished coder, Musk worked around the clock and slept “on a beanbag next to his desk.” This drive is “what the VCs saw—that he was willing to stake his existence on building out this platform,” an early employee told Vance. After Compaq bought Zip2, in 1999, Musk helped found an online financial services company that eventually became PayPal. This was when he “began to hone his trademark style of entering an ultracomplex business and not letting the fact that he knew very little about the industry’s nuances bother him,” Vance writes.

When eBay bought PayPal for $1.5 billion, in 2002, Musk emerged with the wherewithal to pursue two passions he believed could change the world. He founded SpaceX with the goal of building cheaper rockets that would facilitate research and space travel. Investing over $100 million of his personal fortune, he hired engineers with aeronautics experience, built a factory in Los Angeles, and began to oversee test launches from a remote island between Hawaii and Guam. At the same time, Musk cofounded Tesla Motors to develop battery technology and electric cars. Over the years, he cultivated a media persona that was “part playboy, part space cowboy,” Vance writes.

Musk sells himself as a singular mover of mountains and does not like to share credit for his success. At SpaceX, in particular, the engineers “flew into a collective rage every time they caught Musk in the press claiming to have designed the Falcon rocket more or less by himself,” Vance writes, referring to one of the company’s early models. In fact, Musk depends heavily on people with more technical expertise in rockets and cars, more experience with aeronautics and energy, and perhaps more social grace in managing an organization. Those who survive under Musk tend to be workhorses willing to forgo public acclaim. At SpaceX, there is Gwynne Shotwell, the company president, who manages operations and oversees complex negotiations. At Tesla, there is JB Straubel, the chief technology officer, responsible for major technical advances. Shotwell and Straubel are among “the steady hands that will forever be expected to stay in the shadows,” writes Vance. (Martin Eberhard, one of the founders of Tesla and its first CEO, arguably contributed far more to its engineering achievements. He had a bitter feud with Musk and left the company years ago.)

Likewise, Musk’s success at Tesla is undergirded by public-sector investment and political support for clean tech. For starters, Tesla relies on lithium-ion batteries pioneered in the late 1980s with major funding from the Department of Energy and the National Science Foundation. Tesla has benefited significantly from guaranteed loans and state and federal subsidies. In 2010, the company reached a loan agreement with the Department of Energy worth $465 million. (Under this arrangement, Tesla agreed to produce battery packs that other companies could benefit from and promised to manufacture electric cars in the United States.) In addition, Tesla has received $1.29 billion in tax incentives from Nevada, where it is building a “gigafactory” to produce batteries for cars and consumers. It has won an array of other loans and tax credits, plus rebates for its consumers, totaling another $1 billion, according to a recent series by the Los Angeles Times.

It is striking, then, that Musk insists on a success story that fails to acknowledge the importance of public-sector support. (He called the L.A. Times series “misleading and deceptive,” for instance, and told CNBC that “none of the government subsidies are necessary,” though he did admit they are “helpful.”)

If Musk’s unwillingness to look beyond himself sounds familiar, Steve Jobs provides a recent antecedent. Like Musk, who obsessed over Tesla cars’ door handles and touch screens and the layout of the SpaceX factory, Jobs brought a fierce intensity to product design, even if he did not envision the key features of the Mac, the iPod, or the iPhone. An accurate version of Apple’s story would give more acknowledgment not only to the work of other individuals, from designer Jonathan Ive on down, but also to the specific historical context in which Apple’s innovation occurred. “There is not a single key technology behind the iPhone that has not been state funded,” says economist Mariana Mazzucato. This includes the wireless networks, “the Internet, GPS, a touch-screen display, and … the voice-activated personal assistant Siri.” Apple has recombined these technologies impressively. But its achievements rest on many years of public-sector investment. To put it another way, do we really think that if Jobs and Musk had never come along, there would have been no smartphone revolution, no surge of interest in electric vehicles?

Read the entire story here.

Image: Titan Oceanus. Trevi Fountain, Rome. Public Domain.

Steampunk Elevator (Lift)

The University of Leicester has one of these in its Attenborough Tower. In fact, it’s one of the few working examples left in Britain. Germany has several, mostly deployed in government buildings. For me, and all other Leicester students who came before and after, riding it was — and probably still is — a rite of passage. Many of the remaining contraptions have been mothballed due to safety fears and limited accessibility. What is it?

[tube]OXSnNzGJDdg[/tube]

The paternoster is a dual-shaft revolving elevator (or lift). Despite the odd name (from the Latin for the Our Father prayer, often recited while fingering rosary beads on a looped chain), it’s a wonderful Victorian invention that needs to be preserved and cherished. Oh, and do you wonder what happens at the top or bottom of the loop? Do you get crushed? Does the paternoster cabin emerge upside down, with you inside? You’ll have to visit and ride one to find out!

From the Guardian:

As the paternoster cabin in which he was slowly descending into the bowels of Stuttgart’s town hall plunged into darkness, Dejan Tuco giggled infectiously. He pointed out the oily cogs of its internal workings that were just about visible as it shuddered to the left, and gripped his stomach when it rose again with a gentle jolt. “We’re not supposed to do the full circuit,” he said. “But that’s the best way to feel like you’re on a ferris wheel or a gondola.”

The 12-year-old German-Serb schoolboy was on a roll, spending several hours one day last week riding the open elevator shaft known as a paternoster, a 19th-century invention that has just been given a stay of execution after campaigners persuaded Germany’s government to reverse a decision to ban its public use.

That the doorless lift, which consists of two shafts side by side within which a chain of open cabins descend and ascend continuously on a belt, has narrowly escaped becoming a victim of safety regulations, has everything to do with a deeply felt German affection for what many consider an old-fashioned yet efficient form of transport.

In the UK, where paternosters were invented in the 1860s, only one or two are believed to be in use. In Germany, which first adopted them in the 1870s, there are an estimated 250, and there was an outcry, particularly among civil servants, when they were brought to a standstill this summer while the legislation was reviewed.

Officials in Stuttgart were among the loudest protesters against the labour minister Andrea Nahles’ new workplace safety regulations, which stated that the lifts could only be used by employees trained in paternoster riding.

“It took the heart out of this place when our paternoster was brought to a halt, and it slowed down our work considerably,” said Wolfgang Wölfle, Stuttgart’s deputy mayor, who vociferously fought the ban and called for the reinstatement of the town hall’s lift, which has been running since 1956.

“They suit the German character very well. I’m too impatient to wait for a conventional lift and the best thing about a paternoster is that you can hop on and off it as you please. You can also communicate with people between floors when they’re riding on one. I see colleagues flirt in them all the time,” he added, celebrating its reopening at a recent town hall party to which hundreds of members of the public were invited.

Among the streams of those who jumped on and off as tunes such as Roxette’s Joyride and Aerosmith’s Love in an Elevator pumped out of speakers, were a Polish woman and her poodle, couples who held hands in the anxious seconds before hopping on board, a one-legged man who joked that the paternoster was not to blame for the loss of his limb, and Dejan, who rushed to the town hall straight from school and spent three hours tirelessly riding up and down. Some passengers were as confident as ballet dancers, others somewhat more hesitant.

Read the whole story here.

Video: Paternoster, Attenborough Tower, University of Leicester. Courtesy of inoy0.

Goodbye Poppy. Hello Narco-Yeast

Bioengineers have been successfully encoding and implanting custom genes into viruses, bacteria and yeast for a while now. These new genes usually cause the organisms to do something different, such as digest industrial waste, kill malignant cells or manufacture useful chemicals.

So, it should come as no surprise to see the advent — only in the laboratory at the moment — of yeast capable of producing narcotics. There seems to be no end to our inventiveness.

Personally, I’m waiting for a bacterium that can synthesize Nutella and a fungus that can construct corporate PowerPoint presentations.

From the NYT:

In a widely expected advance that has opened a fierce debate about “home-brewed heroin,” scientists at Stanford have created strains of yeast that can produce narcotic drugs.

Until now, these drugs — known as opioids — have been derived only from the opium poppy. But the Stanford lab is one of several where researchers have been trying to find yeast-based alternatives. Their work is closely followed by pharmaceutical companies and by the Drug Enforcement Administration and Federal Bureau of Investigation.

Advocates of the rapidly advancing field of bioengineering say it promises to make the creation of important chemicals — in this case painkillers and cough suppressants — cheaper and more predictable than using poppies.

In one major advance more than a decade ago scientists in Berkeley added multiple genes to yeast until it produced a precursor to artemisinin, the most effective modern malaria drug, which previously had to be grown in sweet wormwood shrubs. Much of the world’s artemisinin is now produced in bioengineered yeast.

But some experts fear the technology will be more useful to drug traffickers than to pharmaceutical companies. Legitimate drug makers already have steady supplies of cheap raw materials from legal poppy fields in Turkey, India, Australia, France and elsewhere.

For now, both scientists and law-enforcement officials agree, it will be years before heroin can be grown in yeast. The new Stanford strain, described Thursday in the journal Science, would need to be 100,000 times as efficient in order to match the yield of poppies.

It would take 4,400 gallons of yeast to produce the amount of hydrocodone in a single Vicodin tablet, said Christina D. Smolke, the leader of the Stanford bioengineering team.

For now, she said, anyone looking for opioids “could buy poppy seeds from the grocery store and get higher concentrations.” But the technology is advancing so rapidly that it may match the efficiency of poppy farming within two to three years, Dr. Smolke added.

Read the story here.

Image: Saccharomyces cerevisiae cells in DIC microscopy. Public Domain.

Crispr – Designer DNA

The world welcomed basic genetic engineering in the mid-1970s, when biotech pioneers Herbert Boyer and Stanley Cohen transferred DNA from one organism to another (bacteria). In so doing they created the first genetically modified organism (GMO). A mere forty years later we now have extremely powerful and accessible (cheap) biochemical tools for tinkering with the molecules of heredity. One of these tools, known as Crispr-Cas9, makes it easy and fast to move any genes around, within and across any species.

The technique promises immense progress in the fight against inherited illness, cancer and viral infection. It also opens the door to untold manipulation of DNA in lower organisms and plants to develop an infection-resistant, faster-growing food supply, and to reimagine a whole host of biochemical and industrial processes (such as ethanol production).

Yet as is the case with many technological advances that hold great promise, tremendous peril lies ahead from this next revolution. Our bioengineering prowess has yet to be matched with a sound and pervasive ethical framework. Can humans reach a consensus on how to shape, focus and limit the application of such techniques? And, equally importantly, can we enforce these bioethical constraints before it’s too late to “uninvent” designer babies and bioweapons?

From Wired:

Spiny grass and scraggly pines creep amid the arts-and-crafts buildings of the Asilomar Conference Grounds, 100 acres of dune where California’s Monterey Peninsula hammerheads into the Pacific. It’s a rugged landscape, designed to inspire people to contemplate their evolving place on Earth. So it was natural that 140 scientists gathered here in 1975 for an unprecedented conference.

They were worried about what people called “recombinant DNA,” the manipulation of the source code of life. It had been just 22 years since James Watson, Francis Crick, and Rosalind Franklin described what DNA was—deoxyribonucleic acid, four different structures called bases stuck to a backbone of sugar and phosphate, in sequences thousands of bases long. DNA is what genes are made of, and genes are the basis of heredity.

Preeminent genetic researchers like David Baltimore, then at MIT, went to Asilomar to grapple with the implications of being able to decrypt and reorder genes. It was a God-like power—to plug genes from one living thing into another. Used wisely, it had the potential to save millions of lives. But the scientists also knew their creations might slip out of their control. They wanted to consider what ought to be off-limits.

By 1975, other fields of science—like physics—were subject to broad restrictions. Hardly anyone was allowed to work on atomic bombs, say. But biology was different. Biologists still let the winding road of research guide their steps. On occasion, regulatory bodies had acted retrospectively—after Nuremberg, Tuskegee, and the human radiation experiments, external enforcement entities had told biologists they weren’t allowed to do that bad thing again. Asilomar, though, was about establishing prospective guidelines, a remarkably open and forward-thinking move.

At the end of the meeting, Baltimore and four other molecular biologists stayed up all night writing a consensus statement. They laid out ways to isolate potentially dangerous experiments and determined that cloning or otherwise messing with dangerous pathogens should be off-limits. A few attendees fretted about the idea of modifications of the human “germ line”—changes that would be passed on from one generation to the next—but most thought that was so far off as to be unrealistic. Engineering microbes was hard enough. The rules the Asilomar scientists hoped biology would follow didn’t look much further ahead than ideas and proposals already on their desks.

Earlier this year, Baltimore joined 17 other researchers for another California conference, this one at the Carneros Inn in Napa Valley. “It was a feeling of déjà vu,” Baltimore says. There he was again, gathered with some of the smartest scientists on earth to talk about the implications of genome engineering.

The stakes, however, have changed. Everyone at the Napa meeting had access to a gene-editing technique called Crispr-Cas9. The first term is an acronym for “clustered regularly interspaced short palindromic repeats,” a description of the genetic basis of the method; Cas9 is the name of a protein that makes it work. Technical details aside, Crispr-Cas9 makes it easy, cheap, and fast to move genes around—any genes, in any living thing, from bacteria to people. “These are monumental moments in the history of biomedical research,” Baltimore says. “They don’t happen every day.”

Using the three-year-old technique, researchers have already reversed mutations that cause blindness, stopped cancer cells from multiplying, and made cells impervious to the virus that causes AIDS. Agronomists have rendered wheat invulnerable to killer fungi like powdery mildew, hinting at engineered staple crops that can feed a population of 9 billion on an ever-warmer planet. Bioengineers have used Crispr to alter the DNA of yeast so that it consumes plant matter and excretes ethanol, promising an end to reliance on petrochemicals. Startups devoted to Crispr have launched. International pharmaceutical and agricultural companies have spun up Crispr R&D. Two of the most powerful universities in the US are engaged in a vicious war over the basic patent. Depending on what kind of person you are, Crispr makes you see a gleaming world of the future, a Nobel medallion, or dollar signs.

The technique is revolutionary, and like all revolutions, it’s perilous. Crispr goes well beyond anything the Asilomar conference discussed. It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.

In a way, humans were genetic engineers long before anyone knew what a gene was. They could give living things new traits—sweeter kernels of corn, flatter bulldog faces—through selective breeding. But it took time, and it didn’t always pan out. By the 1930s refining nature got faster. Scientists bombarded seeds and insect eggs with x-rays, causing mutations to scatter through genomes like shrapnel. If one of hundreds of irradiated plants or insects grew up with the traits scientists desired, they bred it and tossed the rest. That’s where red grapefruits came from, and most barley for modern beer.

Genome modification has become less of a crapshoot. In 2002, molecular biologists learned to delete or replace specific genes using enzymes called zinc-finger nucleases; the next-generation technique used enzymes named TALENs.

Yet the procedures were expensive and complicated. They only worked on organisms whose molecular innards had been thoroughly dissected—like mice or fruit flies. Genome engineers went on the hunt for something better.

As it happened, the people who found it weren’t genome engineers at all. They were basic researchers, trying to unravel the origin of life by sequencing the genomes of ancient bacteria and microbes called Archaea (as in archaic), descendants of the first life on Earth. Deep amid the bases, the As, Ts, Gs, and Cs that made up those DNA sequences, microbiologists noticed recurring segments that were the same back to front and front to back—palindromes. The researchers didn’t know what these segments did, but they knew they were weird. In a branding exercise only scientists could love, they named these clusters of repeating palindromes Crispr.

Then, in 2005, a microbiologist named Rodolphe Barrangou, working at a Danish food company called Danisco, spotted some of those same palindromic repeats in Streptococcus thermophilus, the bacteria that the company uses to make yogurt and cheese. Barrangou and his colleagues discovered that the unidentified stretches of DNA between Crispr’s palindromes matched sequences from viruses that had infected their S. thermophilus colonies. Like most living things, bacteria get attacked by viruses—in this case they’re called bacteriophages, or phages for short. Barrangou’s team went on to show that the segments served an important role in the bacteria’s defense against the phages, a sort of immunological memory. If a phage infected a microbe whose Crispr carried its fingerprint, the bacteria could recognize the phage and fight back. Barrangou and his colleagues realized they could save their company some money by selecting S. thermophilus species with Crispr sequences that resisted common dairy viruses.

As more researchers sequenced more bacteria, they found Crisprs again and again—half of all bacteria had them. Most Archaea did too. And even stranger, some of Crispr’s sequences didn’t encode the eventual manufacture of a protein, as is typical of a gene, but instead led to RNA—single-stranded genetic material. (DNA, of course, is double-stranded.)

That pointed to a new hypothesis. Most present-day animals and plants defend themselves against viruses with structures made out of RNA. So a few researchers started to wonder if Crispr was a primordial immune system. Among the people working on that idea was Jill Banfield, a geomicrobiologist at UC Berkeley, who had found Crispr sequences in microbes she collected from acidic, 110-degree water from the defunct Iron Mountain Mine in Shasta County, California. But to figure out if she was right, she needed help.

Luckily, one of the country’s best-known RNA experts, a biochemist named Jennifer Doudna, worked on the other side of campus in an office with a view of the Bay and San Francisco’s skyline. It certainly wasn’t what Doudna had imagined for herself as a girl growing up on the Big Island of Hawaii. She simply liked math and chemistry—an affinity that took her to Harvard and then to a postdoc at the University of Colorado. That’s where she made her initial important discoveries, revealing the three-dimensional structure of complex RNA molecules that could, like enzymes, catalyze chemical reactions.

The mine bacteria piqued Doudna’s curiosity, but when Doudna pried Crispr apart, she didn’t see anything to suggest the bacterial immune system was related to the one plants and animals use. Still, she thought the system might be adapted for diagnostic tests.

Banfield wasn’t the only person to ask Doudna for help with a Crispr project. In 2011, Doudna was at an American Society for Microbiology meeting in San Juan, Puerto Rico, when an intense, dark-haired French scientist asked her if she wouldn’t mind stepping outside the conference hall for a chat. This was Emmanuelle Charpentier, a microbiologist at Umeå University in Sweden.

As they wandered through the alleyways of old San Juan, Charpentier explained that one of Crispr’s associated proteins, named Csn1, appeared to be extraordinary. It seemed to search for specific DNA sequences in viruses and cut them apart like a microscopic multitool. Charpentier asked Doudna to help her figure out how it worked. “Somehow the way she said it, I literally—I can almost feel it now—I had this chill down my back,” Doudna says. “When she said ‘the mysterious Csn1’ I just had this feeling, there is going to be something good here.”

Read the whole story here.

Deep Time, Nuclear Semiotics and Atomic Priests

Time seems to unfold over different — lengthier — scales in the desert southwest of the United States. Perhaps it’s the vastness of the eerie landscape that puts fleeting human moments into the context of deep geologic time. Or, perhaps it’s our monumental human structures that aim to encode our present for the distant future. Structures like the Hoover Dam, which regulates the mighty Colorado River, and the ill-fated Yucca Mountain project, once designed to store the nation’s nuclear waste, were conceived to last many centuries.

Yet these monuments to our impermanence raise an important issue beyond their construction — how are we to communicate their intent to humans living in a distant future, humans who will no longer be using any of our existing languages? Directions and warnings in English or contextual signs and images will not suffice. Consider Yucca Mountain. Now shuttered, Yucca Mountain was designed to be a repository for nuclear byproducts and waste from military and civilian programs. Keep in mind that some products of nuclear reactors, such as various isotopes of uranium, plutonium, technetium and neptunium, remain highly radioactive for tens of thousands to millions of years. So, how would we post warnings at Yucca Mountain about the entombed dangers to generations living 10,000 years and more from now? Those behind the Yucca Mountain project considered a number of fantastic (in its original sense) programs to carry dire warnings into the distant future, including hostile architecture, radioactive cats and a pseudo-religious order of atomic priests. This was the work of the Human Interference Task Force.
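
To get a feel for the timescales, a back-of-the-envelope decay calculation helps. The sketch below uses the standard exponential-decay law and the commonly cited half-life of plutonium-239 (roughly 24,100 years); the numbers are illustrative only, not a statement about Yucca Mountain’s actual inventory.

```python
# A minimal sketch: how much of an isotope survives a 10,000-year warning horizon?
# Uses the standard decay law N(t) = N0 * 0.5**(t / half_life).
# The half-life below is the commonly cited value for plutonium-239 (~24,100 years);
# the 10,000-year figure mirrors the design horizon discussed above.

def fraction_remaining(years: float, half_life_years: float) -> float:
    """Fraction of the original isotope still present after `years`."""
    return 0.5 ** (years / half_life_years)

if __name__ == "__main__":
    pu239_half_life = 24_100  # years
    horizon = 10_000          # years
    left = fraction_remaining(horizon, pu239_half_life)
    print(f"After {horizon:,} years, ~{left:.0%} of the Pu-239 remains.")
    # Prints roughly 75% -- the hazard comfortably outlives the signage.
```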

From Motherboard:

Building the Hoover Dam rerouted the most powerful river in North America. It claimed the lives of 96 workers, and the beloved site dog, Little Niggy, who is entombed by the walkway in the shade of the canyon wall. Diverting the Colorado destroyed the ecology of the region, threatening fragile native plant life and driving several species of fish nearly to extinction. The dam brought water to 8 million people and created more than 5000 jobs. It required 6.6 million metric tons of concrete, all made from the desert; enough, famously, to pave a two lane road coast to coast across the US. Inside the dam’s walls that concrete is still curing, and will be for another 60 years.

Erik, photojournalist, and I have come here to try and get the measure of this place. Nevada is the uncanny locus of disparate monuments all concerned with charting deep time, leaving messages for future generations of human beings to puzzle over the meaning of: a star map, a nuclear waste repository and a clock able to keep time for 10,000 years—all of them within a few hours drive of Las Vegas through the harsh desert.

Hoover Dam is theorized in some structural stress projections to stand for tens of thousands of years from now, and what could be its eventual undoing is mussels. The mollusks which grow in the dam’s grates will no longer be scraped away, and will multiply eventually to such density that the built up stress of the river will burst the dam’s wall. That is if the Colorado continues to flow. Otherwise erosion will take much longer to claim the structure, and possibly Oskar J.W. Hansen’s vision will be realized: future humans will find the dam 14,000 years from now, at the end of the current Platonic Year.

A Platonic Year lasts for roughly 26,000 years. It’s also known as the precession of the equinoxes, first written into the historical record in the second century BC by the Greek mathematician, Hipparchus, though there is evidence that earlier people also solved this complex equation. Earth rotates in three ways: 365 days around the sun, on its 24 hours axis and on its precessional axis. The duration of the last is the Platonic Year, where Earth is incrementally turning on a tilt pointing to its true north as the Sun’s gravity pulls on us, leaving our planet spinning like a very slow top along its orbit around the sun.

Now Earth’s true-north pole star is Polaris, in Ursa Minor, as it was at the completion of Hoover Dam. At the end of the current Platonic Year it will be Vega, in the constellation Lyra. Hansen included this information in an amazingly accurate astronomical clock, or celestial map, embedded in the terrazzo floor of the dam’s dedication monument. Hansen wanted any future humans who came across the dam to be able to know exactly when it was built.

He used the clock to mark major historical events of the last several thousand years including the birth of Christ and the building of the pyramids, events which he thought were equal to the engineering feat of men bringing water to a desert in the 1930s. He reasoned that though current languages could be dead in this future, any people who had survived that long would have advanced astronomy, math and physics in their arsenal of survival tactics. Despite this, the monument is written entirely in English, which is for the benefit of current visitors, not our descendents of millennia from now.

The Hoover Dam is staggering. It is frankly impossible, even standing right on top of it, squinting in the blinding sunlight down its vertiginous drop, to imagine how it was ever built by human beings; even as I watch old documentary footage on my laptop back in the hotel at night on Fremont Street, showing me that exact thing, I don’t believe it. I cannot square it in my mind. I cannot conceive of nearly dying every day laboring in the brutally dry 100 degree heat, in a time before air-conditioning, in a time before being able to ever get even the slightest relief from the elements.

Hansen was more than aware of our propensity to build great monuments to ourselves and felt the weight of history as he submitted his bid for the job to design the dedication monument, writing, “Mankind itself is the subject of the sculptures at Hoover Dam.” Joan Didion described it as the most existentially terrifying place in America: “Since the afternoon in 1967 when I first saw Hoover Dam, its image has never been entirely absent from my inner eye.” Thirty-two people have chosen the dam as their place of suicide. It has no fences.

The reservoir is now the lowest it has ever been and California is living through the worst drought in 1200 years. You can swim in Lake Mead, so we did, sort of. It did provide some cool respite for a moment from the unrelenting heat of the desert. We waded around only up to our ankles because it smelled pretty terrible, the shoreline dirty with garbage.

Radioactive waste from spent nuclear fuel has a shelf life of hundreds of thousands of years. Maybe even more than a million, it’s not possible to precisely predict. Nuclear power plants around the US have produced 150 million metric tons of highly active nuclear waste that sits at dozens of sites around the country, awaiting a place to where it can all be carted and buried thousands of feet underground to be quarantined for the rest of time. For now a lot of it sits not far from major cities.

Yucca Mountain, 120 miles from Hoover Dam, is not that place. The site is one of the most intensely geologically surveyed and politically controversial pieces of land on Earth. Since 1987 it has been, at the cost of billions of dollars, the highly contested resting place for the majority of America’s high-risk nuclear waste. Those plans were officially shuttered in 2012, after states sued each other, states sued the federal Government, the Government sued contractors, and the people living near Yucca Mountain didn’t want, it turned out, for thousands of tons of nuclear waste to be carted through their counties and sacred lands via rail. President Obama cancelled its funding and officially ended the project.

It was said that there was a fault line running directly under the mountain; that the salt rock was not as absorbent as it was initially thought to be and that it posed the threat of leaking radiation into the water table; that more recently the possibility of fracking in the area would beget an ecological disaster. That a 10,000 year storage solution was nowhere near long enough to inculcate the Earth from the true shelf-life of the waste, which is realistically thought to be dangerous for many times that length of time. The site is now permanently closed, visible only from a distance through a cacophony of government warning signs blockading a security checkpoint.

We ask around the community of Amargosa Valley about the mountain. Sitting on 95 it’s the closest place to the site and consists only of a gas station, which trades in a huge amount of Area 51 themed merchandise, a boldly advertised sex shop, an alien motel and a firework store where you can let off rockets in the car park. Across the road is the vacant lot of what was once an RV park, with a couple of badly busted up vehicles looted beyond recognition and a small aquamarine boat lying on its side in the dirt.

At the gas station register a woman explains that no one really liked the idea of having waste so close to their homes (she repeats the story of the fault line), but they did like the idea of jobs, hundreds of which disappeared along with the project, leaving the surrounding areas, mainly long-tapped out mining communities, even more severely depressed.

We ask what would happen if we tried to actually get to the mountain itself, on government land.

“Plenty of people do try,” she says. “They’re trying to get to Area 51. They have sensors though, they’ll come get you real quick in their truck.”

Would we get shot?

“Shot? No. But they would throw you on the ground, break all your cameras and interrogate you for a long time.”

We decide just to take the road that used to go to the mountain as far as we can to the checkpoint, where in the distance beyond the electric fences at the other end of a stretch of desert land we see buildings and cars parked and most definitely some G-men who would see us before we even had the chance to try and sneak anywhere.

Before it was shut for good, Yucca Mountain had kilometers of tunnels bored into it and dozens of experiments undertaken within it, all of it now sealed behind an enormous vault door. It was also the focus of a branch of linguistics established specifically to warn future humans of the dangers of radioactive waste: nuclear semiotics. The Human Interference Task Force—a consortium of archeologists, architects, linguists, philosophers, engineers, designers—faced the opposite problem to Oskar Hansen at Hoover Dam; the Yucca Mountain repository was not hoping to attract the attentions of future humans to tell them of the glory of their forebears; it was to tell them that this place would kill them if they trod too near.

To create a universally readable warning system for humans living thirty generations from now, the signs will have to be instantly recognizable as expressing an immediate and lethal danger, as well as a deep sense of shunning: these were impulses that came up against each other; how to adequately express that the place was deadly while not at the same time enticing people to explore it, thinking it must contain something of great value if so much trouble had been gone to in order to keep people away? How to express this when all known written languages could very easily be dead? Signs as we know them now would almost certainly be completely unintelligible free of their social contexts which give them current meaning; a nuclear waste sign is just a dot with three rounded triangles sticking out of it to anyone not taught over a lifetime to know its warning.

Read the entire story here.

Image: United Nations radioactive symbol, 2007.

The Absurdly Insane Exceptionalism of the American Political System

Some examples of recent American political exceptionalism: Dan Quayle, SuperPACs, Sarah Palin, Iran-Contra, Watergate, Michele Bachmann. But just when you thought the United States’ political system could not possibly sink any lower, along comes someone so truly exceptional that it becomes our duty to listen and watch… and gasp.

You see, contained solely within this one person we now have an unrivaled collection of inspirational leadership traits: racist, sexist, misogynist, demagogue, bigot, bully, narcissist, buffoon and crass loudmouth. A demonstration of all that is exceptional about the United States, and an exceptional next commander-in-chief for our modern age.

Image courtesy of someone with a much-needed sense of humor during these dark times.

When 8 Equals 16

I’m sure that most, if not all, mathematicians would tell you that their calling is at the heart of our understanding of the universe. Mathematics describes our world precisely and logically. But mix it with the world of women’s fashion and this rigorous discipline becomes rather squishy, and far from absolute. A case in point: a women’s size 8 today is roughly equivalent to a women’s size 16 from 1958.

This makes me wonder what the fundamental measurements and equations describing our universe would look like if controlled by advertisers and marketers. Though, Einstein’s work on Special and General Relativity may seem to fit the fashion industry quite well: one of the central tenets of relativity holds that measurements of various quantities (read: dress size) are relative to the velocities (market size) of observers (retailers). In particular, space (dress size) contracts and time (waist size) dilates.

From the Washington Post:

Here are some numbers that illustrate the insanity of women’s clothing sizes: A size 8 dress today is nearly the equivalent of a size 16 dress in 1958. And a size 8 dress of 1958 doesn’t even have a modern-day equivalent — the waist and bust measurements of a Mad Men-era 8 come in smaller than today’s size 00.

These measurements come from official sizing standards once maintained by the National Bureau of Standards (now the National Institute of Standards and Technology) and taken over in recent years by the American Society of Testing and Materials. Data visualizer Max Galka recently unearthed them for a blog post on America’s obesity epidemic.

Centers for Disease Control and Prevention data show that the average American woman today weighs about as much as the average 1960s man. And while the weight story is pretty straightforward — Americans got heavier — the story behind the dress sizes is a little more complicated, as any woman who’s ever shopped for clothes could probably tell you.

As Julia Felsenthal detailed over at Slate, today’s women’s clothing sizes have their roots in a depression-era government project to define the “Average American Woman” by sending a pair of statisticians to survey and measure nearly 15,000 women. They “hoped to determine whether any proportional relationships existed among measurements that could be broadly applied to create a simple, standardized system of sizing,” Felsenthal writes.

Sadly, they failed. Not surprisingly, women’s bodies defied standardization. The project did yield one lasting contribution to women’s clothing: The statisticians were the first to propose the notion of arbitrary numerical sizes that weren’t based on any specific measurement — similar to shoe sizes.

The government didn’t return to the project until the late 1950s, when the National Bureau of Standards published “Body Measurements for the Sizing of Women’s Patterns and Apparel” in 1958. The standard was based on the 15,000 women interviewed previously, with the addition of a group of women who had been in the Army during World War II. The document’s purpose? “To provide the consumer with a means of identifying her body type and size from the wide range of body types covered, and enable her to be fitted properly by the same size regardless of price, type of apparel, or manufacturer of the garment.”

Read the entire article here.

Image: Diagram from “Body Measurements for the Sizing of Women’s Patterns and Apparel”, 1958. Courtesy of National Bureau of Standards /  National Institute of Standards and Technology (NIST).

Forget Broccoli. It’s All About the Blue Zones

You should know how to live to be 100 years old by now. Tip number one: inherit good genes. Tip number two: forget uploading your consciousness to an AI, for now. Tip number three: live and eat in a so-called Blue Zone. Tip number four: walk fast, eat slowly.

From the NYT:

Dan Buettner and I were off to a good start. He approved of coffee.

“It’s one of the biggest sources of antioxidants in the American diet,” he said with chipper confidence, folding up his black Brompton bike.

As we walked through Greenwich Village, looking for a decent shot of joe to fuel an afternoon of shopping and cooking and talking about the enigma of longevity, he pointed out that the men and women of Icaria, a Greek island in the middle of the Aegean Sea, regularly slurp down two or three muddy cups a day.

This came as delightful news to me. Icaria has a key role in Mr. Buettner’s latest book, “The Blue Zones Solution,” which takes a deep dive into five places around the world where people have a beguiling habit of forgetting to die. In Icaria they stand a decent chance of living to see 100. Without coffee, I don’t see much point in making it to 50.

The purpose of our rendezvous was to see whether the insights of a longevity specialist like Mr. Buettner could be applied to the life of a food-obsessed writer in New York, a man whose occupational hazards happen to include chicken wings, cheeseburgers, martinis and marathon tasting menus.

Covering the world of gastronomy and mixology during the era of David Chang (career-defining dish: those Momofuku pork-belly buns) and April Bloomfield (career-defining dish: the lamb burger at the Breslin Bar and Dining Room) does not exactly feel like an enterprise that’s adding extra years to my life — or to my liver.

And the recent deaths (even if accidental) of men in my exact demographic — the food writer Joshua Ozersky, the tech entrepreneur Dave Goldberg — had put me in a mortality-anxious frame of mind.

With my own half-century mark eerily visible on the horizon, could Mr. Buettner, who has spent the last 10 years unlocking the mysteries of longevity, offer me a midcourse correction?

To that end, he had decided to cook me something of a longevity feast. Visiting from his home in Minnesota and camped out at the townhouse of his friends Andrew Solomon and John Habich in the Village, this trim, tanned, 55-year-old guru of the golden years was geared up to show me that living a long time was not about subsisting on a thin gruel of, well, gruel.

After that blast of coffee, which I dutifully diluted with soy milk (as instructed) at O Cafe on Avenue of the Americas, Mr. Buettner and I set forth on our quest at the aptly named LifeThyme market, where signs in the window trumpeted the wonders of wheatgrass. He reassured me, again, by letting me know that penitent hedge clippings had no place in our Blue Zones repast.

“People think, ‘If I eat more of this, then it’s O.K. to eat more burgers or candy,’ ” he said. Instead, as he ambled through the market dropping herbs and vegetables into his basket, he insisted that our life-extending banquet would hinge on normal affordable items that almost anyone can pick up at the grocery store. He grabbed fennel and broccoli, celery and carrots, tofu and coconut milk, a bag of frozen berries and a can of chickpeas and a jar of local honey.

The five communities spotlighted in “The Blue Zones Solution” (published by National Geographic) depend on simple methods of cooking that have evolved over centuries, and Mr. Buettner has developed a matter-of-fact disregard for gastro-trends of all stripes. At LifeThyme, he passed by refrigerated shelves full of vogue-ish juices in hues of green, orange and purple. He shook his head and said, “Bad!”

“The glycemic index on that is as bad as Coke,” he went on, snatching a bottle of carrot juice to scan the label. “For eight ounces, there’s 14 grams of sugar. People get suckered into thinking, ‘Oh, I’m drinking this juice.’ Skip the juicing. Eat the fruit. Or eat the vegetable.” (How about a protein shake? “No,” he said.)

So far, I was feeling pretty good about my chances of making it to 100. I love coffee, I’m not much of a juicer and I’ve never had a protein shake in my life. Bingo. I figured that pretty soon Mr. Buettner would throw me a dietary curveball (I noticed with vague concern that he was not putting any meat or cheese into his basket), but by this point I was already thinking about how fun it would be to meet my great-grandchildren.

I felt even better when he and I started talking about strenuous exercise, which for me falls somewhere between “root canal” and “Justin Bieber concert” on the personal aversion scale.

I like to go for long walks, and … well, that’s about it.

“That’s when I knew you’d be O.K.,” Mr. Buettner told me.

It turns out that walking is a popular mode of transport in the Blue Zones, too — particularly on the sun-splattered slopes of Sardinia, Italy, where many of those who make it to 100 are shepherds who devote the bulk of each day to wandering the hills and treating themselves to sips of red wine.

“A glass of wine is better than a glass of water with a Mediterranean meal,” Mr. Buettner told me.

Red wine and long walks? If that’s all it takes, people, you’re looking at Methuselah.

O.K., yes, Mr. Buettner moves his muscles a lot more than I do. He likes to go everywhere on that fold-up bike, which he hauls along with him on trips, and sometimes he does yoga and goes in-line skating. But he generally believes that the high-impact exercise mania as practiced in the major cities of the United States winds up doing as much harm as good.

“You can’t be pounding your joints with marathons and pumping iron,” he said. “You’ll never see me doing CrossFit.”

For that evening’s meal, Mr. Buettner planned to cook dishes that would make reference to the quintet of places that he focuses on in “The Blue Zones Solution”: along with Icaria and Sardinia, they are Okinawa, Japan; the Nicoya Peninsula in Costa Rica; and Loma Linda, Calif., where Seventh-day Adventists have a tendency to outlive their fellow Americans, thanks to a mostly vegetarian diet that is heavy on nuts, beans, oatmeal, 100 percent whole-grain bread and avocados.

We walked from the market to the townhouse. And it was here, as Mr. Buettner laid out his cooking ingredients on a table in Mr. Solomon’s and Mr. Habich’s commodious, state-of-the-art kitchen, that I noticed the first real disconnect between the lives of the Blue Zones sages and the life of a food writer who has enjoyed many a lunch hour scarfing down charcuterie, tapas and pork-belly-topped ramen at the Gotham West Market food court.

Where was the butter? Hadn’t some nice scientists determined that butter’s not so lethal for us, after all? (“My view is that butter, lard and other animal fats are a bit like radiation: a dollop a couple of times a week probably isn’t going to hurt you, but we don’t know the safe level,” Mr. Buettner later wrote in an email. “At any rate, I can send along a paper that largely refutes the whole ‘Butter is Back’ craze.” No, thanks, I’m good.)

Where was the meat? Where was the cheese? (No cheese? And here I thought we’d be friends for another 50 years, Mr. Buettner.)

Read the entire article here.

Digital Forensics and the Wayback Machine

Many of us see history — the school subject — as rather dull and boring. After all, how can the topic be made interesting when it’s usually taught by a coach who has other things on his or her mind [no joke, I have evidence of this from both sides of the Atlantic!].

Yet we also know that history’s lessons are essential to shaping our current world view and our vision for the future, in a myriad of ways. Since humans could speak and then write, our ancestors have recorded and transmitted their histories through oral storytelling, and then through books and assorted media.

Then came the internet. The explosion of content, media formats and related technologies over the last quarter-century has led to an immense challenge for archivists and historians intent on cataloging our digital stories. One facet of this challenge is the tremendous volume of information and its accelerating growth. Another is the dynamic nature of the content — much of it being constantly replaced and refreshed.

But all is not lost. The Internet Archive, founded in 1996, has been quietly archiving text, pages, images, audio and, more recently, entire web sites from the Tubes of the vast Internets. Currently the non-profit has archived around half a trillion web pages. It’s our modern-day equivalent of the Library of Alexandria.

Please say hello to the Internet Archive Wayback Machine, and give it a try. The Wayback Machine took the screenshot above of Amazon.com in 1999, in case you’ve ever wondered what Amazon looked like before it swallowed or destroyed entire retail sectors.
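
If you’d rather poke at the archive programmatically, the Internet Archive also exposes a public availability API at archive.org/wayback/available, which returns the archived snapshot of a URL closest to a given date. Here’s a minimal Python sketch; the JSON field names follow the API’s documented response, and the example URL and date are purely illustrative.

```python
# A minimal sketch of querying the Wayback Machine's public availability API
# (https://archive.org/wayback/available). Field names reflect the documented
# JSON response; the example URL and timestamp are illustrative.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str, timestamp: str) -> dict:
    """Return the archived snapshot of `url` closest to `timestamp` (YYYYMMDD)."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest", {})

if __name__ == "__main__":
    snap = closest_snapshot("amazon.com", "19990815")
    if snap.get("available"):
        print(snap["timestamp"], snap["url"])
    else:
        print("No archived snapshot found.")
```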

From the New Yorker:

Malaysia Airlines Flight 17 took off from Amsterdam at 10:31 A.M. G.M.T. on July 17, 2014, for a twelve-hour flight to Kuala Lumpur. Not much more than three hours later, the plane, a Boeing 777, crashed in a field outside Donetsk, Ukraine. All two hundred and ninety-eight people on board were killed. The plane’s last radio contact was at 1:20 P.M. G.M.T. At 2:50 P.M. G.M.T., Igor Girkin, a Ukrainian separatist leader also known as Strelkov, or someone acting on his behalf, posted a message on VKontakte, a Russian social-media site: “We just downed a plane, an AN-26.” (An Antonov 26 is a Soviet-built military cargo plane.) The post includes links to video of the wreckage of a plane; it appears to be a Boeing 777.

Two weeks before the crash, Anatol Shmelev, the curator of the Russia and Eurasia collection at the Hoover Institution, at Stanford, had submitted to the Internet Archive, a nonprofit library in California, a list of Ukrainian and Russian Web sites and blogs that ought to be recorded as part of the archive’s Ukraine Conflict collection. Shmelev is one of about a thousand librarians and archivists around the world who identify possible acquisitions for the Internet Archive’s subject collections, which are stored in its Wayback Machine, in San Francisco. Strelkov’s VKontakte page was on Shmelev’s list. “Strelkov is the field commander in Slaviansk and one of the most important figures in the conflict,” Shmelev had written in an e-mail to the Internet Archive on July 1st, and his page “deserves to be recorded twice a day.”

On July 17th, at 3:22 P.M. G.M.T., the Wayback Machine saved a screenshot of Strelkov’s VKontakte post about downing a plane. Two hours and twenty-two minutes later, Arthur Bright, the Europe editor of the Christian Science Monitor, tweeted a picture of the screenshot, along with the message “Grab of Donetsk militant Strelkov’s claim of downing what appears to have been MH17.” By then, Strelkov’s VKontakte page had already been edited: the claim about shooting down a plane was deleted. The only real evidence of the original claim lies in the Wayback Machine.

The average life of a Web page is about a hundred days. Strelkov’s “We just downed a plane” post lasted barely two hours. It might seem, and it often feels, as though stuff on the Web lasts forever, for better and frequently for worse: the embarrassing photograph, the regretted blog (more usually regrettable not in the way the slaughter of civilians is regrettable but in the way that bad hair is regrettable). No one believes any longer, if anyone ever did, that “if it’s on the Web it must be true,” but a lot of people do believe that if it’s on the Web it will stay on the Web. Chances are, though, that it actually won’t. In 2006, David Cameron gave a speech in which he said that Google was democratizing the world, because “making more information available to more people” was providing “the power for anyone to hold to account those who in the past might have had a monopoly of power.” Seven years later, Britain’s Conservative Party scrubbed from its Web site ten years’ worth of Tory speeches, including that one. Last year, BuzzFeed deleted more than four thousand of its staff writers’ early posts, apparently because, as time passed, they looked stupider and stupider. Social media, public records, junk: in the end, everything goes.

Web pages don’t have to be deliberately deleted to disappear. Sites hosted by corporations tend to die with their hosts. When MySpace, GeoCities, and Friendster were reconfigured or sold, millions of accounts vanished. (Some of those companies may have notified users, but Jason Scott, who started an outfit called Archive Team—its motto is “We are going to rescue your shit”—says that such notification is usually purely notional: “They were sending e-mail to dead e-mail addresses, saying, ‘Hello, Arthur Dent, your house is going to be crushed.’ ”) Facebook has been around for only a decade; it won’t be around forever. Twitter is a rare case: it has arranged to archive all of its tweets at the Library of Congress. In 2010, after the announcement, Andy Borowitz tweeted, “Library of Congress to acquire entire Twitter archive—will rename itself Museum of Crap.” Not long after that, Borowitz abandoned that Twitter account. You might, one day, be able to find his old tweets at the Library of Congress, but not anytime soon: the Twitter Archive is not yet open for research. Meanwhile, on the Web, if you click on a link to Borowitz’s tweet about the Museum of Crap, you get this message: “Sorry, that page doesn’t exist!”

The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: “Page Not Found.” This is known as “link rot,” and it’s a drag, but it’s better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.) Or maybe the page has been moved and something else is where it used to be. This is known as “content drift,” and it’s more pernicious than an error message, because it’s impossible to tell that what you’re seeing isn’t what you went to look for: the overwriting, erasure, or moving of the original is invisible. For the law and for the courts, link rot and content drift, which are collectively known as “reference rot,” have been disastrous. In providing evidence, legal scholars, lawyers, and judges often cite Web pages in their footnotes; they expect that evidence to remain where they found it as their proof, the way that evidence on paper—in court records and books and law journals—remains where they found it, in libraries and courthouses. But a 2013 survey of law- and policy-related publications found that, at the end of six years, nearly fifty per cent of the URLs cited in those publications no longer worked. According to a 2014 study conducted at Harvard Law School, “more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information.” The overwriting, drifting, and rotting of the Web is no less catastrophic for engineers, scientists, and doctors. Last month, a team of digital library researchers based at Los Alamos National Laboratory reported the results of an exacting study of three and a half million scholarly articles published in science, technology, and medical journals between 1997 and 2012: one in five links provided in the notes suffers from reference rot. It’s like trying to stand on quicksand.

The footnote, a landmark in the history of civilization, took centuries to invent and to spread. It has taken mere years nearly to destroy. A footnote used to say, “Here is how I know this and where I found it.” A footnote that’s a link says, “Here is what I used to know and where I once found it, but chances are it’s not there anymore.” It doesn’t matter whether footnotes are your stock-in-trade. Everybody’s in a pinch. Citing a Web page as the source for something you know—using a URL as evidence—is ubiquitous. Many people find themselves doing it three or four times before breakfast and five times more before lunch. What happens when your evidence vanishes by dinnertime?

The day after Strelkov’s “We just downed a plane” post was deposited into the Wayback Machine, Samantha Power, the U.S. Ambassador to the United Nations, told the U.N. Security Council, in New York, that Ukrainian separatist leaders had “boasted on social media about shooting down a plane, but later deleted these messages.” In San Francisco, the people who run the Wayback Machine posted on the Internet Archive’s Facebook page, “Here’s why we exist.”

Read the entire story here.
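A practical aside of my own, not from the article: the Internet Archive exposes a public availability API (archive.org/wayback/available) that reports whether the Wayback Machine holds a snapshot of a given URL. The short Python sketch below, using a purely hypothetical list of cited pages, checks each URL for link rot and, where the live page is gone, asks for the closest archived copy.

# A rough sketch (my addition, not from the article): test a handful of cited
# URLs for link rot and ask the Internet Archive's public availability API
# whether the Wayback Machine holds a snapshot of each one.
import json
import urllib.parse
import urllib.request


def is_alive(url, timeout=10):
    """Return True if the URL still answers with a non-error HTTP status."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        return False


def wayback_snapshot(url):
    """Return the closest archived copy of the URL, or None if none exists."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api, timeout=10) as response:
        data = json.load(response)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None


if __name__ == "__main__":
    # Hypothetical footnoted citations; substitute your own.
    cited = ["http://www.example.com/footnoted/page.html"]
    for url in cited:
        status = "alive" if is_alive(url) else "rotten"
        archived = wayback_snapshot(url) or "no archived copy"
        print(f"{url}: {status} | {archived}")

Nothing fancy, but it makes the article's point concrete: the live link and the archived copy are two different things, and only one of them is built to last.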

Image: Wayback Machine’s screenshot of Amazon.com’s home page, August 1999.

From a Million Miles

The Deep Space Climate Observatory (DSCOVR) spacecraft is now firmly in place about one million miles from Earth at the L1 (Lagrange) point, a spot of gravitational balance between the sun and our planet. Jointly operated by NASA, NOAA (National Oceanic and Atmospheric Administration) and the U.S. Air Force, the spacecraft uses its digital optics to keep the sunlit face of the Earth in constant view. Researchers use its observations to measure a number of climate variables, including ozone, aerosols, cloud heights, dust and volcanic ash. The spacecraft also monitors the solar wind. Luckily, it also captures gorgeous images like the one above from July 16, 2015, showing the moon, its normally hidden far side fully lit, as it transits over the Pacific Ocean.

Learn more about DSCOVR here.

Image: The far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft’s Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth. Courtesy: NASA, NOAA.

Aspirational or Inspirational?

Both of my parents came from a background of chronic poverty and limited educational opportunity. They eventually overcame these constraints through a combination of hard work, persistence and passion. They instilled these traits in me, and somehow they did so in a way that fostered a belief in a well-balanced life containing both work and leisure.

But for many, especially in the United States, the live-to-work ethic thrives. The condition is so acute and so prevalent that most Americans in corporate jobs never take their full — and, by global standards, meager — allotment of annual vacation. Our culture is replete with tales of driven, aspirational parents — think tiger mom — who seem to have their kids’ lives mapped out from the crib.

I have to agree with columnist George Monbiot: while naked ambition may gain our children monetary riches and a higher rung on the corporate ladder, it does not a life make.

From the Guardian:

Perhaps because the alternative is too hideous to contemplate, we persuade ourselves that those who wield power know what they are doing. The belief in a guiding intelligence is hard to shake.

We know that our conditions of life are deteriorating. Most young people have little prospect of owning a home, or even of renting a decent one. Interesting jobs are sliced up, through digital Taylorism, into portions of meaningless drudgery. The natural world, whose wonders enhance our lives, and upon which our survival depends, is being rubbed out with horrible speed. Those to whom we look for guardianship, in government and among the economic elite, do not arrest this decline, they accelerate it.

The political system that delivers these outcomes is sustained by aspiration: the faith that if we try hard enough we could join the elite, even as living standards decline and social immobility becomes set almost in stone. But to what are we aspiring? A life that is better than our own, or worse?

Last week a note from an analyst at Barclays’ Global Power and Utilities group in New York was leaked. It addressed students about to begin a summer internship, and offered a glimpse of the toxic culture into which they are inducted.

“I wanted to introduce you to the 10 Power Commandments … For nine weeks you will live and die by these … We expect you to be the last ones to leave every night, no matter what … I recommend bringing a pillow to the office. It makes sleeping under your desk a lot more comfortable … the internship really is a nine-week commitment at the desk … an intern asked our staffer for a weekend off for a family reunion – he was told he could go. He was also asked to hand in his BlackBerry and pack up his desk … Play time is over and it’s time to buckle up.”

Play time is over, but did it ever begin? If these students have the kind of parents featured in the Financial Times last month, perhaps not. The article marked a new form of employment: the nursery consultant. These people, who charge from £290 an hour, must find a nursery that will put their clients’ toddlers on the right track to an elite university.

They spoke of parents who had already decided that their six-month-old son would go to Cambridge then Deutsche Bank, or whose two-year-old daughter “had a tutor for two afternoons a week (to keep on top of maths and literacy) as well as weekly phonics and reading classes, drama, piano, beginner French and swimming. They were considering adding Mandarin and Spanish. ‘The little girl was so exhausted and on edge she was terrified of opening her mouth.’”

In New York, playdate coaches charging $450 an hour train small children in the social skills that might help secure their admission to the most prestigious private schools. They are taught to hide traits that could suggest they’re on the autistic spectrum, which might reduce their chances of selection.

From infancy to employment, this is a life-denying, love-denying mindset, informed not by joy or contentment, but by an ambition that is both desperate and pointless, for it cannot compensate for what it displaces: childhood, family life, the joys of summer, meaningful and productive work, a sense of arrival, living in the moment. For the sake of this toxic culture, the economy is repurposed, the social contract is rewritten, the elite is released from tax, regulation and the other restraints imposed by democracy.

Where the elite goes, we are induced to follow. As if the assessment regimes were too lax in UK primary schools, last year the education secretary announced a new test for four-year-olds. A primary school in Cambridge has just taken the obvious next step: it is now streaming four-year-olds into classes according to perceived ability. The education and adoption bill, announced in the Queen’s speech, will turn the screw even tighter. Will this help children, or hurt them?

Read the entire column here.

Girlfriend or Nuclear Reactor?

Ask a typical 14-year-old boy whether he’d prefer a girlfriend or a home-made nuclear fusion reactor and he’s highly likely to gravitate towards the former. Not so Taylor Wilson; he seems to prefer the company of Geiger counters, particle accelerators, vacuum tubes and radioactive materials.

From the Guardian:

Taylor Wilson has a Geiger counter watch on his wrist, a sleek, sporty-looking thing that sounds an alert in response to radiation. As we enter his parents’ garage and approach his precious jumble of electrical equipment, it emits an ominous beep. Wilson is in full flow, explaining the old-fashioned control panel in the corner, and ignores it. “This is one of the original atom smashers,” he says with pride. “It would accelerate particles up to, um, 2.5m volts – so kind of up there, for early nuclear physics work.” He pats the knobs.

It was in this garage that, at the age of 14, Wilson built a working nuclear fusion reactor, bringing the temperature of its plasma core to 580 million °C – 40 times as hot as the core of the sun. This skinny kid from Arkansas, the son of a Coca-Cola bottler and a yoga instructor, experimented for years, painstakingly acquiring materials, instruments and expertise until he was able to join the elite club of scientists who have created a miniature sun on Earth.

Not long after, Wilson won $50,000 at a science fair, for a device that can detect nuclear materials in cargo containers – a counter-terrorism innovation he later showed to a wowed Barack Obama at a White House-sponsored science fair.

Wilson’s two TED talks (Yup, I Built A Nuclear Fusion Reactor and My Radical Plan For Small Nuclear Fission Reactors) have been viewed almost 4m times. A Hollywood biopic is planned, based on an imminent biography. Meanwhile, corporations have wooed him and the government has offered to buy some of his inventions. Former US under-secretary for energy, Kristina Johnson, told his biographer, Tom Clynes: “I would say someone like him comes along maybe once in a generation. He’s not just smart – he’s cool and articulate. I think he may be the most amazing kid I’ve ever met.”

Seven years on from fusing the atom, the gangly teen with a mop of blond hair is now a gangly 21-year-old with a mop of blond hair, who shuttles between his garage-cum-lab in the family’s home in Reno, Nevada, and other more conventional labs. In addition to figuring out how to intercept dirty bombs, he looks at ways of improving cancer treatment and lowering energy prices – while plotting a hi-tech business empire around the patents.

As we tour his parents’ garage, Wilson shows me what appears to be a collection of nuggets. His watch sounds another alert, but he continues lovingly to detail his inventory. “The first thing I got for my fusion project was a mass spectrometer from an ex-astronaut in Houston, Texas,” he explains. This was a treasure he obtained simply by writing a letter asking for it. He ambles over to a large steel safe, with a yellow and black nuclear hazard sticker on the front. He spins the handle, opens the door and extracts a vial with pale powder in it.

“That’s some yellowcake I made – the famous stuff that Saddam Hussein was supposedly buying from Niger. This is basically the starting point for nuclear, whether it’s a weapons programme or civilian energy production.” He gives the vial a shake. A vision of dodgy dossiers, atomic intrigue and mushroom clouds swims before me, a reverie broken by fresh beeping. “That’ll be the allanite. It’s a rare earth mineral,” Wilson explains. He picks up a dark, knobbly little rock streaked with silver. “It has thorium, a potential nuclear fuel.”

I think now may be a good moment to exit the garage, but the tour is not over. “One of the things people are surprised by is how ubiquitous radiation and radioactivity is,” Wilson says, giving me a reassuring look. “I’m very cautious. I’m actually a bit of a hypochondriac. It’s all about relative risk.”

He paces over to a plump steel tube, elevated to chest level – an object that resembles an industrial vacuum cleaner, and gleams in the gloom. This is the jewel in Wilson’s crown, the reactor he built at 14, and he gives it a tender caress. “This is safer than many things,” he says, gesturing to his Aladdin’s cave of atomic accessories. “For instance, horse riding. People fear radioactivity because it is very mysterious. You want to have respect for it, but not be paralysed by fear.”

The Wilson family home is a handsome, hacienda-style house tucked into foothills outside Reno. Unusually for the high desert at this time of year, grey clouds with bellies of rain rumble overhead. Wilson, by contrast, is all sunny smiles. He is still the slightly ethereal figure you see in the TED talks (I have to stop myself from offering him a sandwich), but the handshake is firm, the eye contact good and the energy enviable – even though Wilson has just flown back from a weekend visiting friends in Los Angeles. “I had an hour’s sleep last night. Three hours the night before that,” he says, with a hint of pride.

He does not drink or smoke, is a natty dresser (in suede jacket, skinny tie, jeans and Converse-style trainers) and he is a talker. From the moment we meet until we part hours later, he talks and talks, great billows of words about the origin of his gift and the responsibility it brings; about trying to be normal when he knows he’s special; about Fukushima, nuclear power and climate change; about fame and ego, and seeing his entire life chronicled in a book for all the world to see when he’s barely an adult and still wrestling with how to ask a girl out on a date.

The future feels urgent and mysterious. “My life has been this series of events that I didn’t see coming. It’s both exciting and daunting to know you’re going to be constantly trying to one-up yourself,” he says. “People can have their opinions about what I should do next, but my biggest pressure is internal. I hate resting on laurels. If I burn out, I burn out – but I don’t see that happening. I’ve more ideas than I have time to execute.”

Wilson credits his parents with huge influence, but wavers on the nature versus nurture debate: was he born brilliant or educated into it? “I don’t have an answer. I go back and forth.” The pace of technological change makes predicting his future a fool’s errand, he says. “It’s amazing – amazing – what I can do today that I couldn’t have done if I was born 10 years earlier.” And his ambitions are sky-high: he mentions, among many other plans, bringing electricity and state-of-the-art healthcare to the developing world.

Read the entire fascinating story here.

Image: Yellowcake, a type of uranium concentrate powder, an intermediate step in the processing of uranium ores. Courtesy of United States Department of Energy. Public Domain.

Creativity and Mental Illness

The creative genius — oft misunderstood, outcast, tortured, misanthropic, fueled by demon spirits. Yet the same description would seem equally apt for many of those unfortunate enough to suffer from mental illness. So could creativity and mental illness be high-level symptoms of a broader underlying spectrum “disorder”? After all, a not insignificant number of people and businesses regard creativity as a behavioral problem — best left outside the front door of the office. Time to check out the results of the latest psychological study.

From the Guardian:

The ancient Greeks were first to make the point. Shakespeare raised the prospect too. But Lord Byron was, perhaps, the most direct of them all: “We of the craft are all crazy,” he told the Countess of Blessington, casting a wary eye over his fellow poets.

The notion of the tortured artist is a stubborn meme. Creativity, it states, is fuelled by the demons that artists wrestle in their darkest hours. The idea is fanciful to many scientists. But a new study claims the link may be well-founded after all, and written into the twisted molecules of our DNA.

In a large study published on Monday, scientists in Iceland report that genetic factors that raise the risk of bipolar disorder and schizophrenia are found more often in people in creative professions. Painters, musicians, writers and dancers were, on average, 25% more likely to carry the gene variants than professions the scientists judged to be less creative, among which were farmers, manual labourers and salespeople.

Kari Stefansson, founder and CEO of deCODE, a genetics company based in Reykjavik, said the findings, described in the journal Nature Neuroscience, point to a common biology for some mental disorders and creativity. “To be creative, you have to think differently,” he told the Guardian. “And when we are different, we have a tendency to be labelled strange, crazy and even insane.”

The scientists drew on genetic and medical information from 86,000 Icelanders to find genetic variants that doubled the average risk of schizophrenia, and raised the risk of bipolar disorder by more than a third. When they looked at how common these variants were in members of national arts societies, they found a 17% increase compared with non-members.

The researchers went on to check their findings in large medical databases held in the Netherlands and Sweden. Among these 35,000 people, those deemed to be creative (by profession or through answers to a questionnaire) were nearly 25% more likely to carry the mental disorder variants.

Stefansson believes that scores of genes increase the risk of schizophrenia and bipolar disorder. These may alter the ways in which many people think, but in most people do nothing very harmful. But for 1% of the population, genetic factors, life experiences and other influences can culminate in problems, and a diagnosis of mental illness.

“Often, when people are creating something new, they end up straddling between sanity and insanity,” said Stefansson. “I think these results support the old concept of the mad genius. Creativity is a quality that has given us Mozart, Bach, Van Gogh. It’s a quality that is very important for our society. But it comes at a risk to the individual, and 1% of the population pays the price for it.”

Stefansson concedes that his study found only a weak link between the genetic variants for mental illness and creativity. And it is this that other scientists pick up on. The genetic factors that raise the risk of mental problems explained only about 0.25% of the variation in people’s artistic ability, the study found. David Cutler, a geneticist at Emory University in Atlanta, puts that number in perspective: “If the distance between me, the least artistic person you are going to meet, and an actual artist is one mile, these variants appear to collectively explain 13 feet of the distance,” he said.

Most of the artist’s creative flair, then, is down to different genetic factors, or to other influences altogether, such as life experiences, that set them on their creative journey.

For Stefansson, even a small overlap between the biology of mental illness and creativity is fascinating. “It means that a lot of the good things we get in life, through creativity, come at a price. It tells me that when it comes to our biology, we have to understand that everything is in some way good and in some way bad,” he said.

Read the entire article here.

Image: Vincent van Gogh, self-portrait, 1889. Courtesy of Courtauld Institute Galleries, London. Wikipaintings.org. Public Domain.

Monsters of Our Own Making

For parents: a few brief tips on how to deal with young adult children — that most pampered of generations. Tip number 1: turn off junior’s access to the family Netflix account.

From WSJ:

Congratulations. Two months ago, your kid graduated from college, bravely finishing his degree rather than dropping out to make millions on his idea for a dating app for people who throw up during CrossFit training. If he’s like a great many of his peers, he’s moved back home, where he’s figuring out how to become an adult in the same room that still has his orthodontic headgear strapped to an Iron Man helmet.

Now we’re deep into summer, and the logistical challenges of your grad really being home are sinking in. You’re constantly juggling cars, cleaning more dishes and dealing with your daughter’s boyfriend, who not only slept over but also drank your last can of Pure Protein Frosty Chocolate shake.

But the real challenge here is a problem of your own making. You see, these children are members of the Most-Loved Generation: They’ve grown up with their lives stage-managed by us, their college-acceptance-obsessed parents. Remember when Eva, at age 7, was obsessed with gymnastics…for exactly 10 months, which is why the TV in your guest room sits on top of a $2,500 pommel horse?

Now that they’re out of college, you realize what wasn’t included in that $240,000 education: classes in life skills and decision-making.

With your kid at home, you find that he’s incapable of making a single choice on his own. Like when you’re working and he interrupts to ask how many blades is the best number for a multi-blade razor. Or when you’ve just crawled into bed and hear the familiar refrain of, “Mom, what can we eat?” All those years being your kid’s concierge and coach have created a monster.

So the time has come for you to cut the cord. And by that I mean: Take your kid off your Netflix account. He will be confused and upset at first, not understanding why this is happening to him, but it’s a great opportunity for him to sign up for something all by himself.

Which brings us to money. It’s finally time to channel your Angela Merkel and get tough with your young Alexis Tsipras. Put him on a consistent allowance and make him pay the extra fees incurred when he uses the ATM at the weird little deli rather than the one at his bank, a half-block away.

Next, nudge your kid to read books about self-motivation. Begin with baby steps: Don’t just hand her “Lean In” and “I Am Malala.” Your daughter’s great, but she’s no Malala. And the only thing she’s leaning in to is a bag of kettle corn while binge-watching “Orange Is the New Black.”

Instead, over dinner, casually drop a few pearls of wisdom from “Coach Wooden’s Pyramid of Success,” such as, “Make each day your masterpiece.” Let your kid decide whether getting a high score on her “Panda Pop Bubble Shooter” iPhone game qualifies. Then hope that John Wooden has piqued her curiosity and leave his book out with a packet of Sour Patch Xploderz on top. With luck, she’ll take the bait (candy and book).

Now it’s time to work on your kid’s inability to make a decision, which, let’s be honest, you’ve instilled over the years by jumping to answer all of her texts, even that time you were at the opera. “But,” you object, “it could have been an emergency!” It wasn’t. She couldn’t remember whether she liked Dijon mustard or mayo on her turkey wrap.

Set up some outings that nurture independence. Send your kid to the grocery store with orders to buy a week of dinner supplies. She’ll ask a hundred questions about what to get, but just respond with, “Whatever looks good to you” or, “Have fun with it.” She will look at you with panic, but don’t lose your resolve. Send her out and turn your phone off to avoid a barrage of texts, such as, “They’re out of bacterial wipes to clean off the shopping cart handle. What should I do?”

Rest assured, in a couple of hours, she’ll return with “dinner”—frozen waffles and a bag of Skinny Pop popcorn. Tough it out and serve it for dinner: The name of the game is positive reinforcement.

Once she’s back you’ll inevitably get hit with more questions, like, “It’s not lost, but how expensive is that remote key for the car?” Take a deep breath and just say, “Um, I’m not sure. Why don’t you Google it?”

Read the entire story here.

The Literal Word

I’ve been following the recent story of a county clerk in Kentucky who is refusing to grant marriage licenses to same-sex couples. The clerk cites her profound Christian beliefs for contravening the new law of the land. I’m reminded that most people who ardently follow a faith, as prescribed by the literal word of a God, tend to interpret, cherry-pick and obey what they wish. And those same individuals will fervently ignore many of their God’s less palatable demands. So let’s review a few biblical pronouncements, lest we forget what all believers in the Christian Bible should be doing.

From the Independent:

Social conservatives who object to marriage licenses for gay couples claim to defend “Christian marriage,” meaning one man paired with one woman for life, which they say is prescribed by God in the Bible.

But in fact, Bible writers give the divine thumbs-up to many kinds of sexual union or marriage. They also use several literary devices to signal God’s approval for one or another sexual liaison: The law or a prophet might prescribe it, Jesus might endorse it, or God might reward it with the greatest of all blessings: boy babies who go on to become powerful men.

While the approved list does include one man coupled with one woman, the Bible explicitly endorses polygamy and sexual slavery, providing detailed regulations for each; and at times it also rewards rape and incest.

Polygamy. Polygamy is the norm in the Old Testament and accepted without reproof by Jesus (Matthew 22:23-32). Biblicalpolygamy.com contains pages dedicated to 40 biblical figures, each of whom had multiple wives.

Sex slaves. The Bible provides instructions on how to acquire several types of sex slaves. For example, if a man buys a Hebrew girl and “she please not her master” he can’t sell her to a foreigner; and he must allow her to go free if he doesn’t provide for her (Exodus 21:8).

War booty. Virgin females are counted, literally, among the booty of war. In the book of Numbers (31:18) God’s servant commands the Israelites to kill all of the used Midianite women along with all boy children, but to keep the virgin girls for themselves. The Law of Moses spells out a ritual to purify a captive virgin before sex. (Deuteronomy 21:10-14).

Incest. Incest is mostly forbidden in the Bible, but God makes exceptions. Abraham and Sarah, much favoured by God, are said to be half-siblings. Lot’s daughters get him drunk and mount him, and God rewards them with male babies who become patriarchs of great nations (Genesis 19).

Brother’s widow. If a brother dies with no children, it becomes a man’s duty to impregnate the brother’s widow. Onan is struck dead by God because he prefers to spill his seed on the ground rather than providing offspring for his brother (Genesis 38:8-10). A New Testament story (Matthew 22:24-28) shows that the tradition has survived.

Wife’s handmaid. After seven childless decades, Abraham’s frustrated wife Sarah says, “Go, sleep with my slave; perhaps I can build a family through her.”  Her slave, Hagar, becomes pregnant. Two generations later, the sister-wives of Jacob repeatedly send their slaves to him, each trying to produce more sons than the other (Genesis 30:1-22).

Read the entire story here.

Image: Biblical engraving: Sarah Offering Hagar to Her Husband, Abraham, c1897. Courtesy of Wikipedia.

The Post-Capitalism Dream

I’m not sure that I fully agree with the premises and conclusions that author Paul Mason outlines in the essay below, excerpted from his new book Postcapitalism (published on 30 July 2015). However, I’d like to believe that we could all very soon thrive in a much more equitable and socially just future society. While the sharing economy has gone some way toward democratizing work, Mason points out other, growing areas of society that are marching to the beat of a different, non-capitalist drum: volunteerism, alternative currencies, cooperatives, the gig economy, self-managed spaces, social sharing, time banks. This is all good.

It will undoubtedly take generations for society to grapple with the consequences of these shifts and, more importantly, to deal with the ongoing and accelerating upheaval wrought by ubiquitous automation. Meanwhile, the vested interests — the capitalist heads of state, the oligarchs, the monopolists, the aging plutocrats and their assorted (political) sycophants — will most certainly fight to the bitter end to maintain an iron grip on the invisible hand of the market.

From the Guardian:

The red flags and marching songs of Syriza during the Greek crisis, plus the expectation that the banks would be nationalised, revived briefly a 20th-century dream: the forced destruction of the market from above. For much of the 20th century this was how the left conceived the first stage of an economy beyond capitalism. The force would be applied by the working class, either at the ballot box or on the barricades. The lever would be the state. The opportunity would come through frequent episodes of economic collapse.

Instead over the past 25 years it has been the left’s project that has collapsed. The market destroyed the plan; individualism replaced collectivism and solidarity; the hugely expanded workforce of the world looks like a “proletariat”, but no longer thinks or behaves as it once did.

If you lived through all this, and disliked capitalism, it was traumatic. But in the process technology has created a new route out, which the remnants of the old left – and all other forces influenced by it – have either to embrace or die. Capitalism, it turns out, will not be abolished by forced-march techniques. It will be abolished by creating something more dynamic that exists, at first, almost unseen within the old system, but which will break through, reshaping the economy around new values and behaviours. I call this postcapitalism.

As with the end of feudalism 500 years ago, capitalism’s replacement by postcapitalism will be accelerated by external shocks and shaped by the emergence of a new kind of human being. And it has started.

Postcapitalism is possible because of three major changes information technology has brought about in the past 25 years. First, it has reduced the need for work, blurred the edges between work and free time and loosened the relationship between work and wages. The coming wave of automation, currently stalled because our social infrastructure cannot bear the consequences, will hugely diminish the amount of work needed – not just to subsist but to provide a decent life for all.

Second, information is corroding the market’s ability to form prices correctly. That is because markets are based on scarcity while information is abundant. The system’s defence mechanism is to form monopolies – the giant tech companies – on a scale not seen in the past 200 years, yet they cannot last. By building business models and share valuations based on the capture and privatisation of all socially produced information, such firms are constructing a fragile corporate edifice at odds with the most basic need of humanity, which is to use ideas freely.

Third, we’re seeing the spontaneous rise of collaborative production: goods, services and organisations are appearing that no longer respond to the dictates of the market and the managerial hierarchy. The biggest information product in the world – Wikipedia – is made by volunteers for free, abolishing the encyclopedia business and depriving the advertising industry of an estimated $3bn a year in revenue.

Almost unnoticed, in the niches and hollows of the market system, whole swaths of economic life are beginning to move to a different rhythm. Parallel currencies, time banks, cooperatives and self-managed spaces have proliferated, barely noticed by the economics profession, and often as a direct result of the shattering of the old structures in the post-2008 crisis.

You only find this new economy if you look hard for it. In Greece, when a grassroots NGO mapped the country’s food co-ops, alternative producers, parallel currencies and local exchange systems they found more than 70 substantive projects and hundreds of smaller initiatives ranging from squats to carpools to free kindergartens. To mainstream economics such things seem barely to qualify as economic activity – but that’s the point. They exist because they trade, however haltingly and inefficiently, in the currency of postcapitalism: free time, networked activity and free stuff. It seems a meagre and unofficial and even dangerous thing from which to craft an entire alternative to a global system, but so did money and credit in the age of Edward III.

New forms of ownership, new forms of lending, new legal contracts: a whole business subculture has emerged over the past 10 years, which the media has dubbed the “sharing economy”. Buzzwords such as the “commons” and “peer-production” are thrown around, but few have bothered to ask what this development means for capitalism itself.

I believe it offers an escape route – but only if these micro-level projects are nurtured, promoted and protected by a fundamental change in what governments do. And this must be driven by a change in our thinking – about technology, ownership and work. So that, when we create the elements of the new system, we can say to ourselves, and to others: “This is no longer simply my survival mechanism, my bolt hole from the neoliberal world; this is a new way of living in the process of formation.”

The power of imagination will become critical. In an information society, no thought, debate or dream is wasted – whether conceived in a tent camp, prison cell or the table football space of a startup company.

As with virtual manufacturing, in the transition to postcapitalism the work done at the design stage can reduce mistakes in the implementation stage. And the design of the postcapitalist world, as with software, can be modular. Different people can work on it in different places, at different speeds, with relative autonomy from each other. If I could summon one thing into existence for free it would be a global institution that modelled capitalism correctly: an open source model of the whole economy; official, grey and black. Every experiment run through it would enrich it; it would be open source and with as many datapoints as the most complex climate models.

The main contradiction today is between the possibility of free, abundant goods and information; and a system of monopolies, banks and governments trying to keep things private, scarce and commercial. Everything comes down to the struggle between the network and the hierarchy: between old forms of society moulded around capitalism and new forms of society that prefigure what comes next.

Is it utopian to believe we’re on the verge of an evolution beyond capitalism? We live in a world in which gay men and women can marry, and in which contraception has, within the space of 50 years, made the average working-class woman freer than the craziest libertine of the Bloomsbury era. Why do we, then, find it so hard to imagine economic freedom?

It is the elites – cut off in their dark-limo world – whose project looks as forlorn as that of the millennial sects of the 19th century. The democracy of riot squads, corrupt politicians, magnate-controlled newspapers and the surveillance state looks as phoney and fragile as East Germany did 30 years ago.

All readings of human history have to allow for the possibility of a negative outcome. It haunts us in the zombie movie, the disaster movie, in the post-apocalyptic wasteland of films such as The Road or Elysium. But why should we not form a picture of the ideal life, built out of abundant information, non-hierarchical work and the dissociation of work from wages?

Millions of people are beginning to realise they have been sold a dream at odds with what reality can deliver. Their response is anger – and retreat towards national forms of capitalism that can only tear the world apart. Watching these emerge, from the pro-Grexit left factions in Syriza to the Front National and the isolationism of the American right has been like watching the nightmares we had during the Lehman Brothers crisis come true.

We need more than just a bunch of utopian dreams and small-scale horizontal projects. We need a project based on reason, evidence and testable designs, that cuts with the grain of history and is sustainable by the planet. And we need to get on with it.

Read the excerpt here.

Image: The Industrial Workers of the World poster “Pyramid of Capitalist System” (1911). Courtesy of Wikipedia. Public Domain.

Cause and Effect

One of the most fundamental tenets of our macroscopic world is the notion that an effect has a cause. Throw a pebble (cause) into a still pond and the ripples (effect) will be visible for all to see. Yet down at the microscopic level, physicists have determined through their mathematical convolutions that no such directionality exists: nothing in the fundamental laws precludes them from running in reverse. Still, we never witness the ripples in a pond converging and ejecting a pebble, which then flies back into a catcher’s hand.

Of course, this quandary has kept many a philosopher’s pencil well sharpened while physicists continue to scratch their heads. So is cause and effect merely a coincidental illusion? Or does our physics operate in only one direction, governed by a yet-to-be-discovered fundamental law?

Philosopher Mathias Frisch, author of Causal Reasoning in Physics, offers a great summary of current thinking, but no fundamental breakthrough.

From Aeon:

Do early childhood vaccinations cause autism, as the American model Jenny McCarthy maintains? Are human carbon emissions at the root of global warming? Come to that, if I flick this switch, will it make the light on the porch come on? Presumably I don’t need to persuade you that these would be incredibly useful things to know.

Since anthropogenic greenhouse gas emissions do cause climate change, cutting our emissions would make a difference to future warming. By contrast, autism cannot be prevented by leaving children unvaccinated. Now, there’s a subtlety here. For our judgments to be much use to us, we have to distinguish between causal relations and mere correlations. From 1999 to 2009, the number of people in the US who fell into a swimming pool and drowned varies with the number of films in which Nicolas Cage appeared – but it seems unlikely that we could reduce the number of pool drownings by keeping Cage off the screen, desirable as the remedy might be for other reasons.

In short, a working knowledge of the way in which causes and effects relate to one another seems indispensable to our ability to make our way in the world. Yet there is a long and venerable tradition in philosophy, dating back at least to David Hume in the 18th century, that finds the notions of causality to be dubious. And that might be putting it kindly.

Hume argued that when we seek causal relations, we can never discover the real power; the, as it were, metaphysical glue that binds events together. All we are able to see are regularities – the ‘constant conjunction’ of certain sorts of observation. He concluded from this that any talk of causal powers is illegitimate. Which is not to say that he was ignorant of the central importance of causal reasoning; indeed, he said that it was only by means of such inferences that we can ‘go beyond the evidence of our memory and senses’. Causal reasoning was somehow both indispensable and illegitimate. We appear to have a dilemma.

Hume’s remedy for such metaphysical quandaries was arguably quite sensible, as far as it went: have a good meal, play backgammon with friends, and try to put it out of your mind. But in the late 19th and 20th centuries, his causal anxieties were reinforced by another problem, arguably harder to ignore. According to this new line of thought, causal notions seemed peculiarly out of place in our most fundamental science – physics.

There were two reasons for this. First, causes seemed too vague for a mathematically precise science. If you can’t observe them, how can you measure them? If you can’t measure them, how can you put them in your equations? Second, causality has a definite direction in time: causes have to happen before their effects. Yet the basic laws of physics (as distinct from such higher-level statistical generalisations as the laws of thermodynamics) appear to be time-symmetric: if a certain process is allowed under the basic laws of physics, a video of the same process played backwards will also depict a process that is allowed by the laws.

The 20th-century English philosopher Bertrand Russell concluded from these considerations that, since cause and effect play no fundamental role in physics, they should be removed from the philosophical vocabulary altogether. ‘The law of causality,’ he said with a flourish, ‘like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed not to do harm.’

Neo-Russellians in the 21st century express their rejection of causes with no less rhetorical vigour. The philosopher of science John Earman of the University of Pittsburgh maintains that the wooliness of causal notions makes them inappropriate for physics: ‘A putative fundamental law of physics must be stated as a mathematical relation without the use of escape clauses or words that require a PhD in philosophy to apply (and two other PhDs to referee the application, and a third referee to break the tie of the inevitable disagreement of the first two).’

This is all very puzzling. Is it OK to think in terms of causes or not? If so, why, given the apparent hostility to causes in the underlying laws? And if not, why does it seem to work so well?

A clearer look at the physics might help us to find our way. Even though (most of) the basic laws are symmetrical in time, there are many arguably non-thermodynamic physical phenomena that can happen only one way. Imagine a stone thrown into a still pond: after the stone breaks the surface, waves spread concentrically from the point of impact. A common enough sight.

Now, imagine a video clip of the spreading waves played backwards. What we would see are concentrically converging waves. For some reason this second process, which is the time-reverse of the first, does not seem to occur in nature. The process of waves spreading from a source looks irreversible. And yet the underlying physical law describing the behaviour of waves – the wave equation – is as time-symmetric as any law in physics. It allows for both diverging and converging waves. So, given that the physical laws equally allow phenomena of both types, why do we frequently observe organised waves diverging from a source but never coherently converging waves?

Physicists and philosophers disagree on the correct answer to this question – which might be fine if it applied only to stones in ponds. But the problem also crops up with electromagnetic waves and the emission of light or radio waves: anywhere, in fact, that we find radiating waves. What to say about it?

On the one hand, many physicists (and some philosophers) invoke a causal principle to explain the asymmetry. Consider an antenna transmitting a radio signal. Since the source causes the signal, and since causes precede their effects, the radio waves diverge from the antenna after it is switched on simply because they are the repercussions of an initial disturbance, namely the switching on of the antenna. Imagine the time-reverse process: a radio wave steadily collapses into an antenna before the latter has been turned on. On the face of it, this conflicts with the idea of causality, because the wave would be present before its cause (the antenna) had done anything. David Griffiths, Emeritus Professor of Physics at Reed College in Oregon and the author of a widely used textbook on classical electrodynamics, favours this explanation, going so far as to call a time-asymmetric principle of causality ‘the most sacred tenet in all of physics’.

On the other hand, some physicists (and many philosophers) reject appeals to causal notions and maintain that the asymmetry ought to be explained statistically. The reason why we find coherently diverging waves but never coherently converging ones, they maintain, is not that wave sources cause waves, but that a converging wave would require the co-ordinated behaviour of ‘wavelets’ coming in from multiple different directions of space – delicately co-ordinated behaviour so improbable that it would strike us as nearly miraculous.

It so happens that this wave controversy has quite a distinguished history. In 1909, a few years before Russell’s pointed criticism of the notion of cause, Albert Einstein took part in a published debate concerning the radiation asymmetry. His opponent was the Swiss physicist Walther Ritz, a name you might not recognise.

It is in fact rather tragic that Ritz did not make larger waves in his own career, because his early reputation surpassed Einstein’s. The physicist Hermann Minkowski, who taught both Ritz and Einstein in Zurich, called Einstein a ‘lazy dog’ but had high praise for Ritz.  When the University of Zurich was looking to appoint its first professor of theoretical physics in 1909, Ritz was the top candidate for the position. According to one member of the hiring committee, he possessed ‘an exceptional talent, bordering on genius’. But he suffered from tuberculosis, and so, due to his failing health, he was passed over for the position, which went to Einstein instead. Ritz died that very year at age 31.

Months before his death, however, Ritz published a joint letter with Einstein summarising their disagreement. While Einstein thought that the irreversibility of radiation processes could be explained probabilistically, Ritz proposed what amounted to a causal explanation. He maintained that the reason for the asymmetry is that an elementary source of radiation has an influence on other sources in the future and not in the past.

This joint letter is something of a classic text, widely cited in the literature. What is less well-known is that, in the very same year, Einstein demonstrated a striking reversibility of his own. In a second published letter, he appears to take a position very close to Ritz’s – the very view he had dismissed just months earlier. According to the wave theory of light, Einstein now asserted, a wave source ‘produces a spherical wave that propagates outward. The inverse process does not exist as elementary process’. The only way in which converging waves can be produced, Einstein claimed, was by combining a very large number of coherently operating sources. He appears to have changed his mind.

Given Einstein’s titanic reputation, you might think that such a momentous shift would occasion a few ripples in the history of science. But I know of only one significant reference to his later statement: a letter from the philosopher Karl Popper to the journal Nature in 1956. In this letter, Popper describes the wave asymmetry in terms very similar to Einstein’s. And he also makes one particularly interesting remark, one that might help us to unpick the riddle. Coherently converging waves, Popper insisted, ‘would demand a vast number of distant coherent generators of waves the co-ordination of which, to be explicable, would have to be shown as originating from the centre’ (my italics).

This is, in fact, a particular instance of a much broader phenomenon. Consider two events that are spatially distant yet correlated with one another. If they are not related as cause and effect, they tend to be joint effects of a common cause. If, for example, two lamps in a room go out suddenly, it is unlikely that both bulbs just happened to burn out simultaneously. So we look for a common cause – perhaps a circuit breaker that tripped.

Common-cause inferences are so pervasive that it is difficult to imagine what we could know about the world beyond our immediate surroundings without them. Hume was right: judgments about causality are absolutely essential in going ‘beyond the evidence of the senses’. In his book The Direction of Time (1956), the philosopher Hans Reichenbach formulated a principle underlying such inferences: ‘If an improbable coincidence has occurred, there must exist a common cause.’ To the extent that we are bound to apply Reichenbach’s rule, we are all like the hard-boiled detective who doesn’t believe in coincidences.

Read the entire article here.
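A brief aside of my own on the time symmetry at the heart of Frisch’s puzzle: it can be read straight off the wave equation he mentions. For a disturbance \(\phi(\mathbf{x}, t)\) spreading at speed \(c\),

\[ \frac{\partial^2 \phi}{\partial t^2} = c^2 \nabla^2 \phi . \]

Because time enters only through a second derivative, substituting \(t \to -t\) leaves the equation unchanged, so every legitimate diverging-wave solution has an equally legitimate converging twin obtained by running it backwards. The asymmetry we actually observe must therefore come from somewhere else: from initial conditions and statistics or, as Ritz argued, from a causal principle, not from the equation itself.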

Dismaland

A dreary, sardonic, anti-establishment theme park could only happen in the UK. Let’s face it, the corporate optimists running the US would never allow such a pessimistic and apocalyptic vision to unfold in the land of Disney and Nickelodeon.

Thus, residents of the UK are the sole, fortunate recipients of a sarcastic visual nightmare curated by Banksy and a posse of fellow pop-culture-skewering artists. Dismaland — a Bemusement Park — is hosted in the appropriately grey seafront venue of Weston-super-Mare. But grab your tickets soon: the un-theme park is open only from August 22 to September 27, 2015.

Visit Dismaland online, here.

Image courtesy of Google Search.

Psychic Media Watch

Watching the media is one of my favorite amateur pursuits. It’s a continuous source of paradox, infotainment, hypocrisy, truthiness (Stephen Colbert, 2005), loud-mouthery (me, 2015) and, hence, enjoyment. So when two opposing headlines collide midway across the Atlantic, it’s hard for me to resist highlighting the dissonance. I snapped both these stories on the same day, August 28, 2015. The headlines read:

New York Times:

Apparently, fortunetelling is “a scam,” according to convicted New York psychic Celia Mitchell.

The Independent:

Yet, in the UK, the College of Policing recommends using psychics to find missing persons.

Enjoy.

Bang Bang, You’re Dead. The Next Great Reality TV Show

Aside from my disbelief that America can let the pathetic and harrowing violence from guns continue, the latest shocking episode in Virginia raises another disturbing thought. And, Jonathan Jones has captured it quite aptly. Are we increasingly internalizing real world violence as a vivid but trivial game? Despite trails of murder victims and untold trauma to families and friends, the rest of us are lulled into dream-like detachment. The violence is just like a video game, right? The violence is played out as a reality TV show, right? And we know both are just fiction — it’s not news, it’s titillating, voyeuristic entertainment. So, there is no need for us to do anything. Let’s just all sit back and wait for the next innovative installment in America’s murderous screenplay. Bang bang, you’re dead! The show must go on.

Or, you could do something different, however small, and I don’t mean recite your go-to prayer or converge around a candle!

From Jonathan Jones over at the Guardian:

Vester Flanagan’s video of his own murderous shooting of Alison Parker and Adam Ward shows a brutal double killing from the shooter’s point of view. While such a sick stunt echoes the horror film Peeping Tom by British director Michael Powell, in which a cameraman films his murders, this is not fiction. It is reality – or the closest modern life gets to reality.

I agree with those who say such excreta of violence should not be shared on social media, let alone screened by television stations or hosted by news websites. But like everything else that simply should not happen, the broadcasting and circulation of this monstrous video has happened. It is one more step in the destruction of boundaries that seems a relentless rush of our time. Nothing is sacred. Not even the very last moments of Alison Parker as we see, from Flanagan’s point of view, Flanagan’s gun pointing at her.

Like the giant gun Alfred Hitchcock used to create a disturbing point of view shot in Spellbound, the weapon dominates the sequence I have seen (I have no intention of seeking out the other scenes). The gun is in Flanagan’s hand and it gives him power. It is held there, shown to the camera, like a child’s proud toy or an exposed dick in his hand – it is obscene because you can see that it is so important to him, that it is supposed to be some kind of answer, revenger or – as gun fans like to nickname America’s most famous gun the Colt 45 – “the Equaliser”. The way Flanagan focuses on his gun revealed the madness of America’s gun laws because it shows the infantile and pathetic relationship the killer appears to have with his weapon. How can it make sense to give guns so readily to troubled individuals?

What did the killer expect viewers to get from watching his video? The horrible conclusion has to be that he expected empathy. Surely, that is not possible. The person who you care about when seeing this is unambiguously his victim. This is, viewed with any humanity at all, a harrowing view of the evil of killing another person. I watched it once. I can’t look again at Alison Parker’s realization of her plight.

The sense that we somehow have a right to see this, the decision of many media outlets to screen it, has a lot to do with the television trappings of this crime. Because part of the attack was seen and heard live on air, because the victims and the perpetrator all worked for the same TV station, there’s something stagey about it all. Sadly people so enjoy true life crime stories and this one has a hokey TV setting that recalls many fictional plots of films and TV programs.

It exposes the paradox of ‘reality television’ – that people on television are not real to the audience at all. The death of a presenter is therefore something that can be replayed on screens with impunity. To see how bizarre and improper this is, imagine if anyone broadcast or hosted a serial killer’s videos of graphic murders. How is viewing this better?

But there is still another level of unreality. The view of that gun pointing at Parker resembles video games like Call of Duty that similarly show your gun pointing at virtual enemies. Is this more than a coincidence? It is complicated by the fact that Flanagan had worked in television. His experience of cameras was not just virtual. So his act of videoing his crime would seem to be another crass, mad way of getting “revenge” on former colleagues. But the resemblance to video games is nevertheless eerie. It adds to the depressing conclusion that we may see more images taken by killers, more dead-eyed recordings of inhuman acts. For video games do create fantasy worlds in which pointing a gun is such a light thing to do.

In this film from the abyss the gun is used as if it was game. Pointed at real people with the ease of manipulating a joystick. And bang bang, they are dead.

Read the entire article here.

Image courtesy of Google Search.

The Tragedy. The Reaction

Another day, another dark and twisted murder in the United States facilitated by the simple convenience of a gun. The violence and horror seems to become more incredible each time: murder in restaurants, murder at the movie theater, murder on the highway, murder in the convenience store, murder at work, murder in a place of worship, and now murder on-air, live and staged via social media.

But, as I’ve mentioned before, the real tragedy is the inaction of the people. Oh, apologies, there is a modicum of action, but it is inconsequential, with all due respect to the victims’ families. After each mass shooting — we don’t hear much about individual murders anymore (far too common) — the pattern is lamentably predictable: tears and grief; headlines of disbelief and horror; mass soul-searching (lasting several minutes at most); prayer and words, often spoken by a community or national leader; tributes to the victims and sympathy for the families and friends; candlelight vigils, balloons, flowers and cards at the crime scene. It’s all so sad and pathetic. Another day, another mass murder. Repeat the inaction.

Until individuals, neighbors and communities actually take real action to curb gun violence, these sad tragedies and empty gestures will continue to loop endlessly.

Image courtesy of Google Search.

HR and the Evil Omnipotence of the Passive Construction

Next time you browse through your company’s compensation or business expense policies, or for that matter, anything written by the human resources (HR) department, cast your mind to George Orwell. In one of his critical essays, Politics and the English Language, Orwell makes a clear case for the connection between linguistic obfuscation and political power. While Orwell’s obsession was with the political machine, you could just as well apply his reasoning to the mangled literary machinations of every corporate HR department.

Oh, the pen is indeed mightier than the sword, especially when it is used to construct obtuse passive sentences without a subject — perfect for a rulebook that all citizens must follow and that no one can challenge.

From the Guardian:

In our age there is no such thing as ‘keeping out of human resources’. All issues are human resource issues, and human resources itself is a mass of lies, evasions, folly, hatred and schizophrenia.

OK, that’s not exactly what Orwell wrote. The hair-splitters among you will moan that I’ve taken the word “politics” out of the above and replaced it with “human resources”. Sorry.

But I think there’s no denying that had he been alive today, Orwell – the great opponent and satirist of totalitarianism – would have deplored the bureaucratic repression of HR. He would have hated their blind loyalty to power, their unquestioning faithfulness to process, their abhorrence of anything or anyone deviating from the mean.

In particular, Orwell would have utterly despised the language that HR people use. In his excellent essay Politics and the English Language (where he began the thought that ended with Newspeak), Orwell railed against the language crimes committed by politicians.

In our time, political speech and writing are largely the defence of the indefensible … Thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenceless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification. Millions of peasants are robbed of their farms and sent trudging along the roads with no more than they can carry: this is called transfer of population or rectification of frontiers. People are imprisoned for years without trial, or shot in the back of the neck or sent to die of scurvy in Arctic lumber camps: this is called elimination of unreliable elements.

Repeat the politics/human resources switch in the above and the argument remains broadly the same. Yes, HR is not explaining away murders, but it nonetheless deliberately misuses language as a sort of low-tech mind control to avert our eyes from office atrocities and keep us fixed on our inboxes. Thus mass sackings are wrapped up in cowardly sophistry and called rightsizings, individuals are offboarded to the jobcentre and the few hardy souls left are consoled by their membership of a more streamlined organisation.

Orwell would have despised the passive constructions that are the HR department’s default setting. Want some flexibility in your contract? HR says company policy is unable to support that. Forgotten to accede to some arbitrary and impractical office rule? HR says we are minded to ask everyone to remember that it is essential to comply with rule X. Try to question whether an ill-judged commitment could be reversed? HR apologises meekly that the decision has been made.

Not giving subjects to any of these responses is a deliberate ploy. Subjects give ownership. They imbue accountability. Not giving sentences subjects means that HR is passing the buck, but to no one in particular. And with no subject, no one can be blamed, or protested against.

The passive construction is also designed to give the sense that it’s not HR speaking, but that they are the conduit for a higher-up and incontestable power. It’s designed to be both authoritative and banal, so that we torpidly accept it, like the sovereignty of the Queen. It’s saying: “This is the way things are – deal with it because it isn’t changing.” It’s indifferent and deliberately opaque. It’s the worst kind of utopianism (the kind David Graeber targets in his recent book on “stupidity and the secret joys of bureaucracy”), where system and rule are king and hang the individual. It’s deeply, deeply oppressive.

Annual leave is perhaps an even worse example of HR’s linguistic malpractice. The phrase gives the sense that we are not sitting in the office but rather fighting some dismal war and that we should be grateful for the mercy of Field Marshal HR in allowing us a finite absence from the front line. Is it too indulgent and too frivolous to say that we are going on holiday (even if we’re just taking the day to go to Ikea)? Would it so damage our career prospects? Would the emerging markets of the world be emboldened by the decadence and complacency of saying we’re going on hols? I don’t think so, but they clearly do.

Actually, I don’t think it’s so much of a stretch to imagine Orwell himself establishing the whole HR enterprise as a sort of grim parody of Stalinism; a never-ending, ever-expanding live action art installation sequel to Animal Farm and Nineteen Eighty-Four.

Look at your office’s internal newsletter. Is it an incomprehensible black hole of sense? Is it trying to prod you into a place of content, incognisant of all the everyday hardships and irritations you endure? If your answer is yes, then I think that like me, you find it fairly easy to imagine Orwell composing these Newspeak emails from beyond the grave to make us believe that War is Peace, Freedom is Slavery and 2+2=5.

Delving deeper, the parallels become increasingly hard to ignore. Company restructures and key performance indicators make no sense in the abstract, merely serving to demotivate the workforce, sap confidence and obstruct productivity. So are they actually cleverly designed parodies of Stalin’s purges and the cult of Stakhanovism?

Read the entire story here.

 

Passion, Persistence and Pluto


Alliterations aside, this is a great story of how passion, persistence and persuasiveness can make a real impact. This is especially significant when you look at the triumphant climax to NASA’s unlikely New Horizons mission to Pluto. Over 20 years in the making and fraught with budget cuts and political infighting — NASA is known for its bureaucracy — the mission reached its zenith last week. While thanks go to the many hundreds of engineers and scientists involved from its inception, the mission would not have succeeded without the vision and determination of one person — Alan Stern.

In a music track called “Over the Sea” by the 1980s (and 90s) band Information Society, there is a sample of Star Trek’s Captain Kirk saying,

“In every revolution there is one man with a vision.”

How appropriate.

From Smithsonian:

On July 14 at approximately 8 a.m. Eastern time, a half-ton NASA spacecraft that has been racing across the solar system for nine and a half years will finally catch up with tiny Pluto, at three billion miles from the Sun the most distant object that anyone or anything from Earth has ever visited. Invisible to the naked eye, Pluto wasn’t even discovered until 1930, and has been regarded as our solar system’s oddball ever since, completely different from the rocky planets close to the Sun, Earth included, and equally unlike the outer gas giants. This quirky and mysterious little world will swing into dramatic view as the New Horizons spacecraft makes its closest approach, just 6,000 miles away, and onboard cameras snap thousands of photographs. Other instruments will gauge Pluto’s topography, surface and atmospheric chemistry, temperature, magnetic field and more. New Horizons will also take a hard look at Pluto’s five known moons, including Charon, the largest. It might even find other moons, and maybe a ring or two.
It was barely 20 years ago when scientists first learned that Pluto, far from alone at the edge of the solar system, was just one in a vast swarm of small frozen bodies in wide, wide orbit around the Sun, like a ring of debris left at the outskirts of a construction zone. That insight, among others, has propelled the New Horizons mission. Understand Pluto and how it fits in with those remnant bodies, scientists say, and you can better understand the formation and evolution of the solar system itself.
If all goes well, “encounter day,” as the New Horizons team calls it, will be a cork-popping celebration of tremendous scientific and engineering prowess—it’s no small feat to fling a collection of precision instruments through the frigid void at speeds up to 47,000 miles an hour to rendezvous nearly a decade later with an icy sphere about half as wide as the United States is broad. The day will also be a sweet vindication for the leader of the mission, Alan Stern. A 57-year-old astronomer, aeronautical engineer, would-be astronaut and self-described “rabble-rouser,” Stern has spent the better part of his career fighting to get Pluto the attention he thinks it deserves. He began pushing NASA to approve a Pluto mission nearly a quarter of a century ago, then watched in frustration as the agency gave the green light to one Pluto probe after another, only to later cancel them. “It was incredibly frustrating,” he says, “like watching Lucy yank the football away from Charlie Brown, over and over.” Finally, Stern recruited other scientists and influential senators to join his lobbying effort, and because underdog Pluto has long been a favorite of children, proponents of the mission savvily enlisted kids to write to Congress, urging that funding for the spacecraft be approved.
New Horizons mission control is headquartered at Johns Hopkins University’s Applied Physics Laboratory near Baltimore, where Stern and several dozen other Plutonians will be installed for weeks around the big July event, but I caught up with Stern late last year in Boulder at the Southwest Research Institute, where he is an associate vice president for research and development. A picture window in his impressive office looks out onto the Rockies, where he often goes to hike and unwind. Trim and athletic at 5-foot-4, he’s also a runner, a sport he pursues with the exactitude of, well, a rocket scientist. He has calculated his stride rate, and says (only half-joking) that he’d be world-class if only his legs were longer. It wouldn’t be an overstatement to say that he is a polarizing figure in the planetary science community; his single-minded pursuit of Pluto has annoyed some colleagues. So has his passionate defense of Pluto in the years since astronomy officials famously demoted it to a “dwarf planet,” giving it the bum’s rush out of the exclusive solar system club, now limited to the eight biggies.
The timing of that insult, which is how Stern and other jilted Pluto-lovers see it, could not have been more dramatic, coming in August 2006, just months after New Horizons had rocketed into space from Cape Canaveral. What makes Pluto’s demotion even more painfully ironic to Stern is that some of the groundbreaking scientific discoveries that he had predicted greatly strengthened his opponents’ arguments, all while opening the door to a new age of planetary science. In fact, Stern himself used the term “dwarf planet” as early as the 1990s.
The wealthy astronomer Percival Lowell, widely known for insisting there were artificial canals on Mars, first started searching for Pluto at his private observatory in Arizona in 1905. Careful study of planetary orbits had suggested that Neptune was not the only object out there exerting a gravitational tug on Uranus, and Lowell set out to find what he dubbed “Planet X.” He died without success, but a young man named Clyde Tombaugh, who had a passion for astronomy though no college education, arrived at the observatory and picked up the search in 1929. After 7,000 hours staring at some 90 million star images, he caught sight of a new planet on his photographic plates in February 1930. The name Pluto, the Roman god of the underworld, was suggested by an 11-year-old British girl named Venetia Burney, who had been discussing the discovery with her grandfather. The name was unanimously adopted by the Lowell Observatory staff in part because the first two letters are Percival Lowell’s initials.
Pluto’s solitary nature baffled scientists for decades. Shouldn’t there be other, similar objects out beyond Neptune? Why did the solar system appear to run out of material so abruptly? “It seemed just weird that the outer solar system would be so empty, while the inner solar system was filled with planets and asteroids,” recalls David Jewitt, a planetary scientist at UCLA. Throughout the decades various astronomers proposed that there were smaller bodies out there, yet unseen. Comets that periodically sweep in to light up the night sky, they speculated, probably hailed from a belt or disk of debris at the solar system’s outer reaches.
Stern, in a paper published in 1991 in the journal Icarus, argued not only that the belt existed, but also that it contained things as big as Pluto. They were simply too far away, and too dim, to be easily seen. His reasoning: Neptune’s moon Triton is a near-twin of Pluto, and probably orbited the Sun before it was captured by Neptune’s gravity. Uranus has a drastically tilted axis of rotation, probably due to a collision eons ago with a Pluto-size object. That made three Pluto-like objects at least, which suggested to Stern there had to be more. The number of planets in the solar system would someday need to be revised upward, he thought. There were probably hundreds, with the majority, including Pluto, best assigned to a subcategory of “dwarf planets.”
Just a year later, the first object (other than Pluto and Charon) was discovered in that faraway region, called the Kuiper Belt after the Dutch-born astronomer Gerard Kuiper. Found by Jewitt and his colleague, Jane Luu, it’s only about 100 miles across, while Pluto spans 1,430 miles. A decade later, Caltech astronomers Mike Brown and Chad Trujillo discovered an object about half the size of Pluto, large enough to be spherical, which they named Quaoar (pronounced “kwa-war” and named for the creator god in the mythology of the pre-Columbian Tongva people native to the Los Angeles basin). It was followed in quick succession by Haumea, and in 2005, Brown’s group found Eris, about the same size as Pluto and also spherical.
Planetary scientists have spotted many hundreds of smaller Kuiper Belt Objects; there could be as many as ten billion that are a mile across or more. Stern will take a more accurate census of their sizes with the cameras on New Horizons. His simple idea is to map and measure Pluto’s and Charon’s craters, which are signs of collisions with other Kuiper Belt Objects and thus serve as a representative sample. When Pluto is closest to the Sun, frozen surface material evaporates into a temporary atmosphere, some of which escapes into space. This “escape erosion” can erase older craters, so Pluto will provide a recent census. Charon, without this erosion, will offer a record that spans cosmic history. In one leading theory, the original, much denser Kuiper Belt would have formed dozens of planets as big or bigger than Earth, but the orbital changes of Jupiter and Saturn flung most of the building blocks away before that could happen, nipping planet formation in the bud.
By the time New Horizons launched at Cape Canaveral on January 19, 2006, it had become difficult to argue that Pluto was materially different from many of its Kuiper Belt neighbors. Curiously, no strict definition of “planet” existed at the time, so some scientists argued that there should be a size cutoff, to avoid making the list of planets too long. If you called Pluto and the other relatively small bodies something else, you’d be left with a nice tidy eight planets—Mercury through Neptune. In 2000, Neil deGrasse Tyson, director of the Hayden Planetarium in New York City, had famously chosen the latter option, leaving Pluto out of a solar system exhibit.
Then, with New Horizons less than 15 percent of the way to Pluto, members of the International Astronomical Union, responsible for naming and classifying celestial objects, voted at a meeting in Prague to make that arrangement official. Pluto and the others were now to be known as dwarf planets, which, in contrast to Stern’s original meaning, were not planets. They were an entirely different sort of beast. Because he discovered Eris, Caltech’s Brown is sometimes blamed for the demotion. He has said he would have been fine with either outcome, but he did title his 2010 memoir How I Killed Pluto and Why It Had It Coming.
“It’s embarrassing,” recalls Stern, who wasn’t in Prague for the vote. “It’s wrong scientifically and it’s wrong pedagogically.” He said the same sort of things publicly at the time, in language that’s unusually blunt in the world of science. Among the dumbest arguments for demoting Pluto and the others, Stern noted, was the idea that having 20 or more planets would be somehow inconvenient. Also ridiculous, he says, is the notion that a dwarf planet isn’t really a planet. “Is a dwarf evergreen not an evergreen?” he asks.
Stern’s barely concealed contempt for what he considers foolishness of the bureaucratic and scientific varieties hasn’t always endeared him to colleagues. One astronomer I asked about Stern replied, “My mother taught me that if you can’t say anything nice about someone, don’t say anything.” Another said, “His last name is ‘Stern.’ That tells you all you need to know.”
DeGrasse Tyson, for his part, offers measured praise: “When it comes to everything from rousing public sentiment in support of astronomy to advocating space science missions to defending Pluto, Alan Stern is always there.”
Stern also inspires less reserved admiration. “Alan is incredibly creative and incredibly energetic,” says Richard Binzel, an MIT planetary scientist who has known Stern since their graduate-school days. “I don’t know where he gets it.”
Read the entire article here.

Image: New Horizons Principal Investigator Alan Stern of Southwest Research Institute (SwRI), Boulder, CO, celebrates with New Horizons Flight Controllers after they received confirmation from the spacecraft that it had successfully completed the flyby of Pluto, Tuesday, July 14, 2015 in the Mission Operations Center (MOC) of the Johns Hopkins University Applied Physics Laboratory (APL), Laurel, Maryland. Public domain.

The Big Breakthrough Listen

If you were a Russian billionaire with money to burn and a penchant for astronomy and physics, what would you do? Well, rather than spend it on a 1,000 ft long super-yacht, you might want to spend it on the search for extraterrestrial intelligence. That’s what Yuri Milner is doing. So, hats off to him and his colleagues.

Though, I do hope any far-distant aliens have similar, or greater, sums of cash to throw at equipment to transmit a signal so that we may receive it. Also, I have to wonder what alien oligarchs spend their excess millions and billions on — and what type of monetary system they use (hopefully not Euros).

From the Guardian:

Astronomers are to embark on the most intensive search for alien life yet by listening out for potential radio signals coming from advanced civilisations far beyond the solar system.

Leading researchers have secured time on two of the world’s most powerful telescopes in the US and Australia to scan the Milky Way and neighbouring galaxies for radio emissions that betray the existence of life elsewhere. The search will be 50 times more sensitive, and cover 10 times more sky, than previous hunts for alien life.

The Green Bank Telescope in West Virginia, the largest steerable telescope on the planet, and the Parkes Observatory in New South Wales are contracted to lead the unprecedented search that will start in January 2016. In tandem, the Lick Observatory in California will perform the most comprehensive search for optical laser transmissions beamed from other planets.

Operators have signed agreements that hand the scientists thousands of hours of telescope time per year to eavesdrop on planets that orbit the million stars closest to Earth and the 100 nearest galaxies. The telescopes will scan the centre of the Milky Way and the entire length of the galactic plane.

Launched on Monday at the Royal Society in London, with the Cambridge cosmologist Stephen Hawking, the Breakthrough Listen project has some of the world’s leading experts at the helm. Among them are Lord Martin Rees, the astronomer royal, Geoff Marcy, who has discovered more planets beyond the solar system than anyone, and the veteran US astronomer Frank Drake, a pioneer in the search for extraterrestrial intelligence (Seti).

Stephen Hawking said the effort was “critically important” and raised hopes for answering the question of whether humanity has company in the universe. “It’s time to commit to finding the answer, to search for life beyond Earth,” he said. “Mankind has a deep need to explore, to learn, to know. We also happen to be sociable creatures. It is important for us to know if we are alone in the dark.”

The project will not broadcast signals into space, because scientists on the project believe humans have more to gain from simply listening out for others. Hawking, however, warned against shouting into the cosmos, because some advanced alien civilisations might possess the same violent, aggressive and genocidal traits found among humans.

“A civilisation reading one of our messages could be billions of years ahead of us. If so they will be vastly more powerful and may not see us as any more valuable than we see bacteria,” he said.

The alien hunters are the latest scientists to benefit from the hefty bank balance of Yuri Milner, a Russian internet billionaire, who quit a PhD in physics to make his fortune. In the past five years, Milner has handed out prizes worth tens of millions of dollars to physicists, biologists and mathematicians, to raise the public profile of scientists. He is the sole funder of the $100m Breakthrough Listen project.

“It is our responsibility as human beings to use the best equipment we have to try to answer one of the biggest questions: are we alone?” Milner told the Guardian. “We cannot afford not to do this.”

Milner was named after Yuri Gagarin, who became the first person to fly in space in 1961, the year he was born.

The Green Bank and Parkes observatories are sensitive enough to pick up radio signals as strong as common aircraft radar from planets around the nearest 1,000 stars. Civilisations as far away as the centre of the Milky Way could be detected if they emit radio signals more than 10 times the power of the Arecibo planetary radar on Earth. The Lick Observatory can pick up laser signals as weak as 100W from nearby stars 25tn miles away.

Read the entire story here.
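A quick aside of my own, not from the article: the inverse-square law is what drives those sensitivity figures. The sketch below is a minimal back-of-envelope illustration of how faint a transmission becomes over interstellar distances; the transmitter powers and distances are assumed round numbers, purely for illustration.

```python
# Back-of-envelope inverse-square-law sketch (my own illustration, not from
# the article). It shows why a transmitter near the centre of the Milky Way
# must be far more powerful than one around a nearby star to produce a
# detectable flux at Earth. All powers and distances below are assumed,
# round numbers for illustration only.

import math

LIGHT_YEAR_M = 9.461e15  # metres in one light year


def flux_at_earth(eirp_watts: float, distance_ly: float) -> float:
    """Flux (W/m^2) at Earth from an isotropic-equivalent transmitter.

    eirp_watts: effective isotropic radiated power of the transmitter.
    distance_ly: distance to the transmitter in light years.
    """
    d_m = distance_ly * LIGHT_YEAR_M
    return eirp_watts / (4.0 * math.pi * d_m ** 2)


if __name__ == "__main__":
    # Assumed example transmitters and distances (orders of magnitude only):
    aircraft_radar_w = 1e5       # ~100 kW class radar transmitter
    arecibo_like_w = 1e6         # ~1 MW planetary-radar transmitter
    nearby_star_ly = 50          # a "nearby" star for this sketch
    galactic_centre_ly = 26_000  # approximate distance to the galactic centre

    near = flux_at_earth(aircraft_radar_w, nearby_star_ly)
    far = flux_at_earth(10 * arecibo_like_w, galactic_centre_ly)

    print(f"Radar-class signal from {nearby_star_ly} ly: {near:.3e} W/m^2")
    print(f"10x Arecibo-class signal from the galactic centre: {far:.3e} W/m^2")
```

Run it and the galactic-centre case comes out thousands of times fainter than the nearby-star case, which is why the quoted detection thresholds climb so steeply with distance.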

Pics Or It Didn’t Happen

Apparently, in this day and age of ubiquitous technology there is no excuse for not having evidence. So, if you recently had a terrific (or terrible) meal in your (un-)favorite restaurant you must have pictures to back up your story. If you just returned from a gorgeous mountain hike you must have images for every turn on the trail. Just attended your high-school reunion? Pictures! Purchased a new mattress? Pictures! Cracked your heirloom tea service? Pictures! Mowed the lawn? Pictures! Stubbed toe? Pictures!

The pressure to record our experiences has grown in lock-step with the explosive growth in smartphones and connectivity. Collecting and sharing our memories remains a key part of our story-telling nature. But this obsessive drive to record the minutiae of every experience, however trivial, has many missing the moment — behind the camera or in front of it, we are no longer in the moment.

Just as our online social networks have stirred growth in the increasingly neurotic condition known as FOMO (fear of missing out), we are now on the cusp of some new techno-enabled, acronym-friendly disorders. Let’s call these FONBB — fear of not being believed, FONGELOFAMP — fear of not getting as many likes or followers as my peers, FOBIO — fear of becoming irrelevant online.

From NYT:

“Pics or it didn’t happen” is the response you get online when you share some unlikely experience or event and one of your friends, followers or stalkers calls you out for evidence. “Next thing I know, I’m bowling with Bill Murray!” Pics or it didn’t happen. “I taught my cockatoo how to rap ‘Baby Got Back’ — in pig Latin.” Pics or it didn’t happen. “Against all odds, I briefly smiled today.” Pics or it didn’t happen!

It’s a glib reply to a comrade’s boasting — coming out of Internet gaming forums to rebut boasts about high scores and awesome kills — but the fact is we like proof. Proof in the instant replay that decides the big game, the vacation pic that persuades us we were happy once, the selfie that reassures us that our face is still our own. “Pics or it didn’t happen” gained traction because in an age of bountiful technology, when everyone is armed with a camera, there is no excuse for not having evidence.

Does the phrase have what it takes to transcend its humble origins as a cruddy meme and become an aphorism in the pantheon of “A picture is worth a thousand words” and “Seeing is believing”? For clues to the longevity of “Pics,” let’s take a survey of some classic epigrams about visual authority and see how they hold up under the realities of contemporary American life.

“A picture is worth a thousand words” is a dependable workhorse, emerging from early-20th-century newspaper culture as a pitch to advertisers: Why rely on words when an illustration can accomplish so much more? It seems appropriate to test the phrase with a challenge drawn from contemporary news media. Take one of the Pulitzer Prize-winning photographs from The St. Louis Post-Dispatch’s series on Ferguson. In the darkness, a figure is captured in an instant of dynamic motion: legs braced, long hair flying wild, an extravagant plume of smoke and flames trailing from the incendiary object he is about to hurl into space. His chest is covered by an American-flag T-shirt, he holds fire in one hand and a bag of chips in the other, a living collage of the grand and the bathetic.

Headlines — like the graphics that gave birth to “A picture is worth a thousand words” — are a distillation, a shortcut to meaning. Breitbart News presented that photograph under “Rioters Throw Molotov Cocktails at Police in Ferguson — Again.” CBS St. Louis/Associated Press ran with “Protester Throws Tear-Gas Canister Back at Police While Holding Bag of Chips.” Rioter, protester, Molotov cocktail, tear-gas canister. Peace officers, hypermilitarized goons. What’s the use of a thousand words when they are Babel’s noise, the confusion of a thousand interpretations?

“Seeing is believing” was an early entry in the canon. Most sources attribute it to the Apostle Thomas’s incredulity over Jesus’ resurrection. (“Last night after you left the party, Jesus turned all the water into wine” is a classic “Pics or it didn’t happen” moment.) “Unless I see the nail marks in his hands and put my finger where the nails were, and put my hand into his side, I will not believe it.” Once Jesus shows up, Thomas concludes that seeing will suffice. A new standard of proof enters the lexicon.

Intuitive logic is not enough, though. Does “Seeing is believing” hold up when confronted by current events like, say, the killing of Eric Garner last summer by the police? The bystander’s video is over two minutes long, so dividing it into an old-fashioned 24 frames per second gives us a bounty of more than 3,000 stills. A real bonanza, atrocity-wise. But here the biblical formulation didn’t hold up: Even with the video and the medical examiner’s assessment of homicide, a grand jury declined to indict Officer Daniel Pantaleo. Time to downgrade “Seeing is believing,” too, and kick “Justice is blind” up a notch.

Can we really use one cherry-picked example to condemn a beloved idiom? Is the system rigged? Of course it is. Always, everywhere. Let’s say these expressions concerning visual evidence are not to blame for their failures, but rather subjectivity is. The problem is us. How we see things. How we see people. We can broaden our idiomatic investigations to include phrases that account for the human element, like “The eyes are the windows to the soul.” We can also change our idiomatic stressors from contemporary video to early photography. Before smartphones put a developing booth in everyone’s pocket, affordable portable cameras loosed amateur photographers upon the world. Everyday citizens could now take pictures of children in their Sunday best, gorgeous vistas of unspoiled nature and lynchings.

A hundred years ago, Americans took souvenirs of lynchings, just as we might now take a snapshot of a farewell party for a work colleague or a mimosa-heavy brunch. They were keepsakes, sent to relatives to allow them to share in the event, and sometimes made into postcards so that one could add a “Wish you were here”-type endearment. In the book “Without Sanctuary: Lynching Photography in America,” Leon F. Litwack shares an account of the 1915 lynching of Thomas Brooks in Fayette County, Tenn. “Hundreds of Kodaks clicked all morning at the scene. … People in automobiles and carriages came from miles around to view the corpse dangling at the end of the rope.” Pics or it didn’t happen. “Picture-card photographers installed a portable printing plant at the bridge and reaped a harvest in selling postcards.” Pics or it didn’t happen. “Women and children were there by the score. At a number of country schools, the day’s routine was delayed until boy and girl pupils could get back from viewing the lynched man.” Pics or it didn’t happen.

Read the entire story here.

Gadzooks, Gosh, Tarnation and the F-Bomb

Blimey! How our lexicon of foul language has evolved! Up to a few hundred years ago most swear words and oaths bore some connection to God, Jesus or other religious figures or events. But the need to display some level of dubious piety and avoid a lightning bolt from the blue led many to invent and mince a whole range of creative euphemisms. Hence, even today, we still hear words like “drat”, “gosh”, “tarnation”, “by george”, “by jove”, “heck”, “strewth”, “odsbodikins”, “gadzooks”, “doggone”.

More recently our linguistic penchant for shock and awe stems mostly from euphemistic — or not — labels for body parts and bodily functions — think: “freaking” or “shit” or “dick” and all manner of “f-words” and “c-words”. Sensitivities aside, many of us are fortunate enough to live in nations that have evolved beyond corporal or even capital punishment for uttering such blasphemous or vulgar indiscretions.

So, the next time you drop the “f-bomb” or a “dagnabbit” in public, reflect for a while and thank yourself for supporting your precious democracy over the neighboring theocracy.

From WSJ:

At street level and in popular culture, Americans are freer with profanity now than ever before—or so it might seem to judge by how often people throw around the “F-bomb” or use a certain S-word of scatological meaning as a synonym for “stuff.” Or consider the millions of fans who adore the cartoon series “South Park,” with its pint-size, raucously foul-mouthed characters.

But things might look different to an expedition of anthropologists visiting from Mars. They might conclude that Americans today are as uptight about profanity as were our 19th-century forbears in ascots and petticoats. It’s just that what we think of as “bad” words is different. To us, our ancestors’ word taboos look as bizarre as tribal rituals. But the real question is: How different from them, for better or worse, are we?

In medieval English, at a time when wars were fought in disputes over religious doctrine and authority, the chief category of profanity was, at first, invoking—that is, swearing to—the name of God, Jesus or other religious figures in heated moments, along the lines of “By God!” Even now, we describe profanity as “swearing” or as muttering “oaths.”

It might seem like a kind of obsessive piety to us now, but the culture of that day was largely oral, and swearing—making a sincere oral testament—was a key gesture of commitment. To swear by or to God lightly was considered sinful, which is the origin of the expression to take the Lord’s name in vain (translated from Biblical Hebrew for “emptily”).

The need to avoid such transgressions produced various euphemisms, many of them familiar today, such as “by Jove,” “by George,” “gosh,” “golly” and “Odsbodikins,” which started as “God’s body.” “Zounds!” was a twee shortening of “By his wounds,” as in those of Jesus. A time traveler to the 17th century would encounter variations on that theme such as “Zlids!” and “Znails!”, referring to “his” eyelids and nails.

In the 19th century, “Drat!” was a way to say “God rot.” Around the same time, darn started when people avoided saying “Eternal damnation!” by saying “Tarnation!”, which, because of the D-word hovering around, was easy to recast as “Darnation!”, from which “darn!” was a short step.

By the late 18th century, sex, excretion and the parts associated with same had come to be treated as equally profane as “swearing” in the religious sense. Such matters had always been considered bawdy topics, of course, but the space for ordinary words referring to them had been shrinking for centuries already.

Chaucer had available to him a thoroughly inoffensive word referring to the sex act, swive. An anatomy book in the 1400s could casually refer to a part of the female anatomy with what we today call the C-word. But over time, referring to these things in common conversation came to be regarded with a kind of pearl-clutching horror.

By the 1500s, as English began taking its place alongside Latin as a world language with a copious high literature, a fashion arose for using fancy Latinate terms in place of native English ones for more private matters. Thus was born a slightly antiseptic vocabulary, with words like copulate and penis. Even today modern English has no terms for such things that are neither clinical nor vulgar, along the lines of arm or foot or whistle.

The burgeoning bourgeois culture of the late 1700s, both in Great Britain and America, was especially alarmist about the “down there” aspect of things. In growing cities with stark social stratification, a new gentry developed a new linguistic self-consciousness—more English grammars were published between 1750 and 1800 than had ever appeared before that time.

In speaking of cooked fowl, “white” and “dark” meat originated as terms to avoid mention of breasts and limbs. What one does in a restroom, another euphemism of this era, is only laboriously classified as repose. Bosom and seat (for the backside) originated from the same impulse.

Passages in books of the era can be opaque to us now without an understanding of how particular people had gotten: In Dickens’s “Oliver Twist,” Giles the butler begins, “I got softly out of bed; drew on a pair of…” only to be interrupted with “Ladies present…” after which he dutifully says “…of shoes, sir.” He wanted to say trousers, but because of where pants sit on the body, well…

Or, from the gargantuan Oxford English Dictionary, published in 1884 and copious enough to take up a shelf and bend it, you would never have known in the original edition that the F-word or the C-word existed.

Such moments extend well into the early 20th century. In a number called “Shuffle Off to Buffalo” in the 1932 Broadway musical “42nd Street,” Ginger Rogers sings “He did right by little Nelly / with a shotgun at his bell-” and then interjects “tummy” instead. “Belly” was considered a rude part of the body to refer to; tummy was OK because of its association with children.

Read the entire story here.

The Pivot and the Money

Once upon a time the word “pivot” usually referred to an object’s point of rotation. Then, corporate America got its sticky hands all over it. The word even found its way into Microsoft Excel — as in Pivot Table. But the best euphemistic example comes from one of my favorite places for invention and euphemism — Silicon Valley. In this region of the world, pivot has come to mean a complete change in business direction.

Now, let’s imagine you’re part of a start-up company. At the outset, your company has a singularly great, world-changing idea. You believe it’s the best idea since, well, the last greatest world-changing idea. It’s unique. You are totally committed. You secure funding from some big name VCs anxious to capitalize and make the next $100 billion. You and your team work countless hours on realizing your big idea — it’s your dream, your passion. Then, suddenly you realize that your idea is utterly worthless — the product looks good but nobody, absolutely nobody, will consider it, let alone buy it; in fact, a hundred other companies before you had the same great, unique idea and all failed.

What are you and your company to do? Well, you pivot.

The entrepreneurial side of me would cheer an opportunistic company for “pivoting”, abandoning that original, great idea, and seeking another. Better than packing one’s bags and enrolling in corporate serfdom, right? But there’s another part of me that thinks this is an ethical sell-out: it’s disingenuous to the financial backers, and it shows a lack of integrity. That said, the example is of course set in Silicon Valley.

From Medium:

It was about a month after graduating from Techstars that my co-founder, Lianne, and I had our “oh shit” moment.

This is a special moment for founders; it’s not when you find a fixable bug in your app, when you realize you have been poorly optimizing your conversion funnel, or when you get a “no” from an investor. An “oh shit” moment is when you realize there is something fundamentally wrong with your business.

In our case, we realized that the product that we wanted to create was irreconcilable with a viable business model. So who were we going to tell? Techstars, who just accepted us into their highly prestigious accelerator on the basis that we could make it work? Our investors, who we just closed a round with?

It turns out, our Techstars family, our friends, and the angels (literally) who invested in us became our greatest allies, supporters, and advocates as we navigated the treacherous, terrifying, uncertain, and ultimately wildly liberating waters of a pivot. So let’s start at the beginning…

In February of 2014, Lianne and I were completing our undergrad CS degrees at the University of Colorado. As we were reflecting on the past four years of school, we realized that the most valuable experiences that we had happened outside the classroom in the incredible communities that we became involved in. Being techies, we wanted to build a product which helped other students make these “serendipitous” connections around their campus — to make the most of their time in college as well. We wanted to help our friends explore their world around them.
We called it Varsity. The app was basically a replacement for the unreadable kiosks full of posters found on college campuses. Students could submit events and activities happening around their campus that others could discover and indicate they were attending. We also built in a personalization mechanism, which proactively suggested things to do around you based upon your interests.
A few months later, the MVP of Varsity and a well-practiced pitch won us the New Venture Challenge at CU, which came with a $13k award and garnered the attention of Techstars Boulder.
The next couple of months were a whirlwind of change; Lianne and I graduated, we transitioned to our first full-time job (working for ourselves), and I spent a month in Israel with my sister before she left for college in Florida. We spent a good amount of our time networking our way around Techstars — feeling a little like the high school kids at a college party — but loving it at the same time. We met some incredible people (Sue Heilbronner, Brad Berenthal, Zach Nies, and Howard Diamond, to name a few) who taught us so much about our nascent business in a very short time.
We took as many meetings as we could with whomever would talk with us, and we funneled all of our learnings into our Techstars application. Through some combination of luck, sweat, and my uncanny ability to say the right things when standing in front of a large group of people, we were accepted into Techstars.
Techstars was incredibly challenging for us. The 3-month program was also equally rewarding. Lianne and I learned more about ourselves, our company, and our relationship with each other than we had in 4 years of undergraduate education together. About half-way through the program we rebranded Varsity to Native and started exploring ways to monetize the platform. The product had come a long way — we had done some incredible engineering and design work that we were happy with.
Unfortunately, the problem with Varsity was absolutely zero alignment between the product that we wanted to build and the way that we would bring it to market. One option was to spend the next 3 years grinding through the 8-month sales-cycles of universities across the country, which felt challenging (in the wrong ways) and bureaucratic. Alternatively, we could monetize the student attention we garnered, which we feared would cause discordance between the content students wanted to see and the content that advertisers wanted to show them.
Soon after graduating from Techstars, someone showed us Simon Sinek’s famous TED talk about how great leaders inspire action. Sinek describes how famous brands like Apple engage their customers starting with their “why” for doing business, which takes precedence over “how” they do business, and even over “what” their business does. At Native, we knew our “why” was something about helping people discover the world around them, and we now knew that the “how” and “what” of our current business wouldn’t get us there.
So, we decided to pivot.
Around this time I grabbed coffee with my friend Fletcher Richman. I explained to him the situation and asked for his advice. He offered the perspective that startups are designed to solve problems in the most efficient way possible. Basically, startups should be created to fill voids in the market that weren’t being solved by an existing company. The main issue was we had no problem to solve.
Shit.
$250k in funding, but nothing to fund? Do we give up, give the money back, and go get real jobs? Lianne and I weren’t done yet, so we went in search of problems worth solving.

Read the entire story here.

Living In And From a Box


Many of us in the West are lucky enough to live in a house or apartment. But for all intents and purposes it’s really an over-sized box. We are box dwellers. So it comes as no surprise to see our fascination with boxes accelerate over the last 10 years or so. These more recent boxes are much smaller than the ones in which we eat, relax, work and sleep, and they move around; these new boxes are the ones that deliver all we need to eat, relax, work and sleep.

Nowadays, from the comfort of our own big box we can have anything delivered to us in a smaller box. [As I write this I’m sitting in my favorite armchair, which arrived from an online store, via a box]. But, this age of box-delivered convenience is very much a double-edged sword. We can now sate our cravings for almost anything, anytime, and have an anonymous box-bringer deliver it to us almost instantaneously, all without any human interaction. We can now surround ourselves with foods and drinks and objects (and boxes) without ever leaving our very own box. We are becoming antisocial hermits.

From Medium:

Angel the concierge stands behind a lobby desk at a luxe apartment building in downtown San Francisco, and describes the residents of this imperial, 37-story tower. “Ubers, Squares, a few Twitters,” she says. “A lot of work-from-homers.”

And by late afternoon on a Tuesday, they’re striding into the lobby at a just-get-me-home-goddammit clip, some with laptop bags slung over their shoulders, others carrying swank leather satchels. At the same time a second, temporary population streams into the building: the app-based meal delivery people hoisting thermal carrier bags and sacks. Green means Sprig. A huge M means Munchery. Down in the basement, Amazon Prime delivery people check in packages with the porter. The Instacart groceries are plunked straight into a walk-in fridge.

This is a familiar scene. Five months ago I moved into a spartan apartment a few blocks away, where dozens of startups and thousands of tech workers live. Outside my building there’s always a phalanx of befuddled delivery guys who seem relieved when you walk out, so they can get in. Inside, the place is stuffed with the goodies they bring: Amazon Prime boxes sitting outside doors, evidence of the tangible, quotidian needs that are being serviced by the web. The humans who live there, though, I mostly never see. And even when I do, there seems to be a tacit agreement among residents to not talk to one another. I floated a few “hi’s” in the elevator when I first moved in, but in return I got the monosyllabic, no-eye-contact mumble. It was clear: Lady, this is not that kind of building.

Back in the elevator in the 37-story tower, the messengers do talk, one tells me. They end up asking each other which apps they work for: Postmates. Seamless. EAT24. GrubHub. Safeway.com. A woman hauling two Whole Foods sacks reads the concierge an apartment number off her smartphone, along with the resident’s directions: “Please deliver to my door.”

“They have a nice kitchen up there,” Angel says. The apartments rent for as much as $5,000 a month for a one-bedroom. “But so much, so much food comes in. Between 4 and 8 o’clock, they’re on fire.”

I start to walk toward home. En route, I pass an EAT24 ad on a bus stop shelter, and a little further down the street, a Dungeons & Dragons–type dude opens the locked lobby door of yet another glass-box residential building for a Sprig deliveryman:

“You’re…”

“Jonathan?”

“Sweet,” Dungeons & Dragons says, grabbing the bag of food. The door clanks behind him.

And that’s when I realized: the on-demand world isn’t about sharing at all. It’s about being served. This is an economy of shut-ins.

In 1998, Carnegie Mellon researchers warned that the internet could make us into hermits. They released a study monitoring the social behavior of 169 people making their first forays online. The web-surfers started talking less with family and friends, and grew more isolated and depressed. “We were surprised to find that what is a social technology has such anti-social consequences,” said one of the researchers at the time. “And these are the same people who, when asked, describe the Internet as a positive thing.”

We’re now deep into the bombastic buildout of the on-demand economy — with investment in the apps, platforms and services surging exponentially. Right now Americans buy nearly eight percent of all their retail goods online, though that seems a wild underestimate in the most congested, wired, time-strapped urban centers.

Many services promote themselves as life-expanding — there to free up your time so you can spend it connecting with the people you care about, not standing at the post office with strangers. Rinse’s ad shows a couple chilling at a park, their laundry being washed by someone, somewhere beyond the picture’s frame. But plenty of the delivery companies are brutally honest that, actually, they never want you to leave home at all.

GrubHub’s advertising banks on us secretly never wanting to talk to a human again: “Everything great about eating, combined with everything great about not talking to people.” DoorDash, another food delivery service, goes for the all-caps, batshit extreme:

“NEVER LEAVE HOME AGAIN.”

Katherine van Ekert isn’t a shut-in, exactly, but there are only two things she ever has to run errands for any more: trash bags and saline solution. For those, she must leave her San Francisco apartment and walk two blocks to the drug store, “so woe is my life,” she tells me. (She realizes her dry humor about #firstworldproblems may not translate, and clarifies later: “Honestly, this is all tongue in cheek. We’re not spoiled brats.”) Everything else is done by app. Her husband’s office contracts with Washio. Groceries come from Instacart. “I live on Amazon,” she says, buying everything from curry leaves to a jogging suit for her dog, complete with hoodie.

She’s so partial to these services, in fact, that she’s running one of her own: A veterinarian by trade, she’s a co-founder of VetPronto, which sends an on-call vet to your house. It’s one of a half-dozen on-demand services in the current batch at Y Combinator, the startup factory, including a marijuana delivery app called Meadow (“You laugh, but they’re going to be rich,” she says). She took a look at her current clients — they skew late 20s to late 30s, and work in high-paying jobs: “The kinds of people who use a lot of on demand services and hang out on Yelp a lot.”

Basically, people a lot like herself. That’s the common wisdom: the apps are created by the urban young for the needs of urban young. The potential of delivery with a swipe of the finger is exciting for van Ekert, who grew up without such services in Sydney and recently arrived in wired San Francisco. “I’m just milking this city for all it’s worth,” she says. “I was talking to my father on Skype the other day. He asked, ‘Don’t you miss a casual stroll to the shop?’ Everything we do now is time-limited, and you do everything with intention. There’s not time to stroll anywhere.”

Suddenly, for people like van Ekert, the end of chores is here. After hours, you’re free from dirty laundry and dishes. (TaskRabbit’s ad rolls by me on a bus: “Buy yourself time — literally.”)

So here’s the big question. What does she, or you, or any of us do with all this time we’re buying? Binge on Netflix shows? Go for a run? Van Ekert’s answer: “It’s more to dedicate more time to working.”

Read the entire article here.

Image courtesy of Google Search.

Viva Vinyl


When I first moved to college and a tiny dorm room (in the UK they’re called halls of residence), my first purchase was a Garrard turntable and a pair of Denon stereo speakers. Books would come later. First, I had to build a new shrine to my burgeoning vinyl collection, which thrives even today.

So, after what seems like a hundred years since those heady days and countless music technology revolutions, it comes as quite a surprise — but perhaps not — to see vinyl on a resurgent path. The disruptors tried to kill LPs, 45s and 12-inchers with 8-track (ha), compact cassette (yuk), minidisk (yawn), CD (cool), MP3 (meh), iPod (yay) and now streaming (hmm).

But, like a kindly zombie uncle, vinyl is something the music industry cannot completely bury for good. Why did vinyl capture the imagination and the ears of the audiophile so? Well, perhaps it comes from watching the slow turn of the LP on the cool silver platter. Or, it may be the anticipation from watching the needle spiral its way to the first track. Or the raw, crackling authenticity of the sound. For me it was the weekly pilgrimage to the dusty independent record store — sampling tracks on clunky headphones; soaking up the artistry of the album cover, the lyrics, the liner notes; discussing the pros and cons of the bands with friends. Our digital world has now mostly replaced this experience, but it cannot hope to replicate it. Long live vinyl.

From ars technica:

On Thursday [July 2, 2015], Nielsen Music released its 2015 US mid-year report, finding that overall music consumption had increased by 14 percent in the first half of the year. What’s driving that boom? Well, certainly a growth in streaming—on-demand streaming increased year-over-year by 92.4 percent, with more than 135 billion songs streamed, and overall sales of digital streaming increased by 23 percent.

But what may be more fascinating is the continued resurgence of the old licorice pizza—that is, vinyl LPs. Nielsen reports that vinyl LP sales are up 38 percent year-to-date. “Vinyl sales now comprise nearly 9 percent of physical album sales,” Nielsen stated.

Who’s leading the charge on all that vinyl? None other than the music industry’s favorite singer-songwriter Taylor Swift with her album 1989, which sold 33,500 LPs. Swift recently flexed her professional muscle when she wrote an open letter to Apple, criticizing the company for failing to pay artists during the free three-month trial of Apple Music. Apple quickly kowtowed to the pop star and reversed its position.

Following behind Swift on the vinyl chart is Sufjan Stevens’ Carrie & Lowell, The Arctic Monkeys’ AM (released in 2013), Alabama Shakes’ Sound & Color, and in fifth place, none other than Miles Davis’ Kind of Blue, which sold 23,200 copies in 2015.

Also interesting is that Nielsen found that digital album sales were flat compared to last year, and digital track sales were down 10.4 percent. Unsurprisingly, CD sales were down 10 percent.

When Nielsen reported in 2010 that 2.5 million vinyl records were sold in 2009, Ars noted that was more than any other year since the media-tracking business started keeping score in 1991. Fast forward five years and that number has more than doubled, as Nielsen counted 5.6 million vinyl records sold. The trend shows little sign of abating—last year, the US’ largest vinyl plant reported that it was adding 16 vinyl presses to its lineup of 30, and just this year Ars reported on a company called Qrates that lets artists solicit crowdfunding to do small-batch vinyl pressing.

Read the entire story here.

Image: Hotel California, The Eagles, album cover. Courtesy of the author.

A Patent to End All Patents

You’ve seen the “we’ll help you file your patent application” infomercials on late night cable. The underlying promise is simple: your unique invention will find its way into every household on Earth and consequently will thrust you into the financial stratosphere, making you the planet’s first gazillionaire. Of course, this will happen only after you part with your hard-earned cash for help in filing the patent. Incidentally, filing a patent with the US Patent and Trademark Office (USPTO) usually starts at around $10,000-$15,000.

Some patents are truly extraordinary in their optimistic silliness: wind-harnessing bicycle, apparatus for simulating a high-five, flatulence deodorizer, jet-powered surfboard, thong diaper, life-size interactive bowl of soup, nicotine-infused coffee, edible business cards, magnetic rings to promote immortality, and so it goes. Remember, though, this is the United States, and most crazy things are possible and profitable. So, you could well find yourself becoming addicted to those 20oz nicotine-infused lattes each time you pull up at the local coffee shop on your jet-powered surfboard.

But perhaps the most recent thoroughly earnest and wacky patent filing comes from Boeing, no less. It’s for a laser-powered fusion-fission jet engine. The engine uses ultra-high-powered lasers to fuse pellets of hydrogen; the resulting fast neutrons cause a uranium blanket to fission, which generates heat and, subsequently, electricity. All of this powering your next flight to Seattle. So, the next time you fly on a Boeing aircraft, keep in mind what some of the company’s engineers have in store for you 100 or 1,000 years from now. I think I’d prefer to be disassembled and beamed up.

From ars technica:

Assume the brace position: Boeing has received a patent for, I kid you not, a laser-powered fusion-fission jet propulsion system. Boeing envisions that this system could replace both rocket and turbofan engines, powering everything from spacecraft to missiles to airplanes.

The patent, US 9,068,562, combines inertial confinement fusion, fission, and a turbine that generates electricity. It sounds completely crazy because it is: given our mastery of fusion, or rather our lack thereof, this kind of engine is currently unrealistic. Perhaps in the future (the distant, distant future, that is) it could be a rather ingenious solution. For now, it’s yet another patent head-scratcher.

To begin with, imagine the silhouette of a big turbofan engine, like you’d see on a commercial jetliner. Somewhere in the middle of the engine there is a fusion chamber, with a number of very strong lasers focused on a single point. A hohlraum (a small cavity holding the fuel pellet) containing a mix of deuterium and tritium (hydrogen isotopes) is placed at this focal point. The lasers are all turned on at the same instant, creating massive pressure on the pellet, which implodes and causes the hydrogen atoms to fuse. (This is called inertial confinement fusion, as opposed to the magnetic confinement fusion that is carried out in a tokamak.)

According to the patent, the hot gases produced by the fusion are pushed out of a nozzle at the back of the engine, creating thrust—but that’s not all! One of the by-products of hydrogen fusion is lots of fast neutrons. In Boeing’s patented design, there is a shield around the fusion chamber that’s coated with a fissionable material (uranium-238 is one example given). The neutrons hit the fissionable material, causing a fission reaction that generates lots of heat.

Finally, there’s some kind of heat exchanger system that takes the heat from the fission reaction and uses that heat (via a heated liquid or gas) to drive a turbine. This turbine generates the electricity that powers the lasers. Voilà: a fusion-fission rocket engine thing.
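
To put rough numbers on the closed loop just described, here is a minimal back-of-the-envelope sketch in Python. Every efficiency and gain factor in it is a placeholder assumption chosen for illustration; none comes from the patent or from Boeing. The point is simply that, even with generous numbers, the turbine cannot come close to repaying the electricity the lasers consume.

```python
# Back-of-the-envelope energy balance for the loop described above: electricity
# powers the lasers, the lasers implode a deuterium-tritium pellet, the fusion
# neutrons drive fission in the uranium-coated shield, and a turbine converts
# the fission heat back into electricity. For the loop to close, the turbine
# must return at least as much electricity as the lasers consume.
# All numbers below are placeholder assumptions, not figures from the patent.

def loop_return(laser_wallplug_eff, fusion_gain, neutron_fraction,
                blanket_multiplication, turbine_eff):
    """Electricity recovered per joule of electricity fed to the lasers.

    A value of 1.0 or more would mean the engine could keep its own lasers
    powered; anything less means the loop cannot sustain itself.
    """
    laser_light = laser_wallplug_eff                  # J of laser light per J of electricity
    fusion_yield = laser_light * fusion_gain          # fusion energy released by the pellet
    blanket_heat = (fusion_yield * neutron_fraction   # share carried by fast neutrons...
                    * blanket_multiplication)         # ...amplified by fission in the shield
    return blanket_heat * turbine_eff                 # electricity back out of the turbine

if __name__ == "__main__":
    returned = loop_return(
        laser_wallplug_eff=0.10,     # generous; NIF-class lasers are closer to ~1%
        fusion_gain=1.0,             # "scientific break-even": fusion out ≈ laser light in
        neutron_fraction=0.8,        # ~80% of D-T fusion energy is carried by neutrons
        blanket_multiplication=5.0,  # assumed fission amplification in the uranium layer
        turbine_eff=0.4,             # typical heat-to-electricity conversion
    )
    print(f"Electricity returned per joule sent to the lasers: {returned:.2f} J")
    # With these already-optimistic assumptions, only ~0.16 J comes back per
    # joule spent, which is why the article calls the design unrealistic today.
```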

Let’s talk a little bit about why this is such an outlandish idea. To begin with, this patented design involves placing a lump of material that’s made radioactive in an airplane engine, and these vehicles are known to sometimes crash. Today, the only way we know of to efficiently harvest the energy of nuclear fission is a giant power plant, and we cannot get inertial confinement fusion to fire more than once in a reasonable amount of time (much less on the short timescales needed to maintain thrust). The process also requires building-sized lasers, like those found at the National Ignition Facility in California, and even then the technique currently works only poorly. Neither trait is conducive to air travel.

But this is the USA we’re talking about, where patents can be issued on firewalls (“being wielded in one of the most outrageous trolling campaigns we have ever seen,” according to the EFF) and universities can claim such rights on “agent-based collaborative recognition-primed decision-making” (EFF: “The patent reads a little like what might result if you ate a dictionary filled with buzzwords and drank a bottle of tequila”). As far as patented products go, it is pretty hard to imagine this one actually being built in the real world. Putting aside the difficulties of inertial confinement fusion (we’re nowhere near hitting the break-even point), it’s also a bit far-fetched to shoehorn all of these disparate and rather difficult-to-work-with technologies into a small chassis that hangs from the wing of a commercial airplane.

Read the entire story here.