Digital Forensics and the Wayback Machine

[Image: Amazon.com home page, August 1999.]

Many of us see history — the school subject — as rather dull. After all, how can the topic be made interesting when it’s usually taught by a coach who has other things on his or her mind? [No joke, I have evidence of this from both sides of the Atlantic!]

Yet we also know that history’s lessons are essential to shaping our current world view and our vision for the future, in myriad ways. Ever since humans could first speak, and later write, our ancestors have recorded and transmitted their histories: through oral storytelling, then through books and assorted media.

Then came the internet. The explosion of content, media formats and related technologies over the last quarter-century has led to an immense challenge for archivists and historians intent on cataloging our digital stories. One facet of this challenge is the tremendous volume of information and its accelerating growth. Another is the dynamic nature of the content — much of it being constantly replaced and refreshed.

But all is not lost. The Internet Archive, founded in 1996, has been quietly archiving text, pages, images, audio and, more recently, entire web sites from the Tubes of the vast Internets. To date the non-profit has archived around half a trillion web pages. It’s our modern-day equivalent of the Library of Alexandria.

Please say hello to the Internet Archive Wayback Machine, and give it a try. The Wayback Machine took the screenshot above of Amazon.com in 1999, in case you’ve ever wondered what Amazon looked like before it swallowed or destroyed entire retail sectors.
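If you’d like to poke at the archive programmatically, it also exposes a simple public availability API, which reports the archived snapshot of a URL closest to a given date. Here’s a minimal Python sketch using that endpoint; the amazon.com lookup is just an illustration, and error handling is omitted:

```python
import json
import urllib.parse
import urllib.request

WAYBACK_API = "https://archive.org/wayback/available"

def closest_snapshot(url, timestamp=None):
    """Ask the Wayback Machine for the snapshot of `url` closest to
    `timestamp` (YYYYMMDD); returns a dict describing it, or None."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    query = urllib.parse.urlencode(params)
    with urllib.request.urlopen(f"{WAYBACK_API}?{query}", timeout=30) as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")

# What did Amazon look like in August 1999?
snap = closest_snapshot("amazon.com", "19990801")
if snap and snap.get("available"):
    print("Archived copy:", snap["url"], "captured at", snap["timestamp"])
else:
    print("No snapshot found; another page lost to link rot.")
```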

From the New Yorker:

Malaysia Airlines Flight 17 took off from Amsterdam at 10:31 A.M. G.M.T. on July 17, 2014, for a twelve-hour flight to Kuala Lumpur. Not much more than three hours later, the plane, a Boeing 777, crashed in a field outside Donetsk, Ukraine. All two hundred and ninety-eight people on board were killed. The plane’s last radio contact was at 1:20 P.M. G.M.T. At 2:50 P.M. G.M.T., Igor Girkin, a Ukrainian separatist leader also known as Strelkov, or someone acting on his behalf, posted a message on VKontakte, a Russian social-media site: “We just downed a plane, an AN-26.” (An Antonov 26 is a Soviet-built military cargo plane.) The post includes links to video of the wreckage of a plane; it appears to be a Boeing 777.

Two weeks before the crash, Anatol Shmelev, the curator of the Russia and Eurasia collection at the Hoover Institution, at Stanford, had submitted to the Internet Archive, a nonprofit library in California, a list of Ukrainian and Russian Web sites and blogs that ought to be recorded as part of the archive’s Ukraine Conflict collection. Shmelev is one of about a thousand librarians and archivists around the world who identify possible acquisitions for the Internet Archive’s subject collections, which are stored in its Wayback Machine, in San Francisco. Strelkov’s VKontakte page was on Shmelev’s list. “Strelkov is the field commander in Slaviansk and one of the most important figures in the conflict,” Shmelev had written in an e-mail to the Internet Archive on July 1st, and his page “deserves to be recorded twice a day.”

On July 17th, at 3:22 P.M. G.M.T., the Wayback Machine saved a screenshot of Strelkov’s VKontakte post about downing a plane. Two hours and twenty-two minutes later, Arthur Bright, the Europe editor of the Christian Science Monitor, tweeted a picture of the screenshot, along with the message “Grab of Donetsk militant Strelkov’s claim of downing what appears to have been MH17.” By then, Strelkov’s VKontakte page had already been edited: the claim about shooting down a plane was deleted. The only real evidence of the original claim lies in the Wayback Machine.

The average life of a Web page is about a hundred days. Strelkov’s “We just downed a plane” post lasted barely two hours. It might seem, and it often feels, as though stuff on the Web lasts forever, for better and frequently for worse: the embarrassing photograph, the regretted blog (more usually regrettable not in the way the slaughter of civilians is regrettable but in the way that bad hair is regrettable). No one believes any longer, if anyone ever did, that “if it’s on the Web it must be true,” but a lot of people do believe that if it’s on the Web it will stay on the Web. Chances are, though, that it actually won’t. In 2006, David Cameron gave a speech in which he said that Google was democratizing the world, because “making more information available to more people” was providing “the power for anyone to hold to account those who in the past might have had a monopoly of power.” Seven years later, Britain’s Conservative Party scrubbed from its Web site ten years’ worth of Tory speeches, including that one. Last year, BuzzFeed deleted more than four thousand of its staff writers’ early posts, apparently because, as time passed, they looked stupider and stupider. Social media, public records, junk: in the end, everything goes.

Web pages don’t have to be deliberately deleted to disappear. Sites hosted by corporations tend to die with their hosts. When MySpace, GeoCities, and Friendster were reconfigured or sold, millions of accounts vanished. (Some of those companies may have notified users, but Jason Scott, who started an outfit called Archive Team—its motto is “We are going to rescue your shit”—says that such notification is usually purely notional: “They were sending e-mail to dead e-mail addresses, saying, ‘Hello, Arthur Dent, your house is going to be crushed.’ ”) Facebook has been around for only a decade; it won’t be around forever. Twitter is a rare case: it has arranged to archive all of its tweets at the Library of Congress. In 2010, after the announcement, Andy Borowitz tweeted, “Library of Congress to acquire entire Twitter archive—will rename itself Museum of Crap.” Not long after that, Borowitz abandoned that Twitter account. You might, one day, be able to find his old tweets at the Library of Congress, but not anytime soon: the Twitter Archive is not yet open for research. Meanwhile, on the Web, if you click on a link to Borowitz’s tweet about the Museum of Crap, you get this message: “Sorry, that page doesn’t exist!”

The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: “Page Not Found.” This is known as “link rot,” and it’s a drag, but it’s better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.) Or maybe the page has been moved and something else is where it used to be. This is known as “content drift,” and it’s more pernicious than an error message, because it’s impossible to tell that what you’re seeing isn’t what you went to look for: the overwriting, erasure, or moving of the original is invisible. For the law and for the courts, link rot and content drift, which are collectively known as “reference rot,” have been disastrous. In providing evidence, legal scholars, lawyers, and judges often cite Web pages in their footnotes; they expect that evidence to remain where they found it as their proof, the way that evidence on paper—in court records and books and law journals—remains where they found it, in libraries and courthouses. But a 2013 survey of law- and policy-related publications found that, at the end of six years, nearly fifty per cent of the URLs cited in those publications no longer worked. According to a 2014 study conducted at Harvard Law School, “more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information.” The overwriting, drifting, and rotting of the Web is no less catastrophic for engineers, scientists, and doctors. Last month, a team of digital library researchers based at Los Alamos National Laboratory reported the results of an exacting study of three and a half million scholarly articles published in science, technology, and medical journals between 1997 and 2012: one in five links provided in the notes suffers from reference rot. It’s like trying to stand on quicksand.

The footnote, a landmark in the history of civilization, took centuries to invent and to spread. It has taken mere years nearly to destroy. A footnote used to say, “Here is how I know this and where I found it.” A footnote that’s a link says, “Here is what I used to know and where I once found it, but chances are it’s not there anymore.” It doesn’t matter whether footnotes are your stock-in-trade. Everybody’s in a pinch. Citing a Web page as the source for something you know—using a URL as evidence—is ubiquitous. Many people find themselves doing it three or four times before breakfast and five times more before lunch. What happens when your evidence vanishes by dinnertime?

The day after Strelkov’s “We just downed a plane” post was deposited into the Wayback Machine, Samantha Power, the U.S. Ambassador to the United Nations, told the U.N. Security Council, in New York, that Ukrainian separatist leaders had “boasted on social media about shooting down a plane, but later deleted these messages.” In San Francisco, the people who run the Wayback Machine posted on the Internet Archive’s Facebook page, “Here’s why we exist.”

Read the entire story here.

Image: Wayback Machine’s screenshot of Amazon.com’s home page, August 1999.

From a Million Miles

[Image: the moon transiting Earth, captured by DSCOVR’s EPIC camera.]

The Deep Space Climate Observatory (DSCOVR) spacecraft is now firmly in place about one million miles from Earth at the L1 Lagrange point, a point of gravitational balance between the sun and our planet. Jointly operated by NASA, NOAA (National Oceanic and Atmospheric Administration) and the U.S. Air Force, the spacecraft uses its digital optics to observe the Earth from sunrise to sunset. Researchers use its observations to measure a number of climate variables, including ozone, aerosols, cloud heights, dust and volcanic ash. The spacecraft also monitors the solar wind. Luckily, it also captures gorgeous images like the one above from July 16, 2015, of the moon, its so-called dark side fully illuminated, as it transits over the Pacific Ocean.
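For the mathematically curious, that “about one million miles” is the standard first-order approximation for the distance of the L1 point from Earth, found by balancing the sun’s pull, Earth’s pull and the orbital motion. With $R$ the Earth–sun distance and $M_\oplus$, $M_\odot$ the masses of Earth and sun:

$$r_{L1} \approx R\left(\frac{M_\oplus}{3M_\odot}\right)^{1/3} = \left(1.496\times10^{8}\,\mathrm{km}\right)\left(\frac{3.0\times10^{-6}}{3}\right)^{1/3} \approx 1.5\times10^{6}\,\mathrm{km} \approx 0.93\ \text{million miles}.$$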

Learn more about DSCOVR here.

Image: This image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft’s Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth. Courtesy: NASA, NOAA.

Aspirational or Inspirational?

Both of my parents came from a background of chronic poverty and limited educational opportunity. They eventually overcame these constraints through a combination of hard work, persistence and passion. They instilled these traits in me, and somehow they did so in a way that fostered a belief in a well-balanced life containing both work and leisure.

But for many, especially in the United States, the live-to-work ethic thrives. The condition is so acute and prevalent that most Americans in corporate jobs never take their full — yet meager by global standards — allotment of annual vacation. Our culture is replete with tales of driven, aspirational parents — think tiger mom — who seem to have their kids’ lives mapped out from the crib.

I have to agree with columnist George Monbiot: while naked ambition may gain our children monetary riches and a higher rung on the corporate ladder, it does not a life make.

From the Guardian:

Perhaps because the alternative is too hideous to contemplate, we persuade ourselves that those who wield power know what they are doing. The belief in a guiding intelligence is hard to shake.

We know that our conditions of life are deteriorating. Most young people have little prospect of owning a home, or even of renting a decent one. Interesting jobs are sliced up, through digital Taylorism, into portions of meaningless drudgery. The natural world, whose wonders enhance our lives, and upon which our survival depends, is being rubbed out with horrible speed. Those to whom we look for guardianship, in government and among the economic elite, do not arrest this decline, they accelerate it.

The political system that delivers these outcomes is sustained by aspiration: the faith that if we try hard enough we could join the elite, even as living standards decline and social immobility becomes set almost in stone. But to what are we aspiring? A life that is better than our own, or worse?

Last week a note from an analyst at Barclays’ Global Power and Utilities group in New York was leaked. It addressed students about to begin a summer internship, and offered a glimpse of the toxic culture into which they are inducted.

“I wanted to introduce you to the 10 Power Commandments … For nine weeks you will live and die by these … We expect you to be the last ones to leave every night, no matter what … I recommend bringing a pillow to the office. It makes sleeping under your desk a lot more comfortable … the internship really is a nine-week commitment at the desk … an intern asked our staffer for a weekend off for a family reunion – he was told he could go. He was also asked to hand in his BlackBerry and pack up his desk … Play time is over and it’s time to buckle up.”

Play time is over, but did it ever begin? If these students have the kind of parents featured in the Financial Times last month, perhaps not. The article marked a new form of employment: the nursery consultant. These people, who charge from £290 an hour, must find a nursery that will put their clients’ toddlers on the right track to an elite university.

They spoke of parents who had already decided that their six-month-old son would go to Cambridge then Deutsche Bank, or whose two-year-old daughter “had a tutor for two afternoons a week (to keep on top of maths and literacy) as well as weekly phonics and reading classes, drama, piano, beginner French and swimming. They were considering adding Mandarin and Spanish. ‘The little girl was so exhausted and on edge she was terrified of opening her mouth.’”

In New York, playdate coaches charging $450 an hour train small children in the social skills that might help secure their admission to the most prestigious private schools. They are taught to hide traits that could suggest they’re on the autistic spectrum, which might reduce their chances of selection.

From infancy to employment, this is a life-denying, love-denying mindset, informed not by joy or contentment, but by an ambition that is both desperate and pointless, for it cannot compensate for what it displaces: childhood, family life, the joys of summer, meaningful and productive work, a sense of arrival, living in the moment. For the sake of this toxic culture, the economy is repurposed, the social contract is rewritten, the elite is released from tax, regulation and the other restraints imposed by democracy.

Where the elite goes, we are induced to follow. As if the assessment regimes were too lax in UK primary schools, last year the education secretary announced a new test for four-year-olds. A primary school in Cambridge has just taken the obvious next step: it is now streaming four-year-olds into classes according to perceived ability. The education and adoption bill, announced in the Queen’s speech, will turn the screw even tighter. Will this help children, or hurt them?

Read the entire column here.

Girlfriend or Nuclear Reactor?

[Image: yellowcake.]

Ask a typical 14-year-old boy whether he’d prefer a girlfriend or a home-made nuclear fusion reactor and he’s highly likely to gravitate towards the former. Not so Taylor Wilson; he seems to prefer the company of Geiger counters, particle accelerators, vacuum tubes and radioactive materials.

From the Guardian:

Taylor Wilson has a Geiger counter watch on his wrist, a sleek, sporty-looking thing that sounds an alert in response to radiation. As we enter his parents’ garage and approach his precious jumble of electrical equipment, it emits an ominous beep. Wilson is in full flow, explaining the old-fashioned control panel in the corner, and ignores it. “This is one of the original atom smashers,” he says with pride. “It would accelerate particles up to, um, 2.5m volts – so kind of up there, for early nuclear physics work.” He pats the knobs.

It was in this garage that, at the age of 14, Wilson built a working nuclear fusion reactor, bringing the temperature of its plasma core to 580 million °C – 40 times as hot as the core of the sun. This skinny kid from Arkansas, the son of a Coca-Cola bottler and a yoga instructor, experimented for years, painstakingly acquiring materials, instruments and expertise until he was able to join the elite club of scientists who have created a miniature sun on Earth.
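[A quick sanity check on that comparison: the sun’s core sits at roughly $1.5\times10^{7}$ °C, and $5.8\times10^{8}/1.5\times10^{7}\approx 39$, so “40 times as hot” holds up.]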

Not long after, Wilson won $50,000 at a science fair, for a device that can detect nuclear materials in cargo containers – a counter-terrorism innovation he later showed to a wowed Barack Obama at a White House-sponsored science fair.

Wilson’s two TED talks (Yup, I Built A Nuclear Fusion Reactor and My Radical Plan For Small Nuclear Fission Reactors) have been viewed almost 4m times. A Hollywood biopic is planned, based on an imminent biography. Meanwhile, corporations have wooed him and the government has offered to buy some of his inventions. Former US under-secretary for energy, Kristina Johnson, told his biographer, Tom Clynes: “I would say someone like him comes along maybe once in a generation. He’s not just smart – he’s cool and articulate. I think he may be the most amazing kid I’ve ever met.”

Seven years on from fusing the atom, the gangly teen with a mop of blond hair is now a gangly 21-year-old with a mop of blond hair, who shuttles between his garage-cum-lab in the family’s home in Reno, Nevada, and other more conventional labs. In addition to figuring out how to intercept dirty bombs, he looks at ways of improving cancer treatment and lowering energy prices – while plotting a hi-tech business empire around the patents.

As we tour his parents’ garage, Wilson shows me what appears to be a collection of nuggets. His watch sounds another alert, but he continues lovingly to detail his inventory. “The first thing I got for my fusion project was a mass spectrometer from an ex-astronaut in Houston, Texas,” he explains. This was a treasure he obtained simply by writing a letter asking for it. He ambles over to a large steel safe, with a yellow and black nuclear hazard sticker on the front. He spins the handle, opens the door and extracts a vial with pale powder in it.

“That’s some yellowcake I made – the famous stuff that Saddam Hussein was supposedly buying from Niger. This is basically the starting point for nuclear, whether it’s a weapons programme or civilian energy production.” He gives the vial a shake. A vision of dodgy dossiers, atomic intrigue and mushroom clouds swims before me, a reverie broken by fresh beeping. “That’ll be the allanite. It’s a rare earth mineral,” Wilson explains. He picks up a dark, knobbly little rock streaked with silver. “It has thorium, a potential nuclear fuel.”

I think now may be a good moment to exit the garage, but the tour is not over. “One of the things people are surprised by is how ubiquitous radiation and radioactivity is,” Wilson says, giving me a reassuring look. “I’m very cautious. I’m actually a bit of a hypochondriac. It’s all about relative risk.”

He paces over to a plump steel tube, elevated to chest level – an object that resembles an industrial vacuum cleaner, and gleams in the gloom. This is the jewel in Wilson’s crown, the reactor he built at 14, and he gives it a tender caress. “This is safer than many things,” he says, gesturing to his Aladdin’s cave of atomic accessories. “For instance, horse riding. People fear radioactivity because it is very mysterious. You want to have respect for it, but not be paralysed by fear.”

The Wilson family home is a handsome, hacienda-style house tucked into foothills outside Reno. Unusually for the high desert at this time of year, grey clouds with bellies of rain rumble overhead. Wilson, by contrast, is all sunny smiles. He is still the slightly ethereal figure you see in the TED talks (I have to stop myself from offering him a sandwich), but the handshake is firm, the eye contact good and the energy enviable – even though Wilson has just flown back from a weekend visiting friends in Los Angeles. “I had an hour’s sleep last night. Three hours the night before that,” he says, with a hint of pride.

He does not drink or smoke, is a natty dresser (in suede jacket, skinny tie, jeans and Converse-style trainers) and he is a talker. From the moment we meet until we part hours later, he talks and talks, great billows of words about the origin of his gift and the responsibility it brings; about trying to be normal when he knows he’s special; about Fukushima, nuclear power and climate change; about fame and ego, and seeing his entire life chronicled in a book for all the world to see when he’s barely an adult and still wrestling with how to ask a girl out on a date.

The future feels urgent and mysterious. “My life has been this series of events that I didn’t see coming. It’s both exciting and daunting to know you’re going to be constantly trying to one-up yourself,” he says. “People can have their opinions about what I should do next, but my biggest pressure is internal. I hate resting on laurels. If I burn out, I burn out – but I don’t see that happening. I’ve more ideas than I have time to execute.”

Wilson credits his parents with huge influence, but wavers on the nature versus nurture debate: was he born brilliant or educated into it? “I don’t have an answer. I go back and forth.” The pace of technological change makes predicting his future a fool’s errand, he says. “It’s amazing – amazing – what I can do today that I couldn’t have done if I was born 10 years earlier.” And his ambitions are sky-high: he mentions, among many other plans, bringing electricity and state-of-the-art healthcare to the developing world.

Read the entire fascinating story here.

Image: Yellowcake, a type of uranium concentrate powder, an intermediate step in the processing of uranium ores. Courtesy of United States Department of Energy. Public Domain.

Creativity and Mental Illness

[Image: Vincent van Gogh, self-portrait with bandaged ear.]

The creative genius — oft misunderstood, outcast, tortured, misanthropic, fueled by demon spirits. Yet this same description seems equally apt for many of those unfortunate enough to suffer from mental illness. So could creativity and mental illness be high-level symptoms of a broader underlying spectrum “disorder”? After all, a not insignificant number of people and businesses regard creativity as a behavioral problem — best left outside the front door of the office. Time to check out the results of the latest psychological study.

From the Guardian:

The ancient Greeks were first to make the point. Shakespeare raised the prospect too. But Lord Byron was, perhaps, the most direct of them all: “We of the craft are all crazy,” he told the Countess of Blessington, casting a wary eye over his fellow poets.

The notion of the tortured artist is a stubborn meme. Creativity, it states, is fuelled by the demons that artists wrestle in their darkest hours. The idea is fanciful to many scientists. But a new study claims the link may be well-founded after all, and written into the twisted molecules of our DNA.

In a large study published on Monday, scientists in Iceland report that genetic factors that raise the risk of bipolar disorder and schizophrenia are found more often in people in creative professions. Painters, musicians, writers and dancers were, on average, 25% more likely to carry the gene variants than those in professions the scientists judged to be less creative, among which were farmers, manual labourers and salespeople.

Kari Stefansson, founder and CEO of deCODE, a genetics company based in Reykjavik, said the findings, described in the journal Nature Neuroscience, point to a common biology for some mental disorders and creativity. “To be creative, you have to think differently,” he told the Guardian. “And when we are different, we have a tendency to be labelled strange, crazy and even insane.”

The scientists drew on genetic and medical information from 86,000 Icelanders to find genetic variants that doubled the average risk of schizophrenia, and raised the risk of bipolar disorder by more than a third. When they looked at how common these variants were in members of national arts societies, they found a 17% increase compared with non-members.

The researchers went on to check their findings in large medical databases held in the Netherlands and Sweden. Among these 35,000 people, those deemed to be creative (by profession or through answers to a questionnaire) were nearly 25% more likely to carry the mental disorder variants.

Stefansson believes that scores of genes increase the risk of schizophrenia and bipolar disorder. These may alter the ways in which many people think, but in most people do nothing very harmful. But for 1% of the population, genetic factors, life experiences and other influences can culminate in problems, and a diagnosis of mental illness.

“Often, when people are creating something new, they end up straddling between sanity and insanity,” said Stefansson. “I think these results support the old concept of the mad genius. Creativity is a quality that has given us Mozart, Bach, Van Gogh. It’s a quality that is very important for our society. But it comes at a risk to the individual, and 1% of the population pays the price for it.”

Stefansson concedes that his study found only a weak link between the genetic variants for mental illness and creativity. And it is this that other scientists pick up on. The genetic factors that raise the risk of mental problems explained only about 0.25% of the variation in people’s artistic ability, the study found. David Cutler, a geneticist at Emory University in Atlanta, puts that number in perspective: “If the distance between me, the least artistic person you are going to meet, and an actual artist is one mile, these variants appear to collectively explain 13 feet of the distance,” he said.
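[Cutler’s arithmetic checks out: a mile is 5,280 feet, and $0.0025\times5280\approx 13.2$ feet.]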

Most of the artist’s creative flair, then, is down to different genetic factors, or to other influences altogether, such as life experiences, that set them on their creative journey.

For Stefansson, even a small overlap between the biology of mental illness and creativity is fascinating. “It means that a lot of the good things we get in life, through creativity, come at a price. It tells me that when it comes to our biology, we have to understand that everything is in some way good and in some way bad,” he said.

Read the entire article here.

Image: Vincent van Gogh, self-portrait, 1889. Courtesy of Courtauld Institute Galleries, London. Wikipaintings.org. Public Domain.

Monsters of Our Own Making

For parents: a few brief tips on how to deal with young adult children — that most pampered of generations. Tip number 1: turn off junior’s access to the family Netflix account.

From WSJ:

Congratulations. Two months ago, your kid graduated from college, bravely finishing his degree rather than dropping out to make millions on his idea for a dating app for people who throw up during CrossFit training. If he’s like a great many of his peers, he’s moved back home, where he’s figuring out how to become an adult in the same room that still has his orthodontic headgear strapped to an Iron Man helmet.

Now we’re deep into summer, and the logistical challenges of your grad really being home are sinking in. You’re constantly juggling cars, cleaning more dishes and dealing with your daughter’s boyfriend, who not only slept over but also drank your last can of Pure Protein Frosty Chocolate shake.

But the real challenge here is a problem of your own making. You see, these children are members of the Most-Loved Generation: They’ve grown up with their lives stage-managed by us, their college-acceptance-obsessed parents. Remember when Eva, at age 7, was obsessed with gymnastics…for exactly 10 months, which is why the TV in your guest room sits on top of a $2,500 pommel horse?

Now that they’re out of college, you realize what wasn’t included in that $240,000 education: classes in life skills and decision-making.

With your kid at home, you find that he’s incapable of making a single choice on his own. Like when you’re working and he interrupts to ask how many blades is the best number for a multi-blade razor. Or when you’ve just crawled into bed and hear the familiar refrain of, “Mom, what can we eat?” All those years being your kid’s concierge and coach have created a monster.

So the time has come for you to cut the cord. And by that I mean: Take your kid off your Netflix account. He will be confused and upset at first, not understanding why this is happening to him, but it’s a great opportunity for him to sign up for something all by himself.

Which brings us to money. It’s finally time to channel your Angela Merkel and get tough with your young Alexis Tsipras. Put him on a consistent allowance and make him pay the extra fees incurred when he uses the ATM at the weird little deli rather than the one at his bank, a half-block away.

Next, nudge your kid to read books about self-motivation. Begin with baby steps: Don’t just hand her “Lean In” and “I Am Malala.” Your daughter’s great, but she’s no Malala. And the only thing she’s leaning in to is a bag of kettle corn while binge-watching “Orange Is the New Black.”

Instead, over dinner, casually drop a few pearls of wisdom from “Coach Wooden’s Pyramid of Success,” such as, “Make each day your masterpiece.” Let your kid decide whether getting a high score on her “Panda Pop Bubble Shooter” iPhone game qualifies. Then hope that John Wooden has piqued her curiosity and leave his book out with a packet of Sour Patch Xploderz on top. With luck, she’ll take the bait (candy and book).

Now it’s time to work on your kid’s inability to make a decision, which, let’s be honest, you’ve instilled over the years by jumping to answer all of her texts, even that time you were at the opera. “But,” you object, “it could have been an emergency!” It wasn’t. She couldn’t remember whether she liked Dijon mustard or mayo on her turkey wrap.

Set up some outings that nurture independence. Send your kid to the grocery store with orders to buy a week of dinner supplies. She’ll ask a hundred questions about what to get, but just respond with, “Whatever looks good to you” or, “Have fun with it.” She will look at you with panic, but don’t lose your resolve. Send her out and turn your phone off to avoid a barrage of texts, such as, “They’re out of bacterial wipes to clean off the shopping cart handle. What should I do?”

Rest assured, in a couple of hours, she’ll return with “dinner”—frozen waffles and a bag of Skinny Pop popcorn. Tough it out and serve it for dinner: The name of the game is positive reinforcement.

Once she’s back you’ll inevitably get hit with more questions, like, “It’s not lost, but how expensive is that remote key for the car?” Take a deep breath and just say, “Um, I’m not sure. Why don’t you Google it?”

Read the entire story here.

The Literal Word

[Image: Sarah offering Hagar to Abraham.]

I’ve been following the recent story of a county clerk in Kentucky who is refusing to grant marriage licenses to same-sex couples. The clerk cites her profound Christian beliefs for contravening the new law of the land. I’m reminded that most people who ardently follow a faith, as prescribed by the literal word of their God, tend to interpret, cherry-pick and obey what they wish. And those same individuals will fervently ignore many of the less palatable demands from their God. So let’s review a few biblical pronouncements, lest we forget what all believers in the Christian Bible should be doing.

From the Independent:

Social conservatives who object to marriage licenses for gay couples claim to defend “Christian marriage,” meaning one man paired with one woman for life, which they say is prescribed by God in the Bible.

But in fact, Bible writers give the divine thumbs-up to many kinds of sexual union or marriage. They also use several literary devices to signal God’s approval for one or another sexual liaison: The law or a prophet might prescribe it, Jesus might endorse it, or God might reward it with the greatest of all blessings: boy babies who go on to become powerful men.

While the approved list does include one man coupled with one woman, the Bible explicitly endorses polygamy and sexual slavery, providing detailed regulations for each; and at times it also rewards rape and incest.

Polygamy. Polygamy is the norm in the Old Testament and accepted without reproof by Jesus (Matthew 22:23-32). Biblicalpolygamy.com contains pages dedicated to 40 biblical figures, each of whom had multiple wives.

Sex slaves. The Bible provides instructions on how to acquire several types of sex slaves. For example, if a man buys a Hebrew girl and “she please not her master” he can’t sell her to a foreigner; and he must allow her to go free if he doesn’t provide for her (Exodus 21:8).

War booty. Virgin females are counted, literally, among the booty of war. In the book of Numbers (31:18) God’s servant commands the Israelites to kill all of the used Midianite women along with all boy children, but to keep the virgin girls for themselves. The Law of Moses spells out a ritual to purify a captive virgin before sex. (Deuteronomy 21:10-14).

Incest. Incest is mostly forbidden in the Bible, but God makes exceptions. Abraham and Sarah, much favoured by God, are said to be half-siblings. Lot’s daughters get him drunk and mount him, and God rewards them with male babies who become patriarchs of great nations (Genesis 19).

Brother’s widow. If a brother dies with no children, it becomes a man’s duty to impregnate the brother’s widow. Onan is struck dead by God because he prefers to spill his seed on the ground rather than providing offspring for his brother (Genesis 38:8-10). A New Testament story (Matthew 22:24-28) shows that the tradition has survived.

Wife’s handmaid. After seven childless decades, Abraham’s frustrated wife Sarah says, “Go, sleep with my slave; perhaps I can build a family through her.”  Her slave, Hagar, becomes pregnant. Two generations later, the sister-wives of Jacob repeatedly send their slaves to him, each trying to produce more sons than the other (Genesis 30:1-22).

Read the entire story here.

Image: Biblical engraving: Sarah Offering Hagar to Her Husband, Abraham, c1897. Courtesy of Wikipedia.

The Post-Capitalism Dream

[Image: “Pyramid of Capitalist System” poster.]

I’m not sure that I fully agree with the premises and conclusions that author Paul Mason outlines in his essay below, excerpted from his new book, Postcapitalism (published on 30 July 2015). However, I’d like to believe that we could all very soon thrive in a much more equitable and socially just society. While the sharing economy has gone some way towards democratizing work, Mason points out other, and growing, areas of society that are marching to the beat of a different, non-capitalist drum: volunteerism, alternative currencies, cooperatives, the gig economy, self-managed spaces, social sharing, time banks. This is all good.

It will undoubtedly take generations for society to grapple with the consequences of these shifts and, more importantly, with the ongoing and accelerating upheaval wrought by ubiquitous automation. Meanwhile, the vested interests — the capitalist heads of state, the oligarchs, the monopolists, the aging plutocrats and their assorted (political) sycophants — will most certainly fight to the very bitter end to maintain an iron grip on the invisible hand of the market.

From the Guardian:

The red flags and marching songs of Syriza during the Greek crisis, plus the expectation that the banks would be nationalised, revived briefly a 20th-century dream: the forced destruction of the market from above. For much of the 20th century this was how the left conceived the first stage of an economy beyond capitalism. The force would be applied by the working class, either at the ballot box or on the barricades. The lever would be the state. The opportunity would come through frequent episodes of economic collapse.

Instead over the past 25 years it has been the left’s project that has collapsed. The market destroyed the plan; individualism replaced collectivism and solidarity; the hugely expanded workforce of the world looks like a “proletariat”, but no longer thinks or behaves as it once did.

If you lived through all this, and disliked capitalism, it was traumatic. But in the process technology has created a new route out, which the remnants of the old left – and all other forces influenced by it – have either to embrace or die. Capitalism, it turns out, will not be abolished by forced-march techniques. It will be abolished by creating something more dynamic that exists, at first, almost unseen within the old system, but which will break through, reshaping the economy around new values and behaviours. I call this postcapitalism.

As with the end of feudalism 500 years ago, capitalism’s replacement by postcapitalism will be accelerated by external shocks and shaped by the emergence of a new kind of human being. And it has started.

Postcapitalism is possible because of three major changes information technology has brought about in the past 25 years. First, it has reduced the need for work, blurred the edges between work and free time and loosened the relationship between work and wages. The coming wave of automation, currently stalled because our social infrastructure cannot bear the consequences, will hugely diminish the amount of work needed – not just to subsist but to provide a decent life for all.

Second, information is corroding the market’s ability to form prices correctly. That is because markets are based on scarcity while information is abundant. The system’s defence mechanism is to form monopolies – the giant tech companies – on a scale not seen in the past 200 years, yet they cannot last. By building business models and share valuations based on the capture and privatisation of all socially produced information, such firms are constructing a fragile corporate edifice at odds with the most basic need of humanity, which is to use ideas freely.

Third, we’re seeing the spontaneous rise of collaborative production: goods, services and organisations are appearing that no longer respond to the dictates of the market and the managerial hierarchy. The biggest information product in the world – Wikipedia – is made by volunteers for free, abolishing the encyclopedia business and depriving the advertising industry of an estimated $3bn a year in revenue.

Almost unnoticed, in the niches and hollows of the market system, whole swaths of economic life are beginning to move to a different rhythm. Parallel currencies, time banks, cooperatives and self-managed spaces have proliferated, barely noticed by the economics profession, and often as a direct result of the shattering of the old structures in the post-2008 crisis.

You only find this new economy if you look hard for it. In Greece, when a grassroots NGO mapped the country’s food co-ops, alternative producers, parallel currencies and local exchange systems they found more than 70 substantive projects and hundreds of smaller initiatives ranging from squats to carpools to free kindergartens. To mainstream economics such things seem barely to qualify as economic activity – but that’s the point. They exist because they trade, however haltingly and inefficiently, in the currency of postcapitalism: free time, networked activity and free stuff. It seems a meagre and unofficial and even dangerous thing from which to craft an entire alternative to a global system, but so did money and credit in the age of Edward III.

New forms of ownership, new forms of lending, new legal contracts: a whole business subculture has emerged over the past 10 years, which the media has dubbed the “sharing economy”. Buzzwords such as the “commons” and “peer-production” are thrown around, but few have bothered to ask what this development means for capitalism itself.

I believe it offers an escape route – but only if these micro-level projects are nurtured, promoted and protected by a fundamental change in what governments do. And this must be driven by a change in our thinking – about technology, ownership and work. So that, when we create the elements of the new system, we can say to ourselves, and to others: “This is no longer simply my survival mechanism, my bolt hole from the neoliberal world; this is a new way of living in the process of formation.”

The power of imagination will become critical. In an information society, no thought, debate or dream is wasted – whether conceived in a tent camp, prison cell or the table football space of a startup company.

As with virtual manufacturing, in the transition to postcapitalism the work done at the design stage can reduce mistakes in the implementation stage. And the design of the postcapitalist world, as with software, can be modular. Different people can work on it in different places, at different speeds, with relative autonomy from each other. If I could summon one thing into existence for free it would be a global institution that modelled capitalism correctly: an open source model of the whole economy; official, grey and black. Every experiment run through it would enrich it; it would be open source and with as many datapoints as the most complex climate models.

The main contradiction today is between the possibility of free, abundant goods and information; and a system of monopolies, banks and governments trying to keep things private, scarce and commercial. Everything comes down to the struggle between the network and the hierarchy: between old forms of society moulded around capitalism and new forms of society that prefigure what comes next.

Is it utopian to believe we’re on the verge of an evolution beyond capitalism? We live in a world in which gay men and women can marry, and in which contraception has, within the space of 50 years, made the average working-class woman freer than the craziest libertine of the Bloomsbury era. Why do we, then, find it so hard to imagine economic freedom?

It is the elites – cut off in their dark-limo world – whose project looks as forlorn as that of the millennial sects of the 19th century. The democracy of riot squads, corrupt politicians, magnate-controlled newspapers and the surveillance state looks as phoney and fragile as East Germany did 30 years ago.

All readings of human history have to allow for the possibility of a negative outcome. It haunts us in the zombie movie, the disaster movie, in the post-apocalyptic wasteland of films such as The Road or Elysium. But why should we not form a picture of the ideal life, built out of abundant information, non-hierarchical work and the dissociation of work from wages?

Millions of people are beginning to realise they have been sold a dream at odds with what reality can deliver. Their response is anger – and retreat towards national forms of capitalism that can only tear the world apart. Watching these emerge, from the pro-Grexit left factions in Syriza to the Front National and the isolationism of the American right, has been like watching the nightmares we had during the Lehman Brothers crisis come true.

We need more than just a bunch of utopian dreams and small-scale horizontal projects. We need a project based on reason, evidence and testable designs, that cuts with the grain of history and is sustainable by the planet. And we need to get on with it.

Read the excerpt here.

Image: The Industrial Workers of the World poster “Pyramid of Capitalist System” (1911). Courtesy of Wikipedia. Public Domain.

Cause and Effect

One of the most fundamental tenets of our macroscopic world is the notion that an effect has a cause. Throw a pebble (cause) into a still pond and the ripples (effect) will be visible for all to see. Down at the microscopic level, though, physicists have determined through their mathematical convolutions that no such directionality exists — nothing in the laws of physics precludes them from running in reverse. Yet we never witness the ripples in a pond converging, ejecting a pebble that then finds its way back into a thrower’s hand.

Of course, this quandary has kept many a philosopher’s pencil well sharpened while physicists continue to scratch their heads. So is cause and effect merely a coincidental illusion? Or does our physics operate in only one direction, governed by a yet-to-be-discovered fundamental law?

Philosopher Mathias Frisch, author of Causal Reasoning in Physics, offers a great summary of current thinking, but no fundamental breakthrough.

From Aeon:

Do early childhood vaccinations cause autism, as the American model Jenny McCarthy maintains? Are human carbon emissions at the root of global warming? Come to that, if I flick this switch, will it make the light on the porch come on? Presumably I don’t need to persuade you that these would be incredibly useful things to know.

Since anthropogenic greenhouse gas emissions do cause climate change, cutting our emissions would make a difference to future warming. By contrast, autism cannot be prevented by leaving children unvaccinated. Now, there’s a subtlety here. For our judgments to be much use to us, we have to distinguish between causal relations and mere correlations. From 1999 to 2009, the number of people in the US who fell into a swimming pool and drowned varies with the number of films in which Nicolas Cage appeared – but it seems unlikely that we could reduce the number of pool drownings by keeping Cage off the screen, desirable as the remedy might be for other reasons.
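(An aside: it takes only a few lines to see how persuasive a mere correlation can look. This Python sketch computes the Pearson correlation between two invented annual series; the numbers are made up for illustration, not the real film or drowning figures.)

```python
import numpy as np

# Two invented annual series (illustrative only, not real data):
# films released per year, and pool drownings per year.
films = np.array([2, 2, 2, 3, 1, 1, 2, 3, 4, 1, 4])
drownings = np.array([109, 102, 102, 98, 85, 95, 96, 98, 123, 94, 102])

# Pearson correlation coefficient: how tightly the two series co-vary.
r = np.corrcoef(films, drownings)[0, 1]
print(f"correlation = {r:.2f}")  # impressive-looking, causally meaningless
```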

In short, a working knowledge of the way in which causes and effects relate to one another seems indispensable to our ability to make our way in the world. Yet there is a long and venerable tradition in philosophy, dating back at least to David Hume in the 18th century, that finds the notion of causality to be dubious. And that might be putting it kindly.

Hume argued that when we seek causal relations, we can never discover the real power; the, as it were, metaphysical glue that binds events together. All we are able to see are regularities – the ‘constant conjunction’ of certain sorts of observation. He concluded from this that any talk of causal powers is illegitimate. Which is not to say that he was ignorant of the central importance of causal reasoning; indeed, he said that it was only by means of such inferences that we can ‘go beyond the evidence of our memory and senses’. Causal reasoning was somehow both indispensable and illegitimate. We appear to have a dilemma.

Hume’s remedy for such metaphysical quandaries was arguably quite sensible, as far as it went: have a good meal, play backgammon with friends, and try to put it out of your mind. But in the late 19th and 20th centuries, his causal anxieties were reinforced by another problem, arguably harder to ignore. According to this new line of thought, causal notions seemed peculiarly out of place in our most fundamental science – physics.

There were two reasons for this. First, causes seemed too vague for a mathematically precise science. If you can’t observe them, how can you measure them? If you can’t measure them, how can you put them in your equations? Second, causality has a definite direction in time: causes have to happen before their effects. Yet the basic laws of physics (as distinct from such higher-level statistical generalisations as the laws of thermodynamics) appear to be time-symmetric: if a certain process is allowed under the basic laws of physics, a video of the same process played backwards will also depict a process that is allowed by the laws.

The 20th-century English philosopher Bertrand Russell concluded from these considerations that, since cause and effect play no fundamental role in physics, they should be removed from the philosophical vocabulary altogether. ‘The law of causality,’ he said with a flourish, ‘like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed not to do harm.’

Neo-Russellians in the 21st century express their rejection of causes with no less rhetorical vigour. The philosopher of science John Earman of the University of Pittsburgh maintains that the wooliness of causal notions makes them inappropriate for physics: ‘A putative fundamental law of physics must be stated as a mathematical relation without the use of escape clauses or words that require a PhD in philosophy to apply (and two other PhDs to referee the application, and a third referee to break the tie of the inevitable disagreement of the first two).’

This is all very puzzling. Is it OK to think in terms of causes or not? If so, why, given the apparent hostility to causes in the underlying laws? And if not, why does it seem to work so well?

A clearer look at the physics might help us to find our way. Even though (most of) the basic laws are symmetrical in time, there are many arguably non-thermodynamic physical phenomena that can happen only one way. Imagine a stone thrown into a still pond: after the stone breaks the surface, waves spread concentrically from the point of impact. A common enough sight.

Now, imagine a video clip of the spreading waves played backwards. What we would see are concentrically converging waves. For some reason this second process, which is the time-reverse of the first, does not seem to occur in nature. The process of waves spreading from a source looks irreversible. And yet the underlying physical law describing the behaviour of waves – the wave equation – is as time-symmetric as any law in physics. It allows for both diverging and converging waves. So, given that the physical laws equally allow phenomena of both types, why do we frequently observe organised waves diverging from a source but never coherently converging waves?
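[The time-symmetry at issue is easy to verify: the wave equation is second order in time, so substituting $t\to -t$ leaves it unchanged,

$$\frac{\partial^{2}\phi}{\partial t^{2}} = c^{2}\nabla^{2}\phi \;\longrightarrow\; \frac{\partial^{2}\phi}{\partial(-t)^{2}} = \frac{\partial^{2}\phi}{\partial t^{2}} = c^{2}\nabla^{2}\phi,$$

which is why diverging and converging waves are equally legal solutions.]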

Physicists and philosophers disagree on the correct answer to this question – which might be fine if it applied only to stones in ponds. But the problem also crops up with electromagnetic waves and the emission of light or radio waves: anywhere, in fact, that we find radiating waves. What to say about it?

On the one hand, many physicists (and some philosophers) invoke a causal principle to explain the asymmetry. Consider an antenna transmitting a radio signal. Since the source causes the signal, and since causes precede their effects, the radio waves diverge from the antenna after it is switched on simply because they are the repercussions of an initial disturbance, namely the switching on of the antenna. Imagine the time-reverse process: a radio wave steadily collapses into an antenna before the latter has been turned on. On the face of it, this conflicts with the idea of causality, because the wave would be present before its cause (the antenna) had done anything. David Griffiths, Emeritus Professor of Physics at Reed College in Oregon and the author of a widely used textbook on classical electrodynamics, favours this explanation, going so far as to call a time-asymmetric principle of causality ‘the most sacred tenet in all of physics’.

On the other hand, some physicists (and many philosophers) reject appeals to causal notions and maintain that the asymmetry ought to be explained statistically. The reason why we find coherently diverging waves but never coherently converging ones, they maintain, is not that wave sources cause waves, but that a converging wave would require the co-ordinated behaviour of ‘wavelets’ coming in from multiple different directions of space – delicately co-ordinated behaviour so improbable that it would strike us as nearly miraculous.

It so happens that this wave controversy has quite a distinguished history. In 1909, a few years before Russell’s pointed criticism of the notion of cause, Albert Einstein took part in a published debate concerning the radiation asymmetry. His opponent was the Swiss physicist Walther Ritz, a name you might not recognise.

It is in fact rather tragic that Ritz did not make larger waves in his own career, because his early reputation surpassed Einstein’s. The physicist Hermann Minkowski, who taught both Ritz and Einstein in Zurich, called Einstein a ‘lazy dog’ but had high praise for Ritz.  When the University of Zurich was looking to appoint its first professor of theoretical physics in 1909, Ritz was the top candidate for the position. According to one member of the hiring committee, he possessed ‘an exceptional talent, bordering on genius’. But he suffered from tuberculosis, and so, due to his failing health, he was passed over for the position, which went to Einstein instead. Ritz died that very year at age 31.

Months before his death, however, Ritz published a joint letter with Einstein summarising their disagreement. While Einstein thought that the irreversibility of radiation processes could be explained probabilistically, Ritz proposed what amounted to a causal explanation. He maintained that the reason for the asymmetry is that an elementary source of radiation has an influence on other sources in the future and not in the past.

This joint letter is something of a classic text, widely cited in the literature. What is less well-known is that, in the very same year, Einstein demonstrated a striking reversibility of his own. In a second published letter, he appears to take a position very close to Ritz’s – the very view he had dismissed just months earlier. According to the wave theory of light, Einstein now asserted, a wave source ‘produces a spherical wave that propagates outward. The inverse process does not exist as elementary process’. The only way in which converging waves can be produced, Einstein claimed, was by combining a very large number of coherently operating sources. He appears to have changed his mind.

Given Einstein’s titanic reputation, you might think that such a momentous shift would occasion a few ripples in the history of science. But I know of only one significant reference to his later statement: a letter from the philosopher Karl Popper to the journal Nature in 1956. In this letter, Popper describes the wave asymmetry in terms very similar to Einstein’s. And he also makes one particularly interesting remark, one that might help us to unpick the riddle. Coherently converging waves, Popper insisted, ‘would demand a vast number of distant coherent generators of waves the co-ordination of which, to be explicable, would have to be shown as originating from the centre’ (my italics).

This is, in fact, a particular instance of a much broader phenomenon. Consider two events that are spatially distant yet correlated with one another. If they are not related as cause and effect, they tend to be joint effects of a common cause. If, for example, two lamps in a room go out suddenly, it is unlikely that both bulbs just happened to burn out simultaneously. So we look for a common cause – perhaps a circuit breaker that tripped.

Common-cause inferences are so pervasive that it is difficult to imagine what we could know about the world beyond our immediate surroundings without them. Hume was right: judgments about causality are absolutely essential in going ‘beyond the evidence of the senses’. In his book The Direction of Time (1956), the philosopher Hans Reichenbach formulated a principle underlying such inferences: ‘If an improbable coincidence has occurred, there must exist a common cause.’ To the extent that we are bound to apply Reichenbach’s rule, we are all like the hard-boiled detective who doesn’t believe in coincidences.
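[Reichenbach’s principle has a tidy formal core: if $A$ and $B$ are correlated, $P(A\wedge B) > P(A)\,P(B)$, and neither causes the other, there should exist a common cause $C$ that “screens off” the correlation:

$$P(A\wedge B\mid C) = P(A\mid C)\,P(B\mid C).$$

Conditional on the tripped circuit breaker, the two dark lamps tell us nothing further about each other.]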

Read the entire article here.

Dismaland

[Image: Dismaland.]

A dreary, sardonic, anti-establishment theme park could only happen in the UK. Let’s face it, the corporate optimists running the US would never allow such a pessimistic and apocalyptic vision to unfold in the land of Disney and Nickelodeon.

Thus, residents of the UK are the sole, fortunate recipients of a sarcastic visual nightmare curated by Banksy and a posse of fellow pop-culture-skewering artists. Dismaland — a Bemusement Park — is hosted in the appropriately grey seafront venue of Weston-super-Mare. But grab your tickets soon: the un-theme park is open only from August 22 to September 27, 2015.

Visit Dismaland online, here.

Image courtesy of Google Search.

Psychic Media Watch

Watching the media is one of my favorite amateur pursuits. It's a continuous source of paradox, infotainment, hypocrisy, truthiness (Stephen Colbert, 2005), loud-mouthery (me, 2015) and, hence, enjoyment. So, when two opposing headlines collide mid-way across the Atlantic, it's hard for me to resist highlighting the dissonance. I snapped both these stories on the same day, August 28, 2015. The headlines read:

New York Times:

Psychic-news-28Aug2015-NYT

Apparently, fortunetelling is "a scam", according to convicted New York psychic Celia Mitchell.

The Independent:

Psychic-news-28Aug2015-Independent

Yet, in the UK, the College of Policing recommends using psychics to find missing persons.

Enjoy.

Bang Bang, You’re Dead. The Next Great Reality TV Show

Google-search-reality-tv

Aside from my disbelief that America can let the pathetic and harrowing violence from guns continue, the latest shocking episode in Virginia raises another disturbing thought. And, Jonathan Jones has captured it quite aptly. Are we increasingly internalizing real world violence as a vivid but trivial game? Despite trails of murder victims and untold trauma to families and friends, the rest of us are lulled into dream-like detachment. The violence is just like a video game, right? The violence is played out as a reality TV show, right? And we know both are just fiction — it’s not news, it’s titillating, voyeuristic entertainment. So, there is no need for us to do anything. Let’s just all sit back and wait for the next innovative installment in America’s murderous screenplay. Bang bang, you’re dead! The show must go on.

Or, you could do something different, however small, and I don’t mean recite your go-to prayer or converge around a candle!

From Jonathan Jones over at the Guardian:

Vester Flanagan’s video of his own murderous shooting of Alison Parker and Adam Ward shows a brutal double killing from the shooter’s point of view. While such a sick stunt echoes the horror film Peeping Tom by British director Michael Powell, in which a cameraman films his murders, this is not fiction. It is reality – or the closest modern life gets to reality.

I agree with those who say such excreta of violence should not be shared on social media, let alone screened by television stations or hosted by news websites. But like everything else that simply should not happen, the broadcasting and circulation of this monstrous video has happened. It is one more step in the destruction of boundaries that seems a relentless rush of our time. Nothing is sacred. Not even the very last moments of Alison Parker as we see, from Flanagan’s point of view, Flanagan’s gun pointing at her.

Like the giant gun Alfred Hitchcock used to create a disturbing point of view shot in Spellbound, the weapon dominates the sequence I have seen (I have no intention of seeking out the other scenes). The gun is in Flanagan's hand and it gives him power. It is held there, shown to the camera, like a child's proud toy or an exposed dick in his hand – it is obscene because you can see that it is so important to him, that it is supposed to be some kind of answer, revenger or – as gun fans like to nickname America's most famous gun the Colt 45 – "the Equaliser". The way Flanagan focuses on his gun reveals the madness of America's gun laws because it shows the infantile and pathetic relationship the killer appears to have with his weapon. How can it make sense to give guns so readily to troubled individuals?

What did the killer expect viewers to get from watching his video? The horrible conclusion has to be that he expected empathy. Surely, that is not possible. The person who you care about when seeing this is unambiguously his victim. This is, viewed with any humanity at all, a harrowing view of the evil of killing another person. I watched it once. I can’t look again at Alison Parker’s realization of her plight.

The sense that we somehow have a right to see this, the decision of many media outlets to screen it, has a lot to do with the television trappings of this crime. Because part of the attack was seen and heard live on air, because the victims and the perpetrator all worked for the same TV station, there’s something stagey about it all. Sadly people so enjoy true life crime stories and this one has a hokey TV setting that recalls many fictional plots of films and TV programs.

It exposes the paradox of ‘reality television’ – that people on television are not real to the audience at all. The death of a presenter is therefore something that can be replayed on screens with impunity. To see how bizarre and improper this is, imagine if anyone broadcast or hosted a serial killer’s videos of graphic murders. How is viewing this better?

But there is still another level of unreality. The view of that gun pointing at Parker resembles video games like Call of Duty that similarly show your gun pointing at virtual enemies. Is this more than a coincidence? It is complicated by the fact that Flanagan had worked in television. His experience of cameras was not just virtual. So his act of videoing his crime would seem to be another crass, mad way of getting “revenge” on former colleagues. But the resemblance to video games is nevertheless eerie. It adds to the depressing conclusion that we may see more images taken by killers, more dead-eyed recordings of inhuman acts. For video games do create fantasy worlds in which pointing a gun is such a light thing to do.

In this film from the abyss the gun is used as if it was a game. Pointed at real people with the ease of manipulating a joystick. And bang bang, they are dead.

Read the entire article here.

Image courtesy of Google Search.

The Tragedy. The Reaction

gun-violence-reaction

Another day, another dark and twisted murder in the United States facilitated by the simple convenience of a gun. The violence and horror seem to become more incredible each time: murder in restaurants, murder at the movie theater, murder on the highway, murder in the convenience store, murder at work, murder in a place of worship, and now murder on-air, live and staged via social media.

But, as I’ve mentioned before the real tragedy is the inaction of the people. Oh apologies, there is a modicum of action, but it is inconsequential, with apologies to the victims’ families. After each mass shooting — we don’t hear much about individual murder anymore (far too common) — the pattern is lamentably predictable: tears and grief; headlines of disbelief and horror; mass soul-searching (lasting several minutes at most); prayer and words, often spoken by a community or national leader; tributes to the victims and sympathy for the families and friends; candlelight vigils, balloons, flowers and cards at the crime scene. It’s all so sad and pathetic. Another day, another mass murder. Repeat the inaction.

Until individuals, neighbors and communities actually take real action to curb gun violence these sad tragedies and empty gestures will continue to loop endlessly.

Image courtesy of Google Search.

HR and the Evil Omnipotence of the Passive Construction

Next time you browse through your company's compensation or business expense policies, or for that matter, anything written by the human resources (HR) department, cast your mind to George Orwell. In one of his critical essays, Politics and the English Language, Orwell makes a clear case for the connection between linguistic obfuscation and political power. While Orwell's obsession was with the political machine, you could just as well apply his reasoning to the mangled literary machinations of every corporate HR department.

Oh, the pen is indeed mightier than the sword, especially when it is used to construct obtuse passive sentences without a subject — perfect for a rulebook that all citizens must follow and that no one can challenge.

From the Guardian:

In our age there is no such thing as ‘keeping out of human resources’. All issues are human resource issues, and human resources itself is a mass of lies, evasions, folly, hatred and schizophrenia.

OK, that’s not exactly what Orwell wrote. The hair-splitters among you will moan that I’ve taken the word “politics” out of the above and replaced it with “human resources”. Sorry.

But I think there’s no denying that had he been alive today, Orwell – the great opponent and satirist of totalitarianism – would have deplored the bureaucratic repression of HR. He would have hated their blind loyalty to power, their unquestioning faithfulness to process, their abhorrence of anything or anyone deviating from the mean.

In particular, Orwell would have utterly despised the language that HR people use. In his excellent essay Politics and the English Language (where he began the thought that ended with Newspeak), Orwell railed against the language crimes committed by politicians.

In our time, political speech and writing are largely the defence of the indefensible … Thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenceless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification. Millions of peasants are robbed of their farms and sent trudging along the roads with no more than they can carry: this is called transfer of population or rectification of frontiers. People are imprisoned for years without trial, or shot in the back of the neck or sent to die of scurvy in Arctic lumber camps: this is called elimination of unreliable elements.

Repeat the politics/human resources switch in the above and the argument remains broadly the same. Yes, HR is not explaining away murders, but it nonetheless deliberately misuses language as a sort of low-tech mind control to avert our eyes from office atrocities and keep us fixed on our inboxes. Thus mass sackings are wrapped up in cowardly sophistry and called rightsizings, individuals are offboarded to the jobcentre and the few hardy souls left are consoled by their membership of a more streamlined organisation.

Orwell would have despised the passive constructions that are the HR department’s default setting. Want some flexibility in your contract? HR says company policy is unable to support that. Forgotten to accede to some arbitrary and impractical office rule? HR says we are minded to ask everyone to remember that it is essential to comply by rule X. Try to question whether an ill-judged commitment could be reversed? HR apologises meekly that the decision has been made.

Not giving subjects to any of these responses is a deliberate ploy. Subjects give ownership. They imbue accountability. Not giving sentences subjects means that HR is passing the buck, but to no one in particular. And with no subject, no one can be blamed, or protested against.

The passive construction is also designed to give the sense that it’s not HR speaking, but that they are the conduit for a higher-up and incontestable power. It’s designed to be both authoritative and banal, so that we torpidly accept it, like the sovereignty of the Queen. It’s saying: “This is the way things are – deal with it because it isn’t changing.” It’s indifferent and deliberately opaque. It’s the worst kind of utopianism (the kind David Graeber targets in his recent book on “stupidity and the secret joys of bureaucracy”), where system and rule are king and hang the individual. It’s deeply, deeply oppressive.

Annual leave is perhaps an even worse example of HR’s linguistic malpractice. The phrase gives the sense that we are not sitting in the office but rather fighting some dismal war and that we should be grateful for the mercy of Field Marshal HR in allowing us a finite absence from the front line. Is it too indulgent and too frivolous to say that we are going on holiday (even if we’re just taking the day to go to Ikea)? Would it so damage our career prospects? Would the emerging markets of the world be emboldened by the decadence and complacency of saying we’re going on hols? I don’t think so, but they clearly do.

Actually, I don’t think it’s so much of a stretch to imagine Orwell himself establishing the whole HR enterprise as a sort of grim parody of Stalinism; a never-ending, ever-expanding live action art installation sequel to Animal Farm and Nineteen Eighty-Four.

Look at your office’s internal newsletter. Is it an incomprehensible black hole of sense? Is it trying to prod you into a place of content, incognisant of all the everyday hardships and irritations you endure? If your answer is yes, then I think that like me, you find it fairly easy to imagine Orwell composing these Newspeak emails from beyond the grave to make us believe that War is Peace, Freedom is Slavery and 2+2=5.

Delving deeper, the parallels become increasingly hard to ignore. Company restructures and key performance indicators make no sense in the abstract, merely serving to demotivate the workforce, sap confidence and obstruct productivity. So are they actually cleverly designed parodies of Stalin’s purges and the cult of Stakhanovism?

Read the entire story here.

Passion, Persistence and Pluto

New Horizons Pluto Flyby

Alliteration aside, this is a great story of how passion, persistence and persuasiveness can make a real impact, especially when you look at the triumphant climax of NASA's unlikely New Horizons mission to Pluto. Over 20 years in the making and fraught with budget cuts and political infighting — NASA is known for its bureaucracy — the mission reached its zenith last week. While thanks go to the many hundreds of engineers and scientists involved from its inception, the mission would not have succeeded without the vision and determination of one person — Alan Stern.

In a music track called "Over the Sea" by the 1980s (and 90s) band Information Society, there is a sample of Star Trek's Captain Kirk saying,

“In every revolution there is one man with a vision.”

How appropriate.

From Smithsonian:

On July 14 at approximately 8 a.m. Eastern time, a half-ton NASA spacecraft that has been racing across the solar system for nine and a half years will finally catch up with tiny Pluto, at three billion miles from the Sun the most distant object that anyone or anything from Earth has ever visited. Invisible to the naked eye, Pluto wasn't even discovered until 1930, and has been regarded as our solar system's oddball ever since, completely different from the rocky planets close to the Sun, Earth included, and equally unlike the outer gas giants. This quirky and mysterious little world will swing into dramatic view as the New Horizons spacecraft makes its closest approach, just 6,000 miles away, and onboard cameras snap thousands of photographs. Other instruments will gauge Pluto's topography, surface and atmospheric chemistry, temperature, magnetic field and more. New Horizons will also take a hard look at Pluto's five known moons, including Charon, the largest. It might even find other moons, and maybe a ring or two.

It was barely 20 years ago when scientists first learned that Pluto, far from alone at the edge of the solar system, was just one in a vast swarm of small frozen bodies in wide, wide orbit around the Sun, like a ring of debris left at the outskirts of a construction zone. That insight, among others, has propelled the New Horizons mission. Understand Pluto and how it fits in with those remnant bodies, scientists say, and you can better understand the formation and evolution of the solar system itself.

If all goes well, "encounter day," as the New Horizons team calls it, will be a cork-popping celebration of tremendous scientific and engineering prowess—it's no small feat to fling a collection of precision instruments through the frigid void at speeds up to 47,000 miles an hour to rendezvous nearly a decade later with an icy sphere about half as wide as the United States is broad. The day will also be a sweet vindication for the leader of the mission, Alan Stern. A 57-year-old astronomer, aeronautical engineer, would-be astronaut and self-described "rabble-rouser," Stern has spent the better part of his career fighting to get Pluto the attention he thinks it deserves. He began pushing NASA to approve a Pluto mission nearly a quarter of a century ago, then watched in frustration as the agency gave the green light to one Pluto probe after another, only to later cancel them. "It was incredibly frustrating," he says, "like watching Lucy yank the football away from Charlie Brown, over and over." Finally, Stern recruited other scientists and influential senators to join his lobbying effort, and because underdog Pluto has long been a favorite of children, proponents of the mission savvily enlisted kids to write to Congress, urging that funding for the spacecraft be approved.

New Horizons mission control is headquartered at Johns Hopkins University's Applied Physics Laboratory near Baltimore, where Stern and several dozen other Plutonians will be installed for weeks around the big July event, but I caught up with Stern late last year in Boulder at the Southwest Research Institute, where he is an associate vice president for research and development. A picture window in his impressive office looks out onto the Rockies, where he often goes to hike and unwind. Trim and athletic at 5-foot-4, he's also a runner, a sport he pursues with the exactitude of, well, a rocket scientist. He has calculated his stride rate, and says (only half-joking) that he'd be world-class if only his legs were longer. It wouldn't be an overstatement to say that he is a polarizing figure in the planetary science community; his single-minded pursuit of Pluto has annoyed some colleagues. So has his passionate defense of Pluto in the years since astronomy officials famously demoted it to a "dwarf planet," giving it the bum's rush out of the exclusive solar system club, now limited to the eight biggies.

The timing of that insult, which is how Stern and other jilted Pluto-lovers see it, could not have been more dramatic, coming in August 2006, just months after New Horizons had rocketed into space from Cape Canaveral. What makes Pluto's demotion even more painfully ironic to Stern is that some of the groundbreaking scientific discoveries that he had predicted greatly strengthened his opponents' arguments, all while opening the door to a new age of planetary science. In fact, Stern himself used the term "dwarf planet" as early as the 1990s.

The wealthy astronomer Percival Lowell, widely known for insisting there were artificial canals on Mars, first started searching for Pluto at his private observatory in Arizona in 1905. Careful study of planetary orbits had suggested that Neptune was not the only object out there exerting a gravitational tug on Uranus, and Lowell set out to find what he dubbed "Planet X." He died without success, but a young man named Clyde Tombaugh, who had a passion for astronomy though no college education, arrived at the observatory and picked up the search in 1929. After 7,000 hours staring at some 90 million star images, he caught sight of a new planet on his photographic plates in February 1930. The name Pluto, the Roman god of the underworld, was suggested by an 11-year-old British girl named Venetia Burney, who had been discussing the discovery with her grandfather. The name was unanimously adopted by the Lowell Observatory staff in part because the first two letters are Percival Lowell's initials.

Pluto's solitary nature baffled scientists for decades. Shouldn't there be other, similar objects out beyond Neptune? Why did the solar system appear to run out of material so abruptly? "It seemed just weird that the outer solar system would be so empty, while the inner solar system was filled with planets and asteroids," recalls David Jewitt, a planetary scientist at UCLA. Throughout the decades various astronomers proposed that there were smaller bodies out there, yet unseen. Comets that periodically sweep in to light up the night sky, they speculated, probably hailed from a belt or disk of debris at the solar system's outer reaches.

Stern, in a paper published in 1991 in the journal Icarus, argued not only that the belt existed, but also that it contained things as big as Pluto. They were simply too far away, and too dim, to be easily seen. His reasoning: Neptune's moon Triton is a near-twin of Pluto, and probably orbited the Sun before it was captured by Neptune's gravity. Uranus has a drastically tilted axis of rotation, probably due to a collision eons ago with a Pluto-size object. That made three Pluto-like objects at least, which suggested to Stern there had to be more. The number of planets in the solar system would someday need to be revised upward, he thought. There were probably hundreds, with the majority, including Pluto, best assigned to a subcategory of "dwarf planets."

Just a year later, the first object (other than Pluto and Charon) was discovered in that faraway region, called the Kuiper Belt after the Dutch-born astronomer Gerard Kuiper. Found by Jewitt and his colleague, Jane Luu, it's only about 100 miles across, while Pluto spans 1,430 miles. A decade later, Caltech astronomers Mike Brown and Chad Trujillo discovered an object about half the size of Pluto, large enough to be spherical, which they named Quaoar (pronounced "kwa-war" and named for the creator god in the mythology of the pre-Columbian Tongva people native to the Los Angeles basin). It was followed in quick succession by Haumea, and in 2005, Brown's group found Eris, about the same size as Pluto and also spherical.

Planetary scientists have spotted many hundreds of smaller Kuiper Belt Objects; there could be as many as ten billion that are a mile across or more. Stern will take a more accurate census of their sizes with the cameras on New Horizons. His simple idea is to map and measure Pluto's and Charon's craters, which are signs of collisions with other Kuiper Belt Objects and thus serve as a representative sample. When Pluto is closest to the Sun, frozen surface material evaporates into a temporary atmosphere, some of which escapes into space. This "escape erosion" can erase older craters, so Pluto will provide a recent census. Charon, without this erosion, will offer a record that spans cosmic history. In one leading theory, the original, much denser Kuiper Belt would have formed dozens of planets as big or bigger than Earth, but the orbital changes of Jupiter and Saturn flung most of the building blocks away before that could happen, nipping planet formation in the bud.

By the time New Horizons launched at Cape Canaveral on January 19, 2006, it had become difficult to argue that Pluto was materially different from many of its Kuiper Belt neighbors. Curiously, no strict definition of "planet" existed at the time, so some scientists argued that there should be a size cutoff, to avoid making the list of planets too long. If you called Pluto and the other relatively small bodies something else, you'd be left with a nice tidy eight planets—Mercury through Neptune. In 2000, Neil deGrasse Tyson, director of the Hayden Planetarium in New York City, had famously chosen the latter option, leaving Pluto out of a solar system exhibit.

Then, with New Horizons less than 15 percent of the way to Pluto, members of the International Astronomical Union, responsible for naming and classifying celestial objects, voted at a meeting in Prague to make that arrangement official. Pluto and the others were now to be known as dwarf planets, which, in contrast to Stern's original meaning, were not planets. They were an entirely different sort of beast. Because he discovered Eris, Caltech's Brown is sometimes blamed for the demotion. He has said he would have been fine with either outcome, but he did title his 2010 memoir How I Killed Pluto and Why It Had It Coming.

"It's embarrassing," recalls Stern, who wasn't in Prague for the vote. "It's wrong scientifically and it's wrong pedagogically." He said the same sort of things publicly at the time, in language that's unusually blunt in the world of science. Among the dumbest arguments for demoting Pluto and the others, Stern noted, was the idea that having 20 or more planets would be somehow inconvenient. Also ridiculous, he says, is the notion that a dwarf planet isn't really a planet. "Is a dwarf evergreen not an evergreen?" he asks.

Stern's barely concealed contempt for what he considers foolishness of the bureaucratic and scientific varieties hasn't always endeared him to colleagues. One astronomer I asked about Stern replied, "My mother taught me that if you can't say anything nice about someone, don't say anything." Another said, "His last name is 'Stern.' That tells you all you need to know."

DeGrasse Tyson, for his part, offers measured praise: "When it comes to everything from rousing public sentiment in support of astronomy to advocating space science missions to defending Pluto, Alan Stern is always there."

Stern also inspires less reserved admiration. "Alan is incredibly creative and incredibly energetic," says Richard Binzel, an MIT planetary scientist who has known Stern since their graduate-school days. "I don't know where he gets it."

Read the entire article here.

Image: New Horizons Principal Investigator Alan Stern of Southwest Research Institute (SwRI), Boulder, CO, celebrates with New Horizons Flight Controllers after they received confirmation from the spacecraft that it had successfully completed the flyby of Pluto, Tuesday, July 14, 2015 in the Mission Operations Center (MOC) of the Johns Hopkins University Applied Physics Laboratory (APL), Laurel, Maryland. Public domain.

The Big Breakthrough Listen

If you were a Russian billionaire with money to burn and a penchant for astronomy and physics, what would you do? Well, rather than spend it on a 1,000 ft long super-yacht, you might want to spend it on the search for extraterrestrial intelligence. That's what Yuri Milner is doing. So, hats off to him and his colleagues.

Though, I do hope any far-distant aliens have similar, or greater, sums of cash to throw at equipment to transmit a signal so that we may receive it. Also, I have to wonder what alien oligarchs spend their excess millions and billions on — and what type of monetary system they use (hopefully not Euros).

From the Guardian:

Astronomers are to embark on the most intensive search for alien life yet by listening out for potential radio signals coming from advanced civilisations far beyond the solar system.

Leading researchers have secured time on two of the world’s most powerful telescopes in the US and Australia to scan the Milky Way and neighbouring galaxies for radio emissions that betray the existence of life elsewhere. The search will be 50 times more sensitive, and cover 10 times more sky, than previous hunts for alien life.

The Green Bank Observatory in West Virginia, the largest steerable telescope on the planet, and the Parkes Observatory in New South Wales, are contracted to lead the unprecedented search that will start in January 2016. In tandem, the Lick Observatory in California will perform the most comprehensive search for optical laser transmissions beamed from other planets.

Operators have signed agreements that hand the scientists thousands of hours of telescope time per year to eavesdrop on planets that orbit the million stars closest to Earth and the 100 nearest galaxies. The telescopes will scan the centre of the Milky Way and the entire length of the galactic plane.

Launched on Monday at the Royal Society in London, with the Cambridge cosmologist Stephen Hawking, the Breakthrough Listen project has some of the world’s leading experts at the helm. Among them are Lord Martin Rees, the astronomer royal, Geoff Marcy, who has discovered more planets beyond the solar system than anyone, and the veteran US astronomer Frank Drake, a pioneer in the search for extraterrestrial intelligence (Seti).

Stephen Hawking said the effort was “critically important” and raised hopes for answering the question of whether humanity has company in the universe. “It’s time to commit to finding the answer, to search for life beyond Earth,” he said. “Mankind has a deep need to explore, to learn, to know. We also happen to be sociable creatures. It is important for us to know if we are alone in the dark.”

The project will not broadcast signals into space, because scientists on the project believe humans have more to gain from simply listening out for others. Hawking, however, warned against shouting into the cosmos, because some advanced alien civilisations might possess the same violent, aggressive and genocidal traits found among humans.

“A civilisation reading one of our messages could be billions of years ahead of us. If so they will be vastly more powerful and may not see us as any more valuable than we see bacteria,” he said.

The alien hunters are the latest scientists to benefit from the hefty bank balance of Yuri Milner, a Russian internet billionaire, who quit a PhD in physics to make his fortune. In the past five years, Milner has handed out prizes worth tens of millions of dollars to physicists, biologists and mathematicians, to raise the public profile of scientists. He is the sole funder of the $100m Breakthrough Listen project.

“It is our responsibility as human beings to use the best equipment we have to try to answer one of the biggest questions: are we alone?” Milner told the Guardian. “We cannot afford not to do this.”

Milner was named after Yuri Gagarin, who became the first person to fly in space in 1961, the year he was born.

The Green Bank and Parkes observatories are sensitive enough to pick up radio signals as strong as common aircraft radar from planets around the nearest 1,000 stars. Civilisations as far away as the centre of the Milky Way could be detected if they emit radio signals more than 10 times the power of the Arecibo planetary radar on Earth. The Lick Observatory can pick up laser signals as weak as 100W from nearby stars 25tn miles away.

Read the entire story here.
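
A back-of-envelope aside of my own (not from the article): the inverse-square law shows just how faint a 100W source is across 25 trillion miles. The sketch below assumes the source radiates isotropically; a real laser is tightly collimated, concentrating its power by many orders of magnitude, which is exactly why so weak a transmitter is detectable at all.

    import math

    # Figures quoted above: a 100 W laser, 25 trillion miles away
    # (roughly the distance to the nearest stars).
    power_w = 100.0
    distance_m = 25e12 * 1609.34  # 25tn miles in metres, about 4e16 m

    # Inverse-square law for an isotropic emitter: flux = P / (4 * pi * d^2)
    flux = power_w / (4 * math.pi * distance_m ** 2)
    print(f"Isotropic flux at Earth: {flux:.1e} W/m^2")  # ~4.9e-33 W/m^2

    # A collimated beam concentrates the same power into a tiny solid angle.
    # This gain figure is purely illustrative, not a Breakthrough Listen spec.
    beam_gain = 1e12
    print(f"With illustrative beaming: {flux * beam_gain:.1e} W/m^2")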

Pics Or It Didn’t Happen

Apparently, in this day and age of ubiquitous technology, there is no excuse for not having evidence. So, if you recently had a terrific (or terrible) meal in your (un-)favorite restaurant you must have pictures to back up your story. If you just returned from a gorgeous mountain hike you must have images for every turn on the trail. Just attended your high-school reunion? Pictures! Purchased a new mattress? Pictures! Cracked your heirloom tea service? Pictures! Mowed the lawn? Pictures! Stubbed toe? Pictures!

The pressure to record our experiences has grown in lock-step with the explosive growth in smartphones and connectivity. Collecting and sharing our memories remains a key part of our story-telling nature. But, this obsessive drive to record every minute detail of every experience, however trivial, has many of us missing the moment — behind the camera or in front of it, we are no longer in the moment.

Just as our online social networks have stirred growth in the increasingly neurotic condition known as FOMO (fear of missing out), we are now on the cusp of some new techno-enabled, acronym-friendly disorders. Let's call these FONBB — fear of not being believed, FONGELOFAMP — fear of not getting enough likes or followers as my peers, FOBIO — fear of becoming irrelevant online.

From NYT:

“Pics or it didn’t happen” is the response you get online when you share some unlikely experience or event and one of your friends, followers or stalkers calls you out for evidence. “Next thing I know, I’m bowling with Bill Murray!” Pics or it didn’t happen. “I taught my cockatoo how to rap ‘Baby Got Back’ — in pig Latin.” Pics or it didn’t happen. “Against all odds, I briefly smiled today.” Pics or it didn’t happen!

It’s a glib reply to a comrade’s boasting — coming out of Internet gaming forums to rebut boasts about high scores and awesome kills — but the fact is we like proof. Proof in the instant replay that decides the big game, the vacation pic that persuades us we were happy once, the selfie that reassures us that our face is still our own. “Pics or it didn’t happen” gained traction because in an age of bountiful technology, when everyone is armed with a camera, there is no excuse for not having evidence.

Does the phrase have what it takes to transcend its humble origins as a cruddy meme and become an aphorism in the pantheon of “A picture is worth a thousand words” and “Seeing is believing”? For clues to the longevity of “Pics,” let’s take a survey of some classic epigrams about visual authority and see how they hold up under the realities of contemporary American life.

“A picture is worth a thousand words” is a dependable workhorse, emerging from early-20th-century newspaper culture as a pitch to advertisers: Why rely on words when an illustration can accomplish so much more? It seems appropriate to test the phrase with a challenge drawn from contemporary news media. Take one of the Pulitzer Prize-winning photographs from The St. Louis Post-Dispatch’s series on Ferguson. In the darkness, a figure is captured in an instant of dynamic motion: legs braced, long hair flying wild, an extravagant plume of smoke and flames trailing from the incendiary object he is about to hurl into space. His chest is covered by an American-flag T-shirt, he holds fire in one hand and a bag of chips in the other, a living collage of the grand and the bathetic.

Headlines — like the graphics that gave birth to “A picture is worth a thousand words” — are a distillation, a shortcut to meaning. Breitbart News presented that photograph under “Rioters Throw Molotov Cocktails at Police in Ferguson — Again.” CBS St. Louis/Associated Press ran with “Protester Throws Tear-Gas Canister Back at Police While Holding Bag of Chips.” Rioter, protester, Molotov cocktail, tear-gas canister. Peace officers, hypermilitarized goons. What’s the use of a thousand words when they are Babel’s noise, the confusion of a thousand interpretations?

“Seeing is believing” was an early entry in the canon. Most sources attribute it to the Apostle Thomas’s incredulity over Jesus’ resurrection. (“Last night after you left the party, Jesus turned all the water into wine” is a classic “Pics or it didn’t happen” moment.) “Unless I see the nail marks in his hands and put my finger where the nails were, and put my hand into his side, I will not believe it.” Once Jesus shows up, Thomas concludes that seeing will suffice. A new standard of proof enters the lexicon.

Intuitive logic is not enough, though. Does “Seeing is believing” hold up when confronted by current events like, say, the killing of Eric Garner last summer by the police? The bystander’s video is over two minutes long, so dividing it into an old-fashioned 24 frames per second gives us a bounty of more than 3,000 stills. A real bonanza, atrocity-wise. But here the biblical formulation didn’t hold up: Even with the video and the medical examiner’s assessment of homicide, a grand jury declined to indict Officer Daniel Pantaleo. Time to downgrade “Seeing is believing,” too, and kick “Justice is blind” up a notch.

Can we really use one cherry-picked example to condemn a beloved idiom? Is the system rigged? Of course it is. Always, everywhere. Let’s say these expressions concerning visual evidence are not to blame for their failures, but rather subjectivity is. The problem is us. How we see things. How we see people. We can broaden our idiomatic investigations to include phrases that account for the human element, like “The eyes are the windows to the soul.” We can also change our idiomatic stressors from contemporary video to early photography. Before smartphones put a developing booth in everyone’s pocket, affordable portable cameras loosed amateur photographers upon the world. Everyday citizens could now take pictures of children in their Sunday best, gorgeous vistas of unspoiled nature and lynchings.

A hundred years ago, Americans took souvenirs of lynchings, just as we might now take a snapshot of a farewell party for a work colleague or a mimosa-heavy brunch. They were keepsakes, sent to relatives to allow them to share in the event, and sometimes made into postcards so that one could add a “Wish you were here”-type endearment. In the book “Without Sanctuary: Lynching Photography in America,” Leon F. Litwack shares an account of the 1915 lynching of Thomas Brooks in Fayette County, Tenn. “Hundreds of Kodaks clicked all morning at the scene. … People in automobiles and carriages came from miles around to view the corpse dangling at the end of the rope.” Pics or it didn’t happen. “Picture-card photographers installed a portable printing plant at the bridge and reaped a harvest in selling postcards.” Pics or it didn’t happen. “Women and children were there by the score. At a number of country schools, the day’s routine was delayed until boy and girl pupils could get back from viewing the lynched man.” Pics or it didn’t happen.

Read the entire story here.

Gadzooks, Gosh, Tarnation and the F-Bomb

Blimey! How our lexicon of foul language has evolved! Up to a few hundred years ago most swear words and oaths bore some connection to God, Jesus or other religious figures or events. But the need to display some level of dubious piety and avoid a lightning bolt from the blue led many to invent and mince a whole range of creative euphemisms. Hence, even today, we still hear words like “drat”, “gosh”, “tarnation”, “by george”, “by jove”, “heck”, “strewth”, “odsbodikins”, “gadzooks”, “doggone”.

More recently our linguistic penchant for shock and awe stems mostly from euphemistic — or not — labels for body parts and bodily functions — think: “freaking” or “shit” or “dick” and all manner of “f-words” and “c-words”. Sensitivities aside, many of us are fortunate enough to live in nations that have evolved beyond corporal or even capital punishment for uttering such blasphemous or vulgar indiscretions.

So, the next time you drop the “f-bomb” or a “dagnabbit” in public, reflect for a while and thank yourself for supporting your precious democracy over the neighboring theocracy.

From WSJ:

At street level and in popular culture, Americans are freer with profanity now than ever before—or so it might seem to judge by how often people throw around the “F-bomb” or use a certain S-word of scatological meaning as a synonym for “stuff.” Or consider the millions of fans who adore the cartoon series “South Park,” with its pint-size, raucously foul-mouthed characters.

But things might look different to an expedition of anthropologists visiting from Mars. They might conclude that Americans today are as uptight about profanity as were our 19th-century forbears in ascots and petticoats. It’s just that what we think of as “bad” words is different. To us, our ancestors’ word taboos look as bizarre as tribal rituals. But the real question is: How different from them, for better or worse, are we?

In medieval English, at a time when wars were fought in disputes over religious doctrine and authority, the chief category of profanity was, at first, invoking—that is, swearing to—the name of God, Jesus or other religious figures in heated moments, along the lines of “By God!” Even now, we describe profanity as “swearing” or as muttering “oaths.”

It might seem like a kind of obsessive piety to us now, but the culture of that day was largely oral, and swearing—making a sincere oral testament—was a key gesture of commitment. To swear by or to God lightly was considered sinful, which is the origin of the expression to take the Lord’s name in vain (translated from Biblical Hebrew for “emptily”).

The need to avoid such transgressions produced various euphemisms, many of them familiar today, such as “by Jove,” “by George,” “gosh,” “golly” and “Odsbodikins,” which started as “God’s body.” “Zounds!” was a twee shortening of “By his wounds,” as in those of Jesus. A time traveler to the 17th century would encounter variations on that theme such as “Zlids!” and “Znails!”, referring to “his” eyelids and nails.

In the 19th century, “Drat!” was a way to say “God rot.” Around the same time, darn started when people avoided saying “Eternal damnation!” by saying “Tarnation!”, which, because of the D-word hovering around, was easy to recast as “Darnation!”, from which “darn!” was a short step.

By the late 18th century, sex, excretion and the parts associated with same had come to be treated as equally profane as “swearing” in the religious sense. Such matters had always been considered bawdy topics, of course, but the space for ordinary words referring to them had been shrinking for centuries already.

Chaucer had available to him a thoroughly inoffensive word referring to the sex act, swive. An anatomy book in the 1400s could casually refer to a part of the female anatomy with what we today call the C-word. But over time, referring to these things in common conversation came to be regarded with a kind of pearl-clutching horror.

By the 1500s, as English began taking its place alongside Latin as a world language with a copious high literature, a fashion arose for using fancy Latinate terms in place of native English ones for more private matters. Thus was born a slightly antiseptic vocabulary, with words like copulate and penis. Even today modern English has no terms for such things that are neither clinical nor vulgar, along the lines of arm or foot or whistle.

The burgeoning bourgeois culture of the late 1700s, both in Great Britain and America, was especially alarmist about the “down there” aspect of things. In growing cities with stark social stratification, a new gentry developed a new linguistic self-consciousness—more English grammars were published between 1750 and 1800 than had ever appeared before that time.

In speaking of cooked fowl, “white” and “dark” meat originated as terms to avoid mention of breasts and limbs. What one does in a restroom, another euphemism of this era, is only laboriously classified as repose. Bosom and seat (for the backside) originated from the same impulse.

Passages in books of the era can be opaque to us now without an understanding of how particular people had gotten: In Dickens’s “Oliver Twist,” Giles the butler begins, “I got softly out of bed; drew on a pair of…” only to be interrupted with “Ladies present…” after which he dutifully says “…of shoes, sir.” He wanted to say trousers, but because of where pants sit on the body, well…

Or, from the gargantuan Oxford English Dictionary, published in 1884 and copious enough to take up a shelf and bend it, you would never have known in the original edition that the F-word or the C-word existed.

Such moments extend well into the early 20th century. In a number called “Shuffle Off to Buffalo” in the 1932 Broadway musical “42nd Street,” Ginger Rogers sings “He did right by little Nelly / with a shotgun at his bell-” and then interjects “tummy” instead. “Belly” was considered a rude part of the body to refer to; tummy was OK because of its association with children.

Read the entire story here.

The Pivot and the Money

Once upon a time the word “pivot” usually referred to an object’s point of rotation. Then, corporate America got its sticky hands all over it. The word even found its way into Microsoft Excel — as in Pivot Table. But, the best euphemistic example comes from one of my favorite places for invention and euphemism — Silicon Valley. In this region of the world, pivot has come to mean a complete change in business direction.

Now, let’s imagine you’re part of start-up company. At the outset, your company has a singularly great, world-changing idea. You believe it’s the best idea, since, well, the last greatest world-changing idea. It’s unique. You are totally committed. You secure funding from some big name VCs anxious to capitalize and make the next $100 billion. You and your team work countless hours on realizing your big idea — it’s your dream, your passion. Then, suddenly you realize that your idea is utterly worthless — the product looks good but nobody, absolutely nobody, will consider it, let alone buy it; in fact, a hundred other companies before you had the same great, unique idea and all failed.

What are you and your company to do? Well, you pivot.

The entrepreneurial side of me would cheer an opportunistic company for “pivoting”, abandoning that original, great idea, and seeking another. Better than packing one’s bags and enrolling in corporate serfdom, right? But, there’s another part of me that thinks this is an ethical sell-out: it’s disingenuous to the financial backers, and it shows lack of integrity. That said, the example is of course set in Silicon Valley.

From Medium:

It was about a month after graduating from Techstars that my co-founder, Lianne, and I had our “oh shit” moment.

This is a special moment for founders; it’s not when you find a fixable bug in your app, when you realize you have been poorly optimizing your conversion funnel, or when you get a “no” from an investor. An “oh shit” moment is when you realize there is something fundamentally wrong with your business.

In our case, we realized that the product that we wanted to create was irreconcilable with a viable business model. So who were we going to tell? Techstars, who just accepted us into their highly prestigious accelerator on the basis that we could make it work? Our investors, who we just closed a round with?

It turns out, our Techstars family, our friends, and the angels (literally) who invested in us became our greatest allies, supporters, and advocates as we navigated the treacherous, terrifying, uncertain, and ultimately wildly liberating waters of a pivot. So let’s start at the beginning…

In February of 2014, Lianne and I were completing our undergrad CS degrees at the University of Colorado. As we were reflecting on the past four years of school, we realized that the most valuable experiences that we had happened outside the classroom in the incredible communities that we became involved in. Being techies, we wanted to build a product which helped other students make these “serendipitous” connections around their campus — to make the most of their time in college as well. We wanted to help our friends explore their world around them.

We called it Varsity. The app was basically a replacement for the unreadable kiosks full of posters found on college campuses. Students could submit events and activities happening around their campus that others could discover and indicate they were attending. We also built in a personalization mechanism, which proactively suggested things to do around you based upon your interests.

A few months later, the MVP of Varsity and a well-practiced pitch won us the New Venture Challenge at CU, which came with a $13k award and garnered the attention of Techstars Boulder.

The next couple of months were a whirlwind of change; Lianne and I graduated, we transitioned to our first full-time job (working for ourselves), and I spent a month in Israel with my sister before she left for college in Florida. We spent a good amount of our time networking our way around Techstars — feeling a little like the high school kids at a college party — but loving it at the same time. We met some incredible people (Sue Heilbronner, Brad Berenthal, Zach Nies, and Howard Diamond, to name a few) who taught us so much about our nascent business in a very short time.

We took as many meetings as we could with whomever would talk with us, and we funneled all of our learnings into our Techstars application. Through some combination of luck, sweat, and my uncanny ability to say the right things when standing in front of a large group of people, we were accepted into Techstars.

Techstars was incredibly challenging for us. The 3-month program was equally rewarding. Lianne and I learned more about ourselves, our company, and our relationship with each other than we had in 4 years of undergraduate education together. About half-way through the program we rebranded Varsity to Native and started exploring ways to monetize the platform. The product had come a long way — we had done some incredible engineering and design work that we were happy with.

Unfortunately, the problem with Varsity was absolutely zero alignment between the product that we wanted to build and the way we would bring it to market. One option was to spend the next 3 years grinding through the 8-month sales-cycles of universities across the country, which felt challenging (in the wrong ways) and bureaucratic. Alternatively, we could monetize the student attention we garnered, which we feared would cause discordance between the content students wanted to see and the content that advertisers wanted to show them.

Soon after graduating from Techstars, someone showed us Simon Sinek’s famous TED talk about how great leaders inspire action. Sinek describes how famous brands like Apple engage their customers starting with their “why” for doing business, which takes precedence over “how” they do business, and even over “what” their business does. At Native, we knew our “why” was something about helping people discover the world around them, and we now knew that the “how” and “what” of our current business wouldn’t get us there.

So, we decided to pivot.

Around this time I grabbed coffee with my friend Fletcher Richman. I explained to him the situation and asked for his advice. He offered the perspective that startups are designed to solve problems in the most efficient way possible. Basically, startups should be created to fill voids in the market that weren’t being solved by an existing company. The main issue was we had no problem to solve.

Shit.

250k in funding, but nothing to fund? Do we give up, give the money back, and go get real jobs? Lianne and I weren’t done yet, so we went in search of problems worth solving.

Read the entire story here.

Living In And From a Box

google-search-boxes

Many of us in the West are lucky enough to live in a house or apartment. But for all intents and purposes it’s really an over-sized box. We are box dwellers. So it comes as no surprise to see our fascination with boxes accelerate over the last 10 years or so. These more recent boxes are much smaller than the ones in which we eat, relax, work and sleep, and they move around; these new boxes are the ones that deliver all we need to eat, relax, work and sleep.

Nowadays from the comfort of our own big box we can have anything delivered to us in a smaller box. [As I write this I’m sitting on my favorite armchair, which arrived from an online store, via a box]. But, this age of box-delivered convenience is very much a double-edged sword. We can now sate our cravings for almost anything, anytime and have an anonymous box-bringer deliver it to us almost instantaneously and all without any human interaction. We can now surround ourselves with foods and drinks and objects (and boxes) without ever leaving our very own box. We are becoming antisocial hermits.

From Medium:

Angel the concierge stands behind a lobby desk at a luxe apartment building in downtown San Francisco, and describes the residents of this imperial, 37-story tower. “Ubers, Squares, a few Twitters,” she says. “A lot of work-from-homers.”

And by late afternoon on a Tuesday, they’re striding into the lobby at a just-get-me-home-goddammit clip, some with laptop bags slung over their shoulders, others carrying swank leather satchels. At the same time a second, temporary population streams into the building: the app-based meal delivery people hoisting thermal carrier bags and sacks. Green means Sprig. A huge M means Munchery. Down in the basement, Amazon Prime delivery people check in packages with the porter. The Instacart groceries are plunked straight into a walk-in fridge.

This is a familiar scene. Five months ago I moved into a spartan apartment a few blocks away, where dozens of startups and thousands of tech workers live. Outside my building there’s always a phalanx of befuddled delivery guys who seem relieved when you walk out, so they can get in. Inside, the place is stuffed with the goodies they bring: Amazon Prime boxes sitting outside doors, evidence of the tangible, quotidian needs that are being serviced by the web. The humans who live there, though, I mostly never see. And even when I do, there seems to be a tacit agreement among residents to not talk to one another. I floated a few “hi’s” in the elevator when I first moved in, but in return I got the monosyllabic, no-eye-contact mumble. It was clear: Lady, this is not that kind of building.

Back in the elevator in the 37-story tower, the messengers do talk, one tells me. They end up asking each other which apps they work for: Postmates. Seamless. EAT24. GrubHub. Safeway.com. A woman hauling two Whole Foods sacks reads the concierge an apartment number off her smartphone, along with the resident’s directions: “Please deliver to my door.”

“They have a nice kitchen up there,” Angel says. The apartments rent for as much as $5,000 a month for a one-bedroom. “But so much, so much food comes in. Between 4 and 8 o’clock, they’re on fire.”

I start to walk toward home. En route, I pass an EAT24 ad on a bus stop shelter, and a little further down the street, a Dungeons & Dragons–type dude opens the locked lobby door of yet another glass-box residential building for a Sprig deliveryman:

“You’re…”

“Jonathan?”

“Sweet,” Dungeons & Dragons says, grabbing the bag of food. The door clanks behind him.

And that’s when I realized: the on-demand world isn’t about sharing at all. It’s about being served. This is an economy of shut-ins.

In 1998, Carnegie Mellon researchers warned that the internet could make us into hermits. They released a study monitoring the social behavior of 169 people making their first forays online. The web-surfers started talking less with family and friends, and grew more isolated and depressed. “We were surprised to find that what is a social technology has such anti-social consequences,” said one of the researchers at the time. “And these are the same people who, when asked, describe the Internet as a positive thing.”

We’re now deep into the bombastic buildout of the on-demand economy— with investment in the apps, platforms and services surging exponentially. Right now Americans buy nearly eight percent of all their retail goods online, though that seems a wild underestimate in the most congested, wired, time-strapped urban centers.

Many services promote themselves as life-expanding — there to free up your time so you can spend it connecting with the people you care about, not standing at the post office with strangers. Rinse’s ad shows a couple chilling at a park, their laundry being washed by someone, somewhere beyond the picture’s frame. But plenty of the delivery companies are brutally honest that, actually, they never want you to leave home at all.

GrubHub’s advertising banks on us secretly never wanting to talk to a human again: “Everything great about eating, combined with everything great about not talking to people.” DoorDash, another food delivery service, goes for the all-caps, batshit extreme:

“NEVER LEAVE HOME AGAIN.”

Katherine van Ekert isn’t a shut-in, exactly, but there are only two things she ever has to run errands for any more: trash bags and saline solution. For those, she must leave her San Francisco apartment and walk two blocks to the drug store, “so woe is my life,” she tells me. (She realizes her dry humor about #firstworldproblems may not translate, and clarifies later: “Honestly, this is all tongue in cheek. We’re not spoiled brats.”) Everything else is done by app. Her husband’s office contracts with Washio. Groceries come from Instacart. “I live on Amazon,” she says, buying everything from curry leaves to a jogging suit for her dog, complete with hoodie.

She’s so partial to these services, in fact, that she’s running one of her own: A veterinarian by trade, she’s a co-founder of VetPronto, which sends an on-call vet to your house. It’s one of a half-dozen on-demand services in the current batch at Y Combinator, the startup factory, including a marijuana delivery app called Meadow (“You laugh, but they’re going to be rich,” she says). She took a look at her current clients?—?they skew late 20s to late 30s, and work in high-paying jobs: “The kinds of people who use a lot of on demand services and hang out on Yelp a lot ?”

Basically, people a lot like herself. That’s the common wisdom: the apps are created by the urban young for the needs of urban young. The potential of delivery with a swipe of the finger is exciting for van Ekert, who grew up without such services in Sydney and recently arrived in wired San Francisco. “I’m just milking this city for all it’s worth,” she says. “I was talking to my father on Skype the other day. He asked, ‘Don’t you miss a casual stroll to the shop?’ Everything we do now is time-limited, and you do everything with intention. There’s not time to stroll anywhere.”

Suddenly, for people like van Ekert, the end of chores is here. After hours, you're free from dirty laundry and dishes. (TaskRabbit's ad rolls by me on a bus: "Buy yourself time — literally.")

So here’s the big question. What does she, or you, or any of us do with all this time we’re buying? Binge on Netflix shows? Go for a run? Van Ekert’s answer: “It’s more to dedicate more time to working.”

Read the entire article here.

Image courtesy of Google Search.

Viva Vinyl

Hotel-California-album

When I first moved to college and into a tiny dorm room (in the UK they're called halls of residence), my first purchase was a Garrard turntable and a pair of Denon stereo speakers. Books would come later. First, I had to build a new shrine to my burgeoning vinyl collection, which thrives even today.

So, after what seems like a hundred years since those heady days and countless music technology revolutions, it comes as quite a surprise — but perhaps not — to see vinyl on a resurgent path. The disruptors tried to kill LPs, 45s and 12-inchers with 8-track (ha), compact cassette (yuk), MiniDisc (yawn), CD (cool), MP3 (meh), iPod (yay) and now streaming (hmm).

But, like a kindly zombie uncle, vinyl is something the music industry cannot completely bury for good. Why did vinyl so capture the imagination and the ears of the audiophile? Well, perhaps it comes from watching the slow turn of the LP on the cool silver platter. Or, it may be the anticipation of watching the needle spiral its way to the first track. Or the raw, crackling authenticity of the sound. For me it was the weekly pilgrimage to the dusty independent record store — sampling tracks on clunky headphones; soaking up the artistry of the album cover, the lyrics, the liner notes; discussing the pros and cons of the bands with friends. Our digital world has now mostly replaced this experience, but it cannot hope to replicate it. Long live vinyl.

From ars technica:

On Thursday [July 2, 2015], Nielsen Music released its 2015 US mid-year report, finding that overall music consumption had increased by 14 percent in the first half of the year. What's driving that boom? Well, certainly a growth in streaming—on-demand streaming increased year-over-year by 92.4 percent, with more than 135 billion songs streamed, and overall sales of digital streaming increased by 23 percent.

But what may be more fascinating is the continued resurgence of the old licorice pizza—that is, vinyl LPs. Nielsen reports that vinyl LP sales are up 38 percent year-to-date. “Vinyl sales now comprise nearly 9 percent of physical album sales,” Nielsen stated.

Who’s leading the charge on all that vinyl? None other than the music industry’s favorite singer-songwriter Taylor Swift with her album 1989, which sold 33,500 LPs. Swift recently flexed her professional muscle when she wrote an open letter to Apple, criticizing the company for failing to pay artists during the free three-month trial of Apple Music. Apple quickly kowtowed to the pop star and reversed its position.

Following behind Swift on the vinyl chart is Sufjan Stevens’ Carrie & Lowell, The Arctic Monkeys’ AM (released in 2013), Alabama Shakes’ Sound & Color, and in fifth place, none other than Miles Davis’ Kind of Blue, which sold 23,200 copies in 2015.

Also interesting is that Nielsen found that digital album sales were flat compared to last year, and digital track sales were down 10.4 percent. Unsurprisingly, CD sales were down 10 percent.

When Nielsen reported in 2010 that 2.5 million vinyl records were sold in 2009, Ars noted that was more than any other year since the media-tracking business started keeping score in 1991. Fast forward five years and that number has more than doubled, as Nielsen counted 5.6 million vinyl records sold. The trend shows little sign of abating—last year, the US’ largest vinyl plant reported that it was adding 16 vinyl presses to its lineup of 30, and just this year Ars reported on a company called Qrates that lets artists solicit crowdfunding to do small-batch vinyl pressing.

Read the entire story here.

Image: Hotel California, The Eagles, album cover. Courtesy of the author.

A Patent to End All Patents

You've seen the "we'll help you file your patent application" infomercials on late-night cable. The underlying promise is simple: your unique invention will find its way into every household on Earth and consequently will thrust you into the financial stratosphere, making you the planet's first gazillionaire. Of course, this will happen only after you part with your hard-earned cash for help in filing the patent. Incidentally, preparing and filing a patent application with the US Patent and Trademark Office (USPTO) usually starts at around $10,000-$15,000.

Some patents are truly extraordinary in their optimistic silliness: wind harnessing bicycle, apparatus for simulating a high-five, flatulence deodorizer, jet-powered surfboard, thong diaper, life-size interactive bowl of soup, nicotine infused coffee, edible business cards, magnetic rings to promote immortality, and so it goes. Remember, though, this is the United States, and most crazy things are possible and profitable. So, you could well find yourself becoming addicted to those 20oz nicotine infused lattes each time you pull up at the local coffee shop on your jet-powered surfboard.

But perhaps the most thoroughly earnest and wacky recent patent filing comes from Boeing, no less. It's for a laser-powered fusion-fission jet engine. The engine uses ultra-high-powered lasers to fuse pellets of hydrogen; the fast neutrons released by the fusion then cause a uranium coating to fission, which generates heat and, in turn, electricity. All of this powering your next flight to Seattle. So, the next time you fly on a Boeing aircraft, keep in mind what some of the company's engineers have in store for you 100 or 1,000 years from now. I think I'd prefer to be disassembled and beamed up.
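For the curious, the energy chain the patent describes reduces to two textbook reactions: deuterium-tritium fusion, whose fast neutron then triggers fission in the uranium-238 coating. The energies below are standard nuclear physics values, not figures taken from the patent itself:

$$\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) \;+\; n\ (14.1\ \mathrm{MeV})$$

$$n_{\mathrm{fast}} \;+\; {}^{238}\mathrm{U} \;\rightarrow\; \mathrm{fission\ fragments} \;+\; \mathrm{more\ neutrons} \;+\; {\sim}200\ \mathrm{MeV\ of\ heat}$$

The heat from the second reaction drives the turbine that generates the electricity to fire the lasers that trigger the first. Closing that loop at break-even, never mind with thrust to spare, is exactly what nobody has yet managed.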

From ars technica:

Assume the brace position: Boeing has received a patent for, I kid you not, a laser-powered fusion-fission jet propulsion system. Boeing envisions that this system could replace both rocket and turbofan engines, powering everything from spacecraft to missiles to airplanes.

The patent, US 9,068,562, combines inertial confinement fusion, fission, and a turbine that generates electricity. It sounds completely crazy because it is. Currently, this kind of engine is completely unrealistic given our mastery of fusion, or rather our lack thereof. Perhaps in the future (the distant, distant future that is), this could be a rather ingenious solution. For now, it’s yet another patent head-scratcher.

To begin with, imagine the silhouette of a big turbofan engine, like you’d see on a commercial jetliner. Somewhere in the middle of the engine there is a fusion chamber, with a number of very strong lasers focused on a single point. A hohlraum (pellet) containing a mix of deuterium and tritium (hydrogen isotopes) is placed at this focal point. The lasers are all turned on at the same instant, creating massive pressure on the pellet, which implodes and causes the hydrogen atoms to fuse. (This is called inertial confinement fusion, as opposed to the magnetic confinement fusion that is carried out in a tokamak.)

According to the patent, the hot gases produced by the fusion are pushed out of a nozzle at the back of the engine, creating thrust—but that’s not all! One of the by-products of hydrogen fusion is lots of fast neutrons. In Boeing’s patented design, there is a shield around the fusion chamber that’s coated with a fissionable material (uranium-238 is one example given). The neutrons hit the fissionable material, causing a fission reaction that generates lots of heat.

Finally, there’s some kind of heat exchanger system that takes the heat from the fission reaction and uses that heat (via a heated liquid or gas) to drive a turbine. This turbine generates the electricity that powers the lasers. Voilà: a fusion-fission rocket engine thing.

Let’s talk a little bit about why this is such an outlandish idea. To begin with, this patented design involves placing a lump of material that’s made radioactive in an airplane engine—and these vehicles are known to sometimes crash. Today, the only way we know of efficiently harvesting radioactive decay is a giant power plant, and we cannot get inertial fusion to fire more than once in a reasonable amount of time (much less on the short timescales needed to maintain thrust). This process requires building-sized lasers, like those found at the National Ignition Facility in California. Currently, the technique only works poorly. Those two traits are not conducive to air travel.

But this is the USA we’re talking about, where patents can be issued on firewalls (“being wielded in one of most outrageous trolling campaigns we have ever seen,” according to the EFF) and universities can claim such rights on “agent-based collaborative recognition-primed decision-making” (EFF: “The patent reads a little like what might result if you ate a dictionary filled with buzzwords and drank a bottle of tequila”). As far as patented products go, it is pretty hard to imagine this one actually being built in the real world. Putting aside the difficulties of inertial confinement fusion (we’re nowhere near hitting the break-even point), it’s also a bit far-fetched to shoehorn all of these disparate and rather difficult-to-work-with technologies into a small chassis that hangs from the wing of a commercial airplane.

Read the entire story here.

The Post-Stewart Apocalypse

Jon-Stewart

Our planet continues to orbit its home star. The cosmos has yet to collapse into a galaxy-sized black hole. But, don't be fooled. The apocalypse is here. It has indeed arrived. Today is August 7, 2015, or 1 PS.

We are now one day into the PS era, that’s PS for Post-Stewart — Jon Stewart, that is. So, as we enter this uncharted period — a contemporary Dark Ages — I will mourn Jon Stewart’s passing and yet curse him for leaving The Daily Show before his projected death of natural causes in 2065.

However, I am reminded that his arch-enemy Faux News will continue to amaze and entertain those of us who search for truth in the dumbed-down, fear-mongering drivel that it pumps through our nation's cables. The channel's puppet-master and chief propagandist, Roger Ailes, had this to say of Stewart:

“He’s feeling unrewarded because Fox News beats him on the amount of money we make, on ratings and on popularity. I’m sure it’s very depressing when he sits home at night and worries about it. We never did.”

This is so wonderfully hilarious, for Mr. Ailes fails to notice that he's comparing his vast "news" media empire to a mere comedy show. I suppose I can take solace from this quote — who needs Jon Stewart when the target of his ire can do such a preeminent job of skewering itself?

Bye Jon, I hope you find several suitable Moments of Zen! But, you’re still a bastard.

Image courtesy of Google Search / The Daily Show.

The Battle of the Century

The comedic geniuses Laurel and Hardy show us what happens when aggression and revenge are channeled through slapstick and 3,000 custard pies. If only all our human conflicts could be resolved through a good custard pie fight.

More importantly, the missing second reel of their 1927 silent movie, The Battle of the Century, has been found. So, we may finally know the climax of the Stan and Ollie cult classic — and see more pie-throwing in the process. Yum.

[tube]XDgnqfepRfI[/tube]

Video: Clip from Laurel and Hardy’s silent film The Battle of the Century (1927).

Europa Here We Come

NASA-Europa

With the European Space Agency's (ESA) Philae lander firmly rooted to a comet, NASA's Dawn probe orbiting the dwarf planet Ceres, and its New Horizons spacecraft hurtling towards Pluto and Charon, it would seem that we are doing lots of extraterrestrial exploration lately. Well, this is exciting, but for armchair explorers like myself it is still not enough. So, three cheers to NASA for giving a recent thumbs-up to its next great mission — Europa Multi Flyby — to Jupiter's moon, Europa.

Development is a go! But we’ll have to wait until the mid-2020s for lift-off. And, better yet, ESA has a mission to Europa planned for launch in 2022. Can’t wait — it looks spectacular.

From ars technica:

Get ready, we’re going to Europa! NASA’s plan to send a spacecraft to explore Jupiter’s moon just passed a major hurdle. The mission, planned for the 2020s, now has NASA’s official stamp of approval and was given the green light to move from concept phase to development phase.

Formerly known as Europa Clipper, the mission will temporarily be referred to as the Europa Multi Flyby Mission until it is given an official name. The current mission plan would include 45 separate flybys around the moon while orbiting Jupiter every two weeks. “We are taking an exciting step from concept to mission in our quest to find signs of life beyond Earth,” John Grunsfeld, associate administrator for NASA’s Science Mission Directorate, said in a press release.

Since Galileo first turned a spyglass up to the skies and discovered the Jovian moon, Europa has been a world of intrigue. In the 1970s, we received our first look at Europa through the eyes of Pioneer 10 and 11, followed closely by the twin Voyager spacecraft in 1979. Their images provided the first detailed view of the Solar System's smoothest body. These photos also delivered evidence that the moon might be harboring a subsurface ocean. In the mid 1990s, the Galileo spacecraft gave us the best view to-date of Europa's surface.

“Observations of Europa have provided us with tantalizing clues over the last two decades, and the time has come to seek answers to one of humanity’s most profound questions,” Grunsfeld said. “Mainly, is there life beyond Earth?”

Sending a probe to explore Jupiter's icy companion will help scientists in the search for this life. If Europa can support microbial life, other glacial moons such as Enceladus might also.

Water, chemistry, and energy are three components essential to the presence of life. Liquid water is present throughout the Solar System, but so far the only world known to support life is Earth. Scientists think that if we follow the water, we may find evidence of life beyond Earth.

However, water alone will not support life; the right combination of ingredients is key. This mission to Europa will explore the moon’s potential habitability as opposed to outright looking for life.

When we set out to explore new worlds, we do it in phases. First we flyby, then we send robotic landers, and then we send people. This three-step process is how we, as humans, have explored the Moon and how we are partly through the process of exploring Mars.

The flyby of Europa will be a preliminary mission with four objectives: explore the ice shell and subsurface ocean; determine the composition, distribution, and chemistry of various compounds and how they relate to the ocean composition; map surface features and determine if there is current geologic activity; characterize sites to determine where a future lander might safely touch down.

Europa, at 3,100 kilometers wide (1,900 miles), is the sixth largest moon in the Solar System. It has a 15 to 30 kilometer (9 to 18 mile) thick icy outer crust that covers a salty subsurface ocean. If that ocean is in contact with Europa’s rocky mantle, a number of complex chemical reactions are possible. Scientists think that hydrothermal vents lurk on the seafloor, and, just like the vents here on Earth, they could support life.

The Galileo orbiter taught us most of what we know about Europa through 12 flybys of the icy moon. The new mission is scheduled to conduct approximately 45 flybys over a 2.5-year period, providing even more insight into the moon’s habitability.

Read the article here.

Image: Europa, Jupiter's sixth-closest moon and the sixth-largest moon in the Solar System. Courtesy of NASA.

An Eleven-Year Marathon

While 11 years is about how long my kids suggest it would take me to run a marathon, this marathon is entirely other-worldly. It has taken NASA's Opportunity rover that length of time to cover just over 26 miles. It may seem like an awfully long time to cover such a short distance, but think of all the rest stops — for incredible scientific discovery — along the way.

Check out a time-lapse that compresses Opportunity's incredible Martian journey into a mere 8 minutes.

[tube]3b1DxICZbGc[/tube]

Video courtesy of NASA / JPL.

Kodokushi. A Lonely Death

As we age, many of us tend to ponder our legacies. We wonder if we did good throughout our lives; we wonder if we'll be remembered. Then we die.

Some will pass on treasured mementos to their descendants, families and friends; others — usually the one-percenters — will cast their names on buildings, art bequests, research funds, and academic chairs. And yet others may not entrust any physical objects to their survivors, but nonetheless they'll leave behind even more significant artifacts: trails of goodwill, moral frameworks, positive behaviors and traits, sound knowledge and teachings, passion, wonder.

Some of us will die in our sleep. A few will die in accidents or at the hands of others. Many of us will die in hospitals or clinics, attached to our technologies, sometimes attended by nearest and dearest, sometimes attended only by clinicians.

Sadly, some will die alone. Paradoxically, despite our ever more technologically enabled interconnectedness, this phenomenon is on the rise, especially in aging societies with low birth rates. Japan is a striking example — to such an extent that the Japanese even have a word for it: kodokushi, or "lonely death". Sadder still, where there are kodokushi victims there are now removal companies dedicated to their cleanup.

From Roads and Kingdoms:

Three months ago in an apartment on the outskirts of Osaka, Japan, Haruki Watanabe died alone. For weeks his body slowly decomposed, slouched in its own fluids and surrounded by fetid, fortnight-old food. He died of self-neglect, solitude, and a suspected heart problem. At 60, Watanabe wasn't old, nor was he especially poor. He had no friends, no job, no wife, and no concerned children. His son hadn't spoken to him in years, nor did he want to again.

For three months no one called, no one knew, no one cared. For three months Watanabe rotted in his bedsheets, alongside pots of instant ramen and swarming cockroaches. The day that someone eventually called, he came not out of concern but out of administration. Watanabe had run out of money, and his bank had stopped paying the rent. The exasperated landlord, Toru Suzuki, had rung and rung, but no one had picked up. Sufficiently angry, he made the trip from his own home, in downtown Osaka, to the quiet suburb where his lodger lived. (Both men’s names are pseudonyms.)

First, there was the smell, a thick, noxious sweetness oozing from beneath the door frame. Second, there was the sight, the shape of a mortally slumped corpse beneath urine-soaked bedsheets. Third, there was the reality: Suzuki had come to collect his dues but had instead found his tenant’s dead body.

Disgusted, angry, but mostly shocked that this could happen to him, the landlord rang the police. The police came; they investigated with procedural dispassion and declared the death unsuspicious. This wasn’t suicide in the traditional sense, they said, but it did seem that the deceased had wanted to die. They’d seen it before, and it was an increasingly common occurrence throughout Japan: a single man dying, essentially, from loneliness.

They noted down what was required by their forms, wrapped up the body in officialdom, tied it with red tape, and removed it amid the gawps and gags of inquisitive neighbors. The police then departed for the cemetery, where, because no family member had stepped forward to claim the body, they would inter Watanabe in an unmarked grave alongside the rest of Japan's forgotten dead.

Suzuki was now left to his festering property and precarious financials. He was concerned. He didn’t know who to call or how to deal with the situation. In Japan, suicide can dramatically reduce the value of a property, and although this wasn’t suicide, his neighbors had seen enough; the gossip would spread fast. He heard whispers of kodokushi, a word bandied about since the Great Hanshin earthquake in 1995, when thousands of elderly Japanese were relocated to different residences and started dying alone, ostracized or isolated from family and friends. But what did that really mean for Suzuki, and how was he going to deal with it? Like most Japanese, he had heard of the “lonely death” but had not really believed in it; he certainly didn’t know what to do in such circumstances. So he turned to the Internet, and after hours of fruitless searching found a company called Risk-Benefit, run by a man named Toru Koremura.

With no other options he picked up the phone and gave the company a call.

With one of the fastest aging populations in the world and traditional family structures breaking down, Japan’s kodokushi phenomenon is becoming harder to ignore—not that the government and the Japanese people don’t do their best to sweep it under the carpet. Inaccurate statistics abound, with confusing definitions of what is and isn’t considered kodokushi being created in the process. According to the Ministry of Health, Labour and Welfare, there were some 3,700 “unaccompanied deaths” in Japan in 2013. However, other experts estimate the number is nearer 30,000 a year.

Scott North, a sociologist at Osaka University, argues that this extreme divergence could be the result of experts including some forms of suicide (of which there are around 27,000 cases a year in Japan) into the category of kodokushi. It could also be the result of bad accounting. Recently, senior Japanese bureaucrats admitted to having lost track of more than 250,000 people older than age 100. In a case that made international headlines in 2010, Sogen Kato, thought to be Tokyo’s oldest man at 111 years of age, turned out to have been mummified in his own apartment for more than 30 years.

Read the entire story here.

Thirty Going on Sixty or Sixty Going on Thirty?

By now you probably realize that I’m a glutton for human research studies. I’m particularly fond of studies that highlight a particular finding one week, only to be contradicted by the results of another study the following week.

However, despite a lack of contradictions so far, this one, published in the Proceedings of the National Academy of Sciences, caught my eye. It suggests that we age at remarkably different rates. While most subjects showed a biological age within a handful of years of their actual, chronological age, there were some surprises. Some 38-year-olds had a biological age approaching 60, while others appeared a decade younger than their years.
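To make the arithmetic concrete, here is a toy sketch (my own illustration, not the study's actual code or methodology) of how a pace of ageing can be estimated: fit a straight line to each person's biomarker-derived biological-age estimates across the three assessment waves, and read the slope as biological years gained per chronological year.

# Toy illustration of a "pace of ageing" calculation. The assessment
# ages (26, 32, 38) match the study's waves; the participants' biological
# ages below are invented purely for illustration.
import numpy as np

chron_ages = np.array([26.0, 32.0, 38.0])

participants = {
    "typical ager": np.array([27.0, 33.0, 39.5]),  # slope ~1.0
    "slow ager":    np.array([27.0, 27.5, 28.2]),  # slope ~0.1: ageing nearly stopped
    "fast ager":    np.array([31.0, 45.0, 59.0]),  # slope ~2.3: nearly 60 at 38
}

for name, bio_ages in participants.items():
    pace, _ = np.polyfit(chron_ages, bio_ages, 1)  # slope and intercept of the fit
    print(f"{name}: {pace:.1f} biological years per chronological year")

The invented numbers track the report's findings: the fast ager gains nearly three biological years per twelve months and sits close to 60 at a chronological 38, while the slow ager has almost stopped ageing.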

From the BBC:

A study of people born within a year of each other has uncovered a huge gulf in the speed at which their bodies age.

The report, in Proceedings of the National Academy of Sciences, tracked traits such as weight, kidney function and gum health.

Some of the 38-year-olds were ageing so badly that their “biological age” was on the cusp of retirement.

The team said the next step was to discover what was affecting the pace of ageing.

The international research group followed 954 people from the same town in New Zealand who were all born in 1972-73.

The scientists looked at 18 different ageing-related traits when the group turned 26, 32 and 38 years old.

The analysis showed that at the age of 38, the people’s biological ages ranged from the late-20s to those who were nearly 60.

“They look rough, they look lacking in vitality,” said Prof Terrie Moffitt from Duke University in the US.

The study said some people had almost stopped ageing during the period of the study, while others were gaining nearly three years of biological age for every twelve months that passed.

People with older biological ages tended to do worse in tests of brain function and had a weaker grip.

Most people’s biological age was within a few years of their chronological age. It is unclear how the pace of biological ageing changes through life with these measures.

Read the entire story here.

Cat in the (Hat) Box

google-search-cat

Cat owner? Ever pondered why your aloof, inscrutable feline friend loves boxes? Here are some answers courtesy of people who study these kinds of things.

From Wired:

Take heart, feline enthusiasts. Your cat's continued indifference toward her new Deluxe Scratch DJ Deck may be disappointing, but there is an object that's pretty much guaranteed to pique her interest. That object, as the Internet has so thoroughly documented, is a box. Any box, really. Big boxes, small boxes, irregularly shaped boxes—it doesn't matter. Place one on the ground, a chair, or a bookshelf and watch as Admiral Snuggles quickly commandeers it.

So what are we to make of the strange gravitational pull that empty Amazon packaging exerts on Felis sylvestris catus? Like many other really weird things cats do, science hasn’t fully cracked this particular feline mystery. There’s the obvious predation advantage a box affords: Cats are ambush predators, and boxes provide great hiding places to stalk prey from (and retreat to). But there’s clearly more going on here.

Thankfully, behavioral biologists and veterinarians have come up with a few other interesting explanations. In fact, when you look at all the evidence together, it could be that your cat may not just like boxes, he may need them.

The box-and-whisker plot

Understanding the feline mind is notoriously difficult. Cats, after all, tend not to be the easiest test subjects. Still, there’s a sizable amount of behavioral research on cats who are, well, used for other kinds of research (i.e., lab cats). These studies—many of which focused on environmental enrichment—have been taking place for more than 50 years and they make one thing abundantly clear: Your fuzzy companion derives comfort and security from enclosed spaces.

This is likely true for a number of reasons, but for cats in these often stressful situations, a box or some other type of separate enclosure (within the enclosures they’re already in) can have a profound impact on both their behavior and physiology.

Ethologist Claudia Vinke of Utrecht University in the Netherlands is one of the latest researchers to study stress levels in shelter cats. Working with domestic cats in a Dutch animal shelter, Vinke provided hiding boxes for a group of newly arrived cats while depriving another group of them entirely. She found a significant difference in stress levels between cats that had the boxes and those that didn't. In effect, the cats with boxes got used to their new surroundings faster, were far less stressed early on, and were more interested in interacting with humans.

Read the entire story here.

Image courtesy of Google Search.

Dune at Fifty

USA_Oregon_Dunes

Quite coincidentally, and with no prescience at work, I had a half-read Dune Messiah (the second installment of the Dune chronicles) at my side when this article spun its way across the ether. So, it made me put digital pen to digital paper. It's hard to believe that this masterwork is now well into middle age. And, like a fine wine maturing over time, Dune and its successors took decades to reach a critical mass of appeal, rather than bursting into our collective consciousness when first published.

In crafting this epic work of imagination, Frank Herbert takes us on a voyage that goes beyond the narrow genres so needed by our literary establishment. Is Dune science fiction? Is Dune space opera? Is Dune fantasy or literary fiction? Is Dune thriller or romance? Or is Dune a treatise on politics and religion? The answer is yes.

But rather than seek to pigeonhole the work and thus limit its appeal to new audiences, I think it would be wise to look at Dune in an entirely different way. Dune is an evolutionary tale at many levels: it tells us of the evolution of ecological philosophy; the evolution of the self and of the state; the evolution of ideas and religion; the evolution of consciousness and culture.

I have to hope that younger generations, evolving fifty years from now and beyond, will be reading and contemplating Herbert’s work with as much awe.

From the Guardian:

In 1959, if you were walking the sand dunes near Florence, Oregon, you might have encountered a burly, bearded extrovert, striding about in Ray-Ban Aviators and practical army surplus clothing. Frank Herbert, a freelance writer with a feeling for ecology, was researching a magazine story about a US Department of Agriculture programme to stabilise the shifting sands by introducing European beach grass. Pushed by strong winds off the Pacific, the dunes moved eastwards, burying everything in their path. Herbert hired a Cessna light aircraft to survey the scene from the air. “These waves [of sand] can be every bit as devastating as a tidal wave … they’ve even caused deaths,” he wrote in a pitch to his agent. Above all he was intrigued by the idea that it might be possible to engineer an ecosystem, to green a hostile desert landscape.

About to turn 40, Herbert had been a working writer since the age of 19, and his fortunes had always been patchy. After a hard childhood in a small coastal community near Tacoma, Washington, where his pleasures had been fishing and messing about in boats, he’d worked for various regional newspapers in the Pacific northwest and sold short stories to magazines. He’d had a relatively easy war, serving eight months as a naval photographer before receiving a medical discharge. More recently he’d spent a weird interlude in Washington as a speechwriter for a Republican senator. There (his only significant time living on the east coast) he attended the daily Army-McCarthy hearings, watching his distant relative senator Joseph McCarthy root out communism. Herbert was a quintessential product of the libertarian culture of the Pacific coast, self-reliant and distrustful of centralised authority, yet with a mile-wide streak of utopian futurism and a concomitant willingness to experiment. He was also chronically broke. During the period he wrote Dune, his wife Beverly Ann was the main bread-winner, her own writing career sidelined by a job producing advertising copy for department stores.

Soon, Herbert’s research into dunes became research into deserts and desert cultures. It overpowered his article about the heroism of the men of the USDA (proposed title “They Stopped the Moving Sands”) and became two short SF novels, serialised in Analog Science Fact & Fiction, one of the more prestigious genre magazines. Unsatisfied, Herbert industriously reworked his two stories into a single, giant epic. The prevailing publishing wisdom of the time had it that SF readers liked their stories short. Dune (400 pages in its first hardcover edition, almost 900 in the paperback on my desk) was rejected by more than 20 houses before being accepted by Chilton, a Philadelphia operation known for trade and hobby magazines such as Motor Age, Jewelers’ Circular and the no-doubt-diverting Dry Goods Economist.

Though Dune won the Nebula and Hugo awards, the two most prestigious science fiction prizes, it was not an overnight commercial success. Its fanbase built through the 60s and 70s, circulating in squats, communes, labs and studios, anywhere where the idea of global transformation seemed attractive. Fifty years later it is considered by many to be the greatest novel in the SF canon, and has sold in millions around the world.

***

Dune is set in a far future, where warring noble houses are kept in line by a ruthless galactic emperor. As part of a Byzantine political intrigue, the noble duke Leto, head of the Homerically named House Atreides, is forced to move his household from their paradisiacal home planet of Caladan to the desert planet Arrakis, colloquially known as Dune. The climate on Dune is frighteningly hostile. Water is so scarce that whenever its inhabitants go outside, they must wear stillsuits, close-fitting garments that capture body moisture and recycle it for drinking.

The great enemy of House Atreides is House Harkonnen, a bunch of sybaritic no-goods who torture people for fun, and whose head, Baron Vladimir, is so obese that he has to use little anti-gravity “suspensors” as he moves around. The Harkonnens used to control Dune, which despite its awful climate and grubby desert nomad people, has incalculable strategic significance: its great southern desert is the only place in the galaxy where a fantastically valuable commodity called “melange” or “spice” is mined. Spice is a drug whose many useful properties include the induction of a kind of enhanced space-time perception in pilots of interstellar spacecraft. Without it, the entire communication and transport system of the Imperium will collapse. It is highly addictive, and has the side effect of turning the eye of the user a deep blue. Spice mining is dangerous, not just because of sandstorms and nomad attacks, but because the noise attracts giant sandworms, behemoths many hundreds of metres in length that travel through the dunes like whales through the ocean.

Have the Harkonnens really given up Dune, this source of fabulous riches? Of course not. Treachery and tragedy duly ensue, and young Paul survives a general bloodbath to go on the run in the hostile open desert, accompanied, unusually for an adventure story, by his mum. Paul is already showing signs of a kind of cosmic precociousness, and people suspect that he may even be the messiah figure foretold in ancient prophecies. His mother, Jessica, is an initiate of the great female powerbase in an otherwise patriarchal galactic order, a religious sisterhood called the Bene Gesserit. Witchy and psychically powerful, the sisters have engaged in millennia of eugenic programming, of which Paul may be the culmination.

This setup owes something to the Mars stories of Edgar Rice Burroughs and Isaac Asimov’s Foundation books, as well as the tales written by Idaho-born food chemist Elmer Edward “Doc” Smith, creator of the popular Lensman space operas of the 1940s and 50s, in which eugenically bred heroes are initiated into a “galactic patrol” of psychically enhanced supercops. For Smith, altered states of consciousness were mainly tools for the whiteous and righteous to vaporise whole solar systems of subversives, aliens and others with undesirable traits. Herbert, by contrast, was no friend of big government. He had also taken peyote and read Jung. In 1960, a sailing buddy introduced him to the Zen thinker Alan Watts, who was living on a houseboat in Sausalito. Long conversations with Watts, the main conduit by which Zen was permeating the west-coast counterculture, helped turn Herbert’s pacy adventure story into an exploration of temporality, the limits of personal identity and the mind’s relationship to the body.

Every fantasy reflects the place and time that produced it. If The Lord of the Rings is about the rise of fascism and the trauma of the second world war, and Game of Thrones, with its cynical realpolitik and cast of precarious, entrepreneurial characters is a fairytale of neoliberalism, then Dune is the paradigmatic fantasy of the Age of Aquarius. Its concerns – environmental stress, human potential, altered states of consciousness and the developing countries’ revolution against imperialism – are blended together into an era-defining vision of personal and cosmic transformation.

Read the entire article here.

Image: The Oregon Dunes, near Florence, Oregon, served as an inspiration for the Dune saga. Courtesy of Rebecca Kennison. Creative Commons.