Category Archives: Technica

Google AI Versus the Human Race


It does indeed appear that a computer armed with Google’s experimental AI (artificial intelligence) software just beat a grandmaster of the strategy board game Go. The game was devised in ancient China — it’s been around for several millennia. Go is commonly held to be substantially more difficult than chess to master, to which I can personally attest.

So, does this mean that the human race is next in line for a defeat at the hands of an uber-intelligent AI? Well, not really, not yet anyway.

But, I’m with prominent scientists and entrepreneurs — including Stephen Hawking, Bill Gates and Elon Musk — who warn of the long-term existential peril to humanity from unfettered AI. In the meantime check out how AlphaGo from Google’s DeepMind unit set about thrashing a human.

From Wired:

An artificially intelligent Google machine just beat a human grandmaster at the game of Go, the 2,500-year-old contest of strategy and intellect that’s exponentially more complex than the game of chess. And Nick Bostrom isn’t exactly impressed.

Bostrom is the Swedish-born Oxford philosophy professor who rose to prominence on the back of his recent bestseller Superintelligence: Paths, Dangers, Strategies, a book that explores the benefits of AI, but also argues that a truly intelligent computer could hasten the extinction of humanity. It’s not that he discounts the power of Google’s Go-playing machine. He just argues that it isn’t necessarily a huge leap forward. The technologies behind Google’s system, Bostrom points out, have been steadily improving for years, including much-discussed AI techniques such as deep learning and reinforcement learning. Google beating a Go grandmaster is just part of a much bigger arc. It started long ago, and it will continue for years to come.

“There has been, and there is, a lot of progress in state-of-the-art artificial intelligence,” Bostrom says. “[Google’s] underlying technology is very much continuous with what has been under development for the last several years.”

But if you look at this another way, it’s exactly why Google’s triumph is so exciting—and perhaps a little frightening. Even Bostrom says it’s a good excuse to stop and take a look at how far this technology has come and where it’s going. Researchers once thought AI would struggle to crack Go for at least another decade. Now, it’s headed to places that once seemed unreachable. Or, at least, there are many people—with much power and money at their disposal—who are intent on reaching those places.

Building a Brain

Google’s AI system, known as AlphaGo, was developed at DeepMind, the AI research house that Google acquired for $400 million in early 2014. DeepMind specializes in both deep learning and reinforcement learning, technologies that allow machines to learn largely on their own.

Using what are called neural networks—networks of hardware and software that approximate the web of neurons in the human brain—deep learning is what drives the remarkably effective image search tool built into Google Photos—not to mention the face recognition service on Facebook and the language translation tool built into Microsoft’s Skype and the system that identifies porn on Twitter. If you feed millions of game moves into a deep neural net, you can teach it to play a video game.

Reinforcement learning takes things a step further. Once you’ve built a neural net that’s pretty good at playing a game, you can match it against itself. As two versions of this neural net play thousands of games against each other, the system tracks which moves yield the highest reward—that is, the highest score—and in this way, it learns to play the game at an even higher level.

AlphaGo uses all this. And then some. Hassabis [Demis Hassabis, DeepMind founder] and his team added a second level of “deep reinforcement learning” that looks ahead to the long-term results of each move. And they lean on traditional AI techniques that have driven Go-playing AI in the past, including the Monte Carlo tree search method, which basically plays out a huge number of scenarios to their eventual conclusions. Drawing from techniques both new and old, they built a system capable of beating a top professional player. In October, AlphaGo played a closed-door match against the reigning three-time European Go champion, which was only revealed to the public on Wednesday morning. The match spanned five games, and AlphaGo won all five.

Read the entire story here.
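For the technically curious, the self-play idea can be demonstrated in miniature. The sketch below is purely illustrative: it is not AlphaGo's code, and it uses a simple lookup table rather than a deep neural network. It shows two copies of the same policy playing the toy game of Nim against each other and reinforcing the moves that end up winning, which is the essence of the reward-tracking described above.

```python
# Toy self-play reinforcement learning on Nim (illustrative only; not AlphaGo).
import random
from collections import defaultdict

PILE = 10          # stones on the table; whoever takes the last stone wins
ACTIONS = (1, 2)   # a move removes one or two stones

# Q[(stones_left, move)] = learned value of making that move in that position
Q = defaultdict(float)
EPSILON, ALPHA = 0.1, 0.5   # exploration rate, learning rate

def choose(stones):
    """Mostly pick the best-known move, occasionally explore at random."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

def self_play_game():
    """Two copies of the same policy play each other, then every move each
    side made is nudged toward that side's final result (+1 win, -1 loss)."""
    moves = {0: [], 1: []}
    stones, player = PILE, 0
    while stones > 0:
        move = choose(stones)
        moves[player].append((stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player               # the player who just took the last stone
    for p, played in moves.items():
        reward = 1.0 if p == winner else -1.0
        for state_action in played:
            Q[state_action] += ALPHA * (reward - Q[state_action])

for _ in range(20000):
    self_play_game()

# The learned policy should come to prefer leaving the opponent a multiple of three.
best = {s: max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)])
        for s in range(1, PILE + 1)}
print(best)
```

Swap the lookup table for a deep neural net, and the ten-stone pile for a 19×19 Go board, and you have a sense of the vastly harder problem DeepMind actually tackled.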

Image: Korean couple, in traditional dress, play Go; photograph dated between 1910 and 1920. Courtesy: Frank and Frances Carpenter Collection. Public Domain.

DeepDrumpf the 4th-Grader

DeepDrumpf is a Twitter bot out of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). It uses artificial intelligence (AI) to learn from the jaw-dropping rants of the current Republican frontrunner for the Presidential nomination and then tweets its own remarkably Trump-like musings.

A handful of DeepDrumpf’s recent deep-thoughts here:

Image: A selection of DeepDrumpf’s tweets.

 

The bot’s designer, CSAIL postdoc Bradley Hayes, says DeepDrumpf uses “techniques from ‘deep-learning,’ a field of artificial intelligence that uses systems called neural networks to teach computers to find patterns on their own.”

I would suggest that the deep-learning algorithms, in the case of Trump’s speech patterns, did not have to be too deep. After all, linguists who have studied his words agree that his language sits mostly at a 4th-grade level — coherent language is not required.
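To make the pattern-finding point concrete, here is a deliberately shallow sketch: a word-level Markov chain rather than the neural network CSAIL actually used, trained on a made-up stand-in corpus rather than real transcripts. Even something this crude can produce passable babble once it has counted which word tends to follow which.

```python
# Crude illustration of learning word-to-word patterns from text and then
# generating new text from them. The corpus below is an invented stand-in.
import random
from collections import defaultdict

corpus = (
    "we are going to win so much you will be tired of winning "
    "we are going to build it and it will be tremendous believe me"
)

# Count which words follow which.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def babble(seed="we", length=15):
    """Random-walk the learned word transitions to produce a new 'rant'."""
    out = [seed]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:            # dead end: restart from the seed word
            candidates = follows[seed]
        out.append(random.choice(candidates))
    return " ".join(out)

print(babble())
```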

Patterns aside, I think I prefer the bot over the real thing — it’s likely to do far less damage to our country and the globe.

 

Another Corporate Empire Bites the Dust

Businesses and brands come and go. Seemingly unassailable corporations, often valued in the tens of billions of dollars (and sometimes more), fall to the incessant march of technological change and, increasingly, to the ever-fickle desires of the consumer.

And, these monoliths of business last but a blink of an eye when compared with our vast social empires such as the Roman, Han, Ottoman, Venetian, Sudanese and Portuguese, which persisted for many hundreds — sometimes thousands — of years.

Yet, even a few years ago who would have predicted the demise of the Motorola empire, the company mostly responsible for the advent of the handheld mobile phone? Motorola had been on a recent downward spiral, failing in part to capitalize on the shift to smartphones, mobile operating systems and apps. Now its brand is dust. RIP brick!

From the Guardian:

Motorola, the brand which invented the mobile phone, brought us the iconic “Motorola brick”, and gave us both the first flip-phone and the iconic Razr, is to cease to exist.

Bought from Google by the Chinese smartphone and laptop powerhouse Lenovo in January 2014, Motorola had found success over the past two years. It launched the Moto G in early 2014, which propelled the brand, which had all but disappeared after the Razr, from a near-0% market share to 6% of sales in the UK.

The Moto G kickstarted the reinvigoration of the brand, which saw Motorola ship more than 10m smartphones in the third quarter of 2014, up 118% year-on-year.

But now Lenovo has announced that it will kill off the US mobile phone pioneer’s name. It will keep Moto, the part of Motorola’s product naming that has gained traction in recent years, but Moto smartphones will be branded under Lenovo.

Motorola chief operating officer Rick Osterloh told Cnet that “we’ll slowly phase out Motorola and focus on Moto”.

The Moto line will be joined by Lenovo’s Vibe line in the low end, leaving the fate of the Moto E and G uncertain. The Motorola Mobility division of Lenovo will take over responsibility for the Chinese manufacturer’s entire smartphone range.

Read the entire story here.

Image: Motorola DynaTAC 8000X commercial portable cellular phone, 1983. Courtesy of Motorola.

Meet the Broadband Preacher


This fascinating article follows Roberto Gallardo, an extension professor at Mississippi State University, as he works to bring digital literacy, the internet and other services of our 21st-century electronic age to rural communities across the South. It’s an uphill struggle.

From Wired:

For a guy born and raised in Mexico, Roberto Gallardo has an exquisite knack for Southern manners. That’s one of the first things I notice about him when we meet up one recent morning at a deli in Starkville, Mississippi. Mostly it’s the way he punctuates his answers to my questions with a decorous “Yes sir” or “No sir”—a verbal tic I associate with my own Mississippi upbringing in the 1960s.

Gallardo is 36 years old, with a salt-and-pepper beard, oval glasses, and the faint remnant of a Latino accent. He came to Mississippi from Mexico a little more than a decade ago for a doctorate in public policy. Then he never left.

I’m here in Starkville, sitting in this booth, to learn about the work that has kept Gallardo in Mississippi all these years—work that seems increasingly vital to the future of my home state. I’m also here because Gallardo reminds me of my father.

Gallardo is affiliated with something called the Extension Service, an institution that dates back to the days when America was a nation of farmers. Its original purpose was to disseminate the latest agricultural know-how to all the homesteads scattered across the interior. Using land grant universities as bases of operations, each state’s extension service would deploy a network of experts and “county agents” to set up 4-H Clubs or instruct farmers in cultivation science or demonstrate how to can and freeze vegetables without poisoning yourself in your own kitchen.

State extension services still do all this, but Gallardo’s mission is a bit of an update. Rather than teach modern techniques of crop rotation, his job—as an extension professor at Mississippi State University—is to drive around the state in his silver 2013 Nissan Sentra and teach rural Mississippians the value of the Internet.

In sleepy public libraries, at Rotary breakfasts, and in town halls, he gives PowerPoint presentations that seem calculated to fill rural audiences with healthy awe for the technological sublime. Rather than go easy, he starts with a rapid-fire primer on heady concepts like the Internet of Things, the mobile revolution, cloud computing, digital disruption, and the perpetual increase of processing power. (“It’s exponential, folks. It’s just growing and growing.”) The upshot: If you don’t at least try to think digitally, the digital economy will disrupt you. It will drain your town of young people and leave your business in the dust.

Then he switches gears and tries to stiffen their spines with confidence. Start a website, he’ll say. Get on social media. See if the place where you live can finally get a high-speed broadband connection—a baseline point of entry into modern economic and civic life.

Even when he’s talking to me, Gallardo delivers this message with the straitlaced intensity of a traveling preacher. “Broadband is as essential to this country’s infrastructure as electricity was 110 years ago or the Interstate Highway System 50 years ago,” he says from his side of our booth at the deli, his voice rising high enough above the lunch-hour din that a man at a nearby table starts paying attention. “If you don’t have access to the technology, or if you don’t know how to use it, it’s similar to not being able to read and write.”

These issues of digital literacy, access, and isolation are especially pronounced here in the Magnolia State. Mississippi today ranks around the bottom of nearly every national tally of health and economic well-being. It has the lowest median household income and the highest rate of child mortality. It also ranks last in high-speed household Internet access. In human terms, that means more than a million Mississippians—over a third of the state’s population—lack access to fast wired broadband at home.

Gallardo doesn’t talk much about race or history, but that’s the broader context for his work in a state whose population has the largest percentage of African-Americans (38 percent) of any in the union. The most Gallardo will say on the subject is that he sees the Internet as a natural way to level out some of the persistent inequalities—between black and white, urban and rural—that threaten to turn parts of Mississippi into places of exile, left further and further behind the rest of the country.

And yet I can’t help but wonder how Gallardo’s work figures into the sweep of Mississippi’s history, which includes—looking back over just the past century—decades of lynchings, huge outward migrations, the fierce, sustained defense of Jim Crow, and now a period of unprecedented mass incarceration. My curiosity on this point is not merely journalistic. During the lead-up to the civil rights era, my father worked with the Extension Service in southern Mississippi as well. Because the service was segregated at the time, his title was “negro county agent.” As a very young child, I would travel from farm to farm with him. Now I’m here to travel around Mississippi with Gallardo, much as I did with my father. I want to see whether the deliberate isolation of the Jim Crow era—when Mississippi actively fought to keep itself apart from the main currents of American life—has any echoes in today’s digital divide.

Read the entire article here.

Image: Welcome to Mississippi. Courtesy of WebTV3.

The Internet of Flow

Time-based structures of information and flowing data — on a global scale — will increasingly dominate the Web. Eventually, this flow is likely to transform how we organize, consume and disseminate our digital knowledge. While we see evidence of this in effect today, in blogs, Facebook’s wall and timeline and, most basically, via Twitter, the long-term implications of this fundamentally new organizing principle have yet to be fully understood — especially in business.

For a brief snapshot of a possible, and likely, future of the Internet I turn to David Gelernter. He is Professor of Computer Science at Yale University, an important thinker and author who has helped shape the fields of parallel computing, artificial intelligence (AI) and networking. Many of Gelernter’s works, some written over 20 years ago, offer a remarkably prescient view, most notably Mirror Worlds (1991), The Muse in the Machine (1994) and The Second Coming – A Manifesto (1999).

From WSJ:

People ask where the Web is going; it’s going nowhere. The Web was a brilliant first shot at making the Internet usable, but it backed the wrong horse. It chose space over time. The conventional website is “space-organized,” like a patterned beach towel—pineapples upper left, mermaids lower right. Instead it might have been “time-organized,” like a parade—first this band, three minutes later this float, 40 seconds later that band.

We go to the Internet for many reasons, but most often to discover what’s new. We have had libraries for millennia, but never before have we had a crystal ball that can tell us what is happening everywhere right now. Nor have we ever had screens, from room-sized to wrist-sized, that can show us high-resolution, constantly flowing streams of information.

Today, time-based structures, flowing data—in streams, feeds, blogs—increasingly dominate the Web. Flow has become the basic organizing principle of the cybersphere. The trend is widely understood, but its implications aren’t.

Working together at Yale in the mid-1990s, we forecast the coming dominance of time-based structures and invented software called the “lifestream.” We had been losing track of our digital stuff, which was scattered everywhere, across proliferating laptops and desktops. Lifestream unified our digital life: Each new document, email, bookmark or video became a bead threaded onto a single wire in the Cloud, in order of arrival.

To find a bead, you search, as on the Web. Or you can watch the wire and see each new bead as it arrives. Whenever you add a bead to the lifestream, you specify who may see it: everyone, my friends, me. Each post is as private as you make it.

Where do these ideas lead? Your future home page—the screen you go to first on your phone, laptop or TV—is a bouquet of your favorite streams from all over. News streams are blended with shopping streams, blogs, your friends’ streams, each running at its own speed.

This home stream includes your personal stream as part of the blend—emails, documents and so on. Your home stream is just one tiny part of the world stream. You can see your home stream in 3-D on your laptop or desktop, in constant motion on your phone or as a crawl on your big TV.

By watching one stream, you watch the whole world—all the public and private events you care about. To keep from being overwhelmed, you adjust each stream’s flow rate when you add it to your collection. The system slows a stream down by replacing many entries with one that lists short summaries—10, 100 or more.

An all-inclusive home stream creates new possibilities. You could build a smartwatch to display the stream as it flows past. It could tap you on the wrist when there’s something really important onstream. You can set something aside or rewind if necessary. Just speak up to respond to messages or add comments. True in-car computing becomes easy. Because your home stream gathers everything into one line, your car can read it to you as you drive.

Read the entire article here.
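For the curious, here is how I picture the lifestream as a data structure. This is a minimal, purely illustrative sketch of my own, not code from Gelernter or any real lifestream product: time-ordered beads, each with its own visibility, plus a home stream that blends several streams and slows a chatty one down by collapsing its older beads into a single summary line.

```python
# Illustrative sketch of a "lifestream" (names and details are assumptions).
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Bead:
    when: datetime
    content: str
    visibility: str = "me"   # "everyone", "friends" or "me"

class Lifestream:
    """A single wire of beads, threaded in order of arrival."""
    def __init__(self, name: str):
        self.name = name
        self.beads: List[Bead] = []

    def add(self, content: str, visibility: str = "me") -> None:
        # Each new document, email, bookmark or video becomes a bead.
        self.beads.append(Bead(datetime.now(), content, visibility))

    def search(self, term: str) -> List[Bead]:
        # Find a bead, as you would search the Web.
        return [b for b in self.beads if term.lower() in b.content.lower()]

def home_stream(streams: List[Lifestream], viewer: str = "friends",
                keep_last: int = 3) -> List[str]:
    """Blend several streams in time order; collapse a busy stream's older
    beads into one summary line (the 'flow rate' control)."""
    merged = []
    for s in streams:
        visible = [b for b in s.beads if b.visibility in ("everyone", viewer)]
        if len(visible) > keep_last:
            skipped = len(visible) - keep_last
            visible = visible[-keep_last:]
            merged.append((visible[0].when, f"[{s.name}] ({skipped} earlier items summarized)"))
        merged.extend((b.when, f"[{s.name}] {b.content}") for b in visible)
    return [text for _, text in sorted(merged, key=lambda pair: pair[0])]

# Tiny usage example
work, personal = Lifestream("work"), Lifestream("personal")
work.add("Q3 report draft")                        # private by default
personal.add("Beach photos", visibility="friends")
personal.add("Concert tonight!", visibility="everyone")
print(home_stream([work, personal], viewer="friends"))
```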

 

Streaming is So 2015


Fellow music enthusiasts and technology early adopters, ditch the streaming sounds right now. And, if you still have an iPod, or worse an MP3 or CD player, trash it; trash them all.

The future of music is coming, and it’s beamed and implanted directly into your grey matter. I’m not sure if I like the idea of Taylor Swift inside my head — I’m more of a Pink Floyd and Led Zeppelin person — nor the idea of not having a filter for certain genres (e.g., country music). However, some might like the notion of a digital-DJ brain implant that lays down tracks based on your mood, as gleaned from monitoring your neurochemical mix. It’s only a matter of time.

Thanks, but I’ll stick to vinyl, crackles and all.

From WSJ:

The year is 2040, and as you wait for a drone to deliver your pizza, you decide to throw on some tunes. Once a commodity bought and sold in stores, music is now an omnipresent utility invoked via spoken-word commands. In response to a simple “play,” an algorithmic DJ opens a blended set of songs, incorporating information about your location, your recent activities and your historical preferences—complemented by biofeedback from your implanted SmartChip. A calming set of lo-fi indie hits streams forth, while the algorithm adjusts the beats per minute and acoustic profile to the rain outside and the fact that you haven’t eaten for six hours.

The rise of such dynamically generated music is the story of the age. The album, that relic of the 20th century, is long dead. Even the concept of a “song” is starting to blur. Instead there are hooks, choruses, catchphrases and beats—a palette of musical elements that are mixed and matched on the fly by the computer, with occasional human assistance. Your life is scored like a movie, with swelling crescendos for the good parts, plaintive, atonal plunks for the bad, and fuzz-pedal guitar for the erotic. The DJ’s ability to read your emotional state approaches clairvoyance. But the developers discourage the name “artificial intelligence” to describe such technology. They prefer the term “mood-affiliated procedural remixing.”

Right now, the mood is hunger. You’ve put on weight lately, as your refrigerator keeps reminding you. With its assistance—and the collaboration of your DJ—you’ve come up with a comprehensive plan for diet and exercise, along with the attendant soundtrack. Already, you’ve lost six pounds. Although you sometimes worry that the machines are running your life, it’s not exactly a dystopian experience—the other day, after a fast-paced dubstep remix spurred you to a personal best on your daily run through the park, you burst into tears of joy.

Cultural production was long thought to be an impregnable stronghold of human intelligence, the one thing the machines could never do better than humans. But a few maverick researchers persisted, and—aided by startling, asymptotic advances in other areas of machine learning—suddenly, one day, they could. To be a musician now is to be an arranger. To be a songwriter is to code. Atlanta, the birthplace of “trap” music, is now a locus of brogrammer culture. Nashville is a leading technology incubator. The Capitol Records tower was converted to condos after the label uploaded its executive suite to the cloud.

Read the entire story here.

Image: Led Zeppelin IV album cover. Courtesy of the author.

 

Who Needs a Self-Driving Car?

Self-driving vehicles have been very much in the news over the last couple of years. Google’s autonomous car project is perhaps the most notable recent example — its latest road-worthy prototype is the culmination of a project out of Stanford, which garnered an innovation prize from DARPA (Defense Advanced Research Projects Agency) back in 2005. And, numerous companies are in various stages of experimenting, planning, prototyping and developing, including GM, Apple, Mercedes-Benz, Nissan, BMW and Tesla, to name but a few.

That said, even though it may still be a few years yet before we see traffic jams of driverless cars clogging the Interstate Highway system, some forward-thinkers are not resting on their laurels. EHang, a Chinese drone manufacturer, is leapfrogging the car entirely and pursuing an autonomous drone — actually an autonomous aerial vehicle (AAV) known as the EHang 184 — capable of flying one passenger. Cooler still, the only onboard control is a Google Maps interface that allows the passenger to select a destination. The AAV and ground-based command centers take care of the rest.

I have to wonder whether EHang’s command centers will be able to use the drone to shoot missiles at militants as well as deliver a passenger, or, better still, to target missiles at rogue drivers.

Wired has more about this fascinating new toy — probably aimed at Russian oligarchs and Silicon Valley billionaires.

Image: Ehang 184 — Autonomous Aerial Vehicle. Courtesy of EHang.

 

iScoliosis


Industrial and occupational illnesses have followed humans since the advent of industry. Obvious examples include lung disease from mining and a variety of skin diseases from exposure to agricultural and factory chemicals.

The late 20th century saw us succumb to carpal tunnel syndrome and other repetitive stress injuries from laboring over our desks and computers. Now, in the 21st, we are becoming hosts to the smartphone pathogen.

In addition to the spectrum of social and cultural disorders wrought by our constantly chattering mobile devices, we are at increased psychological and physical risk. But, let’s leave aside the two obvious ones: risk from vehicle injury due to texting while driving, and risk from injury due to texting while walking. More commonly, we are at increased risk of back and other chronic physical problems resulting from poor posture. This in turn leads to mood disorders, memory problems and depression. Some have termed this condition “text-neck”, “iHunch”, or “iPosture”; I’ll go with “iScoliosis™”.

From NYT:

THERE are plenty of reasons to put our cellphones down now and then, not least the fact that incessantly checking them takes us out of the present moment and disrupts family dinners around the globe. But here’s one you might not have considered: Smartphones are ruining our posture. And bad posture doesn’t just mean a stiff neck. It can hurt us in insidious psychological ways.

If you’re in a public place, look around: How many people are hunching over a phone? Technology is transforming how we hold ourselves, contorting our bodies into what the New Zealand physiotherapist Steve August calls the iHunch. I’ve also heard people call it text neck, and in my work I sometimes refer to it as iPosture.

The average head weighs about 10 to 12 pounds. When we bend our necks forward 60 degrees, as we do to use our phones, the effective stress on our neck increases to 60 pounds — the weight of about five gallons of paint. When Mr. August started treating patients more than 30 years ago, he says he saw plenty of “dowagers’ humps, where the upper back had frozen into a forward curve, in grandmothers and great-grandmothers.” Now he says he’s seeing the same stoop in teenagers.

When we’re sad, we slouch. We also slouch when we feel scared or powerless. Studies have shown that people with clinical depression adopt a posture that eerily resembles the iHunch. One, published in 2010 in the official journal of the Brazilian Psychiatric Association, found that depressed patients were more likely to stand with their necks bent forward, shoulders collapsed and arms drawn in toward the body.

Posture doesn’t just reflect our emotional states; it can also cause them. In a study published in Health Psychology earlier this year, Shwetha Nair and her colleagues assigned non-depressed participants to sit in an upright or slouched posture and then had them answer a mock job-interview question, a well-established experimental stress inducer, followed by a series of questionnaires. Compared with upright sitters, the slouchers reported significantly lower self-esteem and mood, and much greater fear. Posture affected even the contents of their interview answers: Linguistic analyses revealed that slouchers were much more negative in what they had to say. The researchers concluded, “Sitting upright may be a simple behavioral strategy to help build resilience to stress.”

Slouching can also affect our memory: In a study published last year in Clinical Psychology and Psychotherapy of people with clinical depression, participants were randomly assigned to sit in either a slouched or an upright position and then presented with a list of positive and negative words. When they were later asked to recall those words, the slouchers showed a negative recall bias (remembering the bad stuff more than the good stuff), while those who sat upright showed no such bias. And in a 2009 study of Japanese schoolchildren, those who were trained to sit with upright posture were more productive than their classmates in writing assignments.

Read the entire article here, preferably not via your smartphone.

Image courtesy of Google Search.

 

Robotic Stock Keeping


Meet Tally; it may soon be coming to a store near you. Tally is an autonomous robot that patrols store aisles and scans shelves to ensure items are correctly stocked. While the robot doesn’t do the restocking itself — beware, stock clerks, this is probably only a matter of time — it audits shelves for out-of-stock items, low-stock items, misplaced items, and pricing errors. The robot was developed by the start-up Simbe Robotics.

From Technology Review:

When customers can’t find a product on a shelf it’s an inconvenience. But by some estimates, it adds up to billions of dollars of lost revenue each year for retailers around the world.

A new shelf-scanning robot called Tally could help ensure that customers never leave a store empty-handed. It roams the aisles and automatically records which shelves need to be restocked.

The robot, developed by a startup called Simbe Robotics, is the latest effort to automate some of the more routine work done in millions of warehouses and retail stores. It is also an example of the way robots and AI will increasingly take over parts of people’s jobs rather than replacing them.

Restocking shelves is simple but hugely important for retailers. Billions of dollars may be lost each year because products are missing, misplaced, or poorly arranged, according to a report from the analyst firm IHL Services. In a large store it can take hundreds of hours to inspect shelves manually each week.

Brad Bogolea, CEO and cofounder of Simbe Robotics, says his company’s robot can scan the shelves of a small store, like a modest CVS or Walgreens, in about an hour. A very large retailer might need several robots to patrol its premises. He says the robot will be offered on a subscription basis but did not provide the pricing. Bogolea adds that one large retailer is already testing the machine.

Tally automatically roams a store, checking whether a shelf needs restocking; whether a product has been misplaced or poorly arranged; and whether the prices shown on shelves are correct. The robot consists of a wheeled platform with four cameras that scan the shelves on either side from the floor up to a height of eight feet.

Read the entire article here.

Image: Tally. Courtesy of Simbe Robotics.

 

Re-Innovation: Silicon Valley’s Trivial Pursuit Problem

I read an increasing number of articles like the one excerpted below, which cause me to sigh with exasperation yet again. Is Silicon Valley — that supposed beacon of global innovation — in danger of becoming a drainage ditch of regurgitated sameness, of me-too banality?

It’s frustrating to watch many of our self-proclaimed brightest tech minds re-package colorful “new” solutions to our local trivialities, over and over. So, here we are, celebrating the arrival of the “next big thing”: the next tech unicorn with a valuation above $1 billion, which proposes to upend and improve all our lives, yet again.

DoorDash. Seamless. Deliveroo. HelloFresh. HomeChef. SpoonRocket. Sprig. GrubHub. Instacart. These are all great examples of too much money chasing too few truly original ideas. I hope you’ll agree: a cool compound name is a cool compound name, but it certainly does not for innovation make. By the way, whatever happened to WebVan?

Where are my slippers? Yawn.

From Wired:

Founded in 2013, DoorDash is a food delivery service. It’s also the latest startup to be eying a valuation of more than $1 billion. DoorDash already raised $40 million in March; according to Bloomberg, it may soon reap another round of funding that would put the company in the same lofty territory as Uber, Airbnb, and more than 100 other so-called unicorns.

Not that DoorDash is doing anything terribly original. Startups bringing food to your door are everywhere. There’s Instacart, which wants to shop for groceries for you. Deliveroo and Postmates, like DoorDash, are looking to overtake Seamless as the way we get takeout at home. Munchery, SpoonRocket, and Sprig offer pre-made meals. Blue Apron, Gobble, HelloFresh, and HomeChef deliver ingredients to make the food ourselves. For the moment, investors are giddily rushing to subsidize this race to our doors. But skeptics say that the payout those investors are banking on might never come.

Even in a crowded field, funding for these delivery startups continues to grow. CB Insights, a research group that tracks startup investments, said this summer that the sector was “starting to get a little crowded.” Last year, venture-backed food delivery startups based in the US reaped more than $1 billion in equity funding; during the first half of this year, they pulled in $750 million more, CB Insights found.

The enormous waves of funding may prove money poorly spent if Silicon Valley finds itself in a burst bubble. Bill Gurley, the well-known investor and a partner at venture firm Benchmark, believes delivery startups may soon be due for a rude awakening. Unlike the first dotcom bubble, he said, smartphones might offer help, because startups are able to collect more data. But he compared the optimism investors are showing for such low-margin operations to the misplaced enthusiasms of 1999.  “It’s the same shit,” Gurley said during a recent appearance. (Gurley’s own investment in food delivery service, GrubHub, went public in April 2014 and is now valued at more than $2.2 billion.)

Read the entire article here.

 

The Man With No Phone

If Hitchcock were alive today the title of this post — The Man With No Phone — might be a fitting description of his latest noir celluloid masterpiece. For many, the notion of being phone-less instills deep, nightmarish visions of blood-curdling terror.

Does The Man With No Phone lose track of all reality, family, friends, appointments, status updates, sales records, dinner, grocery list, transportation schedules and news, turning into an empty neurotic shell of a human being? Or, does lack of constant connectivity and elimination of instant, digital gratification lead The Man With No Phone to become a schizoid, feral monster? Let’s read on to find out.

[tube]uWhkbDMISl8[/tube]

Large swathes of the world are still phone-less, and much of the global population — at least those of us over the age of 35 — grew up smartphone-less and even cellphone-less. So, it’s rather disconcerting to read Steve Hilton’s story; he’s been phone-less for 3 years now. However, it’s not disconcerting that he’s without a phone — I find that inspiring (and normal); what’s disconcerting is that many people wonder how on earth he can live without one. And, even more perplexing — why would anyone need a digital detox or mindfulness app on their smartphone? Just hide the thing in your junk drawer for a week (or more) and breathe out. Long live The Man With No Phone!

From the Guardian:

Before you read on, I want to make one thing clear: I’m not trying to convert you. I’m not trying to lecture you or judge you. Honestly, I’m not. It may come over like that here and there, but believe me, that’s not my intent. In this piece, I’m just trying to … explain.

People who knew me in a previous life as a policy adviser to the British prime minister are mildly surprised that I’m now the co-founder and CEO of a tech startup. And those who know that I’ve barely read a book since school are surprised that I have now actually written one.

But the single thing that no one seems able to believe – the thing that apparently demands explanation – is the fact that I am phone-free. That’s right: I do not own a cellphone; I do not use a cellphone. I do not have a phone. No. Phone. Not even an old-fashioned dumb one. Nothing. You can’t call me unless you use my landline – yes, landline! Can you imagine? At home. Or call someone else that I happen to be with (more on that later).

When people discover this fact about my life, they could not be more surprised than if I had let slip that I was actually born with a chicken’s brain. “But how do you live?” they cry. And then: “How does your wife feel about it?” More on that too, later.

As awareness has grown about my phone-free status (and its longevity: this is no passing fad, people – I haven’t had a phone for over three years), I have received numerous requests to “tell my story”. People seem to be genuinely interested in how someone living and working in the heart of the most tech-obsessed corner of the planet, Silicon Valley, can possibly exist on a day-to-day basis without a smartphone.

So here we go. Look, I know it’s not exactly Caitlyn Jenner, but still: here I am, and here’s my story.

In the spring of 2012, I moved to the San Francisco bay area with my wife and two young sons. Rachel was then a senior executive at Google, which involved a punishing schedule to take account of the eight-hour time difference. I had completed two years at 10 Downing Street as senior adviser to David Cameron – let’s just put it diplomatically and say that I and the government machine had had quite enough of each other. To make both of our lives easier, we moved to California.

I took with me my old phone, which had been paid for by the taxpayer. It was an old Nokia phone – I always hated touch-screens and refused to have a smartphone; neither did I want a BlackBerry or any other device on which the vast, endless torrent of government emails could follow me around. Once we moved to the US my government phone account was of course stopped and telephonically speaking, I was on my own.

I tried to get hold of one of my beloved old Nokia handsets, but they were no longer available. Madly, for a couple of months I used old ones procured through eBay, with a pay-as-you-go plan from a UK provider. The handsets kept breaking and the whole thing cost a fortune. Eventually, I had enough when the charging outlet got blocked by sand after a trip to the beach. “I’m done with this,” I thought, and just left it.

I remember the exact moment when I realized something important had happened. I was on my bike, cycling to Stanford, and it struck me that a week had gone by without my having a phone. And everything was just fine. Better than fine, actually. I felt more relaxed, carefree, happier. Of course a lot of that had to do with moving to California. But this was different. I felt this incredibly strong sense of just thinking about things during the day. Being able to organize those thoughts in my mind. Noticing things.

Read the entire story here.

Video: Hanging on the Telephone, Blondie. Courtesy: EMI Music.

Design Thinking Versus Product Development

Out with product managers; in with design thinkers. Time for some corporate creativity. Think user journeys and empathy maps.

A different corporate mantra is beginning to take hold at some large companies like IBM. It’s called design thinking, and while it’s not necessarily new, it holds promise for companies seeking to meet the needs of their customers at a fundamental level. Where design is often thought of in terms of defining and constructing cool-looking products, design thinking is used to capture a business problem at a broader level, shape business strategy and deliver a more holistic, deeper solution to customers. And, importantly, to do so more quickly than through a typical product development life-cycle.

From NYT:

Phil Gilbert is a tall man with a shaved head and wire-rimmed glasses. He typically wears cowboy boots and bluejeans to work — hardly unusual these days, except he’s an executive at IBM, a company that still has a button-down suit-and-tie reputation. And in case you don’t get the message from his wardrobe, there’s a huge black-and-white photograph hanging in his office of a young Bob Dylan, hunched over sheet music, making changes to songs in the “Highway 61 Revisited” album. It’s an image, Mr. Gilbert will tell you, that conveys both a rebel spirit and hard work.

Let’s not get carried away. Mr. Gilbert, who is 59 years old, is not trying to redefine an entire generation. On the other hand, he wants to change the habits of a huge company as it tries to adjust to a new era, and that is no small task.

IBM, like many established companies, is confronting the relentless advance of digital technology. For these companies, the question is: Can you grow in the new businesses faster than your older, lucrative businesses decline?

Mr. Gilbert answers that question with something called design thinking. (His title is general manager of design.) Among other things, design thinking flips traditional technology product development on its head. The old way is that you come up with a new product idea and then try to sell it to customers. In the design thinking way, the idea is to identify users’ needs as a starting point.

Mr. Gilbert and his team talk a lot about “iteration cycles,” “lateral thinking,” “user journeys” and “empathy maps.” To the uninitiated, the canons of design thinking can sound mushy and self-evident. But across corporate America, there is a rising enthusiasm for design thinking not only to develop products but also to guide strategy and shape decisions of all kinds. The September cover article of the Harvard Business Review was “The Evolution of Design Thinking.” Venture capital firms are hiring design experts, and so are companies in many industries.

Still, the IBM initiative stands out. The company is well on its way to hiring more than 1,000 professional designers, and much of its management work force is being trained in design thinking. “I’ve never seen any company implement it on the scale of IBM,” said William Burnett, executive director of the design program at Stanford University. “To try to change a culture in a company that size is a daunting task.”

Daunting seems an understatement. IBM has more than 370,000 employees. While its revenues are huge, the company’s quarterly reports have shown them steadily declining in the last two years. The falloff in revenue is partly intentional, as the company sold off less profitable operations, but the sometimes disappointing profits are not, and they reflect IBM’s struggle with its transition. Last month, the company shaved its profit target for 2015.

In recent years, the company has invested heavily in new fields, including data analytics, cloud computing, mobile technology, security, social media software for business and its Watson artificial intelligence technology. Those businesses are growing rapidly, generating revenue of $25 billion last year, and IBM forecasts that they will contribute $40 billion by 2018, through internal growth and acquisitions. Just recently, for example, IBM agreed to pay $2 billion for the Weather Company (not including its television channel), gaining its real-time and historical weather data to feed into Watson and analytics software.

But IBM’s biggest businesses are still the traditional ones — conventional hardware, software and services — which contribute 60 percent of its revenue and most of its profit. And these IBM mainstays are vulnerable, as customers increasingly prefer to buy software as a service, delivered over the Internet from remote data centers.

Recognizing the importance of design is not new, certainly not at IBM. In the 1950s, Thomas J. Watson Jr., then the company’s chief executive, brought on Eliot Noyes, a distinguished architect and industrial designer, to guide a design program at IBM. And Noyes, in turn, tapped others including Paul Rand, Charles Eames and Eero Saarinen in helping design everything from corporate buildings to the eight-bar corporate logo to the IBM Selectric typewriter with its golf-ball-shaped head.

At that time, and for many years, design meant creating eye-pleasing, functional products. Now design thinking has broader aims, as a faster, more productive way of organizing work: Look at problems first through the prism of users’ needs, research those needs with real people and then build prototype products quickly.

Defining problems more expansively is part of the design-thinking ethos. At a course in New York recently, a group of IBM managers were given pads and felt-tip pens and told to sketch designs for “the thing that holds flowers on a table” in two minutes. The results, predictably, were vases of different sizes and shapes.

Next, they were given two minutes to design “a better way for people to enjoy flowers in their home.” In Round 2, the ideas included wall placements, a rotating flower pot run by solar power and a software app for displaying images of flowers on a home TV screen.

Read the entire story here.

On the Joys of Not Being Twenty Again

I’m not twenty, and am constantly reminded that I’m not — both from internal alerts and external messages. Would I like to be younger? Of course. But it certainly comes at a price. So, after reading the exploits of a 20-something forced to live without her smartphone for a week, I realize it’s not all that bad being a cranky old luddite.

I hope that the ordeal, excerpted below, is tongue-very-much-in-cheek but I suspect it’s not: constant status refreshes, morning selfies, instant content gratification, nano-scale attention span, over-stimulation, life-stream documentation, peer ranking, group-think, interrupted interruptions. Thus, I realize I’m rather content not to be twenty after all.

From the Telegraph:

I have a confession to make: I am addicted to my smartphone. I use it as an alarm clock, map, notepad, mirror and camera.

I spend far too much time on Twitter and Instagram and have this week realised I have a nervous tick where I repeatedly unlock my smartphone.

And because of my phone’s many apps which organise my life and help me navigate the world, like many people my age, I am quite literally lost without it.

I am constantly told off by friends and family for using my phone during conversations, and I recently found out (to my horror) that I have taken over 5,000 selfies.

So when my phone broke I seized the opportunity to spend an entire week without it, and kept a diary each day.

Day One: Thursday

Frazzled, I reached to my bedside table, so I could take a morning selfie and send it to my friends.

Realising why that could not happen, my hand and my heart both felt empty. I knew at this point it was going to be a long week.

Day Two: Friday

I basked in the fact my colleagues could not contact me – and if I did not reply to their emails straight away it would not be the end of the world.

I then took the train home to see my parents outside London.

I couldn’t text my mother about any delays which may have happened (they didn’t), and she couldn’t tell me if she was going to be late to the station (she wasn’t). The lack of phone did nothing but make me feel anxious and prevent me from being able to tweet about the irritating children screaming on the train.

Day Three: Saturday

It is a bit weird feeling completely cut off from the outside world; I am not chained to my computer like I am at work and I am not allowed to constantly be on my laptop like a teen hacker.

It was nice though – a real detox. We went on a walk with our spaniel in the countryside near the Chiltern Hills. I had to properly talk to everyone, instead of constantly refreshing Twitter, which was novel.

I do feel like my attention span is improving every day, but I equally feel anchorless and lost without having any way of contacting anyone, or documenting my life.

….

Day Seven: Wednesday

My attention span and patience have grown somewhat, and I have noticed I daydream and have thoughts independent of Twitter far more often than usual.

Read the entire account here.

Back to the Future

Just over a hundred years ago, at the turn of the 20th century, Jean-Marc Côté and some of his fellow French artists were commissioned to imagine what the world would look like in 2000. Their colorful sketches and paintings portrayed some interesting inventions, though all seem to be grounded in familiar principles and incremental innovations — mechanical helpers, ubiquitous propellers and wings. Interestingly, none of these artist-futurists imagined a world beyond Victorian dress, gender inequality and wars. But these are gems nonetheless.

Some of their works found their way into cigar boxes and cigarette cases; others were exhibited at the 1900 World Exhibition in Paris. My three favorites: a Tailor of the Latest Fashion, the Aero-cab Station and the Whale Bus. See the full complement of these remarkable futuristic visions at the Public Domain Review, and check out the House Rolling Through the Countryside and At School.

I suspect our contemporary futurists — born in the late 20th or early 21st century — will fall prey to the same narrow visions when asked to sketch our planet in 3000. But despite the undoubted wealth of new gadgets and gizmos a thousand years from now, the challenge will be to see whether their imagined worlds might be at peace, with equality for all.
Images courtesy of the Public Domain Review, a project of the Open Knowledge Foundation. Public Domain.

 

A Positive Female Role Model


Our society does a better job than it once did, but still a poor one, of promoting positive female role models. Most of our — let’s face it — male-designed images of women fall into rather narrowly defined stereotypical categories: nurturing care-giver, stay-at-home soccer mom, matriarchal office admin, overly bossy middle-manager, vacuous reality-TV spouse or scantily clad vixen.

But every now and then the media seems to discover another unsung female who made significant contributions in a male-dominated and male-overshadowed world. Take the case of computer scientist Margaret Hamilton — she developed on-board flight software for the Apollo space program while director of the Software Engineering Division of the MIT Instrumentation Laboratory. Aside from developing technology that put people on the Moon, she helped NASA understand the true power of software and the consequences of software-driven technology.

From Wired:

Margaret Hamilton wasn’t supposed to invent the modern concept of software and land men on the moon. It was 1960, not a time when women were encouraged to seek out high-powered technical work. Hamilton, a 24-year-old with an undergrad degree in mathematics, had gotten a job as a programmer at MIT, and the plan was for her to support her husband through his three-year stint at Harvard Law. After that, it would be her turn—she wanted a graduate degree in math.

But the Apollo space program came along. And Hamilton stayed in the lab to lead an epic feat of engineering that would help change the future of what was humanly—and digitally—possible.

As a working mother in the 1960s, Hamilton was unusual; but as a spaceship programmer, Hamilton was positively radical. Hamilton would bring her daughter Lauren by the lab on weekends and evenings. While 4-year-old Lauren slept on the floor of the office overlooking the Charles River, her mother programmed away, creating routines that would ultimately be added to the Apollo’s command module computer.

“People used to say to me, ‘How can you leave your daughter? How can you do this?’” Hamilton remembers. But she loved the arcane novelty of her job. She liked the camaraderie—the after-work drinks at the MIT faculty club; the geek jokes, like saying she was “going to branch left minus” around the hallway. Outsiders didn’t have a clue. But at the lab, she says, “I was one of the guys.”

Then, as now, “the guys” dominated tech and engineering. Like female coders in today’s diversity-challenged tech industry, Hamilton was an outlier. It might surprise today’s software makers that one of the founding fathers of their boys’ club was, in fact, a mother—and that should give them pause as they consider why the gender inequality of the Mad Men era persists to this day.

As Hamilton’s career got under way, the software world was on the verge of a giant leap, thanks to the Apollo program launched by John F. Kennedy in 1961. At the MIT Instrumentation Lab where Hamilton worked, she and her colleagues were inventing core ideas in computer programming as they wrote the code for the world’s first portable computer. She became an expert in systems programming and won important technical arguments. “When I first got into it, nobody knew what it was that we were doing. It was like the Wild West. There was no course in it. They didn’t teach it,” Hamilton says.

This was a decade before Microsoft and nearly 50 years before Marc Andreessen would observe that software is, in fact, “eating the world.” The world didn’t think much at all about software back in the early Apollo days. The original document laying out the engineering requirements of the Apollo mission didn’t even mention the word software, MIT aeronautics professor David Mindell writes in his book Digital Apollo. “Software was not included in the schedule, and it was not included in the budget.” Not at first, anyhow.

Read the entire story here.

Image: Margaret Hamilton during her time as lead Apollo flight software designer. Courtesy NASA. Public Domain.

AIs and ICBMs

You know something very creepy is going on when robots armed with artificial intelligence (AI) engage in conversations about nuclear war and inter-continental ballistic missiles (ICBMs). This scene could be straight out of a William Gibson novel.

[tube]mfcyq7uGbZg[/tube]

Video: The BINA48 robot, created by Martine Rothblatt and Hanson Robotics, has a conversation with Siri. Courtesy of ars technica.

Planned Obsolescence


Our digital technologies bring us many benefits: improved personal and organizational efficiency; enhanced communication and collaboration; increased access to knowledge, information, and all manner of products and services. Over the foreseeable future our technologies promise to re-shape our work lives and our leisure time ever more, for the better. Yet, for all the positives, our modern technology comes with an immense flaw. It’s called planned obsolescence. Those who make and sell us the next great digital gizmo rely upon planned obsolescence, combined with our magpie-like desire for shiny new objects, to lock us into an unrelenting cycle of buy-and-replace, buy-and-replace.

Yet, if you are over 30, you may recall having fixed something. You had the tools (some possibly makeshift), you had the motivation (saving money), and you had enough skills and information to get it done. The item you fixed was pre-digital, a simple electrical device, or more likely mechanical — positively ancient by current standards. But, you breathed new life into it, avoided a new purchase, saved some cash and even learned something in the process. Our digital products make this kind of DIY repair almost impossible for all but the expert (the manufacturer or its licensed technician); in fact, should you attempt a fix, you are more likely to render the product in worse shape than before.

Our digital products are just too complex to fix. And, therein lies the paradox of our digital progress — our technologies have advanced tremendously but our ability to repair them has withered. Even many of our supposedly simpler household devices, such as the lowly toaster, blender or refrigerator, have at their heart a digital something-or-other, making repair exorbitantly expensive or impossible.

As a kid, I recall helping my parents fix an old television (the kind with vacuum tubes), mechanical cameras, a vacuum cleaner, darkroom enlarger, doorbell. Those were the days. Today, unfortunately, we’ve become conditioned to consign a broken product to a landfill and to do the same with our skills. It would be a step in the right direction for us to regain the ability to repair our products. But don’t expect any help from the product manufacturer.

From WSJ:

We don’t have to keep buying new gadgets. In fact, we should insist on the right to keep old ones running.

Who hasn’t experienced a situation like this? Halfway through a classic Jack Lemmon DVD, my colleague Shira’s 40-inch TV conked out. Nothing showed up on the screen when she pressed the power button. The TV just hiccupped, going, “Clip-clop. Clip-clop.”

This was a great excuse to dump her old Samsung and buy a shiny new TV, right? But before heading to Best Buy, Shira gave me a call hoping for a less expensive option, not to mention one that’s better for the environment.

We ended up with a project that changed my view on our shop-till-you-drop gadget culture. We’re more capable of fixing technology than we realize, but the electronics industry doesn’t want us to know that. In many ways, it’s obstructing us.

There’s a fight brewing between giant tech companies and tinkerers that could impact how we repair gadgets or choose the shop where we get it done by a pro.

At issue: Who owns the knowledge required to take apart and repair TVs, phones and other electronics?

Some manufacturers stop us by controlling repair plans and limiting access to parts. Others even employ digital software locks to keep us from making changes or repairs. This may not always be planned obsolescence, but it’s certainly intentional obfuscation.

Thankfully, the Internet is making it harder for them to get away with it.

My first stop with Shira’s TV, a 2008 model, was Samsung itself. On its website, I registered the TV and described what was broken.

With a little googling of the TV model, I found our problem wasn’t unique: Samsung was taken to court about this exact issue, caused by a busted component called a capacitor. Samsung settled in 2012 by agreeing to extend warranties for 18 months on certain TVs, including this one. It also kept repairing the problem at no cost for a while after.

But when a Samsung support rep called back, she said they’d no longer fix the problem free. She passed me to an authorized Samsung repair shop in my area. They said they’d charge $90 for an estimate, and at least $125 plus parts for a repair.

Buying a similar-size Samsung TV today costs $380. Why wouldn’t Shira just buy a new TV? She felt guilty. Old electronics don’t just go to the great scrapheap in the sky—even recycled e-waste can end up in toxic dumps in the developing world.

Enter Plan B: Searching the Web, I found a ton of people talking about this TV’s broken capacitors. There were even a few folks selling DIY repair kits. The parts cost…wait for it…$12.

Read the entire story here.

Image: Digital trash, Guiyu e-waste town, 2014. Courtesy of Mightyroy / Wikipedia. CC.

Selfie-Drone: It Was Only a Matter of Time

Google-search-selfie-drone

Those of you who crave a quiet, reflective escape from the incessant noise of the modern world may soon find even fewer places for respite. Make the most of your calming visit to the beach or a mountain peak or an alpine lake or an emerald forest before you are jolted back to reality by swarms of buzzing selfie-drones. It’s rather ironic to see us regress as our technology evolves. Oh, and you can even get a wearable one! Does our penchant for narcissistic absorption know no bounds? That said, there is one positive to come of this dreadful application of a useful invention — the selfie-stick may be on the way out. I will now revert to my quiet cave for the next 50 years.

From NYT:

It was a blistering hot Sunday in Provence. The painted shutters of the houses in Arles were closed. Visitors were scarce. In the Roman amphitheater, built to hold some 20,000 spectators, I sat among empty bleachers, above homes with orange tile roofs, looking past ancient arcades and terraces to the blue horizon. Was this the sort of stillness van Gogh experienced when he was in Arles on this same June day in 1888? I began to entertain the thought but was distracted by a soft whirring; a faint electric hum. Something was drawing near. I looked around and saw nothing — until it and I were eye to eye.

Or rather, eye to lens. A drone resembling one of those round Roomba robotic vacuums had levitated from the pit of the nearly 2,000-year-old arena and was hovering in the air between me and the cloudless horizon. Reflexively I turned away and tugged on the hem of my dress. Who knew where this flying Roomba was looking or what it was recording?

Unexpected moments of tranquility, like finding yourself in a near-empty Roman arena during a heat wave, are becoming more and more elusive. If someone isn’t about to inadvertently impale you with a selfie-stick, another may catch you on video with a recreational drone, like the DJI Phantom (about $500 to $1,600), which is easy to use (unless you’re inebriated, like the man who crashed a Phantom on the White House grounds in January).

Yet what travelers are seeing today — remote-controlled drones bobbing around tourist sites, near airports, in the Floridian National Golf Club in Palm City while President Obama played golf — is but the tip of the iceberg. Think remote-controlled drones and selfie-sticks are intrusive? Prepare for the selfie-drone.

This next generation of drones, which are just beginning to roll out, doesn’t require users to hold remote controllers: They are hands-free. Simply toss them in the air, and they will follow you like Tinker Bell. With names such as Lily (around $700 on pre-order) and Nixie (not yet available for pre-order), they are capable of recording breathtaking video footage and trailing adventure travelers across bridges and streams, down ski slopes and into secluded gardens.

Nixie, which you can wear on your wrist until you want to fling it off for a photo or video, has a “boomerang mode” that allows it to fly back to you as if it were a trained raptor. A promotional video for Lily shows a man with a backpack lobbing the drone like a stone over a bridge and casually walking away, only to have the thing float up and follow him. Think you can outmaneuver the contraption in white-water rapids? Lily is waterproof. I watched with awe a video of Lily being dumped into a river beside a woman in a kayak (where one assumes Lily will perish), yet within seconds emerging and rising, like Glenn Close from the bathtub in “Fatal Attraction.”

There is no denying that the latest drone technology is impressive. And the footage is striking. Adventure travelers who wish to watch themselves scale Kilimanjaro or surf in Hawaii along the North Shore of Oahu will no doubt want one. But if selfie-drones become staples of every traveler who can afford them, we stand to lose more than we stand to gain when it comes to privacy, safety and quality-of-life factors like peace and beauty.

Imagine sunsets at the lake or beach with dozens of selfie-drones cluttering the sky, each vying for that perfect shot. Picture canoodling on a seemingly remote park bench during your romantic getaway and ending up on video. The intimate walks and tête-à-têtes that call to mind Jane Eyre and Mr. Rochester would hardly be the same with drones whizzing by. Think of your children building sand castles and being videotaped by passing drones. Who will be watching and recording us, and where will that information end up?

I shudder to think of 17- and 18-year-olds receiving drones for Christmas and on their winter vacations crashing the contraptions into unsuspecting sunbathers. Or themselves. Lest you think I joke, consider that in May the singer Enrique Iglesias, who is well past his teenage years, sliced his fingers while trying to snap a photo with a (remote-controlled) drone during his concert in Mexico.

Read the entire article here.

Image courtesy of Google Search.

What If You Spoke Facebookish?

The video from comedian Jason Horton shows us what real-world interactions would be like if we conversed the same way we do online via Facebook. His conversations may be tongue-in-cheek, but they’re too close to reality for comfort. You have to suppose that these offline (real-world) status updates would have us drowning in hashtags, overreaction, moralizing, and endless yawn-inducing monologues.

[tube]aRmKD23Pstk[/tube]

I’d rather have Esperanto make a comeback.

Video courtesy of Jason Horton.

Don’t Call Me; I’ll Not Call You Either

google-search-telephone

We all have smartphones, but the phone call is dead. That arcane tool of real-time conversation between two people (sometimes more) is making way for asynchronous sharing via text, images and other data.

From the Atlantic:

One of the ironies of modern life is that everyone is glued to their phones, but nobody uses them as phones anymore. Not by choice, anyway. Phone calls—you know, where you put the thing up to your ear and speak to someone in real time—are becoming relics of a bygone era, the “phone” part of a smartphone turning vestigial as communication evolves, willingly or not, into data-oriented formats like text messaging and chat apps.

The distaste for telephony is especially acute among Millennials, who have come of age in a world of AIM and texting, then gchat and iMessage, but it’s hardly limited to young people. When asked, people with a distaste for phone calls argue that they are presumptuous and intrusive, especially given alternative methods of contact that don’t make unbidden demands for someone’s undivided attention. In response, some have diagnosed a kind of telephoniphobia among this set. When even initiating phone calls is a problem—and even innocuous ones, like phoning the local Thai place to order takeout—then anxiety rather than habit may be to blame: When asynchronous, textual media like email or WhatsApp allow you to intricately craft every exchange, the improvisational nature of ordinary, live conversation can feel like an unfamiliar burden. Those in power sometimes think that this unease is a defect in need of remediation, while those supposedly afflicted by it say they are actually just fine, thanks very much.

But when it comes to taking phone calls and not making them, nobody seems to have admitted that using the telephone today is a different material experience than it was 20 or 30 (or 50) years ago, not just a different social experience. That’s not just because our phones have also become fancy two-way pagers with keyboards, but also because they’ve become much crappier phones. It’s no wonder that a bad version of telephony would be far less desirable than a good one. And the telephone used to be truly great, partly because of the situation of its use, and partly because of the nature of the apparatus we used to refer to as the “telephone”—especially the handset.

On the infrastructural level, mobile phones operate on cellular networks, which route calls between transceivers distributed across a service area. These networks are wireless, obviously, which means that signal strength, traffic, and interference can make calls difficult or impossible. Together, these factors have made phone calls synonymous with unreliability. Failures to connect, weak signals that staccato sentences into bursts of signal and silence, and the frequency of dropped calls all help us find excuses not to initiate or accept a phone call.

By contrast, the traditional, wired public switched telephone network (PSTN) operates by circuit switching. When a call is connected, one line is connected to another by routing it through a network of switches. At first these were analog signals running over copper wire, which is why switchboard operators had to help connect calls. But even after the PSTN went digital and switching became automated, a call was connected and then maintained over a reliable circuit for its duration. Calls almost never dropped and rarely failed to connect.

But now that more than half of American adults under 35 use mobile phones as their only phones, the intrinsic unreliability of the cellular network has become internalized as a property of telephony. Even if you might have a landline on your office desk, the cellular infrastructure has conditioned us to think of phone calls as fundamentally unpredictable affairs. Of course, why single out phones? IP-based communications like IM and iMessage are subject to the same signal and routing issues as voice, after all. But because those services are asynchronous, a slow or failed message feels like less of a failure—you can just regroup and try again. When you combine the seemingly haphazard reliability of a voice call with the sense of urgency or gravity that would recommend a phone call instead of a Slack DM or an email, the risk of failure amplifies the anxiety of unfamiliarity. Telephone calls now exude untrustworthiness from their very infrastructure.

Going deeper than dropped connections, telephony suffered from audio-signal processing compromises long before cellular service came along, but the differences between mobile and landline phone usage amplify those challenges, as well. At first, telephone audio was entirely analog, such that the signal of your voice and your interlocutor’s would be sent directly over the copper wire. The human ear can hear frequencies up to about 20 kHz, but for bandwidth considerations, the channel was restricted to a narrow frequency range called the voice band, between 300 and 3,400 Hz. It was a reasonable choice when the purpose of phones—to transmit and receive normal human speech—was taken into account.

By the 1960s, demand for telephony recommended more efficient methods, and the transistor made it both feasible and economical to carry many more calls on a single, digital circuit. The standard that was put in place cemented telephony’s commitment to the voice band, a move that would reverberate in the ears of our mobile phones a half-century later.

In order to digitally switch calls, the PSTN became subject to sampling, the process of converting a continuous signal to a discrete one. Sampling is carried out by capturing snapshots of a source signal at a specific interval. A principle called the Nyquist–Shannon sampling theorem specifies that a waveform of a particular maximum frequency can be reconstructed from a sample taken at twice that frequency per second. Since the voice band required only 4 kHz of bandwidth, a sampling rate of 8 kHz (that is, 8,000 samples per second) was established by Bell Labs engineers for a voice digitization method. This system used a technique developed by Bernard Oliver, John Pierce, and Claude Shannon in the late ‘40s called Pulse Code Modulation (PCM). In 1962, Bell began deploying PCM into the telephone-switching network, and the 3 kHz range for telephone calls was effectively fixed.

Since the PSTN is still very much alive and well and nearly entirely digitized save for the last mile, this sampling rate has persisted over the decades. (If you have a landline in an older home, its signal is probably still analog until it reaches the trunk of your telco provider.) Cellular phones still have to interface with the ordinary PSTN, so they get sampled in this range too.

Two intertwined problems arise. First, it turns out that human voices may transmit important information well above 3,300 Hz or even 5,000 Hz. The auditory neuroscientist Brian Monson has conducted substantial research on high-frequency energy perception. A widely covered 2011 study showed that subjects could still discern communicative information well above the frequencies typically captured in telephony. Even though frequencies above 5,000 Hz are too high to transmit clear spoken language without the lower frequencies, Monson’s subjects could discern talking from singing and determine the sex of the speaker with reasonable accuracy, even when all the signal under 5,000 Hz was removed entirely. Monson’s study suggests that 20th-century decisions about bandwidth and sampling rested on incorrect assumptions about how much of the range of human hearing is used for communication by voice.

That wasn’t necessarily an issue until the second part of the problem arose: the way we use mobile phones versus landline phones. When the PSTN was first made digital, home and office phones were used in predictable environments: a bedroom, a kitchen, an office. In these circumstances, telephony became a private affair cut off from the rest of the environment. You’d close the door or move into the hallway to conduct a phone call, not only for the quiet but also for the privacy. Even in public, phones were situated out-of-the-way, whether in enclosed phone booths or tucked away onto walls in the back of a diner or bar, where noise could be minimized.

Today, of course, we can and do carry our phones with us everywhere. And when we try to use them, we’re far more likely to be situated in an environment that is not compatible with the voice band—coffee shops, restaurants, city streets, and so forth. Background noise tends to be low-frequency, and, when it’s present, the higher frequencies, which Monson showed are more important than we thought in any circumstance, become particularly important. But because digital sampling makes those frequencies unavailable, we tend not to be able to hear clearly. Add digital signal loss from low or wavering wireless signals, and the situation gets even worse. Not only are phone calls unstable, but even when they connect and stay connected in a technical sense, you still can’t hear well enough to feel connected in a social one. By their very nature, mobile phones make telephony seem unreliable.

Read the entire story here.
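
The sampling arithmetic in the excerpt above is compact enough to sketch in a few lines of Python. Note that the 8-bit sample depth (and the resulting 64 kbps channel rate) is a standard PCM figure I have assumed here; the article itself only gives the 4 kHz voice band and the 8 kHz sampling rate.

    # A minimal sketch of the Nyquist arithmetic behind voice-band telephony.
    # The 4 kHz band edge and 8 kHz sampling rate follow the article; the
    # 8-bit sample depth (and hence the 64 kbps channel) is assumed.

    VOICE_BAND_HZ = 4_000                  # bandwidth reserved for one speech channel
    SAMPLING_RATE_HZ = 2 * VOICE_BAND_HZ   # Nyquist: sample at twice the highest frequency
    BITS_PER_SAMPLE = 8                    # typical PCM word length (assumed)

    print("samples per second:", SAMPLING_RATE_HZ)                    # 8000
    print("bits per second:   ", SAMPLING_RATE_HZ * BITS_PER_SAMPLE)  # 64000

    # Anything above half the sampling rate (4 kHz) simply cannot be represented,
    # which is why higher-frequency cues in speech never reach the far end.

In other words, the higher frequencies are not merely attenuated by the network; above the Nyquist limit they are discarded before the call is even switched.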

Image courtesy of Google Search.

The Tech Emperor Has No Clothes


Bill Hewlett. David Packard. Bill Gates. Paul Allen. Steve Jobs. Larry Ellison. Gordon Moore. Tech titans. Moguls of the microprocessor. Their names hold a key place in the founding and shaping of our technological evolution. That they catalyzed and helped create entire economic sectors is beyond doubt. Yet a deeper, objective analysis of market innovation shows that the view of the lone great man (or two) — combating and succeeding against all-comers — may be more self-perpetuating myth than reality. The idea that a single, visionary individual drives history and shapes the future is a long-standing and enduring invention.

From Technology Review:

Since Steve Jobs’s death, in 2011, Elon Musk has emerged as the leading celebrity of Silicon Valley. Musk is the CEO of Tesla Motors, which produces electric cars; the CEO of SpaceX, which makes rockets; and the chairman of SolarCity, which provides solar power systems. A self-made billionaire, programmer, and engineer—as well as an inspiration for Robert Downey Jr.’s Tony Stark in the Iron Man movies—he has been on the cover of Fortune and Time. In 2013, he was first on the Atlantic’s list of “today’s greatest inventors,” nominated by leaders at Yahoo, Oracle, and Google. To believers, Musk is steering the history of technology. As one profile described his mystique, his “brilliance, his vision, and the breadth of his ambition make him the one-man embodiment of the future.”

Musk’s companies have the potential to change their sectors in fundamental ways. Still, the stories around these advances—and around Musk’s role, in particular—can feel strangely outmoded.

The idea of “great men” as engines of change grew popular in the 19th century. In 1840, the Scottish philosopher Thomas Carlyle wrote that “the history of what man has accomplished in this world is at bottom the history of the Great Men who have worked here.” It wasn’t long, however, before critics questioned this one-dimensional view, arguing that historical change is driven by a complex mix of trends and not by any one person’s achievements. “All of those changes of which he is the proximate initiator have their chief causes in the generations he descended from,” Herbert Spencer wrote in 1873. And today, most historians of science and technology do not believe that major innovation is driven by “a lone inventor who relies only on his own imagination, drive, and intellect,” says Daniel Kevles, a historian at Yale. Scholars are “eager to identify and give due credit to significant people but also recognize that they are operating in a context which enables the work.” In other words, great leaders rely on the resources and opportunities available to them, which means they do not shape history as much as they are molded by the moments in which they live.

Musk’s success would not have been possible without, among other things, government funding for basic research and subsidies for electric cars and solar panels. Above all, he has benefited from a long series of innovations in batteries, solar cells, and space travel. He no more produced the technological landscape in which he operates than the Russians created the harsh winter that allowed them to vanquish Napoleon. Yet in the press and among venture capitalists, the great-man model of Musk persists, with headlines citing, for instance, “His Plan to Change the Way the World Uses Energy” and his own claim of “changing history.”

The problem with such portrayals is not merely that they are inaccurate and unfair to the many contributors to new technologies. By warping the popular understanding of how technologies develop, great-man myths threaten to undermine the structure that is actually necessary for future innovations.

Space cowboy

Elon Musk, the best-selling biography by business writer Ashlee Vance, describes Musk’s personal and professional trajectory—and seeks to explain how, exactly, the man’s repeated “willingness to tackle impossible things” has “turned him into a deity in Silicon Valley.”

Born in South Africa in 1971, Musk moved to Canada at age 17; he took a job cleaning the boiler room of a lumber mill and then talked his way into an internship at a bank by cold-calling a top executive. After studying physics and economics in Canada and at the Wharton School of the University of Pennsylvania, he enrolled in a PhD program at Stanford but opted out after a couple of days. Instead, in 1995, he cofounded a company called Zip2, which provided an online map of businesses—“a primitive Google maps meets Yelp,” as Vance puts it. Although he was not the most polished coder, Musk worked around the clock and slept “on a beanbag next to his desk.” This drive is “what the VCs saw—that he was willing to stake his existence on building out this platform,” an early employee told Vance. After Compaq bought Zip2, in 1999, Musk helped found an online financial services company that eventually became PayPal. This was when he “began to hone his trademark style of entering an ultracomplex business and not letting the fact that he knew very little about the industry’s nuances bother him,” Vance writes.

When eBay bought PayPal for $1.5 billion, in 2002, Musk emerged with the wherewithal to pursue two passions he believed could change the world. He founded SpaceX with the goal of building cheaper rockets that would facilitate research and space travel. Investing over $100 million of his personal fortune, he hired engineers with aeronautics experience, built a factory in Los Angeles, and began to oversee test launches from a remote island between Hawaii and Guam. At the same time, Musk cofounded Tesla Motors to develop battery technology and electric cars. Over the years, he cultivated a media persona that was “part playboy, part space cowboy,” Vance writes.

Musk sells himself as a singular mover of mountains and does not like to share credit for his success. At SpaceX, in particular, the engineers “flew into a collective rage every time they caught Musk in the press claiming to have designed the Falcon rocket more or less by himself,” Vance writes, referring to one of the company’s early models. In fact, Musk depends heavily on people with more technical expertise in rockets and cars, more experience with aeronautics and energy, and perhaps more social grace in managing an organization. Those who survive under Musk tend to be workhorses willing to forgo public acclaim. At SpaceX, there is Gwynne Shotwell, the company president, who manages operations and oversees complex negotiations. At Tesla, there is JB Straubel, the chief technology officer, responsible for major technical advances. Shotwell and Straubel are among “the steady hands that will forever be expected to stay in the shadows,” writes Vance. (Martin Eberhard, one of the founders of Tesla and its first CEO, arguably contributed far more to its engineering achievements. He had a bitter feud with Musk and left the company years ago.)

Likewise, Musk’s success at Tesla is undergirded by public-sector investment and political support for clean tech. For starters, Tesla relies on lithium-ion batteries pioneered in the late 1980s with major funding from the Department of Energy and the National Science Foundation. Tesla has benefited significantly from guaranteed loans and state and federal subsidies. In 2010, the company reached a loan agreement with the Department of Energy worth $465 million. (Under this arrangement, Tesla agreed to produce battery packs that other companies could benefit from and promised to manufacture electric cars in the United States.) In addition, Tesla has received $1.29 billion in tax incentives from Nevada, where it is building a “gigafactory” to produce batteries for cars and consumers. It has won an array of other loans and tax credits, plus rebates for its consumers, totaling another $1 billion, according to a recent series by the Los Angeles Times.

It is striking, then, that Musk insists on a success story that fails to acknowledge the importance of public-sector support. (He called the L.A. Times series “misleading and deceptive,” for instance, and told CNBC that “none of the government subsidies are necessary,” though he did admit they are “helpful.”)

If Musk’s unwillingness to look beyond himself sounds familiar, Steve Jobs provides a recent antecedent. Like Musk, who obsessed over Tesla cars’ door handles and touch screens and the layout of the SpaceX factory, Jobs brought a fierce intensity to product design, even if he did not envision the key features of the Mac, the iPod, or the iPhone. An accurate version of Apple’s story would give more acknowledgment not only to the work of other individuals, from designer Jonathan Ive on down, but also to the specific historical context in which Apple’s innovation occurred. “There is not a single key technology behind the iPhone that has not been state funded,” says the economist Mariana Mazzucato. This includes the wireless networks, “the Internet, GPS, a touch-screen display, and … the voice-activated personal assistant Siri.” Apple has recombined these technologies impressively. But its achievements rest on many years of public-sector investment. To put it another way, do we really think that if Jobs and Musk had never come along, there would have been no smartphone revolution, no surge of interest in electric vehicles?

Read the entire story here.

Image: Titan Oceanus. Trevi Fountain, Rome. Public Domain.

Digital Forensics and the Wayback Machine

Amazon-Aug1999

Many of us see history — the school subject — as rather dull and boring. After all, how can the topic be made interesting when it’s usually taught by a coach who has other things on his or her mind [no joke, I have evidence of this from both sides of the Atlantic!].

Yet we also know that history’s lessons are essential to shaping our current world view and our vision for the future, in a myriad of ways. Since humans could speak and then write, our ancestors have recorded and transmitted their histories through oral storytelling, and then through books and assorted media.

Then came the internet. The explosion of content, media formats and related technologies over the last quarter-century has led to an immense challenge for archivists and historians intent on cataloging our digital stories. One facet of this challenge is the tremendous volume of information and its accelerating growth. Another is the dynamic nature of the content — much of it being constantly replaced and refreshed.

But, all is not lost. The Internet Archive, founded in 1996, has been quietly archiving text, pages, images, audio and, more recently, entire web sites from the Tubes of the vast Internets. The non-profit has currently archived around half a trillion web pages. It’s our modern-day equivalent of the Library of Alexandria.

Please say hello to the Internet Archive Wayback Machine, and give it a try. The Wayback Machine took the screenshot above of Amazon.com in 1999, in case you’ve ever wondered what Amazon looked like before it swallowed or destroyed entire retail sectors.
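
If you’d rather script a lookup than browse, the Internet Archive also exposes a simple public JSON endpoint for finding the closest archived snapshot of a page. The sketch below is a minimal illustration using only Python’s standard library; the URL and timestamp are just the Amazon page shown above.

    import json
    import urllib.parse
    import urllib.request

    def nearest_snapshot(url, timestamp=None):
        # Query the Wayback Machine "availability" endpoint for the closest capture.
        # timestamp is an optional YYYYMMDD string, e.g. "19990801".
        params = {"url": url}
        if timestamp:
            params["timestamp"] = timestamp
        api = "https://archive.org/wayback/available?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(api) as response:
            data = json.load(response)
        return data.get("archived_snapshots", {}).get("closest")

    snapshot = nearest_snapshot("amazon.com", "19990801")
    if snapshot:
        print(snapshot["timestamp"], snapshot["url"])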

From the New Yorker:

Malaysia Airlines Flight 17 took off from Amsterdam at 10:31 A.M. G.M.T. on July 17, 2014, for a twelve-hour flight to Kuala Lumpur. Not much more than three hours later, the plane, a Boeing 777, crashed in a field outside Donetsk, Ukraine. All two hundred and ninety-eight people on board were killed. The plane’s last radio contact was at 1:20 P.M. G.M.T. At 2:50 P.M. G.M.T., Igor Girkin, a Ukrainian separatist leader also known as Strelkov, or someone acting on his behalf, posted a message on VKontakte, a Russian social-media site: “We just downed a plane, an AN-26.” (An Antonov 26 is a Soviet-built military cargo plane.) The post includes links to video of the wreckage of a plane; it appears to be a Boeing 777.

Two weeks before the crash, Anatol Shmelev, the curator of the Russia and Eurasia collection at the Hoover Institution, at Stanford, had submitted to the Internet Archive, a nonprofit library in California, a list of Ukrainian and Russian Web sites and blogs that ought to be recorded as part of the archive’s Ukraine Conflict collection. Shmelev is one of about a thousand librarians and archivists around the world who identify possible acquisitions for the Internet Archive’s subject collections, which are stored in its Wayback Machine, in San Francisco. Strelkov’s VKontakte page was on Shmelev’s list. “Strelkov is the field commander in Slaviansk and one of the most important figures in the conflict,” Shmelev had written in an e-mail to the Internet Archive on July 1st, and his page “deserves to be recorded twice a day.”

On July 17th, at 3:22 P.M. G.M.T., the Wayback Machine saved a screenshot of Strelkov’s VKontakte post about downing a plane. Two hours and twenty-two minutes later, Arthur Bright, the Europe editor of the Christian Science Monitor, tweeted a picture of the screenshot, along with the message “Grab of Donetsk militant Strelkov’s claim of downing what appears to have been MH17.” By then, Strelkov’s VKontakte page had already been edited: the claim about shooting down a plane was deleted. The only real evidence of the original claim lies in the Wayback Machine.

The average life of a Web page is about a hundred days. Strelkov’s “We just downed a plane” post lasted barely two hours. It might seem, and it often feels, as though stuff on the Web lasts forever, for better and frequently for worse: the embarrassing photograph, the regretted blog (more usually regrettable not in the way the slaughter of civilians is regrettable but in the way that bad hair is regrettable). No one believes any longer, if anyone ever did, that “if it’s on the Web it must be true,” but a lot of people do believe that if it’s on the Web it will stay on the Web. Chances are, though, that it actually won’t. In 2006, David Cameron gave a speech in which he said that Google was democratizing the world, because “making more information available to more people” was providing “the power for anyone to hold to account those who in the past might have had a monopoly of power.” Seven years later, Britain’s Conservative Party scrubbed from its Web site ten years’ worth of Tory speeches, including that one. Last year, BuzzFeed deleted more than four thousand of its staff writers’ early posts, apparently because, as time passed, they looked stupider and stupider. Social media, public records, junk: in the end, everything goes.

Web pages don’t have to be deliberately deleted to disappear. Sites hosted by corporations tend to die with their hosts. When MySpace, GeoCities, and Friendster were reconfigured or sold, millions of accounts vanished. (Some of those companies may have notified users, but Jason Scott, who started an outfit called Archive Team—its motto is “We are going to rescue your shit”—says that such notification is usually purely notional: “They were sending e-mail to dead e-mail addresses, saying, ‘Hello, Arthur Dent, your house is going to be crushed.’ ”) Facebook has been around for only a decade; it won’t be around forever. Twitter is a rare case: it has arranged to archive all of its tweets at the Library of Congress. In 2010, after the announcement, Andy Borowitz tweeted, “Library of Congress to acquire entire Twitter archive—will rename itself Museum of Crap.” Not long after that, Borowitz abandoned that Twitter account. You might, one day, be able to find his old tweets at the Library of Congress, but not anytime soon: the Twitter Archive is not yet open for research. Meanwhile, on the Web, if you click on a link to Borowitz’s tweet about the Museum of Crap, you get this message: “Sorry, that page doesn’t exist!”

The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: “Page Not Found.” This is known as “link rot,” and it’s a drag, but it’s better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.) Or maybe the page has been moved and something else is where it used to be. This is known as “content drift,” and it’s more pernicious than an error message, because it’s impossible to tell that what you’re seeing isn’t what you went to look for: the overwriting, erasure, or moving of the original is invisible. For the law and for the courts, link rot and content drift, which are collectively known as “reference rot,” have been disastrous. In providing evidence, legal scholars, lawyers, and judges often cite Web pages in their footnotes; they expect that evidence to remain where they found it as their proof, the way that evidence on paper—in court records and books and law journals—remains where they found it, in libraries and courthouses. But a 2013 survey of law- and policy-related publications found that, at the end of six years, nearly fifty per cent of the URLs cited in those publications no longer worked. According to a 2014 study conducted at Harvard Law School, “more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information.” The overwriting, drifting, and rotting of the Web is no less catastrophic for engineers, scientists, and doctors. Last month, a team of digital library researchers based at Los Alamos National Laboratory reported the results of an exacting study of three and a half million scholarly articles published in science, technology, and medical journals between 1997 and 2012: one in five links provided in the notes suffers from reference rot. It’s like trying to stand on quicksand.

The footnote, a landmark in the history of civilization, took centuries to invent and to spread. It has taken mere years nearly to destroy. A footnote used to say, “Here is how I know this and where I found it.” A footnote that’s a link says, “Here is what I used to know and where I once found it, but chances are it’s not there anymore.” It doesn’t matter whether footnotes are your stock-in-trade. Everybody’s in a pinch. Citing a Web page as the source for something you know—using a URL as evidence—is ubiquitous. Many people find themselves doing it three or four times before breakfast and five times more before lunch. What happens when your evidence vanishes by dinnertime?

The day after Strelkov’s “We just downed a plane” post was deposited into the Wayback Machine, Samantha Power, the U.S. Ambassador to the United Nations, told the U.N. Security Council, in New York, that Ukrainian separatist leaders had “boasted on social media about shooting down a plane, but later deleted these messages.” In San Francisco, the people who run the Wayback Machine posted on the Internet Archive’s Facebook page, “Here’s why we exist.”

Read the entire story here.
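
As a practical aside, the link-rot half of “reference rot” is easy enough to measure yourself; content drift, as the article explains, is far harder to detect, because the page still loads. Here is a rough, illustrative check, assuming Python’s standard library; the URLs are placeholders.

    import urllib.error
    import urllib.request

    def is_rotten(url, timeout=10):
        # True if the URL no longer resolves or returns an error status.
        # A page that loads but now says something different (content drift)
        # is invisible to a check like this.
        try:
            request = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return response.status >= 400
        except (urllib.error.URLError, ValueError):
            return True

    for cited in ["https://archive.org/", "http://example.com/some-old-footnote"]:
        print(cited, "->", "rotten" if is_rotten(cited) else "alive")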

Image: Wayback Machine’s screenshot of Amazon.com’s home page, August 1999.

Girlfriend or Nuclear Reactor?

Yellowcake

Ask a typical 14-year-old boy if he’d prefer a girlfriend or a home-made nuclear fusion reactor and he’s highly likely to gravitate towards the former. Not so Taylor Wilson; he seems to prefer the company of Geiger counters, particle accelerators, vacuum tubes and radioactive materials.

From the Guardian:

Taylor Wilson has a Geiger counter watch on his wrist, a sleek, sporty-looking thing that sounds an alert in response to radiation. As we enter his parents’ garage and approach his precious jumble of electrical equipment, it emits an ominous beep. Wilson is in full flow, explaining the old-fashioned control panel in the corner, and ignores it. “This is one of the original atom smashers,” he says with pride. “It would accelerate particles up to, um, 2.5m volts – so kind of up there, for early nuclear physics work.” He pats the knobs.

It was in this garage that, at the age of 14, Wilson built a working nuclear fusion reactor, bringing the temperature of its plasma core to 580 million °C – 40 times as hot as the core of the sun. This skinny kid from Arkansas, the son of a Coca-Cola bottler and a yoga instructor, experimented for years, painstakingly acquiring materials, instruments and expertise until he was able to join the elite club of scientists who have created a miniature sun on Earth.

Not long after, Wilson won $50,000 at a science fair, for a device that can detect nuclear materials in cargo containers – a counter-terrorism innovation he later showed to a wowed Barack Obama at a White House-sponsored science fair.

Wilson’s two TED talks (Yup, I Built A Nuclear Fusion Reactor and My Radical Plan For Small Nuclear Fission Reactors) have been viewed almost 4m times. A Hollywood biopic is planned, based on an imminent biography. Meanwhile, corporations have wooed him and the government has offered to buy some of his inventions. Former US under-secretary for energy, Kristina Johnson, told his biographer, Tom Clynes: “I would say someone like him comes along maybe once in a generation. He’s not just smart – he’s cool and articulate. I think he may be the most amazing kid I’ve ever met.”

Seven years on from fusing the atom, the gangly teen with a mop of blond hair is now a gangly 21-year-old with a mop of blond hair, who shuttles between his garage-cum-lab in the family’s home in Reno, Nevada, and other more conventional labs. In addition to figuring out how to intercept dirty bombs, he looks at ways of improving cancer treatment and lowering energy prices – while plotting a hi-tech business empire around the patents.

As we tour his parents’ garage, Wilson shows me what appears to be a collection of nuggets. His watch sounds another alert, but he continues lovingly to detail his inventory. “The first thing I got for my fusion project was a mass spectrometer from an ex-astronaut in Houston, Texas,” he explains. This was a treasure he obtained simply by writing a letter asking for it. He ambles over to a large steel safe, with a yellow and black nuclear hazard sticker on the front. He spins the handle, opens the door and extracts a vial with pale powder in it.

“That’s some yellowcake I made – the famous stuff that Saddam Hussein was supposedly buying from Niger. This is basically the starting point for nuclear, whether it’s a weapons programme or civilian energy production.” He gives the vial a shake. A vision of dodgy dossiers, atomic intrigue and mushroom clouds swims before me, a reverie broken by fresh beeping. “That’ll be the allanite. It’s a rare earth mineral,” Wilson explains. He picks up a dark, knobbly little rock streaked with silver. “It has thorium, a potential nuclear fuel.”

I think now may be a good moment to exit the garage, but the tour is not over. “One of the things people are surprised by is how ubiquitous radiation and radioactivity is,” Wilson says, giving me a reassuring look. “I’m very cautious. I’m actually a bit of a hypochondriac. It’s all about relative risk.”

He paces over to a plump steel tube, elevated to chest level – an object that resembles an industrial vacuum cleaner, and gleams in the gloom. This is the jewel in Wilson’s crown, the reactor he built at 14, and he gives it a tender caress. “This is safer than many things,” he says, gesturing to his Aladdin’s cave of atomic accessories. “For instance, horse riding. People fear radioactivity because it is very mysterious. You want to have respect for it, but not be paralysed by fear.”

The Wilson family home is a handsome, hacienda-style house tucked into foothills outside Reno. Unusually for the high desert at this time of year, grey clouds with bellies of rain rumble overhead. Wilson, by contrast, is all sunny smiles. He is still the slightly ethereal figure you see in the TED talks (I have to stop myself from offering him a sandwich), but the handshake is firm, the eye contact good and the energy enviable – even though Wilson has just flown back from a weekend visiting friends in Los Angeles. “I had an hour’s sleep last night. Three hours the night before that,” he says, with a hint of pride.

He does not drink or smoke, is a natty dresser (in suede jacket, skinny tie, jeans and Converse-style trainers) and he is a talker. From the moment we meet until we part hours later, he talks and talks, great billows of words about the origin of his gift and the responsibility it brings; about trying to be normal when he knows he’s special; about Fukushima, nuclear power and climate change; about fame and ego, and seeing his entire life chronicled in a book for all the world to see when he’s barely an adult and still wrestling with how to ask a girl out on a date.

The future feels urgent and mysterious. “My life has been this series of events that I didn’t see coming. It’s both exciting and daunting to know you’re going to be constantly trying to one-up yourself,” he says. “People can have their opinions about what I should do next, but my biggest pressure is internal. I hate resting on laurels. If I burn out, I burn out – but I don’t see that happening. I’ve more ideas than I have time to execute.”

Wilson credits his parents with huge influence, but wavers on the nature versus nurture debate: was he born brilliant or educated into it? “I don’t have an answer. I go back and forth.” The pace of technological change makes predicting his future a fool’s errand, he says. “It’s amazing – amazing – what I can do today that I couldn’t have done if I was born 10 years earlier.” And his ambitions are sky-high: he mentions, among many other plans, bringing electricity and state-of-the-art healthcare to the developing world.

Read the entire fascinating story here.

Image: Yellowcake, a type of uranium concentrate powder, an intermediate step in the processing of uranium ores. Courtesy of United States Department of Energy. Public Domain.

Pics Or It Didn’t Happen

Apparently, in this day and age of ubiquitous technology there is no excuse for not having evidence. So, if you recently had a terrific (or terrible) meal in your (un-)favorite restaurant you must have pictures to back up your story. If you just returned from a gorgeous mountain hike you must have images for every turn on the trail. Just attended your high-school reunion? Pictures! Purchased a new mattress? Pictures! Cracked your heirloom tea service? Pictures! Mowed the lawn? Pictures! Stubbed toe? Pictures!

The pressure to record our experiences has grown in lock-step with the explosive growth in smartphones and connectivity. Collecting and sharing our memories remains a key part of our story-telling nature. But, this obsessive drive to record the minutiae of every experience, however trivial, has many of us missing the moment — behind the camera or in front of it, we are no longer in the moment.

Just as our online social networks have stirred growth in the increasingly neurotic condition known as FOMO (fear of missing out), we are now on the cusp of some new techno-enabled, acronym-friendly disorders. Let’s call these FONBB — fear of not being believed, FONGELOFAMP — fear of not getting enough likes or followers as my peers, FOBIO — fear of becoming irrelevant online.

From NYT:

“Pics or it didn’t happen” is the response you get online when you share some unlikely experience or event and one of your friends, followers or stalkers calls you out for evidence. “Next thing I know, I’m bowling with Bill Murray!” Pics or it didn’t happen. “I taught my cockatoo how to rap ‘Baby Got Back’ — in pig Latin.” Pics or it didn’t happen. “Against all odds, I briefly smiled today.” Pics or it didn’t happen!

It’s a glib reply to a comrade’s boasting — coming out of Internet gaming forums to rebut boasts about high scores and awesome kills — but the fact is we like proof. Proof in the instant replay that decides the big game, the vacation pic that persuades us we were happy once, the selfie that reassures us that our face is still our own. “Pics or it didn’t happen” gained traction because in an age of bountiful technology, when everyone is armed with a camera, there is no excuse for not having evidence.

Does the phrase have what it takes to transcend its humble origins as a cruddy meme and become an aphorism in the pantheon of “A picture is worth a thousand words” and “Seeing is believing”? For clues to the longevity of “Pics,” let’s take a survey of some classic epigrams about visual authority and see how they hold up under the realities of contemporary American life.

“A picture is worth a thousand words” is a dependable workhorse, emerging from early-20th-century newspaper culture as a pitch to advertisers: Why rely on words when an illustration can accomplish so much more? It seems appropriate to test the phrase with a challenge drawn from contemporary news media. Take one of the Pulitzer Prize-winning photographs from The St. Louis Post-Dispatch’s series on Ferguson. In the darkness, a figure is captured in an instant of dynamic motion: legs braced, long hair flying wild, an extravagant plume of smoke and flames trailing from the incendiary object he is about to hurl into space. His chest is covered by an American-flag T-shirt, he holds fire in one hand and a bag of chips in the other, a living collage of the grand and the bathetic.

Headlines — like the graphics that gave birth to “A picture is worth a thousand words” — are a distillation, a shortcut to meaning. Breitbart News presented that photograph under “Rioters Throw Molotov Cocktails at Police in Ferguson — Again.” CBS St. Louis/Associated Press ran with “Protester Throws Tear-Gas Canister Back at Police While Holding Bag of Chips.” Rioter, protester, Molotov cocktail, tear-gas canister. Peace officers, hypermilitarized goons. What’s the use of a thousand words when they are Babel’s noise, the confusion of a thousand interpretations?

“Seeing is believing” was an early entry in the canon. Most sources attribute it to the Apostle Thomas’s incredulity over Jesus’ resurrection. (“Last night after you left the party, Jesus turned all the water into wine” is a classic “Pics or it didn’t happen” moment.) “Unless I see the nail marks in his hands and put my finger where the nails were, and put my hand into his side, I will not believe it.” Once Jesus shows up, Thomas concludes that seeing will suffice. A new standard of proof enters the lexicon.

Intuitive logic is not enough, though. Does “Seeing is believing” hold up when confronted by current events like, say, the killing of Eric Garner last summer by the police? The bystander’s video is over two minutes long, so dividing it into an old-fashioned 24 frames per second gives us a bounty of more than 3,000 stills. A real bonanza, atrocity-wise. But here the biblical formulation didn’t hold up: Even with the video and the medical examiner’s assessment of homicide, a grand jury declined to indict Officer Daniel Pantaleo. Time to downgrade “Seeing is believing,” too, and kick “Justice is blind” up a notch.

Can we really use one cherry-picked example to condemn a beloved idiom? Is the system rigged? Of course it is. Always, everywhere. Let’s say these expressions concerning visual evidence are not to blame for their failures, but rather subjectivity is. The problem is us. How we see things. How we see people. We can broaden our idiomatic investigations to include phrases that account for the human element, like “The eyes are the windows to the soul.” We can also change our idiomatic stressors from contemporary video to early photography. Before smartphones put a developing booth in everyone’s pocket, affordable portable cameras loosed amateur photographers upon the world. Everyday citizens could now take pictures of children in their Sunday best, gorgeous vistas of unspoiled nature and lynchings.

A hundred years ago, Americans took souvenirs of lynchings, just as we might now take a snapshot of a farewell party for a work colleague or a mimosa-heavy brunch. They were keepsakes, sent to relatives to allow them to share in the event, and sometimes made into postcards so that one could add a “Wish you were here”-type endearment. In the book “Without Sanctuary: Lynching Photography in America,” Leon F. Litwack shares an account of the 1915 lynching of Thomas Brooks in Fayette County, Tenn. “Hundreds of Kodaks clicked all morning at the scene. … People in automobiles and carriages came from miles around to view the corpse dangling at the end of the rope.” Pics or it didn’t happen. “Picture-card photographers installed a portable printing plant at the bridge and reaped a harvest in selling postcards.” Pics or it didn’t happen. “Women and children were there by the score. At a number of country schools, the day’s routine was delayed until boy and girl pupils could get back from viewing the lynched man.” Pics or it didn’t happen.

Read the entire story here.

Viva Vinyl

Hotel-California-album

When I first moved to college and a tiny dorm room (in the UK they’re called halls of residence), my first purchase was a Garrard turntable and a pair of Denon stereo speakers. Books would come later. First, I had to build a new shrine to my burgeoning vinyl collection, which thrives even today.

So, after what seems like a hundred years since those heady days and countless music technology revolutions, it comes as quite a surprise — but perhaps not — to see vinyl on a resurgent path. The disruptors tried to kill LPs, 45s and 12-inchers with 8-track (ha), compact cassette (yuk), minidisk (yawn), CD (cool), MP3 (meh), iPod (yay) and now streaming (hmm).

But, like a kindly zombie uncle, vinyl is something the music industry cannot bury for good. Why did vinyl capture the imagination and the ears of the audiophile so? Well, perhaps it comes from watching the slow turn of the LP on the cool silver platter. Or, it may be the anticipation from watching the needle spiral its way to the first track. Or the raw, crackling authenticity of the sound. For me it was the weekly pilgrimage to the dusty independent record store — sampling tracks on clunky headphones; soaking up the artistry of the album cover, the lyrics, the liner notes; discussing the pros and cons of the bands with friends. Our digital world has now mostly replaced this experience, but it cannot hope to replicate it. Long live vinyl.

From ars technica:

On Thursday [July 2, 2015] , Nielsen Music released its 2015 US mid-year report, finding that overall music consumption had increased by 14 percent in the first half of the year. What’s driving that boom? Well, certainly a growth in streaming—on-demand streaming increased year-over-year by 92.4 percent, with more than 135 billion songs streamed, and overall sales of digital streaming increased by 23 percent.

But what may be more fascinating is the continued resurgence of the old licorice pizza—that is, vinyl LPs. Nielsen reports that vinyl LP sales are up 38 percent year-to-date. “Vinyl sales now comprise nearly 9 percent of physical album sales,” Nielsen stated.

Who’s leading the charge on all that vinyl? None other than the music industry’s favorite singer-songwriter Taylor Swift with her album 1989, which sold 33,500 LPs. Swift recently flexed her professional muscle when she wrote an open letter to Apple, criticizing the company for failing to pay artists during the free three-month trial of Apple Music. Apple quickly kowtowed to the pop star and reversed its position.

Following behind Swift on the vinyl chart is Sufjan Stevens’ Carrie & Lowell, The Arctic Monkeys’ AM (released in 2013), Alabama Shakes’ Sound & Color, and in fifth place, none other than Miles Davis’ Kind of Blue, which sold 23,200 copies in 2015.

Also interesting is that Nielsen found that digital album sales were flat compared to last year, and digital track sales were down 10.4 percent. Unsurprisingly, CD sales were down 10 percent.

When Nielsen reported in 2010 that 2.5 million vinyl records were sold in 2009, Ars noted that was more than any other year since the media-tracking business started keeping score in 1991. Fast forward five years and that number has more than doubled, as Nielsen counted 5.6 million vinyl records sold. The trend shows little sign of abating—last year, the US’ largest vinyl plant reported that it was adding 16 vinyl presses to its lineup of 30, and just this year Ars reported on a company called Qrates that lets artists solicit crowdfunding to do small-batch vinyl pressing.

Read the entire story here.

Image: Hotel California, The Eagles, album cover. Courtesy of the author.

Online Social Networks Make Us More and Less Social

Two professors walk into a bar… One claims that online social networks enrich our relationships and social lives; the other claims that technology diminishes them and distracts us from real-world relationships. Professor Keith N. Hampton at Rutgers University’s School of Communication and Information argues the former, positive position, while Professor Larry Rosen at California State University argues the opposing view. Who’s right?

Well, they’re both probably right.

But, several consequences of our new social technologies seem more certain: our focus is increasingly fragmented and short; our memory and knowledge retention are increasingly outsourced; our impatience and need for instant gratification continue to grow; and our newly acquired anxieties continue to expand — fear of missing out, fear of being unfriended, fear of being trolled, fear of being shamed, fear from not getting comments or replies, fear of not going viral, fear of partner’s lack of status reciprocity, fear of partner’s status change, fear of being Photoshopped or photobombed, fear of having personal images distributed, fear of quiet…

From the WSJ:

With the spread of mobile technology, it’s become much easier for more people to maintain constant contact with their social networks online. And a lot of people are taking advantage of that opportunity.

One indication: A recent Pew Research survey of adults in the U.S. found that 71% use Facebook at least occasionally, and 45% of Facebook users check the site several times a day.

That sounds like people are becoming more sociable. But some people think the opposite is happening. The problem, they say, is that we spend so much time maintaining superficial connections online that we aren’t dedicating enough time or effort to cultivating deeper real-life relationships. Too much chatter, too little real conversation.

Others counter that online social networks supplement face-to-face sociability, they don’t replace it. These people argue that we can expand our social horizons online, deepening our connections to the world around us, and at the same time take advantage of technology to make our closest relationships even closer.

Larry Rosen, a professor of psychology at California State University, Dominguez Hills, says technology is distracting us from our real-world relationships. Keith N. Hampton, who holds the Professorship in Communication and Public Policy at Rutgers University’s School of Communication and Information, argues that technology is enriching those relationships and the rest of our social lives.

Read the entire story here.


Professional Trolling

Just a few short years ago the word “troll” in the context of the internet had not even entered our lexicon. Now, you can enter a well-paid career in the distasteful practice, especially if you live in Russia. You have to admire the human ability to find innovative and profitable ways to inflict pain on others.

From NYT:

Around 8:30 a.m. on Sept. 11 last year, Duval Arthur, director of the Office of Homeland Security and Emergency Preparedness for St. Mary Parish, Louisiana, got a call from a resident who had just received a disturbing text message. “Toxic fume hazard warning in this area until 1:30 PM,” the message read. “Take Shelter. Check Local Media and columbiachemical.com.”

St. Mary Parish is home to many processing plants for chemicals and natural gas, and keeping track of dangerous accidents at those plants is Arthur’s job. But he hadn’t heard of any chemical release that morning. In fact, he hadn’t even heard of Columbia Chemical. St. Mary Parish had a Columbian Chemicals plant, which made carbon black, a petroleum product used in rubber and plastics. But he’d heard nothing from them that morning, either. Soon, two other residents called and reported the same text message. Arthur was worried: Had one of his employees sent out an alert without telling him?

If Arthur had checked Twitter, he might have become much more worried. Hundreds of Twitter accounts were documenting a disaster right down the road. “A powerful explosion heard from miles away happened at a chemical plant in Centerville, Louisiana #ColumbianChemicals,” a man named Jon Merritt tweeted. The #ColumbianChemicals hashtag was full of eyewitness accounts of the horror in Centerville. @AnnRussela shared an image of flames engulfing the plant. @Ksarah12 posted a video of surveillance footage from a local gas station, capturing the flash of the explosion. Others shared a video in which thick black smoke rose in the distance.

Dozens of journalists, media outlets and politicians, from Louisiana to New York City, found their Twitter accounts inundated with messages about the disaster. “Heather, I’m sure that the explosion at the #ColumbianChemicals is really dangerous. Louisiana is really screwed now,” a user named @EricTraPPP tweeted at the New Orleans Times-Picayune reporter Heather Nolan. Another posted a screenshot of CNN’s home page, showing that the story had already made national news. ISIS had claimed credit for the attack, according to one YouTube video; in it, a man showed his TV screen, tuned to an Arabic news channel, on which masked ISIS fighters delivered a speech next to looping footage of an explosion. A woman named Anna McClaren (@zpokodon9) tweeted at Karl Rove: “Karl, Is this really ISIS who is responsible for #ColumbianChemicals? Tell @Obama that we should bomb Iraq!” But anyone who took the trouble to check CNN.com would have found no news of a spectacular Sept. 11 attack by ISIS. It was all fake: the screenshot, the videos, the photographs.

 In St. Mary Parish, Duval Arthur quickly made a few calls and found that none of his employees had sent the alert. He called Columbian Chemicals, which reported no problems at the plant. Roughly two hours after the first text message was sent, the company put out a news release, explaining that reports of an explosion were false. When I called Arthur a few months later, he dismissed the incident as a tasteless prank, timed to the anniversary of the attacks of Sept. 11, 2001. “Personally I think it’s just a real sad, sick sense of humor,” he told me. “It was just someone who just liked scaring the daylights out of people.” Authorities, he said, had tried to trace the numbers that the text messages had come from, but with no luck. (The F.B.I. told me the investigation was still open.)

The Columbian Chemicals hoax was not some simple prank by a bored sadist. It was a highly coordinated disinformation campaign, involving dozens of fake accounts that posted hundreds of tweets for hours, targeting a list of figures precisely chosen to generate maximum attention. The perpetrators didn’t just doctor screenshots from CNN; they also created fully functional clones of the websites of Louisiana TV stations and newspapers. The YouTube video of the man watching TV had been tailor-made for the project. A Wikipedia page was even created for the Columbian Chemicals disaster, which cited the fake YouTube video. As the virtual assault unfolded, it was complemented by text messages to actual residents in St. Mary Parish. It must have taken a team of programmers and content producers to pull off.

And the hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. The campaign followed the same pattern of fake news reports and videos, this time under the hashtag #EbolaInAtlanta, which briefly trended in Atlanta. Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

On the same day as the Ebola hoax, a totally different group of accounts began spreading a rumor that an unarmed black woman had been shot to death by police. They all used the hashtag #shockingmurderinatlanta. Here again, the hoax seemed designed to piggyback on real public anxiety; that summer and fall were marked by protests over the shooting of Michael Brown in Ferguson, Mo. In this case, a blurry video purports to show the shooting, as an onlooker narrates. Watching it, I thought I recognized the voice — it sounded the same as the man watching TV in the Columbian Chemicals video, the one in which ISIS supposedly claims responsibility. The accent was unmistakable, if unplaceable, and in both videos he was making a very strained attempt to sound American. Somehow the result was vaguely Australian.

Who was behind all of this? When I stumbled on it last fall, I had an idea. I was already investigating a shadowy organization in St. Petersburg, Russia, that spreads false information on the Internet. It has gone by a few names, but I will refer to it by its best known: the Internet Research Agency. The agency had become known for employing hundreds of Russians to post pro-Kremlin propaganda online under fake identities, including on Twitter, in order to create the illusion of a massive army of supporters; it has often been called a “troll farm.” The more I investigated this group, the more links I discovered between it and the hoaxes. In April, I went to St. Petersburg to learn more about the agency and its brand of information warfare, which it has aggressively deployed against political opponents at home, Russia’s perceived enemies abroad and, more recently, me.

Read the entire article here.

Art And Algorithms And Code And Cash

#!/usr/bin/perl
# 472-byte qrpff, Keith Winstein and Marc Horowitz <sipb-iap-dvd@mit.edu>
# MPEG 2 PS VOB file -> descrambled output on stdout.
# usage: perl -I <k1>:<k2>:<k3>:<k4>:<k5> qrpff
# where k1..k5 are the title key bytes in least to most-significant order

s''$/=\2048;while(<>){G=29;R=142;if((@a=unqT="C*",_)[20]&48){D=89;_=unqb24,qT,@
b=map{ord qB8,unqb8,qT,_^$a[--D]}@INC;s/...$/1$&/;Q=unqV,qb25,_;H=73;O=$b[4]<<9
|256|$b[3];Q=Q>>8^(P=(E=255)&(Q>>12^Q>>4^Q/8^Q))<<17,O=O>>8^(E&(F=(S=O>>14&7^O)
^S*8^S<<6))<<9,_=(map{U=_%16orE^=R^=110&(S=(unqT,"\xb\ntd\xbz\x14d")[_/16%8]);E
^=(72,@z=(64,72,G^=12*(U-2?0:S&17)),H^=_%64?12:0,@z)[_%8]}(16..271))[_]^((D>>=8
)+=P+(~F&E))for@a[128..$#a]}print+qT,@a}';s/[D-HO-U_]/\$$&/g;s/q/pack+/g;eval
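
Following the usage note at the top of the script, an invocation looks roughly like this (the key bytes and file names below are placeholders, not real values; actual title keys are specific to each disc and are given least- to most-significant, per the header comment):

# placeholder title-key bytes k1..k5, purely illustrative
perl -I 123:45:67:89:210 qrpff < scrambled_title.vob > descrambled_title.mpg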

You know that hacking has gone mainstream when the WSJ features it on the front page. Further, you know it must be passé when the WSJ claims that the art world is now purveying chunks of code as, well, art. You have to love this country for its entrepreneurial capitalist acumen!

So, if you are an enterprising (ex-)coder and have some cool Fortran, C++, or better yet, Assembler lying around, dust off the diskette (or floppy, or better yet, a punch card) and make haste to your nearest art gallery. You could become the first Picasso of programming — onward to the Gagosian! My story began with PL/1, IMS and then C, so my code may only be worthy of the artistic C-list.

From WSJ:

In March, Daniel Benitez, a cinema executive in Miami, paid $2,500 for a necktie. It wasn’t just any strip of designer neckwear. Imprinted on the blue silk were six lines of computer code that once brought the motion picture industry to its knees.

To the unschooled eye, the algorithm script on the tie, known formally as “qrpff,” looks like a lengthy typographical error.

But to Mr. Benitez and other computer cognoscenti, the algorithm it encodes is an artifact of rare beauty that embodies a kind of performance art. He framed it.

The algorithm sets out a procedure for what copyright holders once deemed a criminal act: picking the software lock on the digital scrambling system that Hollywood uses to protect its DVDs. At the turn of the century, hackers encoded it in many ways and distributed them freely—as programs, lines of poetry, lyrics in a rock song, and a square dance routine. They printed it on T-shirts and ties, like the item Mr. Benitez purchased. They proclaimed it free speech. No matter how many times the entertainment industry sued, their lawyers found the algorithm as hard to eradicate as kudzu.

Now it is exhibit A in the art world’s newest collecting trend.

Dealers in digital art are amassing algorithms, the computerized formulas that automate processes from stock-market sales to social networks.

In March, the online art brokerage Artsy and a digital code gallery called Ruse Laboratories held the world’s first algorithm art auction in New York. The Cooper Hewitt, Smithsonian Design Museum, where the auction was held as a fundraiser, is assembling a collection of computer code. In April, the Museum of Modern Art convened a gathering of computer experts and digital artists to discuss algorithms and design.

It is a small step for technology but a leap, perhaps, for the art world. “It is a whole new dimension we are trying to grapple with,” said curatorial director Cara McCarty at the Cooper Hewitt museum. “The art term I keep hearing is code.”

Read the entire article here.

Code snippet: qrpff, a Perl script for descrambling CSS-protected DVD content.

Myth Busting Silicon(e) Valley

map-Silicon-Valley

Question: What do silicone implants and Silicon Valley have in common? Answer: they are both instruments of a grandiose illusion. The first, on a mostly personal level, promises eternal youth and vigor; the second, on a much grander scale, promises eternal wealth and greatness for humanity.

So, let’s leave aside the human cosmetic question for another time and concentrate on the broad deception that is current Silicon Valley. It’s a deception at many levels: the self-deception of Silicon Valley’s young geeks and code jockeys, and the wider delusion that promises us all a glittering future underwritten by rapturous tech.

And who better to debunk the myths that envelop the Valley like San Francisco’s fog than Sam Biddle, former editor of Valleywag? He offers a scathing critique, which happens to be spot on. Quite rightly, he asks whether we need yet another urban, on-demand laundry app, and what on earth “Yo” is worth to society. But more importantly, he asks us to reconsider our misplaced awe and to knock Silicon Valley from its perch of self-fulfilling self-satisfaction. Yo and Facebook and Uber and Clinkle and Ringly and DogVacay and WhatsApp and the thousands of other trivial start-ups, despite astronomical valuations, will not be humanity’s saviors. We need better ideas and deeper answers.

From GQ:

I think my life is better because of my iPhone. Yours probably is, too. I’m grateful to live in a time when I can see my baby cousins or experience any album ever without getting out of bed. I’m grateful that I will literally never be lost again, so long as my phone has battery. And I’m grateful that there are so many people so much smarter than I am who devise things like this, which are magical for the first week they show up, then a given in my life a week later.

We live in an era of technical ability that would have nauseated our ancestors with wonder, and so much of it comes from one very small place in California. But all these unimpeachable humanoid upgrades—the smartphones, the Google-gifted knowledge—are increasingly the exception, rather than the rule, of Silicon Valley’s output. What was once a land of upstarts and rebels is now being led by the money-hungry and the unspirited. Which is why we have a start-up that mails your dog curated treats and an app that says “Yo.” The brightest minds in tech just lately seem more concerned with silly business ideas and innocuous “disruption,” all for the shot at an immense payday. And when our country’s smartest people are working on the dumbest things, we all lose out.

That gap between the Silicon Valley that enriches the world and the Silicon Valley that wastes itself on the trivial is widening daily. And one of the biggest contributing factors is that the Valley has lost touch with reality by subscribing to its own self-congratulatory mythmaking. That these beliefs are mostly baseless, or at least egotistically distorted, is a problem—not just for Silicon Valley but for the rest of us. Which is why we’re here to help the Valley tear down its own myths—these seven in particular.

Myth #1: Silicon Valley Is the Universe’s Only True Meritocracy

 Everyone in Silicon Valley has convinced himself he’s helped create a free-market paradise, the software successor to Jefferson’s brotherhood of noble yeomen. “Silicon Valley has this way of finding greatness and supporting it,” said a member of Greylock Partners, a major venture-capital firm with over $2 billion under management. “It values meritocracy more than anyplace else.” After complaints of the start-up economy’s profound whiteness reached mainstream discussion just last year, companies like Apple, Facebook, and Twitter reluctantly released internal diversity reports. The results were as homogenized as expected: At Twitter, 79 percent of the leadership is male and 72 percent of it is white. At Facebook, senior positions are 77 percent male and 74 percent white. Twitter—a company whose early success can be directly attributed to the pioneering downloads of black smartphone users—hosts an entirely white board of directors. It’s a pounding indictment of Silicon Valley’s corporate psyche that Mark Zuckerberg—a bourgeois white kid from suburban New York who attended Harvard—is considered the Horatio Alger 2.0 paragon. When Paul Graham, the then head of the massive start-up incubator Y Combinator, told The New York Times that he could “be tricked by anyone who looks like Mark Zuckerberg,” he wasn’t just talking about Zuck’s youth.

If there’s any reassuring news, it’s not that tech’s diversity crisis is getting better, but that in the face of so much dismal news, people are becoming angry enough and brave enough to admit that the state of things is not good. Silicon Valley loves data, after all, and with data readily demonstrating tech’s overwhelming white-guy problem, even the true believers in meritocracy see the circumstances as they actually are.

Earlier this year, Ellen Pao became the most mentioned name in Silicon Valley as her gender-discrimination suit against her former employer, Kleiner Perkins Caufield & Byers, played out in court. Although the jury sided with the legendary VC firm, the Pao case was a watershed moment, bringing sunlight and national scrutiny to the issue of unchecked Valley sexism. For every defeated Ellen Pao, we can hope there are a hundred other female technology workers who feel new courage to speak up against wrongdoing, and a thousand male co-workers and employers who’ll reconsider their boys’-club bullshit. But they’ve got their work cut out for them.

Myth #4: School Is for Suckers, Just Drop Out

 Every year PayPal co-founder, investor-guru, and rabid libertarian Peter Thiel awards a small group of college-age students the Thiel Fellowship, a paid offer to either drop out or forgo college entirely. In exchange, the students receive money, mentorship, and networking opportunities from Thiel as they pursue a start-up of their choice. We’re frequently reminded of the tech titans of industry who never got a degree—Steve Jobs, Bill Gates, and Mark Zuckerberg are the most cited, though the fact that they’re statistical exceptions is an aside at best. To be young in Silicon Valley is great; to be a young dropout is golden.

The virtuous dropout hasn’t just made college seem optional for many aspiring barons—formal education is now excoriated in Silicon Valley as an obsolete system dreamed up by people who’d never heard of photo filters or Snapchat. Mix this cynicism with the libertarian streak many tech entrepreneurs carry already and you’ve got yourself a legit anti-education movement.

And for what? There’s no evidence that avoiding a conventional education today grants business success tomorrow. The gifted few who end up dropping out and changing tech history would have probably changed tech history anyway—you can’t learn start-up greatness by refusing to learn in a college classroom. And given that most start-ups fail, do we want an appreciable segment of bright young people gambling so heavily on being the next Zuck? More important, do we want an economy of CEOs who never had to learn to get along with their dorm-mates? Who never had the opportunity to grow up and figure out how to be a human being functioning in society? Who went straight from a bedroom in their parents’ house to an incubator that paid for their meals? It’s no wonder tech has an antisocial rep.

Myth #7: Silicon Valley Is Saving the World

Two years ago an online list of “57 start-up lessons” made its way through the coder community, bolstered by a co-sign from Paul Graham. “Wow, is this list good,” he commented. “It has the kind of resonance you only get when you’re writing from a lot of hard experience.” Among the platitudinous menagerie was this gem: “If it doesn’t augment the human condition for a huge number of people in a meaningful way, it’s not worth doing.” In a mission statement published on Andreessen Horowitz’s website, Marc Andreessen claimed he was “looking for the companies who are going to be the big winners because they are going to cause a fundamental change in the world.” The firm’s portfolio includes Ringly (maker of rings that light up when your phone does something), Teespring (custom T-shirts), DogVacay (pet-sitters on demand), and Hem (the zombified corpse of the furniture store Fab.com). Last year, wealthy Facebook alum Justin Rosenstein told a packed audience at TechCrunch Disrupt, “We in this room, we in technology, have a greater capacity to change the world than the kings and presidents of even a hundred years ago.” No one laughed, even though Rosenstein’s company, Asana, sells instant-messaging software.

 This isn’t just a matter of preening guys in fleece vests building giant companies predicated on their own personal annoyances. It’s wasteful and genuinely harmful to have so many people working on such trivial projects (Clinkle and fucking Yo) under the auspices of world-historical greatness. At one point recently, there were four separate on-demand laundry services operating in San Francisco, each no doubt staffed by smart young people who thought they were carving out a place of small software greatness. And yet for every laundry app, there are smart people doing smart, valuable things: Among the most recent batch of Y Combinator start-ups featured during March’s “Demo Day” were Diassess (twenty-minute HIV tests), Standard Cyborg (3D-printed limbs), and Atomwise (using supercomputing to develop new medical compounds). Those start-ups just happen to be sharing desk space at the incubator with “world changers” like Lumi (easy logo printing) and Underground Cellar (“curated, limited-edition wines with a twist”).

Read the entire article here.

Map: Silicon Valley, CA. Courtesy of Google.

 

Your Goldfish is Better Than You

Common_goldfish

Well, perhaps not at philosophical musings or mathematics. But your little orange aquatic friend now has an attention span that is longer than yours. And it’s all thanks to mobile devices and multi-tasking across multiple media platforms. [Psst, by the way, multi-tasking at the level of media consumption is a fallacy.] On average, the adult attention span is now down to a laughably paltry 8 seconds, whereas the lowly goldfish comes in at 9 seconds. Where, of course, that leaves your inbetweeners and teenagers is anyone’s guess.

From the Independent:

Humans have become so obsessed with portable devices and overwhelmed by content that we now have attention spans shorter than that of the previously jokingly juxtaposed goldfish.

Microsoft surveyed 2,000 people and used electroencephalograms (EEGs) to monitor the brain activity of another 112 in the study, which sought to determine the impact that pocket-sized devices and the increased availability of digital media and information have had on our daily lives.

Among the good news in the 54-page report is that our ability to multi-task has drastically improved in the information age, but unfortunately attention spans have fallen.

In 2000 the average attention span was 12 seconds, but this has now fallen to just eight. The goldfish is believed to be able to maintain a solid nine.

“Canadians [who were tested] with more digital lifestyles (those who consume more media, are multi-screeners, social media enthusiasts, or earlier adopters of technology) struggle to focus in environments where prolonged attention is needed,” the study reads.

“While digital lifestyles decrease sustained attention overall, it’s only true in the long-term. Early adopters and heavy social media users front load their attention and have more intermittent bursts of high attention. They’re better at identifying what they want/don’t want to engage with and need less to process and commit things to memory.”

Anecdotally, many of us can relate to the increasing inability to focus on tasks, being distracted by checking your phone or scrolling down a news feed.

Another recent study by the National Centre for Biotechnology Information and the National Library of Medicine in the US found that 79 per cent of respondents used portable devices while watching TV (known as dual-screening) and 52 per cent check their phone every 30 minutes.

Read the entire story here.

Image: Common Goldfish. Public Domain.