Tag Archives: technology

MondayMap: Addresses Made Simple

what3words-buckingham-palace

I recently tripped over a fascinating mapping app called What3Words. Its goal is to make locations and addresses easier to find. It does so in quite a creative way — by assigning a unique combination of three words to every 3 m × 3 m square on the planet. In What3Words’ own words:

It’s far more accurate than a postal address and it’s much easier to remember, use and share than a set of coordinates.

Better addressing improves customer experience, delivers business efficiencies, drives growth and helps the social & economic development of countries.

So, in case you were wondering: the Queen’s official residence in London (Buckingham Palace) is fence.gross.bats.
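The underlying idea — a fixed global grid plus a reversible mapping from cell number to a word triple — can be sketched in a few lines. To be clear, this is an illustrative toy, not What3Words’ actual algorithm; the word list, grid math and cell numbering here are all invented.

```python
# Toy illustration of a What3Words-style scheme: divide the globe into
# roughly 3 m x 3 m cells and encode each cell's number in base
# len(WORDS) as a triple of words. NOT the real algorithm.

WORDS = [f"word{i}" for i in range(40000)]  # stand-in vocabulary

CELL_DEG = 3 / 111_320  # ~3 metres expressed in degrees of latitude

def cell_index(lat, lon):
    """Flatten a (lat, lon) pair into a single grid-cell number."""
    row = int((lat + 90) / CELL_DEG)
    col = int((lon + 180) / CELL_DEG)
    cols_per_row = int(360 / CELL_DEG)
    return row * cols_per_row + col

def three_words(lat, lon):
    """Encode the cell number as three words (base len(WORDS) digits)."""
    n, w = cell_index(lat, lon), len(WORDS)
    return ".".join(WORDS[d] for d in (n // w**2 % w, n // w % w, n % w))

# Three toy words for Buckingham Palace's grid cell:
print(three_words(51.5014, -0.1419))
```

A 40,000-word vocabulary gives 40,000³ (about 6.4 × 10¹³) distinct triples, which is the right order of magnitude for covering the planet in 3-metre squares; the real system also shuffles the mapping so nearby squares get unrelated words.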

How cool.

Image: What3Words screenshot. Courtesy: What3Words.

Send to Kindle

Pokemon Go and the Post-Apocalyptic Future is Nigh

google-search-pokemon-go

Some have lauded Pokémon Go as the greatest health and fitness enabler since the “invention” of running. After all, over the span of just a few days it has forced half of Western civilization to unplug from Netflix, get off the couch, move around — and do so outside!

The cynic in me perceives deeper, darker motives at play: a plot by North Korea to distract the West while it prepares a preemptive nuclear strike; a corporate sponsored youth brain-washing program; an exquisitely orchestrated, self-perpetuated genocidal time-bomb wrought by shady political operatives; a Google inspired initiative to tackle the obesity epidemic.

While the true nature of this elegantly devious phenomenon unfolds over the long-term — and maintains the collective attention of tens of millions of teens and millennials in the process — I will make a dozen bold, short-term predictions:

  1. A legendary Pokémon, such as Mewtwo, will show up at the Republican National Convention in Cleveland, and it will be promptly shot by open carry fanatics.
  2. The first Pokémon Go fatality will occur by July 31, 2016 — a player will inadvertently step into traffic while trying to throw a Poké Ball.
  3. The hundredth Pokémon Go fatality will occur on August 1, 2016 — the 49th player to fall into a sewer and drown.
  4. Sales of comfortable running shoes will skyrocket over the next 3 days, as the West discovers walking.
  5. Evangelical mega-churches in the US will hack the game to ensure Pokémon characters appear during revivals to draw more potential customers.
  6. Pokémon characters will begin showing up on Fox News and the Supreme Court.
  7. Tinder will file for chapter 11 bankruptcy and emerge as a Pokémon dating site.
  8. Gyms and stadia around the country will ditch all sporting events to make way for mass Pokémon hunts; NFL’s next expansion team will be virtual and led by Pikachu as quarterback.
  9. The Pokémon Company, Nintendo and Niantic Labs will join forces to purchase Japan by year’s end.
  10. Google and Tesla will team up to deliver Poké Spot in-car navigation allowing players to automatically drive to Pokémon locations.
  11. Donald Trump will assume the office of Pokémon President of the United States on January 20, 2017; 18- to 35-year-olds forgot to vote.
  12. World ends, January 21, 2017.

Pokemon-Go WSJ screenshot 13Jul2016

If you’re one of the few earthlings wondering what Pokémon Go is all about, and how in the space of just a few days our neighborhoods have become overrun by zombie-like players, look no further than the WSJ. Rupert Murdoch must be a fan.

Image courtesy of Google Search.


First, Order a Pizza. Second, World Domination

Google-search-pizza

Tech startups that plan to envelop the globe with their never-thought-of-before-but-cannot-do-without technologies and services have to begin somewhere. Usually, the path to worldwide domination begins with pizza.

From the Washington Post:

In an ordinary conference room in this city of start-ups, a group of engineers sat down to order pizza in an entirely new way.

“Get me a pizza from Pizz’a Chicago near my office,” one of the engineers said into his smartphone. It was their first real test of Viv, the artificial-intelligence technology that the team had been quietly building for more than a year. Everyone was a little nervous. Then, a text from Viv piped up: “Would you like toppings with that?”

The engineers, eight in all, started jumping in: “Pepperoni.” “Half cheese.” “Caesar salad.” Emboldened by the result, they peppered Viv with more commands: Add more toppings. Remove toppings. Change medium size to large.

About 40 minutes later — and after a few hiccups when Viv confused the office address — a Pizz’a Chicago driver showed up with four made-to-order pizzas.

The engineers erupted in cheers as the pizzas arrived. They had ordered pizza, from start to finish, without placing a single phone call and without doing a Google search — without any typing at all, actually. Moreover, they did it without downloading an app from Domino’s or Grubhub.

Of course, a pizza is just a pizza. But for Silicon Valley, a seemingly small change in consumer behavior or design can mean a tectonic shift in the commercial order, with ripple effects across an entire economy. Engineers here have long been animated by the quest to achieve the path of least friction — to use the parlance of the tech world — to the proverbial pizza.

The stealthy, four-year-old Viv is among the furthest along in an endeavor that many in Silicon Valley believe heralds that next big shift in computing — and digital commerce itself. Over the next five years, that transition will turn smartphones — and perhaps smart homes and cars and other devices — into virtual assistants with supercharged conversational capabilities, said Julie Ask, an expert in mobile commerce at Forrester.

Powered by artificial intelligence and unprecedented volumes of data, they could become the portal through which billions of people connect to every service and business on the Internet. It’s a world in which you can order a taxi, make a restaurant reservation and buy movie tickets in one long unbroken conversation — no more typing, searching or even clicking.

Viv, which will be publicly demonstrated for the first time at a major industry conference on Monday, is one of the most highly anticipated technologies expected to come out of a start-up this year. But Viv is by no means alone in this effort. The quest to define the next generation of artificial-intelligence technology has sparked an arms race among the five major tech giants: Apple, Google, Microsoft, Facebook and Amazon.com have all announced major investments in virtual-assistant software over the past year.

Read the entire story here.

Image courtesy of Google Search.


Venture Capital and (Ping-Pong) Balls

Google-search-ping-pong-balls

If you’re even slightly interested in protecting your retirement savings from the bursting of the next tech bubble and subsequent stock market crash, look no further than sales of ping-pong tables and Pop-A-Shot indoor basketball. It turns out that there is a direct correlation between the sale of indoor recreational gear and the flow of venture capital to Silicon Valley’s next trillion-dollar babies (i.e., those saving humanity or building the next cool dating app).

From WSJ:

Twitter’s gloomy quarterly report last week unsettled investors. They might have anticipated trouble more than a year ago had they noticed one key indicator.

Until late 2014, Twitter was regularly ordering ping-pong tables from Billiard Wholesale, a store in San Jose, Calif. Then, suddenly, it wasn’t.

The store’s owner, Simon Ng, figured it either ran out of space “or they’re having company problems.”

Twitter Inc.’s slowing user growth has been unsettling analysts, and the company’s revenue growth was unexpectedly weak in last week’s report. Asked why Twitter stopped buying tables, spokesman Jim Prosser says: “I guess we bought really sturdy ones.” Twitter spokeswoman Natalie Miyake says: “Honestly, we’re more of a Pop-A-Shot company now,” referring to an indoor basketball game.

Is the tech bubble popping? Ping pong offers an answer, and the tables are turning.

“Last year, the first quarter was hot” for tables, says Mr. Ng, who thinks sales track the tech economy. Now “there’s a general slowdown.”

In the first quarter of 2016, his table sales to companies fell 50% from the prior quarter. In that period, U.S. startup funding dropped 25%, says Dow Jones VentureSource, which tracks venture financing.

The table-tennis indicator is a peek into Silicon Valley culture, in which the right to play ping pong on the job is sacrosanct.

“If you don’t have a ping-pong table, you’re not a tech company,” says Sunil Rajasekar, chief technology officer at Lithium Technologies, a San Francisco software startup.

Read the entire story here.

Image courtesy of Google Search.


Meet the Chatbot Speech Artist

While speech recognition technology has been in the public sphere for several decades, Silicon Valley has rediscovered it with renewed fervor. Companies from the tech giants, such as Facebook and Amazon, down to dozens of start-ups and their VC handlers, have declared the next few years those of the chatbot: natural language-based messaging is the next big thing.

Thanks to Apple, the most widespread incarnation of the chatbot is of course Siri — a personalized digital assistant capable of interacting with a user through a natural language conversation (well, almost). But while the parsing and understanding of human conversation, and the construction of chatbot responses, is all done via software, the vocalizations themselves are human. As a result, a new career field is opening up for enterprising speech artists.

From Washington Post:

Until recently, Robyn Ewing was a writer in Hollywood, developing TV scripts and pitching pilots to film studios.

Now she’s applying her creative talents toward building the personality of a different type of character — a virtual assistant, animated by artificial intelligence, that interacts with sick patients.

Ewing works with engineers on the software program, called Sophie, which can be downloaded to a smartphone. The virtual nurse gently reminds users to check their medication, asks them how they are feeling or if they are in pain, and then sends the data to a real doctor.

As tech behemoths and a wave of start-ups double down on virtual assistants that can chat with human beings, writing for AI is becoming a hot job in Silicon Valley. Behind Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana are not just software engineers. Increasingly, there are poets, comedians, fiction writers, and other artistic types charged with engineering the personalities for a fast-growing crop of artificial intelligence tools.

“Maybe this will help pay back all the student loans,” joked Ewing, who has master’s degrees from the Iowa Writers’ Workshop and film school.

Unlike the fictional characters that Ewing developed in Hollywood, who are put through adventures, personal trials and plot twists, most virtual assistants today are designed to perform largely prosaic tasks, such as reading through email, sending meeting reminders or turning off the lights as you shout across the room.

But a new crop of virtual assistant start-ups, whose products will soon flood the market, have in mind more ambitious bots that can interact seamlessly with human beings.

Because this wave of technology is distinguished by the ability to chat, writers for AI must focus on making the conversation feel natural. Designers for Amazon’s Alexa have built humanizing “hmms” and “ums” into her responses to questions. Apple’s Siri assistant is known for her wry jokes, as well as her ability to beatbox upon request.

As in fiction, the AI writers for virtual assistants dream up a life story for their bots. Writers for medical and productivity apps make character decisions such as whether bots should be workaholics, eager beavers or self-effacing. “You have to develop an entire backstory — even if you never use it,” Ewing said.

Even mundane tasks demand creative effort, as writers try to build personality quirks into the most rote activities. At the start-up x.ai, a Harvard theater graduate is tasked with deciding whether its scheduling bots, Amy and Andrew, should use emojis or address people by first names. “We don’t want people saying, ‘Your assistant is too casual — or too much,’” said Anna Kelsey, whose title is AI interaction designer. “We don’t want her to be one of those crazy people who uses 15 million exclamation points.”

Virtual assistant start-ups garnered at least $35 million in investment over the past year, according to CB Insights and Washington Post research (this figure doesn’t count the many millions spent by tech giants Google, Amazon, Apple, Facebook, and Microsoft).

The surge of investor interest in virtual assistants that can converse has been fueled in part by the popularity of messaging apps, such as WeChat, WhatsApp, and Facebook’s Messenger, which are among the most widely downloaded smartphone applications. Investors see that users are increasingly drawn to conversational platforms, and hope to build additional features into them.

Read the entire story here.


Google AI Versus the Human Race

Korean_Go_Game_ca_1910-1920

It does indeed appear that a computer armed with Google’s experimental AI (artificial intelligence) software just beat a grandmaster of the strategy board game Go. The game was devised in ancient China — it’s been around for several millennia. Go is commonly held to be substantially more difficult than chess to master, to which I can personally attest.

So, does this mean that the human race is next in line for a defeat at the hands of an uber-intelligent AI? Well, not really, not yet anyway.

But I’m with prominent scientists and entrepreneurs — including Stephen Hawking, Bill Gates and Elon Musk — who warn of the long-term existential peril to humanity from unfettered AI. In the meantime, check out how AlphaGo from Google’s DeepMind unit set about thrashing a human.

From Wired:

An artificially intelligent Google machine just beat a human grandmaster at the game of Go, the 2,500-year-old contest of strategy and intellect that’s exponentially more complex than the game of chess. And Nick Bostrom isn’t exactly impressed.

Bostrom is the Swedish-born Oxford philosophy professor who rose to prominence on the back of his recent bestseller Superintelligence: Paths, Dangers, Strategies, a book that explores the benefits of AI, but also argues that a truly intelligent computer could hasten the extinction of humanity. It’s not that he discounts the power of Google’s Go-playing machine. He just argues that it isn’t necessarily a huge leap forward. The technologies behind Google’s system, Bostrom points out, have been steadily improving for years, including much-discussed AI techniques such as deep learning and reinforcement learning. Google beating a Go grandmaster is just part of a much bigger arc. It started long ago, and it will continue for years to come.

“There has been, and there is, a lot of progress in state-of-the-art artificial intelligence,” Bostrom says. “[Google’s] underlying technology is very much continuous with what has been under development for the last several years.”

But if you look at this another way, it’s exactly why Google’s triumph is so exciting—and perhaps a little frightening. Even Bostrom says it’s a good excuse to stop and take a look at how far this technology has come and where it’s going. Researchers once thought AI would struggle to crack Go for at least another decade. Now, it’s headed to places that once seemed unreachable. Or, at least, there are many people—with much power and money at their disposal—who are intent on reaching those places.

Building a Brain

Google’s AI system, known as AlphaGo, was developed at DeepMind, the AI research house that Google acquired for $400 million in early 2014. DeepMind specializes in both deep learning and reinforcement learning, technologies that allow machines to learn largely on their own.

Using what are called neural networks—networks of hardware and software that approximate the web of neurons in the human brain—deep learning is what drives the remarkably effective image search tool built into Google Photos—not to mention the face recognition service on Facebook and the language translation tool built into Microsoft’s Skype and the system that identifies porn on Twitter. If you feed millions of game moves into a deep neural net, you can teach it to play a video game.

Reinforcement learning takes things a step further. Once you’ve built a neural net that’s pretty good at playing a game, you can match it against itself. As two versions of this neural net play thousands of games against each other, the system tracks which moves yield the highest reward—that is, the highest score—and in this way, it learns to play the game at an even higher level.
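The self-play loop described here can be sketched with a toy game and a lookup table standing in for the neural network. This is purely illustrative: AlphaGo trains deep networks with far more sophisticated reward handling, and the game, update rule and numbers below are invented for the sketch.

```python
import random

# Toy self-play reinforcement loop: two copies of the same move-preference
# table play a tiny Nim-style game (take 1 or 2 stones; whoever takes the
# last stone wins). Moves on the winning side get a higher weight, moves
# on the losing side a lower one, so the "policy" improves as it plays
# thousands of games against itself.

random.seed(0)
prefs = {}  # (stones_left, move) -> preference weight

def pick(stones):
    """Choose to take 1 or 2 stones, weighted by learned preferences."""
    moves = [m for m in (1, 2) if m <= stones]
    weights = [prefs.get((stones, m), 1.0) for m in moves]
    return random.choices(moves, weights=weights)[0]

def play_one_game(start=10):
    stones, player = start, 0
    history = {0: [], 1: []}
    while True:
        move = pick(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            return player, history  # taking the last stone wins
        player = 1 - player

for _ in range(20000):
    winner, history = play_one_game()
    for key in history[winner]:        # reward the winner's moves...
        prefs[key] = prefs.get(key, 1.0) + 0.1
    for key in history[1 - winner]:    # ...and discourage the loser's
        prefs[key] = max(0.1, prefs.get(key, 1.0) - 0.05)

# After training, taking both of the last two stones (an instant win)
# is strongly preferred over leaving one stone for the opponent.
print(prefs.get((2, 2), 1.0) > prefs.get((2, 1), 1.0))
```

The reward signal here is just win/loss, exactly as in the paragraph above: no one tells the table which moves are good; it discovers that by tracking which moves tend to appear in winning games.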

AlphaGo uses all this. And then some. Hassabis [Demis Hassabis, DeepMind founder] and his team added a second level of “deep reinforcement learning” that looks ahead to the long-term results of each move. And they lean on traditional AI techniques that have driven Go-playing AI in the past, including the Monte Carlo tree search method, which basically plays out a huge number of scenarios to their eventual conclusions. Drawing from techniques both new and old, they built a system capable of beating a top professional player. In October, AlphaGo played a closed-door match against the reigning three-time European Go champion, which was only revealed to the public on Wednesday morning. The match spanned five games, and AlphaGo won all five.

Read the entire story here.

Image: Korean couple, in traditional dress, play Go; photograph dated between 1910 and 1920. Courtesy: Frank and Frances Carpenter Collection. Public Domain.


Another Corporate Empire Bites the Dust

Motorola-DynaTAC

Businesses and brands come and go. Seemingly unassailable corporations, often valued in the tens of billions of dollars (and sometimes more), fall to the incessant march of technological change and, increasingly, to the ever-fickle desires of the consumer.

And these monoliths of business last but a blink of an eye compared with vast social empires such as the Roman, Han, Ottoman, Venetian, Sudanese and Portuguese, which persisted for many hundreds — sometimes thousands — of years.

Yet even a few years ago, who would have predicted the demise of the Motorola empire, the company mostly responsible for the advent of the handheld mobile phone? Motorola had been on a downward spiral of late, failing in part to capitalize on the shift to smartphones, mobile operating systems and apps. Now its brand is dust. RIP, brick!

From the Guardian:

Motorola, the brand which invented the mobile phone, brought us the iconic “Motorola brick”, and gave us both the first flip-phone and the iconic Razr, is to cease to exist.

Bought from Google by the Chinese smartphone and laptop powerhouse Lenovo in January 2014, Motorola had found success over the past two years. It launched the Moto G in early 2014, which propelled the brand, which had all but disappeared after the Razr, from a near-0% market share to 6% of sales in the UK.

The Moto G kickstarted the reinvigoration of the brand, which saw Motorola ship more than 10m smartphones in the third quarter of 2014, up 118% year-on-year.

But now Lenovo has announced that it will kill off the US mobile phone pioneer’s name. It will keep Moto, the part of Motorola’s product naming that has gained traction in recent years, but Moto smartphones will be branded under Lenovo.

Motorola chief operating officer Rick Osterloh told Cnet that “we’ll slowly phase out Motorola and focus on Moto”.

The Moto line will be joined by Lenovo’s Vibe line in the low end, leaving the fate of the Moto E and G uncertain. The Motorola Mobility division of Lenovo will take over responsibility for the Chinese manufacturer’s entire smartphone range.

Read the entire story here.

Image: Motorola DynaTAC 8000X commercial portable cellular phone, 1983. Courtesy of Motorola.


The Internet of Flow

Time-based structures of information and flowing data — on a global scale — will increasingly dominate the Web. Eventually, this flow is likely to transform how we organize, consume and disseminate our digital knowledge. While we see evidence of this in effect today, in blogs, in Facebook’s wall and timeline and, most basically, in Twitter, the long-term implications of this fundamentally new organizing principle have yet to be fully understood — especially in business.

For a brief snapshot of a possible, and likely, future of the Internet I turn to David Gelernter. He is Professor of Computer Science at Yale University, an important thinker and author who has helped shape the fields of parallel computing, artificial intelligence (AI) and networking. Many of Gelernter’s papers, some written over 20 years ago, offer a remarkably prescient view, most notably: Mirror Worlds (1991), The Muse In The Machine (1994) and The Second Coming – A Manifesto (1999).

From WSJ:

People ask where the Web is going; it’s going nowhere. The Web was a brilliant first shot at making the Internet usable, but it backed the wrong horse. It chose space over time. The conventional website is “space-organized,” like a patterned beach towel—pineapples upper left, mermaids lower right. Instead it might have been “time-organized,” like a parade—first this band, three minutes later this float, 40 seconds later that band.

We go to the Internet for many reasons, but most often to discover what’s new. We have had libraries for millennia, but never before have we had a crystal ball that can tell us what is happening everywhere right now. Nor have we ever had screens, from room-sized to wrist-sized, that can show us high-resolution, constantly flowing streams of information.

Today, time-based structures, flowing data—in streams, feeds, blogs—increasingly dominate the Web. Flow has become the basic organizing principle of the cybersphere. The trend is widely understood, but its implications aren’t.

Working together at Yale in the mid-1990s, we forecast the coming dominance of time-based structures and invented software called the “lifestream.” We had been losing track of our digital stuff, which was scattered everywhere, across proliferating laptops and desktops. Lifestream unified our digital life: Each new document, email, bookmark or video became a bead threaded onto a single wire in the Cloud, in order of arrival.

To find a bead, you search, as on the Web. Or you can watch the wire and see each new bead as it arrives. Whenever you add a bead to the lifestream, you specify who may see it: everyone, my friends, me. Each post is as private as you make it.
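The lifestream described here is essentially an append-only, time-ordered log of items, each with its own visibility setting, that you can either search or watch as it flows. A minimal sketch follows; the class and field names are invented for illustration and are not Gelernter's actual software.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of the "lifestream" idea: beads (documents, emails,
# bookmarks, videos...) threaded onto a single wire in order of arrival,
# each with its own visibility. Names invented for illustration.

@dataclass
class Bead:
    content: str
    kind: str                   # "email", "bookmark", "video", ...
    visibility: str = "me"      # "everyone", "friends", or "me"
    arrived: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class Lifestream:
    def __init__(self):
        self._wire = []         # beads in order of arrival

    def add(self, bead):
        self._wire.append(bead)
        return bead

    def search(self, term, viewer="me"):
        """Find beads containing `term` that `viewer` may see."""
        allowed = {"me": {"everyone", "friends", "me"},
                   "friends": {"everyone", "friends"},
                   "public": {"everyone"}}[viewer]
        return [b for b in self._wire
                if term in b.content and b.visibility in allowed]

stream = Lifestream()
stream.add(Bead("Trip photos from Kyoto", "video", visibility="friends"))
stream.add(Bead("Draft of quarterly report", "document"))  # private
print(len(stream.search("Kyoto", viewer="friends")))  # 1
```

Because beads only ever arrive at the end of the wire, "watching the wire" is just tailing the list, and privacy is enforced per bead at read time rather than per folder — which is the structural difference from a conventional file system.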

Where do these ideas lead? Your future home page—the screen you go to first on your phone, laptop or TV—is a bouquet of your favorite streams from all over. News streams are blended with shopping streams, blogs, your friends’ streams, each running at its own speed.

This home stream includes your personal stream as part of the blend—emails, documents and so on. Your home stream is just one tiny part of the world stream. You can see your home stream in 3-D on your laptop or desktop, in constant motion on your phone or as a crawl on your big TV.

By watching one stream, you watch the whole world—all the public and private events you care about. To keep from being overwhelmed, you adjust each stream’s flow rate when you add it to your collection. The system slows a stream down by replacing many entries with one that lists short summaries—10, 100 or more.
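The flow-rate adjustment described above — replacing many entries with a single entry that lists short summaries — amounts to batching a stream. A toy sketch, with invented function names and batch sizes:

```python
# Toy sketch of the flow-rate idea: slow a stream down by replacing
# every `batch` consecutive entries with one combined entry of short
# summaries. Invented for illustration; not an actual lifestream API.

def slow_down(stream, batch, width=20):
    """Yield one combined entry per `batch` entries of `stream`."""
    group = []
    for entry in stream:
        group.append(entry[:width])   # short summary of each entry
        if len(group) == batch:
            yield " | ".join(group)
            group = []
    if group:                         # flush a final partial batch
        yield " | ".join(group)

posts = [f"post {i}" for i in range(25)]
print(len(list(slow_down(posts, batch=10))))  # 3 combined entries
```

Raising `batch` slows a noisy stream to a trickle of digests; a `batch` of 1 leaves it running at full speed, which is how one home stream can blend fast and slow sources.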

An all-inclusive home stream creates new possibilities. You could build a smartwatch to display the stream as it flows past. It could tap you on the wrist when there’s something really important onstream. You can set something aside or rewind if necessary. Just speak up to respond to messages or add comments. True in-car computing becomes easy. Because your home stream gathers everything into one line, your car can read it to you as you drive.

Read the entire article here.


Streaming is So 2015

Led Zeppelin-IV

Fellow music enthusiasts and technology early adopters: ditch the streaming sounds right now. And if you still have an iPod, or worse an MP3 or CD player, trash it; trash them all.

The future of music is coming, and it’s beamed and implanted directly into your grey matter. I’m not sure if I like the idea of Taylor Swift inside my head — I’m more of a Pink Floyd and Led Zeppelin person — nor the idea of not having a filter for certain genres (i.e., country music). However, some might like the notion of a digital-DJ brain implant that lays down tracks based on your mood from monitoring your neurochemical mix. It’s only a matter of time.

Thanks, but I’ll stick to vinyl, crackles and all.

From WSJ:

The year is 2040, and as you wait for a drone to deliver your pizza, you decide to throw on some tunes. Once a commodity bought and sold in stores, music is now an omnipresent utility invoked via spoken-word commands. In response to a simple “play,” an algorithmic DJ opens a blended set of songs, incorporating information about your location, your recent activities and your historical preferences—complemented by biofeedback from your implanted SmartChip. A calming set of lo-fi indie hits streams forth, while the algorithm adjusts the beats per minute and acoustic profile to the rain outside and the fact that you haven’t eaten for six hours.

The rise of such dynamically generated music is the story of the age. The album, that relic of the 20th century, is long dead. Even the concept of a “song” is starting to blur. Instead there are hooks, choruses, catchphrases and beats—a palette of musical elements that are mixed and matched on the fly by the computer, with occasional human assistance. Your life is scored like a movie, with swelling crescendos for the good parts, plaintive, atonal plunks for the bad, and fuzz-pedal guitar for the erotic. The DJ’s ability to read your emotional state approaches clairvoyance. But the developers discourage the name “artificial intelligence” to describe such technology. They prefer the term “mood-affiliated procedural remixing.”

Right now, the mood is hunger. You’ve put on weight lately, as your refrigerator keeps reminding you. With its assistance—and the collaboration of your DJ—you’ve come up with a comprehensive plan for diet and exercise, along with the attendant soundtrack. Already, you’ve lost six pounds. Although you sometimes worry that the machines are running your life, it’s not exactly a dystopian experience—the other day, after a fast-paced dubstep remix spurred you to a personal best on your daily run through the park, you burst into tears of joy.

Cultural production was long thought to be an impregnable stronghold of human intelligence, the one thing the machines could never do better than humans. But a few maverick researchers persisted, and—aided by startling, asymptotic advances in other areas of machine learning—suddenly, one day, they could. To be a musician now is to be an arranger. To be a songwriter is to code. Atlanta, the birthplace of “trap” music, is now a locus of brogrammer culture. Nashville is a leading technology incubator. The Capitol Records tower was converted to condos after the label uploaded its executive suite to the cloud.

Read the entire story here.

Image: Led Zeppelin IV album cover. Courtesy of the author.


Who Needs a Self-Driving Car?

Self-driving vehicles have been very much in the news over the last couple of years. Google’s autonomous car project is perhaps the most notable recent example — its latest road-worthy prototype is the culmination of a project out of Stanford, which garnered an innovation prize from DARPA (Defense Advanced Research Projects Agency) back in 2005. And, numerous companies are in various stages of experimenting, planning, prototyping and developing, including GM, Apple, Mercedes-Benz, Nissan, BMW, Tesla, to name but a few.

Ehang-184-AAV

That said, even though it may still be a few years before we see traffic jams of driverless cars clogging the Interstate Highway System, some forward thinkers are not resting on their laurels. EHang, a Chinese drone manufacturer, is leapfrogging the car entirely and pursuing an autonomous drone — actually an autonomous aerial vehicle (AAV) known as the EHang 184 — capable of flying one passenger. Cooler still, the only onboard control is a Google Maps interface that allows the passenger to select a destination. The AAV and ground-based command centers take care of the rest.

I have to wonder if EHang’s command centers will be able to use the drone to shoot missiles at militants as well as delivering a passenger, or better still, targeting missiles at rogue drivers.

Wired has more about this fascinating new toy — probably aimed at Russian oligarchs and Silicon Valley billionaires.

Image: Ehang 184 — Autonomous Aerial Vehicle. Courtesy of EHang.


Re-Innovation: Silicon Valley’s Trivial Pursuit Problem

I read an increasing number of articles like the one excerpted below, which cause me to sigh with exasperation yet again. Is Silicon Valley — that supposed beacon of global innovation — in danger of becoming a drainage ditch of regurgitated sameness, of me-too banality?

It’s frustrating to watch many of our self-proclaimed brightest tech minds re-package colorful “new” solutions to our local trivialities, over and over. So, here we are, celebrating the arrival of the “next big thing”: the next tech unicorn with a valuation above $1 billion, which proposes to upend and improve all our lives, yet again.

DoorDash. Seamless. Deliveroo. HelloFresh. HomeChef. SpoonRocket. Sprig. GrubHub. Instacart. These are all great examples of too much money chasing too few truly original ideas. I hope you’ll agree: a cool compound name is a cool compound name, but it certainly does not innovation make. By the way, whatever happened to Webvan?

Where are my slippers? Yawn.

From Wired:

Founded in 2013, DoorDash is a food delivery service. It’s also the latest startup to be eying a valuation of more than $1 billion. DoorDash already raised $40 million in March; according to Bloomberg, it may soon reap another round of funding that would put the company in the same lofty territory as Uber, Airbnb, and more than 100 other so-called unicorns.

Not that DoorDash is doing anything terribly original. Startups bringing food to your door are everywhere. There’s Instacart, which wants to shop for groceries for you. Deliveroo and Postmates, like DoorDash, are looking to overtake Seamless as the way we get takeout at home. Munchery, SpoonRocket, and Sprig offer pre-made meals. Blue Apron, Gobble, HelloFresh, and HomeChef deliver ingredients to make the food ourselves. For the moment, investors are giddily rushing to subsidize this race to our doors. But skeptics say that the payout those investors are banking on might never come.

Even in a crowded field, funding for these delivery startups continues to grow. CB Insights, a research group that tracks startup investments, said this summer that the sector was “starting to get a little crowded.” Last year, venture-backed food delivery startups based in the US reaped more than $1 billion in equity funding; during the first half of this year, they pulled in $750 million more, CB Insights found.

The enormous waves of funding may prove money poorly spent if Silicon Valley finds itself in a burst bubble. Bill Gurley, the well-known investor and a partner at venture firm Benchmark, believes delivery startups may soon be due for a rude awakening. Unlike the first dotcom bubble, he said, smartphones might offer help, because startups are able to collect more data. But he compared the optimism investors are showing for such low-margin operations to the misplaced enthusiasms of 1999. “It’s the same shit,” Gurley said during a recent appearance. (Gurley’s own investment in the food delivery space, GrubHub, went public in April 2014 and is now valued at more than $2.2 billion.)

Read the entire article here.

 

Send to Kindle

The Man With No Phone

If Hitchcock were alive today the title of this post — The Man With No Phone — might be a fitting description of his latest noir, celluloid masterpiece. For in many people the notion of being phone-less instills deep, nightmarish visions of blood-curdling terror.

Does The Man With No Phone lose track of all reality, family, friends, appointments, status updates, sales records, dinner, grocery list, transportation schedules and news, turning into an empty neurotic shell of a human being? Or, does lack of constant connectivity and elimination of instant, digital gratification lead The Man With No Phone to become a schizoid, feral monster? Let’s read on to find out.

Large swathes of the world are still phone-less, and much of the global population — at least those of us over the age of 35 — grew up smartphone-less and even cellphone-less. So, it’s rather disconcerting to read Steve Hilton’s story; he’s been phone-less for 3 years now. It’s not disconcerting that he’s without a phone — I find that inspiring (and normal) — what’s disconcerting is that so many people wonder how on earth he can live without one. And, even more perplexing: why would anyone need a digital detox or mindfulness app on their smartphone? Just hide the thing in your junk drawer for a week (or more) and breathe out. Long live The Man With No Phone!

From the Guardian:

Before you read on, I want to make one thing clear: I’m not trying to convert you. I’m not trying to lecture you or judge you. Honestly, I’m not. It may come over like that here and there, but believe me, that’s not my intent. In this piece, I’m just trying to … explain.

People who knew me in a previous life as a policy adviser to the British prime minister are mildly surprised that I’m now the co-founder and CEO of a tech startup. And those who know that I’ve barely read a book since school are surprised that I have now actually written one.

But the single thing that no one seems able to believe – the thing that apparently demands explanation – is the fact that I am phone-free. That’s right: I do not own a cellphone; I do not use a cellphone. I do not have a phone. No. Phone. Not even an old-fashioned dumb one. Nothing. You can’t call me unless you use my landline – yes, landline! Can you imagine? At home. Or call someone else that I happen to be with (more on that later).

When people discover this fact about my life, they could not be more surprised than if I had let slip that I was actually born with a chicken’s brain. “But how do you live?” they cry. And then: “How does your wife feel about it?” More on that too, later.

As awareness has grown about my phone-free status (and its longevity: this is no passing fad, people – I haven’t had a phone for over three years), I have received numerous requests to “tell my story”. People seem to be genuinely interested in how someone living and working in the heart of the most tech-obsessed corner of the planet, Silicon Valley, can possibly exist on a day-to-day basis without a smartphone.

So here we go. Look, I know it’s not exactly Caitlyn Jenner, but still: here I am, and here’s my story.

In the spring of 2012, I moved to the San Francisco bay area with my wife and two young sons. Rachel was then a senior executive at Google, which involved a punishing schedule to take account of the eight-hour time difference. I had completed two years at 10 Downing Street as senior adviser to David Cameron – let’s just put it diplomatically and say that I and the government machine had had quite enough of each other. To make both of our lives easier, we moved to California.

I took with me my old phone, which had been paid for by the taxpayer. It was an old Nokia phone – I always hated touch-screens and refused to have a smartphone; neither did I want a BlackBerry or any other device on which the vast, endless torrent of government emails could follow me around. Once we moved to the US, my government phone account was of course stopped and, telephonically speaking, I was on my own.

I tried to get hold of one of my beloved old Nokia handsets, but they were no longer available. Madly, for a couple of months I used old ones procured through eBay, with a pay-as-you-go plan from a UK provider. The handsets kept breaking and the whole thing cost a fortune. Eventually, I had enough when the charging outlet got blocked by sand after a trip to the beach. “I’m done with this,” I thought, and just left it.

I remember the exact moment when I realized something important had happened. I was on my bike, cycling to Stanford, and it struck me that a week had gone by without my having a phone. And everything was just fine. Better than fine, actually. I felt more relaxed, carefree, happier. Of course a lot of that had to do with moving to California. But this was different. I felt this incredibly strong sense of just thinking about things during the day. Being able to organize those thoughts in my mind. Noticing things.

Read the entire story here.

Video: Hanging on the Telephone, Blondie. Courtesy: EMI Music.

Send to Kindle

Design Thinking Versus Product Development

Out with product managers; in with design thinkers. Time for some corporate creativity. Think user journeys and empathy maps.

A different corporate mantra is beginning to take hold at some large companies like IBM. It’s called design thinking, and while it’s not necessarily new, it holds promise for companies seeking to meet the needs of their customers at a fundamental level. Where design is often thought of in terms of defining and constructing cool-looking products, design thinking is used to capture a business problem at a broader level, shape business strategy and deliver a more holistic, deeper solution to customers. And, importantly, to do so more quickly than through a typical product development life-cycle.

From NYT:

Phil Gilbert is a tall man with a shaved head and wire-rimmed glasses. He typically wears cowboy boots and bluejeans to work — hardly unusual these days, except he’s an executive at IBM, a company that still has a button-down suit-and-tie reputation. And in case you don’t get the message from his wardrobe, there’s a huge black-and-white photograph hanging in his office of a young Bob Dylan, hunched over sheet music, making changes to songs in the “Highway 61 Revisited” album. It’s an image, Mr. Gilbert will tell you, that conveys both a rebel spirit and hard work.

Let’s not get carried away. Mr. Gilbert, who is 59 years old, is not trying to redefine an entire generation. On the other hand, he wants to change the habits of a huge company as it tries to adjust to a new era, and that is no small task.

IBM, like many established companies, is confronting the relentless advance of digital technology. For these companies, the question is: Can you grow in the new businesses faster than your older, lucrative businesses decline?

Mr. Gilbert answers that question with something called design thinking. (His title is general manager of design.) Among other things, design thinking flips traditional technology product development on its head. The old way is that you come up with a new product idea and then try to sell it to customers. In the design thinking way, the idea is to identify users’ needs as a starting point.

Mr. Gilbert and his team talk a lot about “iteration cycles,” “lateral thinking,” “user journeys” and “empathy maps.” To the uninitiated, the canons of design thinking can sound mushy and self-evident. But across corporate America, there is a rising enthusiasm for design thinking not only to develop products but also to guide strategy and shape decisions of all kinds. The September cover article of the Harvard Business Review was “The Evolution of Design Thinking.” Venture capital firms are hiring design experts, and so are companies in many industries.

Still, the IBM initiative stands out. The company is well on its way to hiring more than 1,000 professional designers, and much of its management work force is being trained in design thinking. “I’ve never seen any company implement it on the scale of IBM,” said William Burnett, executive director of the design program at Stanford University. “To try to change a culture in a company that size is a daunting task.”

Daunting seems an understatement. IBM has more than 370,000 employees. While its revenues are huge, the company’s quarterly reports have shown them steadily declining in the last two years. The falloff in revenue is partly intentional, as the company sold off less profitable operations, but the sometimes disappointing profits are not, and they reflect IBM’s struggle with its transition. Last month, the company shaved its profit target for 2015.

In recent years, the company has invested heavily in new fields, including data analytics, cloud computing, mobile technology, security, social media software for business and its Watson artificial intelligence technology. Those businesses are growing rapidly, generating revenue of $25 billion last year, and IBM forecasts that they will contribute $40 billion by 2018, through internal growth and acquisitions. Just recently, for example, IBM agreed to pay $2 billion for the Weather Company (not including its television channel), gaining its real-time and historical weather data to feed into Watson and analytics software.

But IBM’s biggest businesses are still the traditional ones — conventional hardware, software and services — which contribute 60 percent of its revenue and most of its profit. And these IBM mainstays are vulnerable, as customers increasingly prefer to buy software as a service, delivered over the Internet from remote data centers.

Recognizing the importance of design is not new, certainly not at IBM. In the 1950s, Thomas J. Watson Jr., then the company’s chief executive, brought on Eliot Noyes, a distinguished architect and industrial designer, to guide a design program at IBM. And Noyes, in turn, tapped others including Paul Rand, Charles Eames and Eero Saarinen to help design everything from corporate buildings to the eight-bar corporate logo to the IBM Selectric typewriter with its golf-ball-shaped head.

At that time, and for many years, design meant creating eye-pleasing, functional products. Now design thinking has broader aims, as a faster, more productive way of organizing work: Look at problems first through the prism of users’ needs, research those needs with real people and then build prototype products quickly.

Defining problems more expansively is part of the design-thinking ethos. At a course in New York recently, a group of IBM managers were given pads and felt-tip pens and told to sketch designs for “the thing that holds flowers on a table” in two minutes. The results, predictably, were vases of different sizes and shapes.

Next, they were given two minutes to design “a better way for people to enjoy flowers in their home.” In Round 2, the ideas included wall placements, a rotating flower pot run by solar power and a software app for displaying images of flowers on a home TV screen.

Read the entire story here.

Send to Kindle

Can Burning Man Be Saved?

Burning-Man-2015-gallery

I thought it rather appropriate to revisit Burning Man one day after Guy Fawkes Day in the UK. I must say that Burning Man has grown into more of a corporate event compared with the cheesy pyrotechnic festivities in Britain on the 5th of November. So, even though Burners have a bigger, bolder, brasher event, please remember, remember: we Brits had the original burning man — by 380 years.

The once-counter-cultural phenomenon known as Burning Man seems to be maturing into an executive-level tech-fest. Let’s face it, if I can read about the festival in the mainstream media it can’t be as revolutionary as it once set out to be. Though, the founders‘ desire to keep the festival radically inclusive means that organizers can’t turn away those who may end up razing Burning Man to the ground through corporate excess. VCs and the tech elite from Silicon Valley now descend in their hordes, having firmly placed Burning Man on their app-party circuit. Until recently, Burners mingled relatively freely throughout the week-long temporary metropolis in the Nevada desert; now, the nouveau riche arrive on private jets and “camp” in exclusive wagon-circles of luxury RVs catered to by corporate chefs and personal costume designers. It certainly seems like some of Larry Harvey’s 10 Principles delineating Burning Man’s cultural ethos are on shaky ground. Oh well, capitalism ruins another great idea! But, go once before you die.

From NYT:

There are two disciplines in which Silicon Valley entrepreneurs excel above almost everyone else. The first is making exorbitant amounts of money. The second is pretending they don’t care about that money.

To understand this, let’s enter into evidence Exhibit A: the annual Burning Man festival in Black Rock City, Nev.

If you have never been to Burning Man, your perception is likely this: a white-hot desert filled with 50,000 stoned, half-naked hippies doing sun salutations while techno music thumps through the air.

A few years ago, this assumption would have been mostly correct. But now things are a little different. Over the last two years, Burning Man, which this year runs from Aug. 25 to Sept. 1, has been the annual getaway for a new crop of millionaire and billionaire technology moguls, many of whom are one-upping one another in a secret game of I-can-spend-more-money-than-you-can and, some say, ruining it for everyone else.

Some of the biggest names in technology have been making the pilgrimage to the desert for years, happily blending in unnoticed. These include Larry Page and Sergey Brin, the Google founders, and Jeff Bezos, chief executive of Amazon. But now a new set of younger rich techies are heading east, including Mark Zuckerberg of Facebook, employees from Twitter, Zynga and Uber, and a slew of khaki-wearing venture capitalists.

Before I explain just how ridiculous the spending habits of these baby billionaires have become, let’s go over the rules of Burning Man: You bring your own place to sleep (often a tent), food to eat (often ramen noodles) and the strangest clothing possible for the week (often not much). There is no Internet or cell reception. While drugs are technically illegal, they are easier to find than candy on Halloween. And as for money, with the exception of coffee and ice, you cannot buy anything at the festival. Selling things to people is also a strict no-no. Instead, Burners (as they are called) simply give things away. What’s yours is mine. And that often means everything from a meal to saliva.

In recent years, the competition over who in the tech world could outdo whom evolved from a need for more luxurious sleeping quarters. People went from spending the night in tents, to renting R.V.s, to building actual structures.

“We used to have R.V.s and precooked meals,” said a man who attends Burning Man with a group of Silicon Valley entrepreneurs. (He asked not to be named so as not to jeopardize those relationships.) “Now, we have the craziest chefs in the world and people who build yurts for us that have beds and air-conditioning.” He added with a sense of amazement, “Yes, air-conditioning in the middle of the desert!”

His camp includes about 100 people from the Valley and Hollywood start-ups, as well as several venture capital firms. And while dues for most non-tech camps run about $300 a person, he said his camp’s fees this year were $25,000 a person. A few people, mostly female models flown in from New York, get to go free, but when all is told, the weekend accommodations will collectively cost the partygoers over $2 million.

This is drastically different from the way most people experience the event. When I attended Burning Man a few years ago, we slept in tents and a U-Haul moving van. We lived on cereal and beef jerky for a week. And while Burning Man was one of the best experiences of my life, using the public Porta-Potty toilets was certainly one of the most revolting experiences thus far. But that’s what makes Burning Man so great: at least you’re all experiencing those gross toilets together.

That is, until recently. Now the rich are spending thousands of dollars to get their own luxury restroom trailers, just like those used on movie sets.

“Anyone who has been going to Burning Man for the last five years is now seeing things on a level of expense or flash that didn’t exist before,” said Brian Doherty, author of the book “This Is Burning Man.” “It does have this feeling that, ‘Oh, look, the rich people have moved into my neighborhood.’ It’s gentrifying.”

For those with even more money to squander, there are camps that come with “Sherpas,” who are essentially paid help.

Tyler Hanson, who started going to Burning Man in 1995, decided a couple of years ago to try working as a paid Sherpa at one of these luxury camps. He described the experience this way: Lavish R.V.s are driven in and connected together to create a private forted area, ensuring that no outsiders can get in. The rich are flown in on private planes, then picked up at the Burning Man airport, driven to their camp and served like kings and queens for a week. (Their meals are prepared by teams of chefs, which can include sushi, lobster boils and steak tartare — yes, in the middle of 110-degree heat.)

“Your food, your drugs, your costumes are all handled for you, so all you have to do is show up,” Mr. Hanson said. “In the camp where I was working, there were about 30 Sherpas for 12 attendees.”

Mr. Hanson said he won’t be going back to Burning Man anytime soon. The Sherpas, the money, the blockaded camps and the tech elite were too much for him. “The tech start-ups now go to Burning Man and eat drugs in search of the next greatest app,” he said. “Burning Man is no longer a counterculture revolution. It’s now become a mirror of society.”

Strangely, the tech elite won’t disagree with Mr. Hanson about it being a reflection of society. This year at the premiere of the HBO show “Silicon Valley,” Elon Musk, an entrepreneur who was a founder of PayPal, complained that Mike Judge, the show’s creator, didn’t get the tech world because — wait for it — he had not attended the annual party in the desert.

“I really feel like Mike Judge has never been to Burning Man, which is Silicon Valley,” Mr. Musk said to a Re/Code reporter, while using a number of expletives to describe the festival. “If you haven’t been, you just don’t get it.”

Read the entire story here.

Image: Burning Man gallery. Courtesy of Burners.

Send to Kindle

Selfie-Drone: It Was Only a Matter of Time

Google-search-selfie-drone

Those of you who crave a quiet, reflective escape from the incessant noise of the modern world may soon find even fewer places for quiet respite. Make the most of your calming visit to the beach, a mountain peak, an alpine lake or an emerald forest before you are jolted back to reality by swarms of buzzing selfie-drones. It’s rather ironic to see us regress as our technology evolves. Oh, and you can even get a wearable one! Does our penchant for narcissistic self-absorption have no bounds? That said, there is one positive to come of this dreadful application of a useful invention — the selfie-stick may be on the way out. I will now revert to my quiet cave for the next 50 years.

From NYT:

It was a blistering hot Sunday in Provence. The painted shutters of the houses in Arles were closed. Visitors were scarce. In the Roman amphitheater, built to hold some 20,000 spectators, I sat among empty bleachers, above homes with orange tile roofs, looking past ancient arcades and terraces to the blue horizon. Was this the sort of stillness van Gogh experienced when he was in Arles on this same June day in 1888? I began to entertain the thought but was distracted by a soft whirring; a faint electric hum. Something was drawing near. I looked around and saw nothing — until it and I were eye to eye.

Or rather, eye to lens. A drone resembling one of those round Roomba robotic vacuums had levitated from the pit of the nearly 2,000-year-old arena and was hovering in the air between me and the cloudless horizon. Reflexively I turned away and tugged on the hem of my dress. Who knew where this flying Roomba was looking or what it was recording?

Unexpected moments of tranquility, like finding yourself in a near-empty Roman arena during a heat wave, are becoming more and more elusive. If someone isn’t about to inadvertently impale you with a selfie-stick, another may catch you on video with a recreational drone, like the DJI Phantom (about $500 to $1,600), which is easy to use (unless you’re inebriated, like the man who crashed a Phantom on the White House grounds in January).

Yet what travelers are seeing today — remote-controlled drones bobbing around tourist sites, near airports, in the Floridian National Golf Club in Palm City while President Obama played golf — is but the tip of the iceberg. Think remote-controlled drones and selfie-sticks are intrusive? Prepare for the selfie-drone.

This next generation of drones, which are just beginning to roll out, doesn’t require users to hold remote controllers: They are hands-free. Simply toss them in the air, and they will follow you like Tinker Bell. With names such as Lily (around $700 on pre-order) and Nixie (not yet available for pre-order), they are capable of recording breathtaking video footage and trailing adventure travelers across bridges and streams, down ski slopes and into secluded gardens.

Nixie, which you can wear on your wrist until you want to fling it off for a photo or video, has a “boomerang mode” that allows it to fly back to you as if it were a trained raptor. A promotional video for Lily shows a man with a backpack lobbing the drone like a stone over a bridge and casually walking away, only to have the thing float up and follow him. Think you can outmaneuver the contraption in white-water rapids? Lily is waterproof. I watched with awe a video of Lily being dumped into a river beside a woman in a kayak (where one assumes Lily will perish), yet within seconds emerging and rising, like Glenn Close from the bathtub in “Fatal Attraction.”

There is no denying that the latest drone technology is impressive. And the footage is striking. Adventure travelers who wish to watch themselves scale Kilimanjaro or surf in Hawaii along the North Shore of Oahu will no doubt want one. But if selfie-drones become staples of every traveler who can afford them, we stand to lose more than we stand to gain when it comes to privacy, safety and quality-of-life factors like peace and beauty.

Imagine sunsets at the lake or beach with dozens of selfie-drones cluttering the sky, each vying for that perfect shot. Picture canoodling on a seemingly remote park bench during your romantic getaway and ending up on video. The intimate walks and tête-à-têtes that call to mind Jane Eyre and Mr. Rochester would hardly be the same with drones whizzing by. Think of your children building sand castles and being videotaped by passing drones. Who will be watching and recording us, and where will that information end up?

I shudder to think of 17- and 18-year-olds receiving drones for Christmas and on their winter vacations crashing the contraptions into unsuspecting sunbathers. Or themselves. Lest you think I joke, consider that in May the singer Enrique Iglesias, who is well past his teenage years, sliced his fingers while trying to snap a photo with a (remote-controlled) drone during his concert in Mexico.

Read the entire article here.

Image courtesy of Google Search.

Send to Kindle

The Old School Social Network Returns

Not too long ago, newbies to a community might first have met their neighbors face-to-face by knocking on each other’s front doors, strolling around the neighborhood, or browsing at the local, communal market or store. But busy schedules, privacy fences, garage doors, a car-centric culture and a general fear of strangers have raised barriers and successfully isolated us. So, it’s wonderful to see the digital tools of our modern age being put to a more ancient use — meeting the neighbors, and breaking down some barriers — many of which seem to be caused by our technologies. Long may the old school (face-to-face) social network prosper!

From NYT:

When Laurell Boyers, 34, and her husband, Federico Bastiani, 37, moved in together in Bologna in 2012, they did not know any of their neighbors. It was a lonely feeling.

“All my friends back home had babies, play dates, people to talk to, and I felt so left out,” Ms. Boyers, who moved from South Africa, said on a recent afternoon. “We didn’t have family or friends connections here. We knew people occasionally, but none in our same situation.”

So Mr. Bastiani took a chance and posted a flier along his street, Via Fondazza, explaining that he had created a closed group on Facebook just for the people who lived there. He was merely looking to make some new friends.

In three or four days, the group had about 20 followers. Almost two years later, the residents say, walking along Via Fondazza does not feel like strolling in a big city neighborhood anymore. Rather, it is more like exploring a small town, where everyone knows one another, as the group now has 1,100 members.

“Now I am obligated to speak to everyone when I leave the house,” Ms. Boyers said jokingly. “It’s comforting and also tiring, sometimes. You have to be careful what you ask for.”

The idea, Italy’s first “social street,” has been such a success that it has caught on beyond Bologna and the narrow confines of Via Fondazza. There are 393 social streets in Europe, Brazil and New Zealand, inspired by Mr. Bastiani’s idea, according to the Social Street Italia website, which was created out of the Facebook group to help others replicate the project.

Bologna, a midsize northern city, is known for its progressive politics and cooperatives. It is home to what is considered Italy’s oldest university, and it has a mix of a vibrant, young crowd and longtime residents, known for their strong sense of community.

Still, socially speaking, Italy — Bologna included — can be conservative. Friendships and relationships often come through family connections. It is not always easy to meet new people. In large cities, neighbors typically keep to themselves.

But today, the residents of Via Fondazza help one another fix broken appliances, run chores or recharge car batteries. They exchange train tickets and organize parties.

About half of Via Fondazza’s residents belong to the Facebook group. Those who do not use the Internet are invited to events via leaflets or word of mouth.

“I’ve noticed that people at first wonder whether they need to pay something” for the help from others, said Mr. Bastiani, referring to the experience of an 80-year-old woman who needed someone to go pick up some groceries for her, or a resident who sought help assembling a piece of Ikea furniture.

“But that’s not the point,” he added. “The best part of this is that it breaks all the schemes. We live near one another, and we help each other. That’s it.”

The impact of the experiment has surprised almost everyone here.

It “has changed the walking in Via Fondazza,” said Francesca D’Alonzo, a 27-year-old law graduate who joined the group in 2013.

“We greet each other, we speak, we ask about our lives, we feel we belong here now,” she said.

The exchanges usually start virtually but soon become concrete, allowing residents to get to know one another in person.

Everyone on Via Fondazza seems to have an anecdote. Ms. D’Alonzo remembers the party she gave on New Year’s Eve in 2013, when her then mostly unknown neighbors brought so much food and wine that she did not know where to put it.

“It’s the mental habit that is so healthy,” she said. “You let people into your house because you know some and trust them enough to bring along some more. You open up your life.”

Read the entire article here.

Send to Kindle

The Tech Emperor Has No Clothes


Bill Hewlett. David Packard. Bill Gates. Paul Allen. Steve Jobs. Larry Ellison. Gordon Moore. Tech titans. Moguls of the microprocessor. Their names hold a key place in the founding and shaping of our technological evolution. That they catalyzed and helped create entire economic sectors goes without doubt. Yet a deeper, objective analysis of market innovation shows that the view of the lone great man (or two) — combating and succeeding against all-comers — may be more self-perpetuating myth than reality. The idea that a single, visionary individual drives history and shapes the future is a long-standing and enduring invention.

From Technology Review:

Since Steve Jobs’s death, in 2011, Elon Musk has emerged as the leading celebrity of Silicon Valley. Musk is the CEO of Tesla Motors, which produces electric cars; the CEO of SpaceX, which makes rockets; and the chairman of SolarCity, which provides solar power systems. A self-made billionaire, programmer, and engineer—as well as an inspiration for Robert Downey Jr.’s Tony Stark in the Iron Man movies—he has been on the cover of Fortune and Time. In 2013, he was first on the Atlantic’s list of “today’s greatest inventors,” nominated by leaders at Yahoo, Oracle, and Google. To believers, Musk is steering the history of technology. As one profile described his mystique, his “brilliance, his vision, and the breadth of his ambition make him the one-man embodiment of the future.”

Musk’s companies have the potential to change their sectors in fundamental ways. Still, the stories around these advances—and around Musk’s role, in particular—can feel strangely outmoded.

The idea of “great men” as engines of change grew popular in the 19th century. In 1840, the Scottish philosopher Thomas Carlyle wrote that “the history of what man has accomplished in this world is at bottom the history of the Great Men who have worked here.” It wasn’t long, however, before critics questioned this one-dimensional view, arguing that historical change is driven by a complex mix of trends and not by any one person’s achievements. “All of those changes of which he is the proximate initiator have their chief causes in the generations he descended from,” Herbert Spencer wrote in 1873. And today, most historians of science and technology do not believe that major innovation is driven by “a lone inventor who relies only on his own imagination, drive, and intellect,” says Daniel Kevles, a historian at Yale. Scholars are “eager to identify and give due credit to significant people but also recognize that they are operating in a context which enables the work.” In other words, great leaders rely on the resources and opportunities available to them, which means they do not shape history as much as they are molded by the moments in which they live.

Musk’s success would not have been possible without, among other things, government funding for basic research and subsidies for electric cars and solar panels. Above all, he has benefited from a long series of innovations in batteries, solar cells, and space travel. He no more produced the technological landscape in which he operates than the Russians created the harsh winter that allowed them to vanquish Napoleon. Yet in the press and among venture capitalists, the great-man model of Musk persists, with headlines citing, for instance, “His Plan to Change the Way the World Uses Energy” and his own claim of “changing history.”

The problem with such portrayals is not merely that they are inaccurate and unfair to the many contributors to new technologies. By warping the popular understanding of how technologies develop, great-man myths threaten to undermine the structure that is actually necessary for future innovations.

Space cowboy

Elon Musk, the best-selling biography by business writer Ashlee Vance, describes Musk’s personal and professional trajectory—and seeks to explain how, exactly, the man’s repeated “willingness to tackle impossible things” has “turned him into a deity in Silicon Valley.”

Born in South Africa in 1971, Musk moved to Canada at age 17; he took a job cleaning the boiler room of a lumber mill and then talked his way into an internship at a bank by cold-calling a top executive. After studying physics and economics in Canada and at the Wharton School of the University of Pennsylvania, he enrolled in a PhD program at Stanford but opted out after a couple of days. Instead, in 1995, he cofounded a company called Zip2, which provided an online map of businesses—“a primitive Google maps meets Yelp,” as Vance puts it. Although he was not the most polished coder, Musk worked around the clock and slept “on a beanbag next to his desk.” This drive is “what the VCs saw—that he was willing to stake his existence on building out this platform,” an early employee told Vance. After Compaq bought Zip2, in 1999, Musk helped found an online financial services company that eventually became PayPal. This was when he “began to hone his trademark style of entering an ultracomplex business and not letting the fact that he knew very little about the industry’s nuances bother him,” Vance writes.

When eBay bought PayPal for $1.5 billion, in 2002, Musk emerged with the wherewithal to pursue two passions he believed could change the world. He founded SpaceX with the goal of building cheaper rockets that would facilitate research and space travel. Investing over $100 million of his personal fortune, he hired engineers with aeronautics experience, built a factory in Los Angeles, and began to oversee test launches from a remote island between Hawaii and Guam. At the same time, Musk cofounded Tesla Motors to develop battery technology and electric cars. Over the years, he cultivated a media persona that was “part playboy, part space cowboy,” Vance writes.

Musk sells himself as a singular mover of mountains and does not like to share credit for his success. At SpaceX, in particular, the engineers “flew into a collective rage every time they caught Musk in the press claiming to have designed the Falcon rocket more or less by himself,” Vance writes, referring to one of the company’s early models. In fact, Musk depends heavily on people with more technical expertise in rockets and cars, more experience with aeronautics and energy, and perhaps more social grace in managing an organization. Those who survive under Musk tend to be workhorses willing to forgo public acclaim. At SpaceX, there is Gwynne Shotwell, the company president, who manages operations and oversees complex negotiations. At Tesla, there is JB Straubel, the chief technology officer, responsible for major technical advances. Shotwell and Straubel are among “the steady hands that will forever be expected to stay in the shadows,” writes Vance. (Martin Eberhard, one of the founders of Tesla and its first CEO, arguably contributed far more to its engineering achievements. He had a bitter feud with Musk and left the company years ago.)

Likewise, Musk’s success at Tesla is undergirded by public-sector investment and political support for clean tech. For starters, Tesla relies on lithium-ion batteries pioneered in the late 1980s with major funding from the Department of Energy and the National Science Foundation. Tesla has benefited significantly from guaranteed loans and state and federal subsidies. In 2010, the company reached a loan agreement with the Department of Energy worth $465 million. (Under this arrangement, Tesla agreed to produce battery packs that other companies could benefit from and promised to manufacture electric cars in the United States.) In addition, Tesla has received $1.29 billion in tax incentives from Nevada, where it is building a “gigafactory” to produce batteries for cars and consumers. It has won an array of other loans and tax credits, plus rebates for its consumers, totaling another $1 billion, according to a recent series by the Los Angeles Times.

It is striking, then, that Musk insists on a success story that fails to acknowledge the importance of public-sector support. (He called the L.A. Times series “misleading and deceptive,” for instance, and told CNBC that “none of the government subsidies are necessary,” though he did admit they are “helpful.”)

If Musk’s unwillingness to look beyond himself sounds familiar, Steve Jobs provides a recent antecedent. Like Musk, who obsessed over Tesla cars’ door handles and touch screens and the layout of the SpaceX factory, Jobs brought a fierce intensity to product design, even if he did not envision the key features of the Mac, the iPod, or the iPhone. An accurate version of Apple’s story would give more acknowledgment not only to the work of other individuals, from designer Jonathan Ive on down, but also to the specific historical context in which Apple’s innovation occurred. “There is not a single key technology behind the iPhone that has not been state funded,” says economist Mariana Mazzucato. This includes the wireless networks, “the Internet, GPS, a touch-screen display, and … the voice-activated personal assistant Siri.” Apple has recombined these technologies impressively. But its achievements rest on many years of public-sector investment. To put it another way, do we really think that if Jobs and Musk had never come along, there would have been no smartphone revolution, no surge of interest in electric vehicles?

Read the entire story here.

Image: Titan Oceanus. Trevi Fountain, Rome. Public Domain.

Send to Kindle

Digital Forensics and the Wayback Machine

Amazon-Aug1999

Many of us see history — the school subject — as rather dull and boring. After all, how can the topic be made interesting when it’s usually taught by a coach who has other things on his or her mind [no joke, I have evidence of this from both sides of the Atlantic!].

Yet we also know that history’s lessons are essential to shaping our current world view and our vision for the future, in myriad ways. Ever since humans learned to speak, and later to write, our ancestors have recorded and transmitted their histories: first through oral storytelling, then through books and assorted media.

Then came the internet. The explosion of content, media formats and related technologies over the last quarter-century has led to an immense challenge for archivists and historians intent on cataloging our digital stories. One facet of this challenge is the tremendous volume of information and its accelerating growth. Another is the dynamic nature of the content — much of it being constantly replaced and refreshed.

But, all is not lost. The Internet Archive, founded in 1996, has been quietly archiving text, pages, images, audio and, more recently, entire web sites from the Tubes of the vast Internets. To date the non-profit has archived around half a trillion web pages. It’s our modern-day equivalent of the Library of Alexandria.

Please say hello to the Internet Archive Wayback Machine, and give it a try. The Wayback Machine took the screenshot above of Amazon.com in 1999, in case you’ve ever wondered what Amazon looked like before it swallowed or destroyed entire retail sectors.
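For the programmatically inclined, the Wayback Machine also exposes a simple availability API and predictable snapshot URLs, so you can look up archived pages from code. Here is a minimal Python sketch; the endpoint and URL formats are the Internet Archive’s own, while the helper function names are mine:

```python
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=""):
    """Build a query URL for the Wayback Machine's availability API.

    timestamp is YYYYMMDD (optionally down to YYYYMMDDhhmmss); when given,
    the API returns the archived snapshot closest to that date.
    """
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return WAYBACK_API + "?" + urlencode(params)

def snapshot_url(url, timestamp):
    """Build a direct link to an archived snapshot of a page."""
    return "https://web.archive.org/web/" + timestamp + "/" + url

# The screenshot above, roughly: Amazon.com as archived in August 1999.
print(availability_query("amazon.com", "19990801"))
print(snapshot_url("amazon.com", "19990801"))
```

Fetching the first URL returns JSON describing the closest archived snapshot; the second is a direct link you can open in a browser.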

From the New Yorker:

Malaysia Airlines Flight 17 took off from Amsterdam at 10:31 A.M. G.M.T. on July 17, 2014, for a twelve-hour flight to Kuala Lumpur. Not much more than three hours later, the plane, a Boeing 777, crashed in a field outside Donetsk, Ukraine. All two hundred and ninety-eight people on board were killed. The plane’s last radio contact was at 1:20 P.M. G.M.T. At 2:50 P.M. G.M.T., Igor Girkin, a Ukrainian separatist leader also known as Strelkov, or someone acting on his behalf, posted a message on VKontakte, a Russian social-media site: “We just downed a plane, an AN-26.” (An Antonov 26 is a Soviet-built military cargo plane.) The post includes links to video of the wreckage of a plane; it appears to be a Boeing 777.

Two weeks before the crash, Anatol Shmelev, the curator of the Russia and Eurasia collection at the Hoover Institution, at Stanford, had submitted to the Internet Archive, a nonprofit library in California, a list of Ukrainian and Russian Web sites and blogs that ought to be recorded as part of the archive’s Ukraine Conflict collection. Shmelev is one of about a thousand librarians and archivists around the world who identify possible acquisitions for the Internet Archive’s subject collections, which are stored in its Wayback Machine, in San Francisco. Strelkov’s VKontakte page was on Shmelev’s list. “Strelkov is the field commander in Slaviansk and one of the most important figures in the conflict,” Shmelev had written in an e-mail to the Internet Archive on July 1st, and his page “deserves to be recorded twice a day.”

On July 17th, at 3:22 P.M. G.M.T., the Wayback Machine saved a screenshot of Strelkov’s VKontakte post about downing a plane. Two hours and twenty-two minutes later, Arthur Bright, the Europe editor of the Christian Science Monitor, tweeted a picture of the screenshot, along with the message “Grab of Donetsk militant Strelkov’s claim of downing what appears to have been MH17.” By then, Strelkov’s VKontakte page had already been edited: the claim about shooting down a plane was deleted. The only real evidence of the original claim lies in the Wayback Machine.

The average life of a Web page is about a hundred days. Strelkov’s “We just downed a plane” post lasted barely two hours. It might seem, and it often feels, as though stuff on the Web lasts forever, for better and frequently for worse: the embarrassing photograph, the regretted blog (more usually regrettable not in the way the slaughter of civilians is regrettable but in the way that bad hair is regrettable). No one believes any longer, if anyone ever did, that “if it’s on the Web it must be true,” but a lot of people do believe that if it’s on the Web it will stay on the Web. Chances are, though, that it actually won’t. In 2006, David Cameron gave a speech in which he said that Google was democratizing the world, because “making more information available to more people” was providing “the power for anyone to hold to account those who in the past might have had a monopoly of power.” Seven years later, Britain’s Conservative Party scrubbed from its Web site ten years’ worth of Tory speeches, including that one. Last year, BuzzFeed deleted more than four thousand of its staff writers’ early posts, apparently because, as time passed, they looked stupider and stupider. Social media, public records, junk: in the end, everything goes.

Web pages don’t have to be deliberately deleted to disappear. Sites hosted by corporations tend to die with their hosts. When MySpace, GeoCities, and Friendster were reconfigured or sold, millions of accounts vanished. (Some of those companies may have notified users, but Jason Scott, who started an outfit called Archive Team—its motto is “We are going to rescue your shit”—says that such notification is usually purely notional: “They were sending e-mail to dead e-mail addresses, saying, ‘Hello, Arthur Dent, your house is going to be crushed.’ ”) Facebook has been around for only a decade; it won’t be around forever. Twitter is a rare case: it has arranged to archive all of its tweets at the Library of Congress. In 2010, after the announcement, Andy Borowitz tweeted, “Library of Congress to acquire entire Twitter archive—will rename itself Museum of Crap.” Not long after that, Borowitz abandoned that Twitter account. You might, one day, be able to find his old tweets at the Library of Congress, but not anytime soon: the Twitter Archive is not yet open for research. Meanwhile, on the Web, if you click on a link to Borowitz’s tweet about the Museum of Crap, you get this message: “Sorry, that page doesn’t exist!”

The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: “Page Not Found.” This is known as “link rot,” and it’s a drag, but it’s better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.) Or maybe the page has been moved and something else is where it used to be. This is known as “content drift,” and it’s more pernicious than an error message, because it’s impossible to tell that what you’re seeing isn’t what you went to look for: the overwriting, erasure, or moving of the original is invisible. For the law and for the courts, link rot and content drift, which are collectively known as “reference rot,” have been disastrous. In providing evidence, legal scholars, lawyers, and judges often cite Web pages in their footnotes; they expect that evidence to remain where they found it as their proof, the way that evidence on paper—in court records and books and law journals—remains where they found it, in libraries and courthouses. But a 2013 survey of law- and policy-related publications found that, at the end of six years, nearly fifty per cent of the URLs cited in those publications no longer worked. According to a 2014 study conducted at Harvard Law School, “more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information.” The overwriting, drifting, and rotting of the Web is no less catastrophic for engineers, scientists, and doctors. 
Last month, a team of digital library researchers based at Los Alamos National Laboratory reported the results of an exacting study of three and a half million scholarly articles published in science, technology, and medical journals between 1997 and 2012: one in five links provided in the notes suffers from reference rot. It’s like trying to stand on quicksand.

The footnote, a landmark in the history of civilization, took centuries to invent and to spread. It has taken mere years nearly to destroy. A footnote used to say, “Here is how I know this and where I found it.” A footnote that’s a link says, “Here is what I used to know and where I once found it, but chances are it’s not there anymore.” It doesn’t matter whether footnotes are your stock-in-trade. Everybody’s in a pinch. Citing a Web page as the source for something you know—using a URL as evidence—is ubiquitous. Many people find themselves doing it three or four times before breakfast and five times more before lunch. What happens when your evidence vanishes by dinnertime?

The day after Strelkov’s “We just downed a plane” post was deposited into the Wayback Machine, Samantha Power, the U.S. Ambassador to the United Nations, told the U.N. Security Council, in New York, that Ukrainian separatist leaders had “boasted on social media about shooting down a plane, but later deleted these messages.” In San Francisco, the people who run the Wayback Machine posted on the Internet Archive’s Facebook page, “Here’s why we exist.”

Read the entire story here.

Image: Wayback Machine’s screenshot of Amazon.com’s home page, August 1999.

Send to Kindle

Viva Vinyl

Hotel-California-album

When I first moved to college and a tiny dorm room (in the UK they’re called halls of residence), my first purchase was a Garrard turntable and a pair of Denon stereo speakers. Books would come later. First, I had to build a new shrine to my burgeoning vinyl collection, which thrives even today.

So, after what seems like a hundred years since those heady days and countless music technology revolutions, it comes as quite a surprise — but perhaps not — to see vinyl on a resurgent path. The disruptors tried to kill LPs, 45s and 12-inchers with 8-track (ha), compact cassette (yuk), minidisk (yawn), CD (cool), MP3 (meh), iPod (yay) and now streaming (hmm).

But, try as it might, the music industry cannot bury vinyl for good; like a kindly zombie uncle, it keeps coming back. Why does vinyl so capture the imagination and the ears of the audiophile? Well, perhaps it comes from watching the slow turn of the LP on the cool silver platter. Or, it may be the anticipation from watching the needle spiral its way to the first track. Or the raw, crackling authenticity of the sound. For me it was the weekly pilgrimage to the dusty independent record store — sampling tracks on clunky headphones; soaking up the artistry of the album cover, the lyrics, the liner notes; discussing the pros and cons of the bands with friends. Our digital world has now mostly replaced this experience, but it cannot hope to replicate it. Long live vinyl.

From ars technica:

On Thursday [July 2, 2015], Nielsen Music released its 2015 US mid-year report, finding that overall music consumption had increased by 14 percent in the first half of the year. What’s driving that boom? Well, certainly a growth in streaming—on-demand streaming increased year-over-year by 92.4 percent, with more than 135 billion songs streamed, and overall sales of digital streaming increased by 23 percent.

But what may be more fascinating is the continued resurgence of the old licorice pizza—that is, vinyl LPs. Nielsen reports that vinyl LP sales are up 38 percent year-to-date. “Vinyl sales now comprise nearly 9 percent of physical album sales,” Nielsen stated.

Who’s leading the charge on all that vinyl? None other than the music industry’s favorite singer-songwriter Taylor Swift with her album 1989, which sold 33,500 LPs. Swift recently flexed her professional muscle when she wrote an open letter to Apple, criticizing the company for failing to pay artists during the free three-month trial of Apple Music. Apple quickly kowtowed to the pop star and reversed its position.

Following behind Swift on the vinyl chart is Sufjan Stevens’ Carrie & Lowell, The Arctic Monkeys’ AM (released in 2013), Alabama Shakes’ Sound & Color, and in fifth place, none other than Miles Davis’ Kind of Blue, which sold 23,200 copies in 2015.

Also interesting is that Nielsen found that digital album sales were flat compared to last year, and digital track sales were down 10.4 percent. Unsurprisingly, CD sales were down 10 percent.

When Nielsen reported in 2010 that 2.5 million vinyl records were sold in 2009, Ars noted that was more than any other year since the media-tracking business started keeping score in 1991. Fast forward five years and that number has more than doubled, as Nielsen counted 5.6 million vinyl records sold. The trend shows little sign of abating—last year, the US’ largest vinyl plant reported that it was adding 16 vinyl presses to its lineup of 30, and just this year Ars reported on a company called Qrates that lets artists solicit crowdfunding to do small-batch vinyl pressing.

Read the entire story here.

Image: Hotel California, The Eagles, album cover. Courtesy of the author.

Send to Kindle

Myth Busting Silicon(e) Valley

map-Silicon-Valley

Question: what do silicone implants and Silicon Valley have in common?  Answer: they are both instruments of a grandiose illusion. The first, on a mostly personal level, promises eternal youth and vigor; the second, on a much grander scale, promises eternal wealth and greatness for humanity.

So, let’s leave aside the human cosmetic question for another time and concentrate on the broad deception that is today’s Silicon Valley. It’s a deception at many different levels — the self-deception of Silicon Valley’s young geeks and code jockeys, and the wider delusion that promises us all a glittering future underwritten by rapturous tech.

And how better to debunk the myths that envelop the Valley like San Francisco’s fog than to turn to Sam Biddle, former editor of Valleywag? He offers a scathing critique, which happens to be spot on. Quite rightly he asks whether we need yet another urban, on-demand laundry app, and what on earth is the value to society of “Yo”? But more importantly, he asks us to reconsider our misplaced awe and to knock Silicon Valley from its perch of self-fulfilling self-satisfaction. Yo and Facebook and Uber and Clinkle and Ringly and DogVacay and WhatsApp and the thousands of other trivial start-ups — despite astronomical valuations — will not be humanity’s saviors. We need better ideas and deeper answers.

From GQ:

I think my life is better because of my iPhone. Yours probably is, too. I’m grateful to live in a time when I can see my baby cousins or experience any album ever without getting out of bed. I’m grateful that I will literally never be lost again, so long as my phone has battery. And I’m grateful that there are so many people so much smarter than I am who devise things like this, which are magical for the first week they show up, then a given in my life a week later.

We live in an era of technical ability that would have nauseated our ancestors with wonder, and so much of it comes from one very small place in California. But all these unimpeachable humanoid upgrades—the smartphones, the Google-gifted knowledge—are increasingly the exception, rather than the rule, of Silicon Valley’s output. What was once a land of upstarts and rebels is now being led by the money-hungry and the unspirited. Which is why we have a start-up that mails your dog curated treats and an app that says “Yo.” The brightest minds in tech just lately seem more concerned with silly business ideas and innocuous “disruption,” all for the shot at an immense payday. And when our country’s smartest people are working on the dumbest things, we all lose out.

That gap between the Silicon Valley that enriches the world and the Silicon Valley that wastes itself on the trivial is widening daily. And one of the biggest contributing factors is that the Valley has lost touch with reality by subscribing to its own self-congratulatory mythmaking. That these beliefs are mostly baseless, or at least egotistically distorted, is a problem—not just for Silicon Valley but for the rest of us. Which is why we’re here to help the Valley tear down its own myths—these seven in particular.

Myth #1: Silicon Valley Is the Universe’s Only True Meritocracy

 Everyone in Silicon Valley has convinced himself he’s helped create a free-market paradise, the software successor to Jefferson’s brotherhood of noble yeomen. “Silicon Valley has this way of finding greatness and supporting it,” said a member of Greylock Partners, a major venture-capital firm with over $2 billion under management. “It values meritocracy more than anyplace else.” After complaints of the start-up economy’s profound whiteness reached mainstream discussion just last year, companies like Apple, Facebook, and Twitter reluctantly released internal diversity reports. The results were as homogenized as expected: At Twitter, 79 percent of the leadership is male and 72 percent of it is white. At Facebook, senior positions are 77 percent male and 74 percent white. Twitter—a company whose early success can be directly attributed to the pioneering downloads of black smartphone users—hosts an entirely white board of directors. It’s a pounding indictment of Silicon Valley’s corporate psyche that Mark Zuckerberg—a bourgeois white kid from suburban New York who attended Harvard—is considered the Horatio Alger 2.0 paragon. When Paul Graham, the then head of the massive start-up incubator Y Combinator, told The New York Times that he could “be tricked by anyone who looks like Mark Zuckerberg,” he wasn’t just talking about Zuck’s youth.

If there’s any reassuring news, it’s not that tech’s diversity crisis is getting better, but that in the face of so much dismal news, people are becoming angry enough and brave enough to admit that the state of things is not good. Silicon Valley loves data, after all, and with data readily demonstrating tech’s overwhelming white-guy problem, even the true believers in meritocracy see the circumstances as they actually are.

Earlier this year, Ellen Pao became the most mentioned name in Silicon Valley as her gender-discrimination suit against her former employer, Kleiner Perkins Caufield & Byers, played out in court. Although the jury sided with the legendary VC firm, the Pao case was a watershed moment, bringing sunlight and national scrutiny to the issue of unchecked Valley sexism. For every defeated Ellen Pao, we can hope there are a hundred other female technology workers who feel new courage to speak up against wrongdoing, and a thousand male co-workers and employers who’ll reconsider their boys’-club bullshit. But they’ve got their work cut out for them.

Myth #4: School Is for Suckers, Just Drop Out

 Every year PayPal co-founder, investor-guru, and rabid libertarian Peter Thiel awards a small group of college-age students the Thiel Fellowship, a paid offer to either drop out or forgo college entirely. In exchange, the students receive money, mentorship, and networking opportunities from Thiel as they pursue a start-up of their choice. We’re frequently reminded of the tech titans of industry who never got a degree—Steve Jobs, Bill Gates, and Mark Zuckerberg are the most cited, though the fact that they’re statistical exceptions is an aside at best. To be young in Silicon Valley is great; to be a young dropout is golden.

The virtuous dropout hasn’t just made college seem optional for many aspiring barons—formal education is now excoriated in Silicon Valley as an obsolete system dreamed up by people who’d never heard of photo filters or Snapchat. Mix this cynicism with the libertarian streak many tech entrepreneurs carry already and you’ve got yourself a legit anti-education movement.

And for what? There’s no evidence that avoiding a conventional education today grants business success tomorrow. The gifted few who end up dropping out and changing tech history would have probably changed tech history anyway—you can’t learn start-up greatness by refusing to learn in a college classroom. And given that most start-ups fail, do we want an appreciable segment of bright young people gambling so heavily on being the next Zuck? More important, do we want an economy of CEOs who never had to learn to get along with their dorm-mates? Who never had the opportunity to grow up and figure out how to be a human being functioning in society? Who went straight from a bedroom in their parents’ house to an incubator that paid for their meals? It’s no wonder tech has an antisocial rep.

Myth #7: Silicon Valley Is Saving the World

Two years ago an online list of “57 start-up lessons” made its way through the coder community, bolstered by a co-sign from Paul Graham. “Wow, is this list good,” he commented. “It has the kind of resonance you only get when you’re writing from a lot of hard experience.” Among the platitudinous menagerie was this gem: “If it doesn’t augment the human condition for a huge number of people in a meaningful way, it’s not worth doing.” In a mission statement published on Andreessen Horowitz’s website, Marc Andreessen claimed he was “looking for the companies who are going to be the big winners because they are going to cause a fundamental change in the world.” The firm’s portfolio includes Ringly (maker of rings that light up when your phone does something), Teespring (custom T-shirts), DogVacay (pet-sitters on demand), and Hem (the zombified corpse of the furniture store Fab.com). Last year, wealthy Facebook alum Justin Rosenstein told a packed audience at TechCrunch Disrupt, “We in this room, we in technology, have a greater capacity to change the world than the kings and presidents of even a hundred years ago.” No one laughed, even though Rosenstein’s company, Asana, sells instant-messaging software.

 This isn’t just a matter of preening guys in fleece vests building giant companies predicated on their own personal annoyances. It’s wasteful and genuinely harmful to have so many people working on such trivial projects (Clinkle and fucking Yo) under the auspices of world-historical greatness. At one point recently, there were four separate on-demand laundry services operating in San Francisco, each no doubt staffed by smart young people who thought they were carving out a place of small software greatness. And yet for every laundry app, there are smart people doing smart, valuable things: Among the most recent batch of Y Combinator start-ups featured during March’s “Demo Day” were Diassess (twenty-minute HIV tests), Standard Cyborg (3D-printed limbs), and Atomwise (using supercomputing to develop new medical compounds). Those start-ups just happen to be sharing desk space at the incubator with “world changers” like Lumi (easy logo printing) and Underground Cellar (“curated, limited-edition wines with a twist”).

Read the entire article here.

Map: Silicon Valley, CA. Courtesy of Google.


Send to Kindle

Your Current Dystopian Nightmare: In Just One Click

Amazon was supposed to give you back precious time by making shopping and spending painlessly simple. Apps on your smartphone were supposed to do the same for all manner of re-tooled on-demand services. What wonderful time-saving inventions! So, now you can live in the moment and make use of all this extra free time. It’s your time now. You’ve won it back and no one can take it away.

And, what do you spend this newly earned free time doing? Well, you sit at home in your isolated cocoon, you shop for more things online, you download some more great apps that promise to bring even greater convenience, you interact less with real humans, and, best of all, you spend more time working. Welcome to your new dystopian nightmare, and it’s happening right now. Click.

From Medium:

Angel the concierge stands behind a lobby desk at a luxe apartment building in downtown San Francisco, and describes the residents of this imperial, 37-story tower. “Ubers, Squares, a few Twitters,” she says. “A lot of work-from-homers.”

And by late afternoon on a Tuesday, they’re striding into the lobby at a just-get-me-home-goddammit clip, some with laptop bags slung over their shoulders, others carrying swank leather satchels. At the same time a second, temporary population streams into the building: the app-based meal delivery people hoisting thermal carrier bags and sacks. Green means Sprig. A huge M means Munchery. Down in the basement, Amazon Prime delivery people check in packages with the porter. The Instacart groceries are plunked straight into a walk-in fridge.

This is a familiar scene. Five months ago I moved into a spartan apartment a few blocks away, where dozens of startups and thousands of tech workers live. Outside my building there’s always a phalanx of befuddled delivery guys who seem relieved when you walk out, so they can get in. Inside, the place is stuffed with the goodies they bring: Amazon Prime boxes sitting outside doors, evidence of the tangible, quotidian needs that are being serviced by the web. The humans who live there, though, I mostly never see. And even when I do, there seems to be a tacit agreement among residents to not talk to one another. I floated a few “hi’s” in the elevator when I first moved in, but in return I got the monosyllabic, no-eye-contact mumble. It was clear: Lady, this is not that kind of building.

Back in the elevator in the 37-story tower, the messengers do talk, one tells me. They end up asking each other which apps they work for: Postmates. Seamless. EAT24. GrubHub. Safeway.com. A woman hauling two Whole Foods sacks reads the concierge an apartment number off her smartphone, along with the resident’s directions: “Please deliver to my door.”

“They have a nice kitchen up there,” Angel says. The apartments rent for as much as $5,000 a month for a one-bedroom. “But so much, so much food comes in. Between 4 and 8 o’clock, they’re on fire.”

I start to walk toward home. En route, I pass an EAT24 ad on a bus stop shelter, and a little further down the street, a Dungeons & Dragons–type dude opens the locked lobby door of yet another glass-box residential building for a Sprig deliveryman:

“You’re…”

“Jonathan?”

“Sweet,” Dungeons & Dragons says, grabbing the bag of food. The door clanks behind him.

And that’s when I realized: the on-demand world isn’t about sharing at all. It’s about being served. This is an economy of shut-ins.

In 1998, Carnegie Mellon researchers warned that the internet could make us into hermits. They released a study monitoring the social behavior of 169 people making their first forays online. The web-surfers started talking less with family and friends, and grew more isolated and depressed. “We were surprised to find that what is a social technology has such anti-social consequences,” said one of the researchers at the time. “And these are the same people who, when asked, describe the Internet as a positive thing.”

We’re now deep into the bombastic buildout of the on-demand economy — with investment in the apps, platforms and services surging exponentially. Right now Americans buy nearly eight percent of all their retail goods online, though that seems a wild underestimate in the most congested, wired, time-strapped urban centers.

Many services promote themselves as life-expanding — there to free up your time so you can spend it connecting with the people you care about, not standing at the post office with strangers. Rinse’s ad shows a couple chilling at a park, their laundry being washed by someone, somewhere beyond the picture’s frame. But plenty of the delivery companies are brutally honest that, actually, they never want you to leave home at all.

GrubHub’s advertising banks on us secretly never wanting to talk to a human again: “Everything great about eating, combined with everything great about not talking to people.” DoorDash, another food delivery service, goes for the all-caps, batshit extreme:

“NEVER LEAVE HOME AGAIN.”

Katherine van Ekert isn’t a shut-in, exactly, but there are only two things she ever has to run errands for any more: trash bags and saline solution. For those, she must leave her San Francisco apartment and walk two blocks to the drug store, “so woe is my life,” she tells me. (She realizes her dry humor about #firstworldproblems may not translate, and clarifies later: “Honestly, this is all tongue in cheek. We’re not spoiled brats.”) Everything else is done by app. Her husband’s office contracts with Washio. Groceries come from Instacart. “I live on Amazon,” she says, buying everything from curry leaves to a jogging suit for her dog, complete with hoodie.

She’s so partial to these services, in fact, that she’s running one of her own: A veterinarian by trade, she’s a co-founder of VetPronto, which sends an on-call vet to your house. It’s one of a half-dozen on-demand services in the current batch at Y Combinator, the startup factory, including a marijuana delivery app called Meadow (“You laugh, but they’re going to be rich,” she says). She took a look at her current clients — they skew late 20s to late 30s, and work in high-paying jobs: “The kinds of people who use a lot of on demand services and hang out on Yelp a lot.”

Basically, people a lot like herself. That’s the common wisdom: the apps are created by the urban young for the needs of urban young. The potential of delivery with a swipe of the finger is exciting for van Ekert, who grew up without such services in Sydney and recently arrived in wired San Francisco. “I’m just milking this city for all it’s worth,” she says. “I was talking to my father on Skype the other day. He asked, ‘Don’t you miss a casual stroll to the shop?’ Everything we do now is time-limited, and you do everything with intention. There’s not time to stroll anywhere.”

Suddenly, for people like van Ekert, the end of chores is here. After hours, you’re free from dirty laundry and dishes. (TaskRabbit’s ad rolls by me on a bus: “Buy yourself time — literally.”)

So here’s the big question. What does she, or you, or any of us do with all this time we’re buying? Binge on Netflix shows? Go for a run? Van Ekert’s answer: “It’s more to dedicate more time to working.”

Read the entire story here.

Send to Kindle

3D Printing Magic

If you’ve visited this blog before, you know I’m a great fan of 3D printing, though some uses, such as printing 3D selfies, seem dubious at best. So when Carbon3D unveiled its fundamentally different, and better, approach to 3D printing, I was intrigued. The company uses an approach called continuous liquid interface production (CLIP), which seems to construct objects from a magical ooze. Check out the video — you’ll be enthralled. The future is here.

Learn more about Carbon3D here.

From Wired:

EVEN IF YOU have little interest in 3-D printing, you’re likely to find Carbon3D’s Continuous Liquid Interface Production (CLIP) technology fascinating. Rather than the time-intensive printing of a 3-D object layer by layer like most printers, Carbon3D’s technique works 25 to 100 times faster than what you may have seen before, and looks a bit like Terminator 2‘s liquid metal T-1000 in the process.

CLIP creations grow out of a pool of UV-sensitive resin in a process that’s similar to the way laser 3-D printers work, but at a much faster pace. Instead of the laser used in conventional 3-D printers, CLIP uses an ultraviolet projector on the underside of a resin tray to project an image for how each layer should form. Light shines through an oxygen-permeable window onto the resin, which hardens it. Areas of resin that are exposed to oxygen don’t harden, while those that are cut off form the 3-D printed shape.

In practice, all that physics translates to unprecedented 3-D printing speed. At this week’s TED Conference in Vancouver, Carbon3D CEO and co-founder Dr. Joseph DeSimone demonstrated the printer onstage with a bit of theatrical underselling, wagering that his creation could produce in 10 minutes a geometric ball shape that would take a regular 3-D printer up to 10 hours. The CLIP process churned out the design in a little under 7 minutes.
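The demo figures are easy to sanity-check against the company's speed claim. A back-of-the-envelope sketch, using only the 10-hour and 7-minute numbers reported above:

```python
# Sanity check of the CLIP speedup demonstrated at TED.
conventional_minutes = 10 * 60   # up to 10 hours on a layer-by-layer printer
clip_minutes = 7                 # "a little under 7 minutes" for the same ball

speedup = conventional_minutes / clip_minutes
print(f"~{speedup:.0f}x faster")  # ~86x, comfortably inside the claimed 25-100x range
```

So DeSimone's stage demo lands near the top of the 25-to-100-times range quoted earlier.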

Read the entire story here.

Video courtesy of Carbon3D.

Send to Kindle

The Internet 0f Th1ngs

Google-search-IoT

Technologist Marc Goodman describes a not too distant future in which all our appliances, tools, products… anything and everything is plugged into the so-called Internet of Things (IoT). The IoT describes a world where all things are connected to everything else, making for a global mesh of intelligent devices, from your connected car and your WiFi-enabled sneakers to your smartwatch and home thermostat. You may well believe it advantageous to have your refrigerator ping the local grocery store when it runs out of fresh eggs and milk or to have your toilet auto-call a local plumber when it gets stopped-up.

But, as our current Internet shows us — let’s call it the Internet of People — not all is rosy in this hyper-connected, 24/7, always-on digital ocean. What are you to do when hackers attack all your home appliances in a “denial of home service attack (DohS)”, or when your every move inside your home is scrutinized, collected, analyzed and sold to the nearest advertiser, or when your cooktop starts taking and sharing selfies with the neighbors?

Goodman’s new book on this important subject, excerpted here, is titled Future Crimes.

From the Guardian:

If we think of today’s internet metaphorically as about the size of a golf ball, tomorrow’s will be the size of the sun. Within the coming years, not only will every computer, phone and tablet be online, but so too will every car, house, dog, bridge, tunnel, cup, clock, watch, pacemaker, cow, streetlight, pipeline, toy and soda can. Though in 2013 there were only 13bn online devices, Cisco Systems has estimated that by 2020 there will be 50bn things connected to the internet, with room for exponential growth thereafter. As all of these devices come online and begin sharing data, they will bring with them massive improvements in logistics, employee efficiency, energy consumption, customer service and personal productivity.
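As a rough sketch of what Cisco's projection implies: going from 13bn devices in 2013 to 50bn in 2020 corresponds to a steady compound growth rate (the two counts are from the excerpt; the annual rate is my own extrapolation, not a figure from the article):

```python
# Implied compound annual growth from 13bn devices (2013) to 50bn (2020).
devices_2013 = 13e9
devices_2020 = 50e9
years = 2020 - 2013

annual_growth = (devices_2020 / devices_2013) ** (1 / years) - 1
print(f"~{annual_growth:.0%} per year")  # roughly 21% growth per year
```

That is, the estimate assumes the population of connected things grows by about a fifth every year for seven straight years.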

This is the promise of the internet of things (IoT), a rapidly emerging new paradigm of computing that, when it takes off, may very well change the world we live in forever.

The Pew Research Center defines the internet of things as “a global, immersive, invisible, ambient networked computing environment built through the continued proliferation of smart sensors, cameras, software, databases, and massive data centres in a world-spanning information fabric”. Back in 1999, when the term was first coined by MIT researcher Kevin Ashton, the technology did not exist to make the IoT a reality outside very controlled environments, such as factory warehouses. Today we have low-powered, ultra-cheap computer chips, some as small as the head of a pin, that can be embedded in an infinite number of devices, some for mere pennies. These miniature computing devices only need milliwatts of electricity and can run for years on a minuscule battery or small solar cell. As a result, it is now possible to make a web server that fits on a fingertip for $1.

The microchips will receive data from a near-infinite range of sensors, minute devices capable of monitoring anything that can possibly be measured and recorded, including temperature, power, location, hydro-flow, radiation, atmospheric pressure, acceleration, altitude, sound and video. They will activate miniature switches, valves, servos, turbines and engines – and speak to the world using high-speed wireless data networks. They will communicate not only with the broader internet but with each other, generating unfathomable amounts of data. The result will be an always-on “global, immersive, invisible, ambient networked computing environment”, a mere prelude to the tidal wave of change coming next.

In the future all objects may be smart

The broad thrust sounds rosy. Because chips and sensors will be embedded in everyday objects, we will have much better information and convenience in our lives. Because your alarm clock is connected to the internet, it will be able to access and read your calendar. It will know where and when your first appointment of the day is and be able to cross-reference that information against the latest traffic conditions. Light traffic, you get to sleep an extra 10 minutes; heavy traffic, and you might find yourself waking up earlier than you had hoped.

When your alarm does go off, it will gently raise the lights in the house, perhaps turn up the heat or run your bath. The electronic pet door will open to let Fido into the backyard for his morning visit, and the coffeemaker will begin brewing your coffee. You won’t have to ask your kids if they’ve brushed their teeth; the chip in their toothbrush will send a message to your smartphone letting you know the task is done. As you walk out the door, you won’t have to worry about finding your keys; the beacon sensor on the key chain makes them locatable to within two inches. It will be as if the Jetsons era has finally arrived.

While the hype-o-meter on the IoT has been blinking red for some time, everything described above is already technically feasible. To be certain, there will be obstacles, in particular in relation to a lack of common technical standards, but a wide variety of companies, consortia and government agencies are hard at work to make the IoT a reality. The result will be our transition from connectivity to hyper-connectivity, and like all things Moore’s law related, it will be here sooner than we realise.

The IoT means that all physical objects in the future will be assigned an IP address and be transformed into information technologies. As a result, your lamp, cat or pot plant will be part of an IT network. Things that were previously silent will now have a voice, and every object will be able to tell its own story and history. The refrigerator will know exactly when it was manufactured, the names of the people who built it, what factory it came from, and the day it left the assembly line, arrived at the retailer, and joined your home network. It will keep track of every time its door has been opened and which one of your kids forgot to close it. When the refrigerator’s motor begins to fail, it can signal for help, and when it finally dies, it will tell us how to disassemble its parts and best recycle them. Buildings will know every person who has ever worked there, and streetlights every car that has ever driven by.

All of these objects will communicate with each other and have access to the massive processing and storage power of the cloud, further enhanced by additional mobile and social networks. In the future all objects may become smart, in fact much smarter than they are today, and as these devices become networked, they will develop their own limited form of sentience, resulting in a world in which people, data and things come together. As a consequence of the power of embedded computing, we will see billions of smart, connected things joining a global neural network in the cloud.

In this world, the unknowable suddenly becomes knowable. For example, groceries will be tracked from field to table, and restaurants will keep tabs on every plate, what’s on it, who ate from it, and how quickly the waiters are moving it from kitchen to customer. As a result, when the next E coli outbreak occurs, we won’t have to close 500 eateries and wonder if it was the chicken or beef that caused the problem. We will know exactly which restaurant, supplier and diner to contact to quickly resolve the problem. The IoT and its billions of sensors will create an ambient intelligence network that thinks, senses and feels and contributes profoundly to the knowable universe.

Things that used to make sense suddenly won’t, such as smoke detectors. Why do most smoke detectors do nothing more than make loud beeps if your life is in mortal danger because of fire? In the future, they will flash your bedroom lights to wake you, turn on your home stereo, play an MP3 audio file that loudly warns, “Fire, fire, fire.” They will also contact the fire department, call your neighbours (in case you are unconscious and in need of help), and automatically shut off flow to the gas appliances in the house.

The byproduct of the IoT will be a living, breathing, global information grid, and technology will come alive in ways we’ve never seen before, except in science fiction movies. As we venture down the path toward ubiquitous computing, the results and implications of the phenomenon are likely to be mind-blowing. Just as the introduction of electricity was astonishing in its day, it eventually faded into the background, becoming an imperceptible, omnipresent medium in constant interaction with the physical world. Before we let this happen, and for all the promise of the IoT, we must ask critically important questions about this brave new world. For just as electricity can shock and kill, so too can billions of connected things networked online.

One of the central premises of the IoT is that everyday objects will have the capacity to speak to us and to each other. This relies on a series of competing communications technologies and protocols, many of which are eminently hackable. Take radio-frequency identification (RFID) technology, considered by many the gateway to the IoT. Even if you are unfamiliar with the name, chances are you have already encountered it in your life, whether it’s the security ID card you use to swipe your way into your office, your “wave and pay” credit card, the key to your hotel room, your Oyster card.

Even if you don’t use an RFID card for work, there’s a good chance you either have it or will soon have it embedded in the credit card sitting in your wallet. Hackers have been able to break into these as well, using cheap RFID readers available on eBay for just $50, tools that allow an attacker to wirelessly capture a target’s credit card number, expiration date and security code. Welcome to pocket picking 2.0.

More productive and more prison-like

A much rarer breed of hacker targets the physical elements that make up a computer system, including the microchips, electronics, controllers, memory, circuits, components, transistors and sensors – core elements of the internet of things. These hackers attack a device’s firmware, the set of computer instructions present on every electronic device we encounter, including TVs, mobile phones, game consoles, digital cameras, network routers, alarm systems, CCTVs, USB drives, traffic lights, gas station pumps and smart home management systems. Before we add billions of hackable things and communicate with hackable data transmission protocols, important questions must be asked about the risks for the future of security, crime, terrorism, warfare and privacy.

In the same way our every move online can be tracked, recorded, sold and monetised today, so too will that be possible in the near future in the physical world. Real space will become just like cyberspace. With the widespread adoption of more networked devices, what people do in their homes, cars, workplaces, schools and communities will be subjected to increased monitoring and analysis by the corporations making these devices. Of course these data will be resold to advertisers, data brokers and governments, providing an unprecedented view into our daily lives. Unfortunately, just like our social, mobile, locational and financial information, our IoT data will leak, providing further profound capabilities to stalkers and other miscreants interested in persistently tracking us. While it would certainly be possible to establish regulations and build privacy protocols to protect consumers from such activities, the greater likelihood is that every IoT-enabled device, whether an iron, vacuum, refrigerator, thermostat or lightbulb, will come with terms of service that grant manufacturers access to all your data. More troublingly, while it may be theoretically possible to log off in cyberspace, in your well-connected smart home there will be no “opt-out” provision.

We may find ourselves interacting with thousands of little objects around us on a daily basis, each collecting seemingly innocuous bits of data 24/7, information these things will report to the cloud, where it will be processed, correlated, and reviewed. Your smart watch will reveal your lack of exercise to your health insurance company, your car will tell your insurer of your frequent speeding, and your dustbin will tell your local council that you are not following local recycling regulations. This is the “internet of stool pigeons”, and though it may sound far-fetched, it’s already happening. Progressive, one of the largest US auto insurance companies, offers discounted personalised rates based on your driving habits. “The better you drive, the more you can save,” according to its advertising. All drivers need to do to receive the lower pricing is agree to the installation of Progressive’s Snapshot black-box technology in their cars and to having their braking, acceleration and mileage persistently tracked.

The IoT will also provide vast new options for advertisers to reach out and touch you on every one of your new smart connected devices. Every time you go to your refrigerator to get ice, you will be presented with ads for products based on the food your refrigerator knows you’re most likely to buy. Screens too will be ubiquitous, and marketers are already planning for the bounty of advertising opportunities. In late 2013, Google sent a letter to the Securities and Exchange Commission noting, “we and other companies could [soon] be serving ads and other content on refrigerators, car dashboards, thermostats, glasses and watches, to name just a few possibilities.”

Knowing that Google can already read your Gmail, record your every web search, and track your physical location on your Android mobile phone, what new powerful insights into your personal life will the company develop when its entertainment system is in your car, its thermostat regulates the temperature in your home, and its smart watch monitors your physical activity?

Not only will RFID and other IoT communications technologies track inanimate objects, they will be used for tracking living things as well. The British government has considered implanting RFID chips directly under the skin of prisoners, as is common practice with dogs. School officials across the US have begun embedding RFID chips in student identity cards, which pupils are required to wear at all times. In Contra Costa County, California, preschoolers are now required to wear basketball-style jerseys with electronic tracking devices built in that allow teachers and administrators to know exactly where each student is. According to school district officials, the RFID system saves “3,000 labour hours a year in tracking and processing students”.

Meanwhile, the ability to track employees, how much time they take for lunch, the length of their toilet breaks and the number of widgets they produce will become easy. Moreover, even things such as words typed per minute, eye movements, total calls answered, respiration, time away from desk and attention to detail will be recorded. The result will be a modern workplace that is simultaneously more productive and more prison-like.

At the scene of a suspected crime, police will be able to interrogate the refrigerator and ask the equivalent of, “Hey, buddy, did you see anything?” Child social workers will know there haven’t been any milk or nappies in the home, and the only thing stored in the fridge has been beer for the past week. The IoT also opens up the world for “perfect enforcement”. When sensors are everywhere and all data is tracked and recorded, it becomes more likely that you will receive a moving violation for going 26 miles per hour in a 25-mile-per-hour zone and get a parking ticket for being 17 seconds over on your meter.

The former CIA director David Petraeus has noted that the IoT will be “transformational for clandestine tradecraft”. While the old model of corporate and government espionage might have involved hiding a bug under the table, tomorrow the very same information might be obtained by intercepting in real time the data sent from your Wi-Fi lightbulb to the lighting app on your smart phone. Thus the devices you thought were working for you may in fact be on somebody else’s payroll, particularly that of Crime, Inc.

A network of unintended consequences

For all the untold benefits of the IoT, its potential downsides are colossal. Adding 50bn new objects to the global information grid by 2020 means that each of these devices, for good or ill, will be able to potentially interact with the other 50bn connected objects on earth. The result will be 2.5 sextillion potential networked object-to-object interactions – a network so vast and complex it can scarcely be understood or modelled. The IoT will be a global network of unintended consequences and black swan events, ones that will do things nobody ever planned. In this world, it is impossible to know the consequences of connecting your home’s networked blender to the same information grid as an ambulance in Tokyo, a bridge in Sydney, or a Detroit auto manufacturer’s production line.
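The 2.5 sextillion figure follows directly from squaring the device count. A quick sketch (note that n × n counts every ordered object-to-object pairing, which is how the article arrives at its total):

```python
# 50bn connected objects, each potentially interacting with every other.
n = 50_000_000_000

interactions = n * n            # ordered object-to-object pairings
print(f"{interactions:.1e}")    # 2.5e+21 -- i.e. 2.5 sextillion
```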

The vast levels of cyber crime we currently face make it abundantly clear we cannot even adequately protect the standard desktops and laptops we presently have online, let alone the hundreds of millions of mobile phones and tablets we are adding annually. In what vision of the future, then, is it conceivable that we will be able to protect the next 50bn things, from pets to pacemakers to self-driving cars? The obvious reality is that we cannot.

Our technological threat surface area is growing exponentially and we have no idea how to defend it effectively. The internet of things will become nothing more than the Internet of things to be hacked.

Read the entire article here.

Image courtesy of Google Search.

Send to Kindle

Bit Rot is In Your Future

1978_AMC_Matador_sedan_red_NC_detail_of_factory_AM-FM-stereo-8-track_unit

If you are over the age of 55 or 60, you may well have some 8-track tapes still stashed in the trunk (or boot, if you’re a Brit) of your car. If you’re over 50 it’s possible that you may have some old floppy disks or regular music cassettes stored in a bottom drawer. If you’re over 40 you’re likely to have boxes of old VHS tapes and crate-loads of CDs (or even laser disks) under your bed. So, if you fall into one of these categories, most of the content stored on any of these media types is now very likely beyond your reach — your car (hopefully) does not have an 8-track player; you dumped your Sony Walkman for an iPod; and your CDs have been rendered obsolete by music that descends to your ears from the “cloud”.

[Of course, 45s and 33s still seem to have a peculiar and lasting appeal — and thanks to the analog characteristics of vinyl the music encoded in the spiral grooves is still relatively easily accessible. But this will be the subject of another post].

So our technological progress, paradoxically, comes at a cost. As our technologies become simpler to use and content becomes easier to create and disseminate, that content risks turning to “bit rot” for future generations. That is, our digital present will become unreadable by the more advanced technologies of the future. One solution would be to hold on to your 8-track player. But Vint Cerf, currently a VP at Google and one of the founding fathers of the internet, has other ideas.

From the Guardian:

Piles of digitised material – from blogs, tweets, pictures and videos, to official documents such as court rulings and emails – may be lost forever because the programs needed to view them will become defunct, Google’s vice-president has warned.

Humanity’s first steps into the digital world could be lost to future historians, Vint Cerf told the American Association for the Advancement of Science’s annual meeting in San Jose, California, warning that we faced a “forgotten generation, or even a forgotten century” through what he called “bit rot”, where old computer files become useless junk.

Cerf called for the development of “digital vellum” to preserve old software and hardware so that out-of-date files could be recovered no matter how old they are.

“When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history,” he said.

“We don’t want our digital lives to fade away. If we want to preserve them, we need to make sure that the digital objects we create today can still be rendered far into the future,” he added.

The warning highlights an irony at the heart of modern technology, where music, photos, letters and other documents are digitised in the hope of ensuring their long-term survival. But while researchers are making progress in storing digital files for centuries, the programs and hardware needed to make sense of the files are continually falling out of use.

“We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised,” Cerf told the Guardian. “If there are photos you really care about, print them out.”

Ancient civilisations suffered no such problems, because histories written in cuneiform on baked clay tablets, or rolled papyrus scrolls, needed only eyes to read them. To study today’s culture, future scholars would be faced with PDFs, Word documents, and hundreds of other file types that can only be interpreted with dedicated software and sometimes hardware too.

The problem is already here. In the 1980s, it was routine to save documents on floppy disks, upload Jet Set Willy from cassette to the ZX Spectrum, slaughter aliens with a Quickfire II joystick, and have Atari games cartridges in the attic. Even if the disks and cassettes are in good condition, the equipment needed to run them is mostly found only in museums.

The rise of gaming has its own place in the story of digital culture, but Cerf warns that important political and historical documents will also be lost to bit rot. In 2005, American historian Doris Kearns Goodwin wrote Team of Rivals: the Political Genius of Abraham Lincoln, describing how Lincoln hired those who ran against him for presidency. She went to libraries around the US, found the physical letters of the people involved, and reconstructed their conversations. “In today’s world those letters would be emails and the chances of finding them will be vanishingly small 100 years from now,” said Cerf.

He concedes that historians will take steps to preserve material considered important by today’s standards, but argues that the significance of documents and correspondence is often not fully appreciated until hundreds of years later. Historians have learned how the greatest mathematician of antiquity considered the concept of infinity and anticipated calculus in the third century BC after the Archimedes palimpsest was found hidden under the words of a Byzantine prayer book from the 13th century. “We’ve been surprised by what we’ve learned from objects that have been preserved purely by happenstance that give us insights into an earlier civilisation,” he said.

Researchers at Carnegie Mellon University in Pittsburgh have made headway towards a solution to bit rot, or at least a partial one. There, Mahadev Satyanarayanan takes digital snapshots of computer hard drives while they run different software programs. These can then be uploaded to a computer that mimics the one the software ran on. The result is a computer that can read otherwise defunct files. Under a project called Olive, the researchers have archived Mystery House, the original 1982 graphic adventure game for the Apple II, an early version of WordPerfect, and Doom, the original 1993 first person shooter game.

Inventing new technology is only half the battle, though. More difficult still could be navigating the legal permissions to copy and store software before it dies. When IT companies go out of business, or stop supporting their products, they may sell the rights on, making it a nightmarish task to get approval.

Read the entire article here.

Image: 1978 AMC Matador sedan red NC detail of factory AM-FM-stereo-8-track unit. Courtesy of CZmarlin / Wikipedia.

Send to Kindle

Yawn. Selfies Are So, Like, Yesterday!

DOOB 3D-image

If you know a dedicated and impassioned narcissist, it’s time to convince them to ditch the selfie. Oh, and please ensure they discard the selfie-stick while they’re at it. You see, the selfie — that ubiquitous expression of the me-me-generation — is now rather passé.

So, where does a self-absorbed individual turn next? Enter the 3D printed version of yourself, courtesy of a German company called DOOB 3D, with its Dooblicator scanner and high-res 3D printer. Connoisseurs of self can now — for a mere $395 — replicate themselves with a 10-inch facsimile. If you’re a cheapskate, you can get a Playmobil-sized replica for $95, while a 14-inch selfie-doll will fetch you $695. Love it!

To learn more about DOOB 3D visit their website.

From Wired:

We love looking at images of ourselves. First there were Olan Mills portraits. Nowadays there are selfies and selfie-stick selfies and drone selfies.

If you’re wondering what comes next, Dusseldorf-based DOOB 3D thinks it has the answer—and contrary to what the company’s name suggests, it doesn’t involve getting high and watching Avatar.

DOOB 3D can produce a detailed, four-inch figurine of your body—yes, a 3-D selfie. Making one of these figurines requires a massive pile of hardware and software: 54 DSLRs, 54 lenses, a complex 3-D modeling pipeline, and an $80,000 full-color 3-D printer, not to mention a room-size scanning booth.

Factor that all in and the $95 asking price for a replica of yourself that’s roughly the size of most classic Star Wars action figures doesn’t seem so bad. A Barbie-esque 10-inch model goes for $395, while a 14-inch figure that’s more along the lines of an old-school G.I. Joe doll costs $695.

The company has eight 3-D scanning booths (called “Doob-licators”) scattered in strategic locations throughout the world. There’s one in Dusseldorf, one in Tokyo, one at Santa Monica Place in Los Angeles, and one in New York City’s Chelsea Market. The company also says they’re set to add more U.S. locations soon, although details aren’t public yet.

In New York, the pop-up DOOB shop in Chelsea Market was a pretty big hit. According to Michael Anderson, CEO of DOOB 3D USA, the Doob-licator saw about 500 customers over the winter holiday season. About 10 percent of the booth’s customers got their pets Doob-licated.

“At first, (people got DOOBs made) mostly on a whim,” says Anderson of the holiday-season spike. Most people just walk up and stand in line, but you can also book an appointment in advance.

“Now that awareness has been built,” Anderson says, “there has been a shift where at least two thirds of our customers have planned ahead to get a DOOB.”

Each Doob-licator is outfitted with 54 Canon EOS Rebel T5i DSLRs, arranged in nine columns of six cameras each. You can make an appointment or just wait in line: A customer steps in, strikes a pose, and the Doob-licator operator fires all the cameras at once. That creates a full-body scan in a fraction of a second. The next step involves feeding all those 18-megapixel images through the company’s proprietary software, which creates a 3-D model of the subject.

The printing process requires more patience. The company operates three high-end 3-D printing centers to support its scanning operations: One in Germany, one in Tokyo, and one in Brooklyn. They all use 3D Systems’ ProJet 660Pro, a high-resolution (600 x 540 dpi) 3-D printer that creates full-color objects on the fly. The printer uses a resin polymer material, and the full range of CMYK color is added to each powder layer as it’s printed.

With a top printing speed of 1.1 inches per hour and a process that sometimes involves thousands of layers of powder, the process takes a few hours for the smallest-size DOOB and half a day or more for the larger ones. And depending on how many DOOBs are lined up in the queue, your mini statue takes between two and eight weeks to arrive in the mail.
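Those print times are easy to sanity-check. A back-of-the-envelope sketch (the 1.1 inches-per-hour top speed is the figure quoted above; everything else is simple division):

```python
# Top printing speed quoted for the ProJet 660Pro, in inches per hour.
PRINT_SPEED_IN_PER_HR = 1.1

# The three DOOB figurine heights, in inches.
for height_in in (4, 10, 14):
    hours = height_in / PRINT_SPEED_IN_PER_HR
    print(f"{height_in}-inch figure: roughly {hours:.1f} hours of printing")
```

That works out to about 3.6, 9.1, and 12.7 hours respectively, which squares with "a few hours for the smallest-size DOOB and half a day or more for the larger ones."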

Once you step inside that Doob-licator, it’s like international waters: You are largely unbound by laws and restrictions. Do you want to get naked? Go right ahead. Along with your nude statue, the company will also send you a 3-D PDF and keep your data in its database in case you want additional copies made (you can request that your data be deleted if that sounds too creepy).

Read the entire article here.

Image courtesy of DOOB 3D.

The Thugs of Cultural Disruption

What becomes of our human culture as Amazon crushes booksellers and publishers, Twitter dumbs down journalism, knowledge is replaced by keyword search, and the internet becomes a popularity contest?

Leon Wieseltier, contributing editor at The Atlantic, has some thoughts.

From NYT:

Amid the bacchanal of disruption, let us pause to honor the disrupted. The streets of American cities are haunted by the ghosts of bookstores and record stores, which have been destroyed by the greatest thugs in the history of the culture industry. Writers hover between a decent poverty and an indecent one; they are expected to render the fruits of their labors for little and even for nothing, and all the miracles of electronic dissemination somehow do not suffice for compensation, either of the fiscal or the spiritual kind. Everybody talks frantically about media, a second-order subject if ever there was one, as content disappears into “content.” What does the understanding of media contribute to the understanding of life? Journalistic institutions slowly transform themselves into silent sweatshops in which words cannot wait for thoughts, and first responses are promoted into best responses, and patience is a professional liability. As the frequency of expression grows, the force of expression diminishes: Digital expectations of alacrity and terseness confer the highest prestige upon the twittering cacophony of one-liners and promotional announcements. It was always the case that all things must pass, but this is ridiculous.

Meanwhile the discussion of culture is being steadily absorbed into the discussion of business. There are “metrics” for phenomena that cannot be metrically measured. Numerical values are assigned to things that cannot be captured by numbers. Economic concepts go rampaging through noneconomic realms: Economists are our experts on happiness! Where wisdom once was, quantification will now be. Quantification is the most overwhelming influence upon the contemporary American understanding of, well, everything. It is enabled by the idolatry of data, which has itself been enabled by the almost unimaginable data-generating capabilities of the new technology. The distinction between knowledge and information is a thing of the past, and there is no greater disgrace than to be a thing of the past. Beyond its impact upon culture, the new technology penetrates even deeper levels of identity and experience, to cognition and to consciousness. Such transformations embolden certain high priests in the church of tech to espouse the doctrine of “transhumanism” and to suggest, without any recollection of the bankruptcy of utopia, without any consideration of the cost to human dignity, that our computational ability will carry us magnificently beyond our humanity and “allow us to transcend these limitations of our biological bodies and brains. . . . There will be no distinction, post-Singularity, between human and machine.” (The author of that updated mechanistic nonsense is a director of engineering at Google.)

And even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science. The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university, where the humanities are disparaged as soft and impractical and insufficiently new. The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy. So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.

Read the entire essay here.

Those 25,000 Unread Emails

Google-search-email

It may not be you. You may not be the person who has tens of thousands of unread emails scattered across various email accounts. However, you know someone just like this — buried in a virtual avalanche of unopened text, unable to extricate herself (or himself) and with no pragmatic plan to tackle the digital morass.

Washington Post writer Brigid Schulte has some ideas to help your friend (or you, of course — your secret is safe with us).

From the Washington Post:

I was drowning in e-mail. Overwhelmed. Overloaded. Spending hours a day, it seemed, roiling in an unending onslaught of info turds and falling further and further behind. The day I returned from a two-week break, I had 23,768 messages in my inbox. And 14,460 of them were unread.

I had to do something. I kept missing stuff. Forgetting stuff. Apologizing. And getting miffed and increasingly angry e-mails from friends and others who wondered why I was ignoring them. It wasn’t just vacation that put me so far behind. I’d been behind for more than a year. Vacation only made it worse. Every time I thought of my inbox, I’d start to hyperventilate.

I’d tried tackling it before: One night a few months ago, I was determined to stay at my desk until I’d powered through all the unread e-mails. At dawn, I was still powering through and nowhere near the end. And before long, the inbox was just as crammed as it had been before I lost that entire night’s sleep.

On the advice of a friend, I’d even hired a Virtual Assistant to help me with the backlog. But I had no idea how to use one. And though I’d read about people declaring e-mail bankruptcy when their inbox was overflowing — deleting everything and starting over from scratch — I was positive there were gems somewhere in that junk, and I couldn’t bear to lose them.

I knew I wasn’t alone. I’d get automatic response messages saying someone was on vacation and the only way they could relax was by telling me they’d never, ever look at my e-mail, so please send it again when they returned. My friend, Georgetown law professor Rosa Brooks, often sends out this auto response: “My inbox looks like Pompeii, post-volcano. Will respond as soon as I have time to excavate.” And another friend, whenever an e-mail is longer than one or two lines, sends a short note, “This sounds like a conversation,” and she won’t respond unless you call her.

E-mail made the late writer Nora Ephron’s list of the 22 things she won’t miss in life. Twice. In 2013, more than 182 billion e-mails were sent every day, no doubt clogging up millions of inboxes around the globe.

Bordering on despair, I sought help from four productivity gurus. And, following their advice, in two weeks of obsession-bordering-on-compulsion, my inbox was down to zero.

Here’s how.

*CREATE A SYSTEM. Julie Gray, a time coach who helps people dig out of e-mail overload all the time, said the first thing I had to change was my mind.

“This is such a pervasive problem. People think, ‘What am I doing wrong? They think they don’t have discipline or focus or that there’s some huge character flaw and they’re beating themselves up all the time. Which only makes it worse,” she said.

“So I first start changing their e-mail mindset from ‘This is an example of my failure,’ to ‘This just means I haven’t found the right system for me yet.’ It’s really all about finding your own path through the craziness.”

Do not spend another minute on e-mail, she admonished me, until you’ve begun to figure out a system. Otherwise, she said, I’d never dig out.

So we talked systems. It soon became clear that I’d created a really great e-mail system for when I was writing my book — ironically enough, on being overwhelmed — spending most of my time not at all overwhelmed in yoga pants in my home office working on my iMac. I was a follower of Randy Pausch who wrote, in “The Last Lecture,” to keep your e-mail inbox down to one page and religiously file everything once you’ve handled it. And I had for a couple years.

But now that I was traveling around the country to talk about the book, and back at work at The Washington Post, using my laptop, iPhone and iPad, that system was completely broken. I had six different e-mail accounts. And my main Verizon e-mail that I’d used for years and the Mac Mail inbox with meticulous file folders that I loved on my iMac didn’t sync across any of them.

Gray asked: “If everything just blew up today, and you had to start over, how would you set up your system?”

I wanted one inbox. One e-mail account. And I wanted the same inbox on all my devices. If I deleted an e-mail on my laptop, I wanted it deleted on my iMac. If I put an e-mail into a folder on my iMac, I wanted that same folder on my laptop.

So I decided to use Gmail, which does sync, as my main account. I set up an auto responder on my Verizon e-mail saying I was no longer using it and directing people to my Gmail account. I updated all my accounts to send to Gmail. And I spent hours on the phone with Apple one Sunday (thank you, Chazz,) to get my Gmail account set up in my beloved Mac mail inbox that would sync. Then I transferred old files and created new ones on Gmail. I had to keep my Washington Post account separate, but that wasn’t the real problem.

All systems go.

Read the entire article here.

Image courtesy of Google Search.

 

The Enigma of Privacy

Privacy is still a valued and valuable right; it should not be a mere privilege in a democratic society. But in our current age privacy has become an increasingly threatened species. We are surrounded by social networks that share and mine our behaviors, and we are assaulted by snoopers and spooks from local and national governments.

From the Observer:

We have come to the end of privacy; our private lives, as our grandparents would have recognised them, have been winnowed away to the realm of the shameful and secret. To quote ex-tabloid hack Paul McMullan, “privacy is for paedos”. Insidiously, through small concessions that only mounted up over time, we have signed away rights and privileges that other generations fought for, undermining the very cornerstones of our personalities in the process. While outposts of civilisation fight pyrrhic battles, unplugging themselves from the web – “going dark” – the rest of us have come to accept that the majority of our social, financial and even sexual interactions take place over the internet and that someone, somewhere, whether state, press or corporation, is watching.

The past few years have brought an avalanche of news about the extent to which our communications are being monitored: WikiLeaks, the phone-hacking scandal, the Snowden files. Uproar greeted revelations about Facebook’s “emotional contagion” experiment (where it tweaked mathematical formulae driving the news feeds of 700,000 of its members in order to prompt different emotional responses). Cesar A Hidalgo of the Massachusetts Institute of Technology described the Facebook news feed as “like a sausage… Everyone eats it, even though nobody knows how it is made”.

Sitting behind the outrage was a particularly modern form of disquiet – the knowledge that we are being manipulated, surveyed, rendered and that the intelligence behind this is artificial as well as human. Everything we do on the web, from our social media interactions to our shopping on Amazon, to our Netflix selections, is driven by complex mathematical formulae that are invisible and arcane.

Most recently, campaigners’ anger has turned upon the so-called Drip (Data Retention and Investigatory Powers) bill in the UK, which will see internet and telephone companies forced to retain and store their customers’ communications (and provide access to this data to police, government and up to 600 public bodies). Every week, it seems, brings a new furore over corporations – Apple, Google, Facebook – sidling into the private sphere. Often, it’s unclear whether the companies act brazenly because our governments play so fast and loose with their citizens’ privacy (“If you have nothing to hide, you’ve nothing to fear,” William Hague famously intoned); or if governments see corporations feasting upon the private lives of their users and have taken this as a licence to snoop, pry, survey.

We, the public, have looked on, at first horrified, then cynical, then bored by the revelations, by the well-meaning but seemingly useless protests. But what is the personal and psychological impact of this loss of privacy? What legal protection is afforded to those wishing to defend themselves against intrusion? Is it too late to stem the tide now that scenes from science fiction have become part of the fabric of our everyday world?

Novels have long been the province of the great What If?, allowing us to see the ramifications from present events extending into the murky future. As long ago as 1921, Yevgeny Zamyatin imagined One State, the transparent society of his dystopian novel, We. For Orwell, Huxley, Bradbury, Atwood and many others, the loss of privacy was one of the establishing nightmares of the totalitarian future. Dave Eggers’s 2013 novel The Circle paints a portrait of an America without privacy, where a vast, internet-based, multimedia empire surveys and controls the lives of its people, relying on strict adherence to its motto: “Secrets are lies, sharing is caring, and privacy is theft.” We watch as the heroine, Mae, disintegrates under the pressure of scrutiny, finally becoming one of the faceless, obedient hordes. A contemporary (and because of this, even more chilling) account of life lived in the glare of the privacy-free internet is Nikesh Shukla’s Meatspace, which charts the existence of a lonely writer whose only escape is into the shallows of the web. “The first and last thing I do every day,” the book begins, “is see what strangers are saying about me.”

Our age has seen an almost complete conflation of the previously separate spheres of the private and the secret. A taint of shame has crept over from the secret into the private so that anything that is kept from the public gaze is perceived as suspect. This, I think, is why defecation is so often used as an example of the private sphere. Sex and shitting were the only actions that the authorities in Zamyatin’s One State permitted to take place in private, and these remain the battlegrounds of the privacy debate almost a century later. A rather prim leaked memo from a GCHQ operative monitoring Yahoo webcams notes that “a surprising number of people use webcam conversations to show intimate parts of their body to the other person”.

It is to the bathroom that Max Mosley turns when we speak about his own campaign for privacy. “The need for a private life is something that is completely subjective,” he tells me. “You either would mind somebody publishing a film of you doing your ablutions in the morning or you wouldn’t. Personally I would and I think most people would.” In 2008, Mosley’s “sick Nazi orgy”, as the News of the World glossed it, featured in photographs published first in the pages of the tabloid and then across the internet. Mosley’s defence argued, successfully, that the romp involved nothing more than a “standard S&M prison scenario” and the former president of the FIA won £60,000 damages under Article 8 of the European Convention on Human Rights. Now he has rounded on Google and the continued presence of both photographs and allegations on websites accessed via the company’s search engine. If you type “Max Mosley” into Google, the eager autocomplete presents you with “video,” “case”, “scandal” and “with prostitutes”. Half-way down the first page of the search we find a link to a professional-looking YouTube video montage of the NotW story, with no acknowledgment that the claims were later disproved. I watch it several times. I feel a bit grubby.

“The moment the Nazi element of the case fell apart,” Mosley tells me, “which it did immediately, because it was a lie, any claim for public interest also fell apart.”

Here we have a clear example of the blurred lines between secrecy and privacy. Mosley believed that what he chose to do in his private life, even if it included whips and nipple-clamps, should remain just that – private. The News of the World, on the other hand, thought it had uncovered a shameful secret that, given Mosley’s professional position, justified publication. There is a momentary tremor in Mosley’s otherwise fluid delivery as he speaks about the sense of invasion. “Your privacy or your private life belongs to you. Some of it you may choose to make available, some of it should be made available, because it’s in the public interest to make it known. The rest should be yours alone. And if anyone takes it from you, that’s theft and it’s the same as the theft of property.”

Mosley has scored some recent successes, notably in continental Europe, where he has found a culture more suspicious of Google’s sweeping powers than in Britain or, particularly, the US. Courts in France and then, interestingly, Germany, ordered Google to remove pictures of the orgy permanently, with far-reaching consequences for the company. Google is appealing against the rulings, seeing it as absurd that “providers are required to monitor even the smallest components of content they transmit or store for their users”. But Mosley last week extended his action to the UK, filing a claim in the high court in London.

Mosley’s willingness to continue fighting, even when he knows that it means keeping alive the image of his white, septuagenarian buttocks in the minds (if not on the computers) of the public, seems impressively principled. He has fallen victim to what is known as the Streisand Effect, where his very attempt to hide information about himself has led to its proliferation (in 2003 Barbra Streisand tried to stop people taking pictures of her Malibu home, ensuring photos were posted far and wide). Despite this, he continues to battle – both in court, in the media and by directly confronting the websites that continue to display the pictures. It is as if he is using that initial stab of shame, turning it against those who sought to humiliate him. It is noticeable that, having been accused of fetishising one dark period of German history, he uses another to attack Google. “I think, because of the Stasi,” he says, “the Germans can understand that there isn’t a huge difference between the state watching everything you do and Google watching everything you do. Except that, in most European countries, the state tends to be an elected body, whereas Google isn’t. There’s not a lot of difference between the actions of the government of East Germany and the actions of Google.”

All this brings us to some fundamental questions about the role of search engines. Is Google the de facto librarian of the internet, given that it is estimated to handle 40% of all traffic? Is it something more than a librarian, since its algorithms carefully (and with increasing use of your personal data) select the sites it wants you to view? To what extent can Google be held responsible for the content it puts before us?

Read the entire article here.

Nuclear Codes and Floppy Disks

Floppy_disks

Sometimes a good case can be made for remaining a technological Luddite; sometimes eschewing the latest-and-greatest technical gizmo may actually work for you.

 

Take the case of the United States' nuclear deterrent. A recent report on CBS 60 Minutes showed how part of the computer system responsible for launch control of US intercontinental ballistic missiles (ICBMs) still uses antiquated 8-inch floppy disks. This part of the national defense is so old and arcane that it's actually more secure than most contemporary computing systems and communications infrastructure. So, the next time your internet-connected, cloud-based tablet or laptop gets hacked, consider reverting to a pre-1980s device.

From ars technica:

In a report that aired on April 27, CBS 60 Minutes correspondent Leslie Stahl expressed surprise that part of the computer system responsible for controlling the launch of the Minuteman III intercontinental ballistic missiles relied on data loaded from 8-inch floppy disks. Most of the young officers stationed at the launch control center had never seen a floppy disk before they became “missileers.”

An Air Force officer showed Stahl one of the disks, marked “Top Secret,” which is used with the computer that handles what was once called the Strategic Air Command Digital Network (SACDIN), a communication system that delivers launch commands to US missile forces. Beyond the floppies, a majority of the systems in the Wyoming US Air Force launch control center (LCC) Stahl visited dated back to the 1960s and 1970s, offering the Air Force’s missile forces an added level of cyber security, ICBM forces commander Major General Jack Weinstein told 60 Minutes.

“A few years ago we did a complete analysis of our entire network,” Weinstein said. “Cyber engineers found out that the system is extremely safe and extremely secure in the way it’s developed.”

However, not all of the Minuteman launch control centers’ aging hardware is an advantage. The analog phone systems, for example, often make it difficult for the missileers to communicate with each other or with their base. The Air Force commissioned studies on updating the ground-based missile force last year, and it’s preparing to spend $19 million this year on updates to the launch control centers. The military has also requested $600 million next year for further improvements.

Read the entire article here.

Image: Various floppy disks. Courtesy: George Chernilevsky, 2009 / Wikipedia.

Mesh Networks: Coming to a Phone Near You

firechat-screenshot

Soon you’ll be able to text and chat online without the need of a cellular network or the Internet. There is a catch though: you’ll need yet another chat-app for your smartphone and you will need to be within a 100 or so yards of your chatting friend. But, this is just the beginning of so-called “mesh networks” that can be formed through peer-to-peer device connections avoiding the need for cellular communications. As mobile devices continue to proliferate such local, device-to-device connections could become more practical.

From Technology Review:

Mobile app stores are stuffed with messaging apps from WhatsApp to Tango and their many imitators. But FireChat, released last week for the iPhone, stands out. It’s the only one that can be used without cell-phone reception.

FireChat makes use of a feature Apple introduced in the latest version of its iOS mobile software, iOS7, called multipeer connectivity. This feature allows phones to connect to one another directly using Bluetooth or Wi-Fi as an alternative to the Internet. If you’re using FireChat, its “nearby” chat room lets you exchange messages with other users within 100 feet without sending data via your cellular provider.

Micha Benoliel, CEO and cofounder of startup Open Garden, which made FireChat, says the app shows how smartphones can be set free from cellular networks. He hopes to enable many more Internet-optional apps with the upcoming release of software tools that will help developers build FireChat-style apps for iPhone, or for Android, Mac, and Windows devices. “This approach is very interesting for multiplayer gaming and all kinds of communication apps,” says Benoliel.

Anthony DiPasquale, a developer with consultancy Thoughtbot, says FireChat is the only app he’s aware of that’s been built to make use of multipeer connectivity, perhaps because the feature remains unfamiliar to most Apple developers. “I hope more people start to use it soon,” he says. “It’s an awesome framework with a lot of potential. There is probably a great use for multipeer connectivity in every situation where there are people grouped together wanting to share some sort of information.” DiPasquale has dabbled in using multipeer connectivity himself, creating an experimental app that streams music from one device to several others nearby.

The new feature of iOS7 currently only supports data moving directly from one device to another, and from one device to several others. However, Open Garden’s forthcoming software will extend the feature so that data can hop between two iPhones out of range of one another via intermediary devices. That approach, known as mesh networking, is at the heart of several existing projects to create disaster-proof or community-controlled communications networks (see “Build Your Own Internet with Mobile Mesh Networking”).

Apps built to exploit such device-to-device schemes can offer security and privacy benefits over those that rely on the Internet. For example, messages sent using FireChat to nearby devices don’t pass through any systems operated by either Open Garden or a wireless carrier (although they are broadcast to all FireChat users nearby).

That means the content of a message and metadata could not be harvested from a central communications hub by an attacker or government agency. “This method of communication is immune to firewalls like the ones installed in China and North Korea,” says Mattt Thompson, a software engineer who writes the iOS and Mac development blog NSHipster. Recent revelations about large-scale surveillance of online services and the constant litany of data breaches make this a good time for apps that don’t rely on central servers, he says. “As users become more mindful of the security and privacy implications of technologies they rely on, moving in the direction of local, ad-hoc networking makes a lot of sense.”

However, peer-to-peer and mesh networking apps also come with their own risks, since an eavesdropper could gain access to local traffic just by using a device within range.

Read the entire article here.

Image courtesy of Open Garden.
