Tag Archives: internet

None of Us Is As Smart As All of Us

Bob Taylor died on April 13, 2017, aged 85. An ordinary-sounding name for someone who had a hand in almost every major computing technology of the last 50 years.

Bob Taylor was a firm believer in the power of teamwork; one of his favorite proverbs was, “None of us is as smart as all of us”. And the teams he was part of, directed or funded are the stuff of Silicon Valley legend. To name but a few:

In 1961, as a project manager at NASA, his support of computer scientist Douglas Engelbart led to the invention of the computer mouse.

In 1966, at ARPA (Advanced Research Projects Agency) Taylor convinced his boss to spend half a million dollars on an experimental computer network. This became known as ARPAnet — the precursor to the Internet that we all live on today.

In 1972, now at Xerox PARC (Palo Alto Research Center), he and his teams of computer scientists ushered in the era of the personal computer. Some of the notable inventions at PARC during Taylor’s tenure include: the first true personal computer (Xerox Alto); windowed displays and graphical user interfaces, which led to the Apple Macintosh; Ethernet to connect networks of local computers; a communications protocol that later became TCP/IP, upon which runs most of today’s Internet traffic; hardware and software that led to the laser printer; and word and graphics processing tools that led engineers to develop Photoshop and PageMaker (Adobe Systems) and Bravo, which later became Microsoft Word.

Read more about Bob Taylor’s unique and lasting legacy over at Wired.

Image: Bob Taylor, 2008. Credit: Gardner Campbell / Wikipedia. CC BY-SA 2.0.

Clicks or Truth

The internet is a tremendous resource for learning, entertainment and communication. It’s also a vast, accreting blob of misinformation, lies, rumor, exaggeration and just plain bulls**t.

So, is there any hope for those of us who care about fact and truth over truthiness? Well, the process of combating conspiracies and mythology is likely to remain a difficult and continuous one for the foreseeable future.

But, there are small pockets on the internet where the important daily fight against disinformation thrives. As managing editor Brooke Binkowski at the fact-checking site Snopes.com puts it, “In cases where clickability and virality trump fact, we feel that knowledge is the best antidote to fear.”

From Washington Post:

In a famous xkcd cartoon, “Duty Calls,” a man’s partner beckons him to bed as he sits alone at his computer. “I can’t. This is important,” he demurs, pecking furiously at the keyboard. “What?” comes the reply. His answer: “Someone is wrong on the Internet.”

His nighttime frustration is my day job. I work at Snopes.com, the fact-checking site pledged to running down rumors, debunking cant and calling out liars. Just this past week, for instance, we wrestled with a mysterious lump on Hillary Clinton’s back that turned out to be a mic pack (not the defibrillator some had alleged). It’s a noble and worthwhile calling, but it’s also a Sisyphean one. On the Internet, no matter how many facts you marshal, someone is always wrong.

Every day, the battle against error begins with email. At Snopes, which is supported entirely by advertising, our staff of about a dozen writers and editors plows through some 1,000 messages that have accumulated overnight, which helps us get a feel for what our readers want to know about this morning. Unfortunately, it also means a healthy helping of venom, racism and fury. A Boston-based email specialist on staff helps sort the wheat (real questions we could answer) from the vituperative chaff.

Out in the physical world (where we rarely get to venture during the election season, unless it’s to investigate yet another rumor about Pokémon Go), our interactions with the site’s readers are always positive. But in the virtual world, anonymous communication emboldens the disaffected to treat us as if we were agents of whatever they’re perturbed by today. The writers of these missives, who often send the same message over and over, think they’re on to us: We’re shills for big government, big pharma, the Department of Defense or any number of other prominent, arguably shadowy organizations. You have lost all credibility! they tell us. They never consider that the actual truth is what’s on our website — that we’re completely independent.

Read the entire article here.

The Death of Permissionless Innovation


The internet and its user-friendly interface, the World Wide Web (Web), were founded on the principle of openness. The acronym soup of standards, such as TCP/IP, HTTP and HTML, paved the way for unprecedented connectivity and interoperability. Anyone armed with a computer and a connection, and adhering to these standards, could connect, browse and share data with anyone else.

This is a simplified view of Sir Tim Berners-Lee’s vision for the Web in 1989 — the same year that brought us Seinfeld and The Simpsons. Berners-Lee invented the Web, and his invention fostered an entire global technological and communications revolution over the next quarter century.

However, Berners-Lee did something much more important. Rather than keeping the Web to himself and his colleagues, and turning to Silicon Valley to found and fund the next billion dollar startup, he pursued a path to give the ideas and technologies away. Critically, the open standards of the internet and Web enabled countless others to innovate and to profit.

One of the innovators to reap the greatest rewards from this openness is Facebook’s Mark Zuckerberg. Yet, in the ultimate irony, Facebook has turned the Berners-Lee model of openness and permissionless innovation on its head. Its billion-plus users are members of a private, corporate-controlled walled garden. Innovation, to a large extent, is now limited by the whims of Facebook. Increasingly, open innovation on the internet is stifled and extinguished by constraints manufactured and controlled for Facebook’s own ends. This makes Zuckerberg’s vision of making the world “more open and connected” thoroughly laughable.

From the Guardian:

If there were a Nobel prize for hypocrisy, then its first recipient ought to be Mark Zuckerberg, the Facebook boss. On 23 August, all his 1.7 billion users were greeted by this message: “Celebrating 25 years of connecting people. The web opened up to the world 25 years ago today! We thank Sir Tim Berners-Lee and other internet pioneers for making the world more open and connected.”

Aw, isn’t that nice? From one “pioneer” to another. What a pity, then, that it is a combination of bullshit and hypocrisy. In relation to the former, the guy who invented the web, Tim Berners-Lee, is as mystified by this “anniversary” as everyone else. “Who on earth made up 23 August?” he asked on Twitter. Good question. In fact, as the Guardian pointed out: “If Facebook had asked Berners-Lee, he’d probably have told them what he’s been telling people for years: the web’s 25th birthday already happened, two years ago.”

“In 1989, I delivered a proposal to Cern for the system that went on to become the worldwide web,” he wrote in 2014. It was that year, not this one, that he said we should celebrate as the web’s 25th birthday.

It’s not the inaccuracy that grates, however, but the hypocrisy. Zuckerberg thanks Berners-Lee for “making the world more open and connected”. So do I. What Zuck conveniently omits to mention, though, is that he is embarked upon a commercial project whose sole aim is to make the world more “connected” but less open. Facebook is what we used to call a “walled garden” and now call a silo: a controlled space in which people are allowed to do things that will amuse them while enabling Facebook to monetise their data trails. One network to rule them all. If you wanted a vision of the opposite of the open web, then Facebook is it.

The thing that makes the web distinctive is also what made the internet special, namely that it was designed as an open platform. It was designed to facilitate “permissionless innovation”. If you had a good idea that could be realised using data packets, and possessed the programming skills to write the necessary software, then the internet – and the web – would do it for you, no questions asked. And you didn’t need much in the way of financial resources – or to ask anyone for permission – in order to realise your dream.

An open platform is one on which anyone can build whatever they like. It’s what enabled a young Harvard sophomore, name of Zuckerberg, to take an idea lifted from two nice-but-dim oarsmen, translate it into computer code and launch it on an unsuspecting world. And in the process create an empire of 1.7 billion subjects with apparently limitless revenues. That’s what permissionless innovation is like.

The open web enabled Zuckerberg to do this. But – guess what? – the Facebook founder has no intention of allowing anyone to build anything on his platform that does not have his express approval. Having profited mightily from the openness of the web, in other words, he has kicked away the ladder that elevated him to his current eminence. And the whole thrust of his company’s strategy is to persuade billions of future users that Facebook is the only bit of the internet they really need.

Read the entire article here.

Image: The NeXT Computer used by Tim Berners-Lee at CERN. Courtesy: Science Museum, London. GFDL CC-BY-SA.

The Story of the Default Coordinate: 38°N 97°W


About 40 miles and 40 minutes northeast of Wichita, Kansas, lies the small town of Potwin. The 2010 census put the official population of Potwin at 449.

Potwin would be an unremarkable town, situated in the Great Plains surrounded by vast farms and feedlots, if it were not for one unique fact. Potwin is home to a farmhouse with a Lat-Long location of 38°N 97°W.

You see, 38°N 97°W happens to be the coordinates incorrectly chosen as the geographic center of the United States by a digital mapping company in 2002. The official geographic center of the country is 39°50′ N (39.8333333), 98°35′ W (-98.585522), a spot in northern Kansas near the Nebraska border.

Back in 2002, that digital mapping company, MaxMind, decided to round the actual, unwieldy Lat-Long coordinates to 38.0000, -97.0000. These coordinates became the default point and de facto center of the United States.

Now, the internet uses the Internet Protocol (IP) to allow any device to connect with any other: every device connected to the internet has a unique IP address, which lets a message or webpage from one device, say a server, find its way to another device, such as your computer. Companies soon realized that an IP address would be much more valuable — for technical maintenance or marketing purposes — if it could be tied to a physical location. So companies like MaxMind came along to provide the digital mapping and location-translation service.

However, for those IP addresses that could not be adequately resolved to a physical address, the company assigned the default coordinate — the center of the United States.
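Mechanically, that fallback amounts to a dictionary lookup with a default value. A minimal sketch, assuming an entirely hypothetical lookup table (real services like MaxMind resolve address blocks against large commercial databases):

```python
# Minimal sketch of IP geolocation with a country-level default fallback.
# The table and IP addresses below are hypothetical.

US_DEFAULT = (38.0, -97.0)  # the rounded "center of the US" coordinate

# Hypothetical table mapping resolvable addresses to (lat, lon)
GEO_TABLE = {
    "203.0.113.7": (40.7128, -74.0060),     # say, New York
    "198.51.100.23": (37.7749, -122.4194),  # say, San Francisco
}

def locate(ip: str) -> tuple[float, float]:
    """Return (lat, lon) for an IP address, falling back to the
    country-level default when the address cannot be resolved."""
    return GEO_TABLE.get(ip, US_DEFAULT)
```

Every unresolvable address lands on the same default tuple, which is how hundreds of millions of addresses came to point at a single farmhouse.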

Unfortunately, there are now around 600 million IP addresses that point to this default location, 38°N 97°W, which also happens to be the farmhouse in Potwin.

This becomes rather problematic for the residents of 38°N 97°W, because internet scammers, spammers, cyber-thieves and other digitally-minded criminals typically like to hide their locations, which then resolve to the default coordinate and the farmhouse in Potwin. As a result, the federal authorities have made quite a habit of visiting this unremarkable farmhouse, and its residents now lead far from unremarkable lives.

Read more of this surreal story here.

Image: Potwin, Kansas. Courtesy: Google Maps.

MondayMap: Internet Racism


The darkest blue indicates areas much less racist than the national average, and lighter blue somewhat less racist; the darkest red indicates the most racist zones.

No surprise: the areas with the highest concentrations of racists are in the South and the rural Northeastern United States. Head west of Texas and you’ll find fewer and fewer pockets of racists. Further, and perhaps not surprisingly, the greater the degree of n-word usage, the higher the rate of black mortality.

Sadly, this map is not of 18th- or 19th-century America; it comes from a recent study, published in April 2015 in Public Library of Science (PLOS) ONE.

Now, keep in mind that the map highlights racism by tracking pejorative search terms such as the n-word; it doesn’t count actual people, and it’s a geographic generalization. Nonetheless, it’s a stark reminder that we seem to be two nations divided by the mighty Mississippi River, and that we still have a very long way to go before we are all “westerners”.

From Washington Post:

Where do America’s most racist people live? “The rural Northeast and South,” suggests a new study just published in PLOS ONE.

The paper introduces a novel but makes-tons-of-sense-when-you-think-about-it method for measuring the incidence of racist attitudes: Google search data. The methodology comes from data scientist Seth Stephens-Davidowitz. He’s used it before to measure the effect of racist attitudes on Barack Obama’s electoral prospects.

“Google data, evidence suggests, are unlikely to suffer from major social censoring,” Stephens-Davidowitz wrote in a previous paper. “Google searchers are online and likely alone, both of which make it easier to express socially taboo thoughts. Individuals, indeed, note that they are unusually forthcoming with Google.” He also notes that the Google measure correlates strongly with other standard measures social science researchers have used to study racist attitudes.

This is important, because racism is a notoriously tricky thing to measure. Traditional survey methods don’t really work — if you flat-out ask someone if they’re racist, they will simply tell you no. That’s partly because most racism in society today operates at the subconscious level, or gets vented anonymously online.

For the PLOS ONE paper, researchers looked at searches containing the N-word. People search frequently for it, roughly as often as searches for  “migraine(s),” “economist,” “sweater,” “Daily Show,” and “Lakers.” (The authors attempted to control for variants of the N-word not necessarily intended as pejoratives, excluding the “a” version of the word that analysis revealed was often used “in different contexts compared to searches of the term ending in ‘-er’.”)

Read the entire article here.

Image: Association between an Internet-Based Measure of Area Racism and Black Mortality. Courtesy of Washington Post / PLOS (Public Library of Science) ONE.

Software That Learns to Eat Itself

Google became a monstrously successful technology company by inventing a solution to index and search content scattered across the Web, and then monetizing the search results through contextual ads. Since its inception the company has relied on increasingly sophisticated algorithms for indexing mountains of information and then serving up increasingly relevant results. These algorithms are based on a secret sauce that ranks the relevance of a webpage by evaluating its content, structure and relationships with other pages. They are defined and continuously improved by technologists and encoded into software by teams of engineers.
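The “relationships with other pages” ingredient is the link-analysis idea Google published as PageRank. A toy sketch on a made-up three-page web, illustrating only that single signal and none of Google’s actual pipeline:

```python
# A toy of link-based ranking in the PageRank style: a page's score
# depends on the scores of the pages linking to it. The three-page
# "web" is made up, and real ranking blends many other signals.

links = {          # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small base score, then receives shares of
        # the rank of each page that links to it.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# "c" is linked to by both "a" and "b", so it ends up ranked highest
```

The iteration converges toward a fixed point where each page’s score is consistent with the scores of the pages pointing at it.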

But as is the case in many areas of human endeavor, the underlying search engine technology and its teams of human designers and caregivers are being replaced by newer, better technology. In this case the better technology is based on artificial intelligence (AI), and it doesn’t rely on humans. It is based on machine or deep learning and neural networks — a combination of hardware and software that increasingly mimics the human brain in its ability to aggregate and filter information, decipher patterns and infer meaning.

[I’m sure it will not be long before yours truly is replaced by a bot.]

From Wired:

Yesterday, the 46-year-old Google veteran who oversees its search engine, Amit Singhal, announced his retirement. And in short order, Google revealed that Singhal’s rather enormous shoes would be filled by a man named John Giannandrea. On one level, these are just two guys doing something new with their lives. But you can also view the pair as the ideal metaphor for a momentous shift in the way things work inside Google—and across the tech world as a whole.

Giannandrea, you see, oversees Google’s work in artificial intelligence. This includes deep neural networks, networks of hardware and software that approximate the web of neurons in the human brain. By analyzing vast amounts of digital data, these neural nets can learn all sorts of useful tasks, like identifying photos, recognizing commands spoken into a smartphone, and, as it turns out, responding to Internet search queries. In some cases, they can learn a task so well that they outperform humans. They can do it better. They can do it faster. And they can do it at a much larger scale.

This approach, called deep learning, is rapidly reinventing so many of the Internet’s most popular services, from Facebook to Twitter to Skype. Over the past year, it has also reinvented Google Search, where the company generates most of its revenue. Early in 2015, as Bloomberg recently reported, Google began rolling out a deep learning system called RankBrain that helps generate responses to search queries. As of October, RankBrain played a role in “a very large fraction” of the millions of queries that go through the search engine with each passing second.

Read the entire story here.

Meet the Broadband Preacher


This fascinating article follows Roberto Gallardo, an extension professor at Mississippi State University, as he works to bring digital literacy, the internet and other services of our 21st-century electronic age to rural communities across the South. It’s an uphill struggle.

From Wired:

For a guy born and raised in Mexico, Roberto Gallardo has an exquisite knack for Southern manners. That’s one of the first things I notice about him when we meet up one recent morning at a deli in Starkville, Mississippi. Mostly it’s the way he punctuates his answers to my questions with a decorous “Yes sir” or “No sir”—a verbal tic I associate with my own Mississippi upbringing in the 1960s.

Gallardo is 36 years old, with a salt-and-pepper beard, oval glasses, and the faint remnant of a Latino accent. He came to Mississippi from Mexico a little more than a decade ago for a doctorate in public policy. Then he never left.

I’m here in Starkville, sitting in this booth, to learn about the work that has kept Gallardo in Mississippi all these years—work that seems increasingly vital to the future of my home state. I’m also here because Gallardo reminds me of my father.

Gallardo is affiliated with something called the Extension Service, an institution that dates back to the days when America was a nation of farmers. Its original purpose was to disseminate the latest agricultural know-how to all the homesteads scattered across the interior. Using land grant universities as bases of operations, each state’s extension service would deploy a network of experts and “county agents” to set up 4-H Clubs or instruct farmers in cultivation science or demonstrate how to can and freeze vegetables without poisoning yourself in your own kitchen.

State extension services still do all this, but Gallardo’s mission is a bit of an update. Rather than teach modern techniques of crop rotation, his job—as an extension professor at Mississippi State University—is to drive around the state in his silver 2013 Nissan Sentra and teach rural Mississippians the value of the Internet.

In sleepy public libraries, at Rotary breakfasts, and in town halls, he gives PowerPoint presentations that seem calculated to fill rural audiences with healthy awe for the technological sublime. Rather than go easy, he starts with a rapid-fire primer on heady concepts like the Internet of Things, the mobile revolution, cloud computing, digital disruption, and the perpetual increase of processing power. (“It’s exponential, folks. It’s just growing and growing.”) The upshot: If you don’t at least try to think digitally, the digital economy will disrupt you. It will drain your town of young people and leave your business in the dust.

Then he switches gears and tries to stiffen their spines with confidence. Start a website, he’ll say. Get on social media. See if the place where you live can finally get a high-speed broadband connection—a baseline point of entry into modern economic and civic life.

Even when he’s talking to me, Gallardo delivers this message with the straitlaced intensity of a traveling preacher. “Broadband is as essential to this country’s infrastructure as electricity was 110 years ago or the Interstate Highway System 50 years ago,” he says from his side of our booth at the deli, his voice rising high enough above the lunch-hour din that a man at a nearby table starts paying attention. “If you don’t have access to the technology, or if you don’t know how to use it, it’s similar to not being able to read and write.”

These issues of digital literacy, access, and isolation are especially pronounced here in the Magnolia State. Mississippi today ranks around the bottom of nearly every national tally of health and economic well-being. It has the lowest median household income and the highest rate of child mortality. It also ranks last in high-speed household Internet access. In human terms, that means more than a million Mississippians—over a third of the state’s population—lack access to fast wired broadband at home.

Gallardo doesn’t talk much about race or history, but that’s the broader context for his work in a state whose population has the largest percentage of African-Americans (38 percent) of any in the union. The most Gallardo will say on the subject is that he sees the Internet as a natural way to level out some of the persistent inequalities—between black and white, urban and rural—that threaten to turn parts of Mississippi into places of exile, left further and further behind the rest of the country.

And yet I can’t help but wonder how Gallardo’s work figures into the sweep of Mississippi’s history, which includes—looking back over just the past century—decades of lynchings, huge outward migrations, the fierce, sustained defense of Jim Crow, and now a period of unprecedented mass incarceration. My curiosity on this point is not merely journalistic. During the lead-up to the civil rights era, my father worked with the Extension Service in southern Mississippi as well. Because the service was segregated at the time, his title was “negro county agent.” As a very young child, I would travel from farm to farm with him. Now I’m here to travel around Mississippi with Gallardo, much as I did with my father. I want to see whether the deliberate isolation of the Jim Crow era—when Mississippi actively fought to keep itself apart from the main currents of American life—has any echoes in today’s digital divide.

Read the entire article here.

Image: Welcome to Mississippi. Courtesy of WebTV3.

The Internet of Flow

Time-based structures of information and flowing data — on a global scale — will increasingly dominate the Web. Eventually, this flow is likely to transform how we organize, consume and disseminate our digital knowledge. While we see evidence of this today — in blogs, in Facebook’s wall and timeline and, most basically, on Twitter — the long-term implications of this fundamentally new organizing principle have yet to be fully understood, especially in business.

For a brief snapshot of a possible, and likely, future of the Internet I turn to David Gelernter. He is Professor of Computer Science at Yale University, an important thinker and author who has helped shape the fields of parallel computing, artificial intelligence (AI) and networking. Many of Gelernter’s papers, some written over 20 years ago, offer a remarkably prescient view, most notably: Mirror Worlds (1991), The Muse In The Machine (1994) and The Second Coming – A Manifesto (1999).

From WSJ:

People ask where the Web is going; it’s going nowhere. The Web was a brilliant first shot at making the Internet usable, but it backed the wrong horse. It chose space over time. The conventional website is “space-organized,” like a patterned beach towel—pineapples upper left, mermaids lower right. Instead it might have been “time-organized,” like a parade—first this band, three minutes later this float, 40 seconds later that band.

We go to the Internet for many reasons, but most often to discover what’s new. We have had libraries for millennia, but never before have we had a crystal ball that can tell us what is happening everywhere right now. Nor have we ever had screens, from room-sized to wrist-sized, that can show us high-resolution, constantly flowing streams of information.

Today, time-based structures, flowing data—in streams, feeds, blogs—increasingly dominate the Web. Flow has become the basic organizing principle of the cybersphere. The trend is widely understood, but its implications aren’t.

Working together at Yale in the mid-1990s, we forecast the coming dominance of time-based structures and invented software called the “lifestream.” We had been losing track of our digital stuff, which was scattered everywhere, across proliferating laptops and desktops. Lifestream unified our digital life: Each new document, email, bookmark or video became a bead threaded onto a single wire in the Cloud, in order of arrival.

To find a bead, you search, as on the Web. Or you can watch the wire and see each new bead as it arrives. Whenever you add a bead to the lifestream, you specify who may see it: everyone, my friends, me. Each post is as private as you make it.

Where do these ideas lead? Your future home page—the screen you go to first on your phone, laptop or TV—is a bouquet of your favorite streams from all over. News streams are blended with shopping streams, blogs, your friends’ streams, each running at its own speed.

This home stream includes your personal stream as part of the blend—emails, documents and so on. Your home stream is just one tiny part of the world stream. You can see your home stream in 3-D on your laptop or desktop, in constant motion on your phone or as a crawl on your big TV.

By watching one stream, you watch the whole world—all the public and private events you care about. To keep from being overwhelmed, you adjust each stream’s flow rate when you add it to your collection. The system slows a stream down by replacing many entries with one that lists short summaries—10, 100 or more.

An all-inclusive home stream creates new possibilities. You could build a smartwatch to display the stream as it flows past. It could tap you on the wrist when there’s something really important onstream. You can set something aside or rewind if necessary. Just speak up to respond to messages or add comments. True in-car computing becomes easy. Because your home stream gathers everything into one line, your car can read it to you as you drive.
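Stripped of the rhetoric, the lifestream described above is an append-only, time-ordered sequence with per-item visibility and an adjustable flow rate. A minimal sketch, with all class and method names my own invention:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Bead:
    """One item threaded onto the stream: a document, email, bookmark..."""
    content: str
    visibility: str = "me"          # "everyone", "friends" or "me"
    arrived: float = field(default_factory=time.time)

class Lifestream:
    """An append-only, time-ordered wire of beads."""
    def __init__(self):
        self.beads: list[Bead] = []

    def add(self, content: str, visibility: str = "me") -> None:
        # New beads always go on the end, in order of arrival.
        self.beads.append(Bead(content, visibility))

    def search(self, term: str) -> list[Bead]:
        # Find beads, as on the Web.
        return [b for b in self.beads if term in b.content]

    def summarized(self, n: int) -> list[str]:
        # Slow a stream down: replace every n entries with one summary line.
        chunks = [self.beads[i:i + n] for i in range(0, len(self.beads), n)]
        return ["; ".join(b.content for b in chunk) for chunk in chunks]
```

Blending several streams into a “home stream” would then just be a merge of these wires by arrival time, filtered by visibility.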

Read the entire article here.

 

Digital Forensics and the Wayback Machine


Many of us see history — the school subject — as rather dull and boring. After all, how can the topic be made interesting when it’s usually taught by a coach who has other things on his or her mind [no joke, I have evidence of this from both sides of the Atlantic!].

Yet we also know that history’s lessons are essential to shaping our current world view and our vision for the future, in a myriad of ways. Since humans could speak and then write, our ancestors have recorded and transmitted their histories through oral storytelling, and then through books and assorted media.

Then came the internet. The explosion of content, media formats and related technologies over the last quarter-century has led to an immense challenge for archivists and historians intent on cataloging our digital stories. One facet of this challenge is the tremendous volume of information and its accelerating growth. Another is the dynamic nature of the content — much of it being constantly replaced and refreshed.

But, all is not lost. The Internet Archive, founded in 1996, has been quietly archiving text, pages, images, audio and, more recently, entire web sites from the Tubes of the vast Internets. The non-profit has currently archived around half a trillion web pages. It’s our modern-day equivalent of the Library of Alexandria.

Please say hello to the Internet Archive Wayback Machine, and give it a try. The Wayback Machine took the screenshot above of Amazon.com in 1999, in case you’ve ever wondered what Amazon looked like before it swallowed or destroyed entire retail sectors.

From the New Yorker:

Malaysia Airlines Flight 17 took off from Amsterdam at 10:31 A.M. G.M.T. on July 17, 2014, for a twelve-hour flight to Kuala Lumpur. Not much more than three hours later, the plane, a Boeing 777, crashed in a field outside Donetsk, Ukraine. All two hundred and ninety-eight people on board were killed. The plane’s last radio contact was at 1:20 P.M. G.M.T. At 2:50 P.M. G.M.T., Igor Girkin, a Ukrainian separatist leader also known as Strelkov, or someone acting on his behalf, posted a message on VKontakte, a Russian social-media site: “We just downed a plane, an AN-26.” (An Antonov 26 is a Soviet-built military cargo plane.) The post includes links to video of the wreckage of a plane; it appears to be a Boeing 777.

Two weeks before the crash, Anatol Shmelev, the curator of the Russia and Eurasia collection at the Hoover Institution, at Stanford, had submitted to the Internet Archive, a nonprofit library in California, a list of Ukrainian and Russian Web sites and blogs that ought to be recorded as part of the archive’s Ukraine Conflict collection. Shmelev is one of about a thousand librarians and archivists around the world who identify possible acquisitions for the Internet Archive’s subject collections, which are stored in its Wayback Machine, in San Francisco. Strelkov’s VKontakte page was on Shmelev’s list. “Strelkov is the field commander in Slaviansk and one of the most important figures in the conflict,” Shmelev had written in an e-mail to the Internet Archive on July 1st, and his page “deserves to be recorded twice a day.”

On July 17th, at 3:22 P.M. G.M.T., the Wayback Machine saved a screenshot of Strelkov’s VKontakte post about downing a plane. Two hours and twenty-two minutes later, Arthur Bright, the Europe editor of the Christian Science Monitor, tweeted a picture of the screenshot, along with the message “Grab of Donetsk militant Strelkov’s claim of downing what appears to have been MH17.” By then, Strelkov’s VKontakte page had already been edited: the claim about shooting down a plane was deleted. The only real evidence of the original claim lies in the Wayback Machine.

The average life of a Web page is about a hundred days. Strelkov’s “We just downed a plane” post lasted barely two hours. It might seem, and it often feels, as though stuff on the Web lasts forever, for better and frequently for worse: the embarrassing photograph, the regretted blog (more usually regrettable not in the way the slaughter of civilians is regrettable but in the way that bad hair is regrettable). No one believes any longer, if anyone ever did, that “if it’s on the Web it must be true,” but a lot of people do believe that if it’s on the Web it will stay on the Web. Chances are, though, that it actually won’t. In 2006, David Cameron gave a speech in which he said that Google was democratizing the world, because “making more information available to more people” was providing “the power for anyone to hold to account those who in the past might have had a monopoly of power.” Seven years later, Britain’s Conservative Party scrubbed from its Web site ten years’ worth of Tory speeches, including that one. Last year, BuzzFeed deleted more than four thousand of its staff writers’ early posts, apparently because, as time passed, they looked stupider and stupider. Social media, public records, junk: in the end, everything goes.

Web pages don’t have to be deliberately deleted to disappear. Sites hosted by corporations tend to die with their hosts. When MySpace, GeoCities, and Friendster were reconfigured or sold, millions of accounts vanished. (Some of those companies may have notified users, but Jason Scott, who started an outfit called Archive Team—its motto is “We are going to rescue your shit”—says that such notification is usually purely notional: “They were sending e-mail to dead e-mail addresses, saying, ‘Hello, Arthur Dent, your house is going to be crushed.’ ”) Facebook has been around for only a decade; it won’t be around forever. Twitter is a rare case: it has arranged to archive all of its tweets at the Library of Congress. In 2010, after the announcement, Andy Borowitz tweeted, “Library of Congress to acquire entire Twitter archive—will rename itself Museum of Crap.” Not long after that, Borowitz abandoned that Twitter account. You might, one day, be able to find his old tweets at the Library of Congress, but not anytime soon: the Twitter Archive is not yet open for research. Meanwhile, on the Web, if you click on a link to Borowitz’s tweet about the Museum of Crap, you get this message: “Sorry, that page doesn’t exist!”

The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: “Page Not Found.” This is known as “link rot,” and it’s a drag, but it’s better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.) Or maybe the page has been moved and something else is where it used to be. This is known as “content drift,” and it’s more pernicious than an error message, because it’s impossible to tell that what you’re seeing isn’t what you went to look for: the overwriting, erasure, or moving of the original is invisible. For the law and for the courts, link rot and content drift, which are collectively known as “reference rot,” have been disastrous. In providing evidence, legal scholars, lawyers, and judges often cite Web pages in their footnotes; they expect that evidence to remain where they found it as their proof, the way that evidence on paper—in court records and books and law journals—remains where they found it, in libraries and courthouses. But a 2013 survey of law- and policy-related publications found that, at the end of six years, nearly fifty per cent of the URLs cited in those publications no longer worked. According to a 2014 study conducted at Harvard Law School, “more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information.” The overwriting, drifting, and rotting of the Web is no less catastrophic for engineers, scientists, and doctors. 
Last month, a team of digital library researchers based at Los Alamos National Laboratory reported the results of an exacting study of three and a half million scholarly articles published in science, technology, and medical journals between 1997 and 2012: one in five links provided in the notes suffers from reference rot. It’s like trying to stand on quicksand.
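One practical defence against reference rot is to pair every cited URL with a Wayback Machine snapshot. The archive.org "availability" endpoint used below is real, but the helper names are illustrative inventions of this sketch, and the API's exact response format should be verified against archive.org's own documentation. A minimal Python sketch:

```python
# Minimal sketch: building Wayback Machine lookups for a cited URL.
# The availability API returns JSON describing the archived snapshot
# closest to the requested timestamp (YYYYMMDDhhmmss), if one exists.
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"


def availability_query(url: str, timestamp: str = "") -> str:
    """Return an availability-API query URL for `url`.

    If `timestamp` is given (YYYYMMDDhhmmss), the API reports the
    snapshot closest to that moment; otherwise the most recent one.
    """
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return f"{WAYBACK_API}?{urlencode(params)}"


def replay_url(url: str, timestamp: str) -> str:
    """Return a direct Wayback Machine replay URL for `url`."""
    return f"https://web.archive.org/web/{timestamp}/{url}"


# Example: locate the snapshot of amazon.com nearest to August 1999.
print(availability_query("amazon.com", "19990801"))
# A known snapshot can then be replayed directly:
print(replay_url("http://example.com", "19990817000000"))
```

Fetching the first URL with any HTTP client returns JSON naming the closest snapshot; a citation tool would store that snapshot link alongside the live URL, so the footnote survives even after the original page rots.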

The footnote, a landmark in the history of civilization, took centuries to invent and to spread. It has taken mere years nearly to destroy. A footnote used to say, “Here is how I know this and where I found it.” A footnote that’s a link says, “Here is what I used to know and where I once found it, but chances are it’s not there anymore.” It doesn’t matter whether footnotes are your stock-in-trade. Everybody’s in a pinch. Citing a Web page as the source for something you know—using a URL as evidence—is ubiquitous. Many people find themselves doing it three or four times before breakfast and five times more before lunch. What happens when your evidence vanishes by dinnertime?

The day after Strelkov’s “We just downed a plane” post was deposited into the Wayback Machine, Samantha Power, the U.S. Ambassador to the United Nations, told the U.N. Security Council, in New York, that Ukrainian separatist leaders had “boasted on social media about shooting down a plane, but later deleted these messages.” In San Francisco, the people who run the Wayback Machine posted on the Internet Archive’s Facebook page, “Here’s why we exist.”

Read the entire story here.

Image: Wayback Machine’s screenshot of Amazon.com’s home page, August 1999.

Professional Trolling

Just a few short years ago the word “troll”, in its internet sense, had not even entered our lexicon. Now trolling is a well-paid career, especially if you live in Russia. You have to admire the human ability to find innovative and profitable ways to inflict pain on others.

From NYT:

Around 8:30 a.m. on Sept. 11 last year, Duval Arthur, director of the Office of Homeland Security and Emergency Preparedness for St. Mary Parish, Louisiana, got a call from a resident who had just received a disturbing text message. “Toxic fume hazard warning in this area until 1:30 PM,” the message read. “Take Shelter. Check Local Media and columbiachemical.com.”

St. Mary Parish is home to many processing plants for chemicals and natural gas, and keeping track of dangerous accidents at those plants is Arthur’s job. But he hadn’t heard of any chemical release that morning. In fact, he hadn’t even heard of Columbia Chemical. St. Mary Parish had a Columbian Chemicals plant, which made carbon black, a petroleum product used in rubber and plastics. But he’d heard nothing from them that morning, either. Soon, two other residents called and reported the same text message. Arthur was worried: Had one of his employees sent out an alert without telling him?

If Arthur had checked Twitter, he might have become much more worried. Hundreds of Twitter accounts were documenting a disaster right down the road. “A powerful explosion heard from miles away happened at a chemical plant in Centerville, Louisiana #ColumbianChemicals,” a man named Jon Merritt tweeted. The #ColumbianChemicals hashtag was full of eyewitness accounts of the horror in Centerville. @AnnRussela shared an image of flames engulfing the plant. @Ksarah12 posted a video of surveillance footage from a local gas station, capturing the flash of the explosion. Others shared a video in which thick black smoke rose in the distance.

Dozens of journalists, media outlets and politicians, from Louisiana to New York City, found their Twitter accounts inundated with messages about the disaster. “Heather, I’m sure that the explosion at the #ColumbianChemicals is really dangerous. Louisiana is really screwed now,” a user named @EricTraPPP tweeted at the New Orleans Times-Picayune reporter Heather Nolan. Another posted a screenshot of CNN’s home page, showing that the story had already made national news. ISIS had claimed credit for the attack, according to one YouTube video; in it, a man showed his TV screen, tuned to an Arabic news channel, on which masked ISIS fighters delivered a speech next to looping footage of an explosion. A woman named Anna McClaren (@zpokodon9) tweeted at Karl Rove: “Karl, Is this really ISIS who is responsible for #ColumbianChemicals? Tell @Obama that we should bomb Iraq!” But anyone who took the trouble to check CNN.com would have found no news of a spectacular Sept. 11 attack by ISIS. It was all fake: the screenshot, the videos, the photographs.

 In St. Mary Parish, Duval Arthur quickly made a few calls and found that none of his employees had sent the alert. He called Columbian Chemicals, which reported no problems at the plant. Roughly two hours after the first text message was sent, the company put out a news release, explaining that reports of an explosion were false. When I called Arthur a few months later, he dismissed the incident as a tasteless prank, timed to the anniversary of the attacks of Sept. 11, 2001. “Personally I think it’s just a real sad, sick sense of humor,” he told me. “It was just someone who just liked scaring the daylights out of people.” Authorities, he said, had tried to trace the numbers that the text messages had come from, but with no luck. (The F.B.I. told me the investigation was still open.)

The Columbian Chemicals hoax was not some simple prank by a bored sadist. It was a highly coordinated disinformation campaign, involving dozens of fake accounts that posted hundreds of tweets for hours, targeting a list of figures precisely chosen to generate maximum attention. The perpetrators didn’t just doctor screenshots from CNN; they also created fully functional clones of the websites of Louisiana TV stations and newspapers. The YouTube video of the man watching TV had been tailor-made for the project. A Wikipedia page was even created for the Columbian Chemicals disaster, which cited the fake YouTube video. As the virtual assault unfolded, it was complemented by text messages to actual residents in St. Mary Parish. It must have taken a team of programmers and content producers to pull off.

And the hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. The campaign followed the same pattern of fake news reports and videos, this time under the hashtag #EbolaInAtlanta, which briefly trended in Atlanta. Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

On the same day as the Ebola hoax, a totally different group of accounts began spreading a rumor that an unarmed black woman had been shot to death by police. They all used the hashtag #shockingmurderinatlanta. Here again, the hoax seemed designed to piggyback on real public anxiety; that summer and fall were marked by protests over the shooting of Michael Brown in Ferguson, Mo. In this case, a blurry video purports to show the shooting, as an onlooker narrates. Watching it, I thought I recognized the voice — it sounded the same as the man watching TV in the Columbian Chemicals video, the one in which ISIS supposedly claims responsibility. The accent was unmistakable, if unplaceable, and in both videos he was making a very strained attempt to sound American. Somehow the result was vaguely Australian.

Who was behind all of this? When I stumbled on it last fall, I had an idea. I was already investigating a shadowy organization in St. Petersburg, Russia, that spreads false information on the Internet. It has gone by a few names, but I will refer to it by its best known: the Internet Research Agency. The agency had become known for employing hundreds of Russians to post pro-Kremlin propaganda online under fake identities, including on Twitter, in order to create the illusion of a massive army of supporters; it has often been called a “troll farm.” The more I investigated this group, the more links I discovered between it and the hoaxes. In April, I went to St. Petersburg to learn more about the agency and its brand of information warfare, which it has aggressively deployed against political opponents at home, Russia’s perceived enemies abroad and, more recently, me.

Read the entire article here.

Your Current Dystopian Nightmare: In Just One Click

Amazon was supposed to give you back precious time by making shopping and spending painlessly simple. Apps on your smartphone were supposed to do the same for all manner of re-tooled on-demand services. What wonderful time-saving inventions! So, now you can live in the moment and make use of all this extra free time. It’s your time now. You’ve won it back and no one can take it away.

And, what do you spend this newly earned free time doing? Well, you sit at home in your isolated cocoon, you shop for more things online, you download some more great apps that promise to bring even greater convenience, you interact less with real humans, and, best of all, you spend more time working. Welcome to your new dystopian nightmare, and it’s happening right now. Click.

From Medium:

Angel the concierge stands behind a lobby desk at a luxe apartment building in downtown San Francisco, and describes the residents of this imperial, 37-story tower. “Ubers, Squares, a few Twitters,” she says. “A lot of work-from-homers.”

And by late afternoon on a Tuesday, they’re striding into the lobby at a just-get-me-home-goddammit clip, some with laptop bags slung over their shoulders, others carrying swank leather satchels. At the same time a second, temporary population streams into the building: the app-based meal delivery people hoisting thermal carrier bags and sacks. Green means Sprig. A huge M means Munchery. Down in the basement, Amazon Prime delivery people check in packages with the porter. The Instacart groceries are plunked straight into a walk-in fridge.

This is a familiar scene. Five months ago I moved into a spartan apartment a few blocks away, where dozens of startups and thousands of tech workers live. Outside my building there’s always a phalanx of befuddled delivery guys who seem relieved when you walk out, so they can get in. Inside, the place is stuffed with the goodies they bring: Amazon Prime boxes sitting outside doors, evidence of the tangible, quotidian needs that are being serviced by the web. The humans who live there, though, I mostly never see. And even when I do, there seems to be a tacit agreement among residents to not talk to one another. I floated a few “hi’s” in the elevator when I first moved in, but in return I got the monosyllabic, no-eye-contact mumble. It was clear: Lady, this is not that kind of building.

Back in the elevator in the 37-story tower, the messengers do talk, one tells me. They end up asking each other which apps they work for: Postmates. Seamless. EAT24. GrubHub. Safeway.com. A woman hauling two Whole Foods sacks reads the concierge an apartment number off her smartphone, along with the resident’s directions: “Please deliver to my door.”

“They have a nice kitchen up there,” Angel says. The apartments rent for as much as $5,000 a month for a one-bedroom. “But so much, so much food comes in. Between 4 and 8 o’clock, they’re on fire.”

I start to walk toward home. En route, I pass an EAT24 ad on a bus stop shelter, and a little further down the street, a Dungeons & Dragons–type dude opens the locked lobby door of yet another glass-box residential building for a Sprig deliveryman:

“You’re…”

“Jonathan?”

“Sweet,” Dungeons & Dragons says, grabbing the bag of food. The door clanks behind him.

And that’s when I realized: the on-demand world isn’t about sharing at all. It’s about being served. This is an economy of shut-ins.

In 1998, Carnegie Mellon researchers warned that the internet could make us into hermits. They released a study monitoring the social behavior of 169 people making their first forays online. The web-surfers started talking less with family and friends, and grew more isolated and depressed. “We were surprised to find that what is a social technology has such anti-social consequences,” said one of the researchers at the time. “And these are the same people who, when asked, describe the Internet as a positive thing.”

We’re now deep into the bombastic buildout of the on-demand economy— with investment in the apps, platforms and services surging exponentially. Right now Americans buy nearly eight percent of all their retail goods online, though that seems a wild underestimate in the most congested, wired, time-strapped urban centers.

Many services promote themselves as life-expanding – there to free up your time so you can spend it connecting with the people you care about, not standing at the post office with strangers. Rinse’s ad shows a couple chilling at a park, their laundry being washed by someone, somewhere beyond the picture’s frame. But plenty of the delivery companies are brutally honest that, actually, they never want you to leave home at all.

GrubHub’s advertising banks on us secretly never wanting to talk to a human again: “Everything great about eating, combined with everything great about not talking to people.” DoorDash, another food delivery service, goes for the all-caps, batshit extreme:

“NEVER LEAVE HOME AGAIN.”

Katherine van Ekert isn’t a shut-in, exactly, but there are only two things she ever has to run errands for any more: trash bags and saline solution. For those, she must leave her San Francisco apartment and walk two blocks to the drug store, “so woe is my life,” she tells me. (She realizes her dry humor about #firstworldproblems may not translate, and clarifies later: “Honestly, this is all tongue in cheek. We’re not spoiled brats.”) Everything else is done by app. Her husband’s office contracts with Washio. Groceries come from Instacart. “I live on Amazon,” she says, buying everything from curry leaves to a jogging suit for her dog, complete with hoodie.

She’s so partial to these services, in fact, that she’s running one of her own: A veterinarian by trade, she’s a co-founder of VetPronto, which sends an on-call vet to your house. It’s one of a half-dozen on-demand services in the current batch at Y Combinator, the startup factory, including a marijuana delivery app called Meadow (“You laugh, but they’re going to be rich,” she says). She took a look at her current clients – they skew late 20s to late 30s, and work in high-paying jobs: “The kinds of people who use a lot of on demand services and hang out on Yelp a lot.”

Basically, people a lot like herself. That’s the common wisdom: the apps are created by the urban young for the needs of urban young. The potential of delivery with a swipe of the finger is exciting for van Ekert, who grew up without such services in Sydney and recently arrived in wired San Francisco. “I’m just milking this city for all it’s worth,” she says. “I was talking to my father on Skype the other day. He asked, ‘Don’t you miss a casual stroll to the shop?’ Everything we do now is time-limited, and you do everything with intention. There’s not time to stroll anywhere.”

Suddenly, for people like van Ekert, the end of chores is here. After hours, you’re free from dirty laundry and dishes. (TaskRabbit’s ad rolls by me on a bus: “Buy yourself time – literally.”)

So here’s the big question. What does she, or you, or any of us do with all this time we’re buying? Binge on Netflix shows? Go for a run? Van Ekert’s answer: “It’s more to dedicate more time to working.”

Read the entire story here.

Jon Ronson Versus His Spambot Infomorph Imposter

[tube]mPUjvP-4Xaw[/tube]

While this may sound like a 1980s monster flick, it’s rather more serious.

Author, journalist, filmmaker Jon Ronson weaves a fun but sinister tale of the theft of his own identity. The protagonists: a researcher in technology and cyberculture, a so-called “creative technologist” and a university lecturer in English and American literature. Not your typical collection of “identity thieves”, trolls, revenge pornographers, and online shamers. But an unnerving, predatory trio nevertheless.

From the Guardian:

In early January 2012, I noticed that another Jon Ronson had started posting on Twitter. His photograph was a photograph of my face. His Twitter name was @jon_ronson. His most recent tweet read: “Going home. Gotta get the recipe for a huge plate of guarana and mussel in a bap with mayonnaise 😀 #yummy.”

“Who are you?” I tweeted him.

“Watching #Seinfeld. I would love a big plate of celeriac, grouper and sour cream kebab with lemongrass #foodie,” he tweeted. I didn’t know what to do.

The next morning, I checked @jon_ronson’s timeline before I checked my own. In the night he had tweeted, “I’m dreaming something about #time and #cock.” He had 20 followers.

I did some digging. A young academic from Warwick University called Luke Robert Mason had a few weeks earlier posted a comment on the Guardian site. It was in response to a short video I had made about spambots. “We’ve built Jon his very own infomorph,” he wrote. “You can follow him on Twitter here: @jon_ronson.”

I tweeted him: “Hi!! Will you take down your spambot please?”

Ten minutes passed. Then he replied, “We prefer the term infomorph.”

“But it’s taken my identity,” I wrote.

“The infomorph isn’t taking your identity,” he wrote back. “It is repurposing social media data into an infomorphic aesthetic.”

I felt a tightness in my chest.

“#woohoo damn, I’m in the mood for a tidy plate of onion grill with crusty bread. #foodie,” @jon_ronson tweeted.

I was at war with a robot version of myself.

A month passed. @jon_ronson was tweeting 20 times a day about its whirlwind of social engagements, its “soirées” and wide circle of friends. The spambot left me feeling powerless and sullied.

I tweeted Luke Robert Mason. If he was adamant that he wouldn’t take down his spambot, perhaps we could at least meet? I could film the encounter and put it on YouTube. He agreed.

I rented a room in central London. He arrived with two other men – the team behind the spambot. All three were academics. Luke was the youngest, handsome, in his 20s, a “researcher in technology and cyberculture and director of the Virtual Futures conference”. David Bausola was a “creative technologist” and the CEO of the digital agency Philter Phactory. Dan O’Hara had a shaved head and a clenched jaw. He was in his late 30s, a lecturer in English and American literature at the University of Cologne.

I spelled out my grievances. “Academics,” I began, “don’t swoop into a person’s life uninvited and use him for some kind of academic exercise, and when I ask you to take it down you’re, ‘Oh, it’s not a spambot, it’s an infomorph.’”

Dan nodded. He leaned forward. “There must be lots of Jon Ronsons out there?” he began. “People with your name? Yes?”

I looked suspiciously at him. “I’m sure there are people with my name,” I replied, carefully.

“I’ve got the same problem,” Dan said with a smile. “There’s another academic out there with my name.”

“You don’t have exactly the same problem as me,” I said, “because my exact problem is that three strangers have stolen my identity and have created a robot version of me and are refusing to take it down.”

Dan let out a long-suffering sigh. “You’re saying, ‘There is only one Jon Ronson’,” he said. “You’re proposing yourself as the real McCoy, as it were, and you want to maintain that integrity and authenticity. Yes?”

I stared at him.

“We’re not quite persuaded by that,” he continued. “We think there’s already a layer of artifice and it’s your online personality – the brand Jon Ronson – you’re trying to protect. Yeah?”

“No, it’s just me tweeting,” I yelled.

“The internet is not the real world,” said Dan.

“I write my tweets,” I replied. “And I press send. So it’s me on Twitter.” We glared at each other. “That’s not academic,” I said. “That’s not postmodern. That’s the fact of it. It’s a misrepresentation of me.”

“You’d like it to be more like you?” Dan said.

“I’d like it to not exist,” I said.

“I find that quite aggressive,” he said. “You’d like to kill these algorithms? You must feel threatened in some way.” He gave me a concerned look. “We don’t go around generally trying to kill things we find annoying.”

“You’re a troll!” I yelled.

I dreaded uploading the footage to YouTube, because I’d been so screechy. I steeled myself for mocking comments and posted it. I left it 10 minutes. Then, with apprehension, I had a look.

“This is identity theft,” read the first comment I saw. “They should respect Jon’s personal liberty.”

Read the entire story here.

Video: JON VS JON Part 2 | Escape and Control. Courtesy of Jon Ronson.

Net Neutrality Lives!

The US Federal Communications Commission (FCC) took a giant step in the right direction on February 26, 2015, when it voted to regulate broadband internet much like a public utility. This is a great victory for net neutrality advocates and for consumers, who had long sought to protect equal access for all to online services and information. Tim Berners-Lee, inventor of the World Wide Web, offered his support and praise for the ruling, saying:

“It’s about consumer rights, it’s about free speech, it’s about democracy.”

From the Guardian:

Internet activists scored a landmark victory on Thursday as the top US telecommunications regulator approved a plan to govern broadband internet like a public utility.

Following one of the most intense – and bizarre – lobbying battles in the history of modern Washington politics, the Federal Communications Commission (FCC) passed strict new rules that give the body its greatest power over the cable industry since the internet went mainstream.

FCC chairman Tom Wheeler – a former telecom lobbyist turned surprise hero of net-neutrality supporters – thanked the 4m people who had submitted comments on the new rules. “Your participation has made this the most open process in FCC history,” he said. “We listened and we learned.”

Wheeler said that while other countries were trying to control the internet, the sweeping new US protections on net neutrality – the concept that all information and services should have equal access online – represented “a red-letter day for internet freedom”.

“The internet is simply too important to be left without rules and without a referee on the field,” said Wheeler. “Today’s order is more powerful and more expansive than any previously suggested.”

Broadband providers will be banned from creating so-called “fast lanes” and from blocking or slowing traffic online; the rules will cover mobile broadband as well as cable. The FCC would also have the authority to challenge unforeseen barriers broadband providers might create as the internet develops.

Activists and tech companies argue the new rules are vital to protect net neutrality. The FCC’s two Republican commissioners, Ajit Pai and Michael O’Rielly, voted against the plan but were overruled at a much anticipated meeting by the three Democratic members on the panel.

Republicans have long fought the FCC’s net neutrality protections, arguing the rules will create an unnecessary burden on business. They have accused Barack Obama of bullying the regulator into the move in order to score political points, with conservative lawmakers and potential 2016 presidential candidates expected to keep the fight going well into that election campaign.

Pai said the FCC was flip-flopping for “one reason and one reason only: president Obama told us to do so”.

Wheeler dismissed accusations of a “secret” plan as “nonsense”. “This is no more a plan to regulate the internet than the first amendment is a plan to regulate free speech,” Wheeler said.

“This is the FCC using all the tools in our toolbox to protect innovators and consumers.”

Obama offered his support to the rules late last year, following an online activism campaign that pitched internet organisers and companies from Netflix and Reddit to the online craft market Etsy and I Can Has Cheezburger? – weblog home of the Lolcats meme – against Republican leaders and the cable and telecom lobbies.

Broadband will now be regulated under Title II of the Communications Act – the strongest legal authority available to the FCC. Obama called on the independent regulator to implement Title II last year, leading to charges, now being investigated in Congress, that he unduly influenced Wheeler’s decision.

A small band of protesters gathered in the snow outside the FCC’s Washington headquarters before the meeting on Thursday, in celebration of their success in lobbying for a dramatic U-turn in regulation. Wheeler and his Democratic colleagues, Mignon Clyburn and Jessica Rosenworcel, were cheered as they sat down for the meeting.

Joining the activists outside was Apple co-founder Steve Wozniak, who said the FCC also needed more power to prevent future attacks on the open internet.

“We have won on net neutrality,” Wozniak told the Guardian. “This is important because they don’t want the FCC to have oversight over other bad stuff.”

Tim Berners-Lee, inventor of the world wide web, addressed the meeting via video, saying he applauded the FCC’s decision to protect net neutrality: “More than anything else, the action you take today will preserve the reality of a permission-less innovation that is the heart of the internet.”

“It’s about consumer rights, it’s about free speech, it’s about democracy,” Berners-Lee said.

Clyburn compared the new rules to the Bill of Rights. “We are here to ensure that there is only one internet,” she said. “We want to ensure that those with deep pockets have the same opportunity as those with empty pockets to succeed.”

Read the entire story here.

Creative Destruction

Author Andrew Keen ponders the true value of the internet in his new book The Internet is Not the Answer. Quite rightly, he acknowledges that billions of consumers have benefited from the improved convenience and usually lower prices of every product imaginable, delivered through a couple of clicks online. But there is a higher price to pay – one that touches the values we want for our society and the deeper costs to our culture.

From the Guardian:

During every minute of every day of 2014, according to Andrew Keen’s new book, the world’s internet users – all three billion of them – sent 204m emails, uploaded 72 hours of YouTube video, undertook 4m Google searches, shared 2.46m pieces of Facebook content, published 277,000 tweets, posted 216,000 new photos on Instagram and spent $83,000 on Amazon.

By any measure, for a network that has existed recognisably for barely 20 years (the first graphical web browser, Mosaic, was released in 1993), those are astonishing numbers: the internet, plainly, has transformed all our lives, making so much of what we do every day – communicating, shopping, finding, watching, booking – unimaginably easier than it was. A Pew survey in the United States found last year that 90% of Americans believed the internet had been good for them.

So it takes a brave man to argue that there is another side to the internet; that stratospheric numbers and undreamed-of personal convenience are not the whole story. Keen (who was once so sure the internet was the answer that he sank all he had into a startup) is now a thoughtful and erudite contrarian who believes the internet is actually doing untold damage. The net, he argues, was meant to be “power to the people, a platform for equality”: an open, decentralised, democratising technology that liberates as it empowers as it informs.

Instead, it has handed extraordinary power and wealth to a tiny handful of people, while simultaneously, for the rest of us, compounding and often aggravating existing inequalities – cultural, social and economic – whenever and wherever it has found them. Individually, it may work wonders for us. Collectively, it’s doing us no good at all. “It was supposed to be win-win,” Keen declares. “The network’s users were supposed to be its beneficiaries. But in a lot of ways, we are its victims.”

This is not, Keen acknowledges, a very popular view, especially in Silicon Valley, where he has spent the best part of the past 30-odd years after an uneventful north London childhood (the family was in the rag trade). But The Internet is Not the Answer – Keen’s third book (the first questioned the value of user-generated content, the second the point of social media; you get where he’s coming from) – has been “remarkably well received”, he says. “I’m not alone in making these points. Moderate opinion is starting to see that this is a problem.”

What seems most unarguable is that, whatever else it has done, the internet – after its early years as a network for academics and researchers from which vulgar commercial activity was, in effect, outlawed – has been largely about the money. The US government’s decision, in 1991, to throw the nascent network open to private enterprise amounted, as one leading (and now eye-wateringly wealthy) Californian venture capitalist has put it, to “the largest creation of legal wealth in the history of the planet”.

The numbers Keen reels off are eye-popping: Google, which now handles 3.5bn searches daily and controls more than 90% of the market in some countries, including Britain, was valued at $400bn last year – more than seven times General Motors, which employs nearly four times more people. Its two founders, Larry Page and Sergey Brin, are worth $30bn apiece. Facebook’s Mark Zuckerberg, head of the world’s second biggest internet site – used by 19% of people in the world, half of whom access it six days a week or more – is sitting on a similar personal pile, while at $190bn in July last year, his company was worth more than Coca-Cola, Disney and AT&T.

Jeff Bezos of Amazon also has $30bn in his bank account. And even more recent online ventures look to be headed the same way: Uber, a five-year-old startup employing about 1,000 people and once succinctly described as “software that eats taxis”, was valued last year at more than $18bn – roughly the same as Hertz and Avis combined. The 700-staff lodging rental site Airbnb was valued at $10bn in February last year, not far off half as much as the Hilton group, which owns nearly 4,000 hotels and employs 150,000 people. The messaging app WhatsApp, bought by Facebook for $19bn, employs just 55, while the payroll of Snapchat – which turned down an offer of $3bn – numbers barely 20.

Part of the problem here, argues Keen, is that the digital economy is, by its nature, winner-takes-all. “There’s no inevitable or conspiratorial logic here; no one really knew it would happen,” he says. “There are just certain structural qualities that mean the internet lends itself to monopolies. The internet is a perfect global platform for free-market capitalism – a pure, frictionless, borderless economy … It’s a libertarian’s wet dream. Digital Milton Friedman.”

Nor are those monopolies confined to just one business. Keen cites San Francisco-based writer Rebecca Solnit’s incisive take on Google: imagine it is 100 years ago, and the post office, the phone company, the public libraries, the printing houses, Ordnance Survey maps and the cinemas were all controlled by the same secretive and unaccountable organisation. Plus, he adds, almost as an afterthought: “Google doesn’t just own the post office – it has the right to open everyone’s letters.”

This, Keen argues, is the net economy’s natural tendency: “Google is the search and information monopoly and the largest advertising company in history. It is incredibly strong, joining up the dots across more and more industries. Uber’s about being the transport monopoly; Airbnb the hospitality monopoly; TaskRabbit the labour monopoly. These are all, ultimately, monopoly plays – that’s the logic. And that should worry people.”

It is already having consequences, Keen says, in the real world. Take – surely the most glaring example – Amazon. Keen’s book cites a 2013 survey by the US Institute for Local Self-Reliance, which found that while it takes, on average, a regular bricks-and-mortar store 47 employees to generate $10m in turnover, Bezos’s many-tentacled, all-consuming and completely ruthless “Everything Store” achieves the same with 14. Amazon, that report concluded, probably destroyed 27,000 US jobs in 2012.

“And we love it,” Keen says. “We all use Amazon. We strike this Faustian deal. It’s ultra-convenient, fantastic service, great interface, absurdly cheap prices. But what’s the cost? Truly appalling working conditions; we know this. Deep hostility to unions. A massive impact on independent retail; in books, savage bullying of publishers. This is back to the early years of the 19th century. But we’re seduced into thinking it’s good; Amazon has told us what we want to hear. Bezos says, ‘This is about you, the consumer.’ The problem is, we’re not just consumers. We’re citizens, too.”

Read the entire article here.

Image: Visualization of routing paths through a portion of the Internet. Courtesy of the Opte Project.

FCC Flexes Title II

The Chairman of the US Federal Communications Commission (FCC), Tom Wheeler, was once beholden to the pseudo-monopolies that are the cable and wireless providers. Now, he seems to be fighting to keep the internet fair, neutral and open — for consumers. Hard to believe. But, let’s face it, if Comcast and other telecoms behemoths are against Wheeler’s proposal then it must be good for consumers.

From Wired:

After more than a decade of debate and a record-setting proceeding that attracted nearly 4 million public comments, the time to settle the Net Neutrality question has arrived. This week, I will circulate to the members of the Federal Communications Commission (FCC) proposed new rules to preserve the internet as an open platform for innovation and free expression. This proposal is rooted in long-standing regulatory principles, marketplace experience, and public input received over the last several months.

Broadband network operators have an understandable motivation to manage their network to maximize their business interests. But their actions may not always be optimal for network users. The Congress gave the FCC broad authority to update its rules to reflect changes in technology and marketplace behavior in a way that protects consumers. Over the years, the Commission has used this authority to the public’s great benefit.

The internet wouldn’t have emerged as it did, for instance, if the FCC hadn’t mandated open access for network equipment in the late 1960s. Before then, AT&T prohibited anyone from attaching non-AT&T equipment to the network. The modems that enabled the internet were usable only because the FCC required the network to be open.

Companies such as AOL were able to grow in the early days of home computing because these modems gave them access to the open telephone network.

I personally learned the importance of open networks the hard way. In the mid-1980s I was president of a startup, NABU: The Home Computer Network. My company was using new technology to deliver high-speed data to home computers over cable television lines. Across town Steve Case was starting what became AOL. NABU was delivering service at the then-blazing speed of 1.5 megabits per second—hundreds of times faster than Case’s company. “We used to worry about you a lot,” Case told me years later.

But NABU went broke while AOL became very successful. Why that is highlights the fundamental problem with allowing networks to act as gatekeepers.

While delivering better service, NABU had to depend on cable television operators granting access to their systems. Steve Case was not only a brilliant entrepreneur, but he also had access to an unlimited number of customers nationwide who only had to attach a modem to their phone line to receive his service. The phone network was open whereas the cable networks were closed. End of story.

The phone network’s openness did not happen by accident, but by FCC rule. How we precisely deliver that kind of openness for America’s broadband networks has been the subject of a debate over the last several months.

Originally, I believed that the FCC could assure internet openness through a determination of “commercial reasonableness” under Section 706 of the Telecommunications Act of 1996. While a recent court decision seemed to draw a roadmap for using this approach, I became concerned that this relatively new concept might, down the road, be interpreted to mean what is reasonable for commercial interests, not consumers.

That is why I am proposing that the FCC use its Title II authority to implement and enforce open internet protections.

Using this authority, I am submitting to my colleagues the strongest open internet protections ever proposed by the FCC. These enforceable, bright-line rules will ban paid prioritization, and the blocking and throttling of lawful content and services. I propose to fully apply—for the first time ever—those bright-line rules to mobile broadband. My proposal assures the rights of internet users to go where they want, when they want, and the rights of innovators to introduce new products without asking anyone’s permission.

All of this can be accomplished while encouraging investment in broadband networks. To preserve incentives for broadband operators to invest in their networks, my proposal will modernize Title II, tailoring it for the 21st century, in order to provide returns necessary to construct competitive networks. For example, there will be no rate regulation, no tariffs, no last-mile unbundling. Over the last 21 years, the wireless industry has invested almost $300 billion under similar rules, proving that modernized Title II regulation can encourage investment and competition.

Congress wisely gave the FCC the power to update its rules to keep pace with innovation. Under that authority my proposal includes a general conduct rule that can be used to stop new and novel threats to the internet. This means the action we take will be strong enough and flexible enough not only to deal with the realities of today, but also to establish ground rules for the as yet unimagined.

The internet must be fast, fair and open. That is the message I’ve heard from consumers and innovators across this nation. That is the principle that has enabled the internet to become an unprecedented platform for innovation and human expression. And that is the lesson I learned heading a tech startup at the dawn of the internet age. The proposal I present to the commission will ensure the internet remains open, now and in the future, for all Americans.

Read the entire article here.

Image: Official US FCC government seal.

Silicon Death Valley

Have you ever wondered what happens to the 99 percent of Silicon Valley startups that don’t make billionaires (or even millionaires) of their founders? It’s not all milk and honey in the land of sunshine. After all, for every Google or Facebook there are hundreds of humiliating failures — think: Webvan, Boo.com, Pets.com, Beautyjungle.com, Boxman, Flooz, eToys.

The valley’s venture capitalists tend to bury their business failures rather quietly, careful not to taint their reputations as omnipotent, infallible futurists. From the ashes of these failures some employees move on to well-established corporate serfdom and others find fresh challenges at new startups. But there is a fascinating middle-ground, between success and failure — an entrepreneurial twilight zone populated by zombie businesses.

From the Guardian:

It is probably Silicon Valley’s most striking mantra: “Fail fast, fail often.” It is recited at technology conferences, pinned to company walls, bandied in conversation.

Failure is not only invoked but celebrated. Entrepreneurs give speeches detailing their misfires. Academics laud the virtue of making mistakes. FailCon, a conference about “embracing failure”, launched in San Francisco in 2009 and is now an annual event, with technology hubs in Barcelona, Tokyo, Porto Alegre and elsewhere hosting their own versions.

While the rest of the world recoils at failure, in other words, technology’s dynamic innovators enshrine it as a rite of passage en route to success.

But what about those tech entrepreneurs who lose – and keep on losing? What about those who start one company after another, refine pitches, tweak products, pivot strategies, reinvent themselves … and never succeed? What about the angst masked behind upbeat facades?

Silicon Valley is increasingly asking such questions, even as the tech boom rewards some startups with billion-dollar valuations, sprinkling stardust on founders who talk of changing the world.

“It’s frustrating if you’re trying and trying and all you read about is how much money Airbnb and Uber are making,” said Johnny Chin, 28, who endured three startup flops but is hopeful for his fourth attempt. “The way startups are portrayed, everything seems an overnight success, but that’s a disconnect from reality. There can be a psychic toll.”

It has never been easier or cheaper to launch a company in the hothouse of ambition, money and software that stretches from San Francisco to Cupertino, Mountain View, Menlo Park and San Jose.

In 2012 the number of seed investment deals in US tech reportedly more than tripled, to 1,700, from three years earlier. Investment bankers are quitting Wall Street for Silicon Valley, lured by hopes of a cooler and more creative way to get rich.

Most startups fail. However, many entrepreneurs still overestimate the chances of success – and underestimate the cost of failure.

Some estimates put the failure rate at 90% – on a par with small businesses in other sectors. A similar proportion of alumni from Y Combinator, a legendary incubator which mentors bright prospects, are said to also struggle.

Companies typically die around 20 months after their last financing round and after having raised $1.3m, according to a study by the analytics firm CB Insights titled The RIP Report – startup death trends.

Failure is difficult to quantify because it does not necessarily mean liquidation. Many startups limp on for years, ignored by the market but sustained by founders’ savings or investors.

“We call them the walking dead,” said one manager at a tech behemoth, who requested anonymity. “They don’t necessarily die. They putter along.”

Software engineers employed by such zombies face a choice. Stay in hope the company will take off, turning stock options into gold. Or quit and take one of the plentiful jobs at other startups or giants like Apple and Google.

Founders face a more agonising dilemma. Continue working 100-hour weeks and telling employees and investors their dream is alive, that the metrics are improving, and hope it’s true, or pull the plug.

The loss aversion principle – the human tendency to strongly prefer avoiding losses to acquiring gains – tilts many towards the former, said Bruno Bowden, a former engineering manager at Google who is now a venture investor and entrepreneur.

“People will do a lot of irrational things to avoid losing even if it’s to their detriment. You push and push and exhaust yourself.”

Silicon Valley wannabes tell origin fables of startup founders who maxed out credit cards before dazzling Wall Street, the same way Hollywood’s struggling actors find solace in the fact Brad Pitt dressed as a chicken for El Pollo Loco before his breakthrough.

“It’s painful to be one of the walking dead. You lie to yourself and mask what’s not working. You amplify little wins,” said Chin, who eventually abandoned startups which offered micro, specialised versions of Amazon and Yelp.

That startup founders were Silicon Valley’s “cool kids”, glamorous buccaneers compared to engineers and corporate drones, could make failure tricky to recognise, let alone accept, he said. “People are very encouraging. Everything is amazing, cool, awesome. But then they go home and don’t use your product.”

Chin is bullish about his new company, Bannerman, an Uber-type service for event security and bodyguards, and has no regrets about rolling the tech dice. “I love what I do. I couldn’t do anything else.”

Read the entire story here.

Image: Boo.com, 1999. Courtesy of the WayBackMachine, Internet Archive.

How to Get Blazingly Fast Internet

It’s rather simple in theory, and only requires two steps. Step 1: Follow the lead of a city like Chattanooga, Tennessee. Step 2: Tell your monopolistic cable company what to do with its cables. Done. Now you have a 1 Gigabit internet connection — around 50-100 times faster than your mother’s Wi-Fi.

This experiment is fueling a renaissance of sorts in the Southern U.S. city, and other metropolitan areas can only look on in awe. It comes as no surprise that the cable oligarchs at Comcast, Time Warner and AT&T are looking for any way to halt the city’s progress into the 21st Century.

The Guardian:

Loveman’s department store on Market Street in Chattanooga closed its doors in 1993 after almost a century in business, another victim of a nationwide decline in downtowns that hollowed out so many US towns. Now the opulent building is buzzing again, this time with tech entrepreneurs taking advantage of the fastest internet in the western hemisphere.

Financed by the cash raised from the sale of logistics group Access America, a group of thirty-something local entrepreneurs have set up Lamp Post, an incubator for a new generation of tech companies, in the building. A dozen startups are currently working out of the glitzy downtown office.

“We’re not Silicon Valley. No one will ever replicate that,” says Allan Davis, one of Lamp Post’s partners. “But we don’t need to be and not everyone wants that. The expense, the hassle. You don’t need to be there to create great technology. You can do it here.”

He’s not alone in thinking so. Lamp Post is one of several tech incubators in this mid-sized Tennessee city. Money is flowing in. Chattanooga has gone from close to zero venture capital in 2009 to more than five organized funds with investable capital over $50m in 2014 – not bad for a city of 171,000 people.

The city’s go-getting mayor Andy Berke, a Democrat tipped for higher office, is currently reviewing plans for a city center tech zone specifically designed to meet the needs of its new workforce.

In large part the success is being driven by The Gig. Thanks to an ambitious roll-out by the city’s municipally owned electricity company, EPB, Chattanooga is one of the only places on Earth with internet at speeds as fast as 1 gigabit per second – about 50 times faster than the US average.

The tech buildup comes after more than a decade of reconstruction in Chattanooga that has regenerated the city with a world-class aquarium, 12 miles of river walks along the Tennessee River, an arts district built around the Hunter Museum of American Arts, high-end restaurants and outdoor activities.

But it’s the city’s tech boom that has sparked interest from other municipalities across the world. It also comes as the Federal Communications Commission (FCC) prepares to address some of the biggest questions the internet has faced when it returns from the summer break. And while the FCC discusses whether Comcast, the world’s biggest cable company, should take over Time Warner, the US’s second largest cable operator, and whether to allow those companies to set up fast lanes (and therefore slow lanes) for internet traffic, Chattanooga is proof that another path is possible.

It’s a story that is being watched very closely by Big Cable’s critics. “In DC there is often an attitude that the only way to solve our problems is to hand them over to big business. Chattanooga is a reminder that the best solutions are often local and work out better than handing over control to Comcast or AT&T to do whatever they want with us,” said Chris Mitchell, director of community broadband networks at advocacy group the Institute for Local Self-Reliance.

On Friday, the US cable industry called on the FCC to block Chattanooga’s plan to expand, as well as a similar plan for Wilson, North Carolina.

“The success of public broadband is a mixed record, with numerous examples of failures,” USTelecom said in a blog post. “With state taxpayers on the financial hook when a municipal broadband network goes under, it is entirely reasonable for state legislatures to be cautious in limiting or even prohibiting that activity.”

Mayor Berke has dealt with requests for visits from everyone from tiny rural communities to “humungous international cities”. “You don’t see many mid-sized cities that have the kind of activity that we have right now in Chattanooga,” he said. “What the Gig did was change the idea of what our city could be. Mid-sized southern cities are not generally seen as being ahead of the technological curve, the Gig changed that. We now have people coming in looking to us as a leader.”

It’s still early days but there have already been notable successes. In addition to Access America’s sale for an undisclosed sum, last year restaurant booking site OpenTable bought a local company, QuickCue, for $11.5m. “That’s a great example of a story that just doesn’t happen in other mid-sized southern cities,” said Berke.

But it’s what Chattanooga can do next that has the local tech community buzzed.

EPB’s high-speed network came about after it decided to set up a smart electric grid in order to cut power outages. EPB estimated it would take 10 years to build the system and raised $170m through a municipal bond to pay for it. In 2009 president Barack Obama launched the American Recovery and Reinvestment Act, a stimulus programme aimed at getting the US economy back on track amid the devastation of the recession. EPB was awarded $111m to get its smart grid up and running. Less than three years later the whole service territory was built.

The fibre-optic network uses IntelliRupter PulseClosers, made by S&C Electric, that can reroute power during outages. The University of California at Berkeley estimates that power outages cost the US economy $80bn a year through business disruption with manufacturers stopping their lines and restaurants closing. Chattanooga’s share of that loss was about $100m, EPB estimates. The smart grid can detect a fault in milliseconds and route power around problems. Since the system was installed the duration of power outages has been cut in half.

But it was the other uses of that fiber that fired up enthusiasm in Chattanooga. “When we first started talking about this and the uses of the smart grid we would say to customers and community groups ‘Oh and it can also offer very high-speed internet, TV and phone.’ The electric power stuff was no longer of interest. This is what people got excited about and it’s the same today,” said EPB vice president Danna Bailey.

Read the entire story here.

Image: Chattanooga, TN skyline. Courtesy of Wikipedia.

Dinosaurs of Retail

Shopping malls in the United States were in their prime in the 1970s and ’80s. Many had positioned themselves as a bright, clean, utopian alternative to inner-city blight and decay. A quarter of a century on, while the mega-malls may be thriving, their numerous smaller suburban brethren are seeing lower sales. As internet shopping and retailing pervades all reaches of our society, many midsize malls are decaying or shutting down completely. Documentary photographer Seph Lawless captures this fascinating transition in a new book: Black Friday: the Collapse of the American Shopping Mall.

From the Guardian:

It is hard to believe there has ever been any life in this place. Shattered glass crunches under Seph Lawless’s feet as he strides through its dreary corridors. Overhead lights attached to ripped-out electrical wires hang suspended in the stale air and fading wallpaper peels off the walls like dead skin.

Lawless sidesteps debris as he passes from plot to plot in this retail graveyard called Rolling Acres Mall in Akron, Ohio. The shopping centre closed in 2008, and its largest retailers, which had tried to make it as standalone stores, emptied out by the end of last year. When Lawless stops to overlook a two-storey opening near the mall’s once-bustling core, only an occasional drop of water, dribbling through missing ceiling tiles, breaks the silence.

“You came, you shopped, you dressed nice – you went to the mall. That’s what people did,” says Lawless, a pseudonymous photographer who grew up in a suburb of nearby Cleveland. “It was very consumer-driven and kind of had an ugly side, but there was something beautiful about it. There was something there.”

Gazing down at the motionless escalators, dead plants and empty benches below, he adds: “It’s still beautiful, though. It’s almost like ancient ruins.”

Dying shopping malls are speckled across the United States, often in middle-class suburbs wrestling with socioeconomic shifts. Some, like Rolling Acres, have already succumbed. Estimates on the share that might close or be repurposed in coming decades range from 15 to 50%. Americans are returning downtown; online shopping is taking a 6% bite out of brick-and-mortar sales; and to many iPhone-clutching, city-dwelling and frequently jobless young people, the culture that spawned satire like Mallrats seems increasingly dated, even cartoonish.

According to longtime retail consultant Howard Davidowitz, numerous midmarket malls, many of them born during the country’s suburban explosion after the second world war, could very well share Rolling Acres’ fate. “They’re going, going, gone,” Davidowitz says. “They’re trying to change; they’re trying to get different kinds of anchors, discount stores … [But] what’s going on is the customers don’t have the fucking money. That’s it. This isn’t rocket science.”

Shopping culture follows housing culture. Sprawling malls were therefore a natural product of the postwar era, as Americans with cars and fat wallets sprawled to the suburbs. They were thrown up at a furious pace as shoppers fled cities, peaking at a few hundred per year at one point in the 1980s, according to Paco Underhill, an environmental psychologist and author of Call of the Mall: The Geography of Shopping. Though construction has since tapered off, developers left a mall overstock in their wake.

Currently, the US contains around 1,500 of the expansive “malls” of suburban consumer lore. Most share a handful of bland features. Brick exoskeletons usually contain two storeys of inward-facing stores separated by tile walkways. Food courts serve mediocre pizza. Parking lots are big enough to easily misplace a car. And to anchor them economically, malls typically depend on department stores: huge vendors offering a variety of products across interconnected sections.

For mid-century Americans, these gleaming marketplaces provided an almost utopian alternative to the urban commercial district, an artificial downtown with less crime and fewer vermin. As Joan Didion wrote in 1979, malls became “cities in which no one lives but everyone consumes”. Peppered throughout disconnected suburbs, they were a place to see and be seen, something shoppers have craved since the days of the Greek agora. And they quickly matured into a self-contained ecosystem, with their own species – mall rats, mall cops, mall walkers – and an annual feeding frenzy known as Black Friday.

“Local governments had never dealt with this sort of development and were basically bamboozled [by developers],” Underhill says of the mall planning process. “In contrast to Europe, where shopping malls are much more a product of public-private negotiation and funding, here in the US most were built under what I call ‘cowboy conditions’.”

Shopping centres in Europe might contain grocery stores or childcare centres, while those in Japan are often built around mass transit. But the suburban American variety is hard to get to and sells “apparel and gifts and damn little else”, Underhill says.

Nearly 700 shopping centres are “super-regional” megamalls, retail leviathans usually of at least 1 million square feet and upward of 80 stores. Megamalls typically outperform their 800 slightly smaller, “regional” counterparts, though size and financial health don’t overlap entirely. It’s clearer, however, that luxury malls in affluent areas are increasingly forcing the others to fight for scraps. Strip malls – up to a few dozen tenants conveniently lined along a major traffic artery – are retail’s bottom feeders and so well-suited to the new environment. But midmarket shopping centres have begun dying off alongside the middle class that once supported them. Regional malls have suffered at least three straight years of declining profit per square foot, according to the International Council of Shopping Centres (ICSC).

Read the entire story here.

Image: Mall of America. Courtesy of Wikipedia.

Your Tax Dollars At Work — Leetspeak

It’s fascinating to see what our government agencies are doing with some of our hard-earned tax dollars.

In this head-scratching example, the FBI — the FBI’s Intelligence Research Support Unit, no less — has just completed an 83-page glossary of Internet slang or “leetspeak”. LOL and Ugh! (the latter is not an acronym).

Check out the document via Muckrock here — they obtained the “secret” document through the Freedom of Information Act.

From the Washington Post:

The Internet is full of strange and bewildering neologisms, which anyone but a text-addled teen would struggle to understand. So the fine, taxpayer-funded people of the FBI — apparently not content to trawl Urban Dictionary, like the rest of us — compiled a glossary of Internet slang.

An 83-page glossary. Containing nearly 3,000 terms.

The glossary was recently made public through a Freedom of Information request by the group MuckRock, which posted the PDF, called “Twitter shorthand,” online. Despite its name, this isn’t just Twitter slang: As the FBI’s Intelligence Research Support Unit explains in the introduction, it’s a primer on shorthand used across the Internet, including in “instant messages, Facebook and Myspace.” As if that Myspace reference wasn’t proof enough that the FBI’s a tad out of touch, the IRSU then promises the list will prove useful both professionally and “for keeping up with your children and/or grandchildren.” (Your tax dollars at work!)

All of these minor gaffes could be forgiven, however, if the glossary itself was actually good. Obviously, FBI operatives and researchers need to understand Internet slang — the Internet is, increasingly, where crime goes down these days. But then we get things like ALOTBSOL (“always look on the bright side of life”) and AMOG (“alpha male of group”) … within the first 10 entries.

ALOTBSOL has, for the record, been tweeted fewer than 500 times in the entire eight-year history of Twitter. AMOG has been tweeted far more often, but usually in Spanish … as a misspelling, it would appear, of “amor” and “amigo.”

Among the other head-scratching terms the FBI considers can’t-miss Internet slang:

  1. AYFKMWTS (“are you f—— kidding me with this s—?”) — 990 tweets
  2. BFFLTDDUP (“best friends for life until death do us part”) — 414 tweets
  3. BOGSAT (“bunch of guys sitting around talking”) — 144 tweets
  4. BTDTGTTSAWIO (“been there, done that, got the T-shirt and wore it out”) — 47 tweets
  5. BTWITIAILWY (“by the way, I think I am in love with you”) — 535 tweets
  6. DILLIGAD (“does it look like I give a damn?”) — 289 tweets
  7. DITYID (“did I tell you I’m depressed?”) — 69 tweets
  8. E2EG (“ear-to-ear grin”) — 125 tweets
  9. GIWIST (“gee, I wish I said that”) — 56 tweets
  10. HCDAJFU (“he could do a job for us”) — 25 tweets
  11. IAWTCSM (“I agree with this comment so much”) — 20 tweets
  12. IITYWIMWYBMAD (“if I tell you what it means will you buy me a drink?”) — 250 tweets
  13. LLTA (“lots and lots of thunderous applause”) — 855 tweets
  14. NIFOC (“naked in front of computer”) — 1,065 tweets, most of them referring to acronym guides like this one.
  15. PMYMHMMFSWGAD (“pardon me, you must have mistaken me for someone who gives a damn”) — 128 tweets
  16. SOMSW (“someone over my shoulder watching”) — 170 tweets
  17. WAPCE (“women are pure concentrated evil”) — 233 tweets, few relating to women
  18. YKWRGMG (“you know what really grinds my gears?”) — 1,204 tweets

In all fairness to the FBI, they do get some things right: “crunk” is helpfully defined as “crazy and drunk,” FF is “a recommendation to follow someone referenced in the tweet,” and a whole range of online patois is translated to its proper English equivalent: hafta is “have to,” ima is “I’m going to,” kewt is “cute.”
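Functionally, the FBI's 3,000-term glossary is nothing more exotic than a lookup table from acronym to plain English. A minimal sketch in Python — the handful of entries is copied from the list above, and the names `LEETSPEAK` and `expand` are my own, purely illustrative:

```python
# A tiny slice of the glossary as a plain dictionary; the real
# FBI document runs to nearly 3,000 entries over 83 pages.
LEETSPEAK = {
    "BOGSAT": "bunch of guys sitting around talking",
    "E2EG": "ear-to-ear grin",
    "GIWIST": "gee, I wish I said that",
    "YKWRGMG": "you know what really grinds my gears?",
}

def expand(term: str) -> str:
    """Return the expansion of a slang term, or the term unchanged if unknown."""
    return LEETSPEAK.get(term.upper(), term)

print(expand("e2eg"))   # ear-to-ear grin
print(expand("LOL"))    # LOL (not in this tiny sample)
```

Which rather underlines the point: the whole exercise is a dictionary lookup that Urban Dictionary already does for free.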

Read the entire article here.

Image: FBI Seal. Courtesy of U.S. Government.

Life and Death: Sharing Startups

The great cycle of re-invention spawned by the Internet and mobile technologies continues apace. This time it’s the entrepreneurial businesses laying the foundation for the sharing economy — whether that be beds, rooms, clothes, tuition, bicycles or cars. A few succeed to become great new businesses; most fail.

From the WSJ:

A few high-profile “sharing-economy” startups are gaining quick traction with users, including those that let consumers rent apartments and homes like Airbnb Inc., or get car rides, such as Uber Technologies Inc.

Both Airbnb and Uber are valued in the billions of dollars, a sign that investors believe the segment is hot—and a big reason why more entrepreneurs are embracing the business model.

At MassChallenge, a Boston-based program to help early-stage entrepreneurs, about 9% of participants in 2013 were starting companies to connect consumers or businesses with products and services that would otherwise go unused. That compares with about 5% in 2010, for instance.

“We’re bullish on the sharing economy, and we’ll definitely make more investments in it,” said Sam Altman, president of Y Combinator, a startup accelerator in Mountain View, Calif., and one of Airbnb’s first investors.

Yet at least a few dozen sharing-economy startups have failed since 2012, including BlackJet, a Florida-based service that touted itself as the “Uber for jet travel,” and Tutorspree, a New York service dubbed the “Airbnb for tutors.” Most ran out of money, following struggles that ranged from difficulties building a critical mass of supply and demand, to higher-than-expected operating costs.

“We ended up being unable to consistently produce a level of demand on par with what we needed to scale rapidly,” said Aaron Harris, co-founder of Tutorspree, which launched in January 2011 and shuttered in August 2013.

“If you have to reacquire the customer every six months, they’ll forget you,” said Howard Morgan, co-founder of First Round Capital, which was an investor in BlackJet. “A private jet ride isn’t something you do every day. If you’re very wealthy, you have your own plane.” By comparison, he added that he recently used Uber’s ride-sharing service three times in one day.

Consider carpooling startup Ridejoy, for example. During its first year in 2011, its user base was growing by about 30% a month, with more than 25,000 riders and drivers signed up, and an estimated 10,000 rides completed, said Kalvin Wang, one of its three founders. But by the spring of 2013, Ridejoy, which had raised $1.3 million from early-stage investors like Freestyle Capital, was facing ferocious competition from free alternatives, such as carpooling forums on college websites.

Also, some riders could—and did—begin to sidestep the middleman. Many skipped paying its 10% transaction fee by handing their drivers cash instead of paying by credit card on Ridejoy’s website or mobile app. Others just didn’t get it, and even 25,000 users wasn’t sufficient to sustain the business. “You never really have enough inventory,” said Mr. Wang.

After it folded in the summer of 2013, Ridejoy returned about half of its funding to investors, according to Mr. Wang. Alexis Ohanian, an entrepreneur in Brooklyn, N.Y., who was an investor in Ridejoy, said it “could just be the timing or execution that was off.” He cited the success so far of Lyft Inc., the two-year-old San Francisco company that is valued at more than $700 million and offers a short-distance ride-sharing service. “It turned out the short rides are what the market really wanted,” Mr. Ohanian said.

One drawback is that because much of the revenue a sharing business generates goes directly back to the suppliers—of bedrooms, parking spots, vehicles or other “shared” assets—the underlying business may be continuously strapped for cash.

Read the entire article here.

The Magnificent Seven

Magnificent-seven

Actually, these seven will not save your village from bandits. Nor will they ride triumphant into the sunset on horseback. These seven are more mundane, but they are nonetheless shrouded in a degree of mystery, albeit of a rather technical kind. They are the holders of the seven keys that control the Internet’s core directory — the Domain Name System. Without it, the Internet’s billions of users would not be able to browse or search or shop or email or text.
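The directory idea is simple at heart: a register that maps names to numeric addresses. A toy sketch (a real resolver queries a distributed hierarchy of name servers rather than a local table; the example address is the one the Guardian piece below quotes for theguardian.com, and may no longer be current):

```python
# Toy illustration of the DNS idea: a name-to-address "phone book".
# A real resolver walks the distributed DNS hierarchy; this dict is
# just a sketch using the example address quoted in the article.
DNS_TABLE = {
    "theguardian.com": "77.91.251.10",
}

def resolve(hostname: str) -> str:
    """Return the IP address registered for a hostname."""
    try:
        return DNS_TABLE[hostname]
    except KeyError:
        raise LookupError(f"no DNS record for {hostname!r}")

print(resolve("theguardian.com"))  # → 77.91.251.10
```

The security problem the keyholders address is precisely that nothing in this basic lookup proves the answer is genuine — hence the signed, verified registers described below.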

From the Guardian:

In a nondescript industrial estate in El Segundo, a boxy suburb in south-west Los Angeles just a mile or two from LAX international airport, 20 people wait in a windowless canteen for a ceremony to begin. Outside, the sun is shining on an unseasonably warm February day; inside, the only light comes from the glare of halogen bulbs.

There is a strange mix of accents – predominantly American, but smatterings of Swedish, Russian, Spanish and Portuguese can be heard around the room, as men and women (but mostly men) chat over pepperoni pizza and 75-cent vending machine soda. In the corner, an Asteroids arcade machine blares out tinny music and flashing lights.

It might be a fairly typical office scene, were it not for the extraordinary security procedures that everyone in this room has had to complete just to get here, the sort of measures normally reserved for nuclear launch codes or presidential visits. The reason we are all here sounds like the stuff of science fiction, or the plot of a new Tom Cruise franchise: the ceremony we are about to witness sees the coming together of a group of people, from all over the world, who each hold a key to the internet. Together, their keys create a master key, which in turn controls one of the central security measures at the core of the web. Rumours about the power of these keyholders abound: could their key switch off the internet? Or, if someone somehow managed to bring the whole system down, could they turn it on again?

The keyholders have been meeting four times a year, twice on the east coast of the US and twice here on the west, since 2010. Gaining access to their inner sanctum isn’t easy, but last month I was invited along to watch the ceremony and meet some of the keyholders – a select group of security experts from around the world. All have long backgrounds in internet security and work for various international institutions. They were chosen for their geographical spread as well as their experience – no one country is allowed to have too many keyholders. They travel to the ceremony at their own, or their employer’s, expense.

What these men and women control is the system at the heart of the web: the domain name system, or DNS. This is the internet’s version of a telephone directory – a series of registers linking web addresses to a series of numbers, called IP addresses. Without these addresses, you would need to know a long sequence of numbers for every site you wanted to visit. To get to the Guardian, for instance, you’d have to enter “77.91.251.10” instead of theguardian.com.

The master key is part of a new global effort to make the whole domain name system secure and the internet safer: every time the keyholders meet, they are verifying that each entry in these online “phone books” is authentic. This prevents a proliferation of fake web addresses which could lead people to malicious sites, used to hack computers or steal credit card details.

The east and west coast ceremonies each have seven keyholders, with a further seven people around the world who could access a last-resort measure to reconstruct the system if something calamitous were to happen. Each of the 14 primary keyholders owns a traditional metal key to a safety deposit box, which in turn contains a smartcard, which in turn activates a machine that creates a new master key. The backup keyholders have something a bit different: smartcards that contain a fragment of code needed to build a replacement key-generating machine. Once a year, these shadow holders send the organisation that runs the system – the Internet Corporation for Assigned Names and Numbers (Icann) – a photograph of themselves with that day’s newspaper and their key, to verify that all is well.
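The backup smartcards holding “a fragment of code needed to build a replacement key-generating machine” follow a classic secret-splitting pattern. As a deliberately simplified sketch (ICANN’s actual scheme is not specified here; real deployments typically use a threshold scheme such as Shamir’s, where any k of n fragments suffice, whereas this XOR version needs all of them):

```python
import secrets

def split_secret(secret: bytes, n: int) -> list:
    """Split `secret` into n fragments; ALL n are required to rebuild it.
    Each fragment alone is indistinguishable from random bytes."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    # The final fragment is the secret XORed with all the random ones,
    # so XORing every fragment together cancels out the randomness.
    last = secret
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def combine_shares(shares: list) -> bytes:
    """XOR all fragments together to recover the original secret."""
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out
```

The design property that matters for the ceremony is that any single fragment reveals nothing: only the full quorum, physically assembled, can reconstruct the key material.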

The fact that the US-based, not-for-profit organisation Icann – rather than a government or an international body – has one of the biggest jobs in maintaining global internet security has inevitably come in for criticism. Today’s occasionally over-the-top ceremony (streamed live on Icann’s website) is intended to prove how seriously they are taking this responsibility. It’s one part The Matrix (the tech and security stuff) to two parts The Office (pretty much everything else).

For starters: to get to the canteen, you have to walk through a door that requires a pin code, a smartcard and a biometric hand scan. This takes you into a “mantrap”, a small room in which only one door at a time can ever be open. Another sequence of smartcards, handprints and codes opens the exit. Now you’re in the break room.

Already, not everything has gone entirely to plan. Leaning next to the Atari arcade machine, ex-state department official Rick Lamb, smartly suited and wearing black-rimmed glasses (he admits he’s dressed up for the occasion), is telling someone that one of the on-site guards had asked him out loud, “And your security pin is 9925, yes?” “Well, it was…” he says, with an eye-roll. Looking in our direction, he says it’s already been changed.

Lamb is now a senior programme manager for Icann, helping to roll out the new, secure system for verifying the web. This is happening fast, but it is not yet fully in play. If the master key were lost or stolen today, the consequences might not be calamitous: some users would receive security warnings, some networks would have problems, but not much more. But once everyone has moved to the new, more secure system (this is expected in the next three to five years), the effects of losing or damaging the key would be far graver. While every server would still be there, nothing would connect: it would all register as untrustworthy. The whole system, the backbone of the internet, would need to be rebuilt over weeks or months. What would happen if an intelligence agency or hacker – the NSA or Syrian Electronic Army, say – got hold of a copy of the master key? It’s possible they could redirect specific targets to fake websites designed to exploit their computers – although Icann and the keyholders say this is unlikely.

Standing in the break room next to Lamb is Dmitry Burkov, one of the keyholders, a brusque and heavy-set Russian security expert on the boards of several internet NGOs, who has flown in from Moscow for the ceremony. “The key issue with internet governance is always trust,” he says. “No matter what the forum, it always comes down to trust.” Given the tensions between Russia and the US, and Russia’s calls for new organisations to be put in charge of the internet, does he have faith in this current system? He gestures to the room at large: “They’re the best part of Icann.” I take it he means he likes these people, and not the wider organisation, but he won’t be drawn further.

It’s time to move to the ceremony room itself, which has been cleared for the most sensitive classified information. No electrical signals can come in or out. Building security guards are barred, as are cleaners. To make sure the room looks decent for visitors, an east coast keyholder, Anne-Marie Eklund Löwinder of Sweden, has been in the day before to vacuum with a $20 dustbuster.

We’re about to begin a detailed, tightly scripted series of more than 100 actions, all recorded to the minute using the GMT time zone for consistency. These steps are a strange mix of high-security measures lifted straight from a thriller (keycards, safe combinations, secure cages), coupled with more mundane technical details – a bit of trouble setting up a printer – and occasional bouts of farce. In short, much like the internet itself.

Read the entire article here.

Image: The Magnificent Seven, movie poster. Courtesy of Wikia.

Fast Fashion and Smartphones

google-search-teen-fashion

Teen retail isn’t what it used to be. Once dominated by the likes of Aeropostale, Abercrombie and Fitch, and American Eagle, the sector is in a downward spiral. Many retail analysts place the blame on the internet. While discretionary income is down and unemployment is up among teens, two other key factors are driving the change: first, smartphones loaded with apps seem to be more important to a teen’s self-identity than an emblazoned T-shirt; second, fast-fashion houses such as H&M can churn out fresh designs at a fraction of the cost, thanks to fully integrated, on-demand supply chains. Perhaps the silver lining in all of this, if you could call it that, is that malls may soon become the hang-out for old-timers.

From the NYT:

Luring young shoppers into traditional teenage clothing stores has become a tough sell.

When 19-year-old Tsarina Merrin thinks of a typical shopper at some of the national chains, she doesn’t think of herself, her friends or even contemporaries.

“When I think of who is shopping at Abercrombie,” she said, “I think it’s more of people’s parents shopping for them.”

Sales are down across the shelves of many traditional teenage apparel retailers, and some analysts and others suggest that it’s not just a tired fashion sense causing the slump. The competition for teenage dollars, at a time of high unemployment within that age group, spans from more stores to shop in to more tempting technology.

And sometimes phones loaded with apps or a game box trump the latest in jeans.

Mainstays in the industry like Abercrombie & Fitch, American Eagle Outfitters and Aéropostale, which dominated teenage closets for years, have been among those hit hard.

The grim reports of the last holiday season have already proved punishing for senior executives at the helm of a few retailers. In a move that caught many analysts by surprise, the chief executive of American Eagle, Robert L. Hanson, announced he was leaving the company last week. And on Tuesday, Abercrombie announced they were making several changes to the company’s board and leadership, including separating the role of chief executive and chairman.

Aside from those shake-ups, analysts are saying they do not expect much improvement in this retail sector any time soon.

According to a survey of analysts conducted by Thomson Reuters, sales at teenage apparel retailers open for more than a year, like Wet Seal, Zumiez, Abercrombie and American Eagle, are expected to be 6.4 percent lower in the fourth quarter over the previous period. That is worse than any other retail category.

“It’s enough to make you think the teen is going to be walking around naked,” said John D. Morris, an analyst at BMO Capital Markets. “What happened to them?”

Paul Lejuez, an analyst at Wells Fargo, said he and his team put out a note in May on the health of the teenage sector and department stores called “Watch Out for the Kid With the Cough.” (Aéropostale was the coughing teenager.) Nonetheless, he said, “We ended up being surprised just how bad things got so quickly. There’s really no sign of life anywhere among the traditional players.”

Causes are ticked off easily. Mentioned often is the high teenage unemployment rate, reaching 20.2 percent among 16- to 19-year-olds, far above the national rate of 6.7 percent.

Cheap fashion has also driven a more competitive market. So-called fast-fashion companies, like Forever 21 and H&M, which sell trendy clothes at low prices, have muscled into the space, while some department stores and discount retailers like T. J. Maxx now cater to teenagers, as well.

“You can buy a plaid shirt at Abercrombie that’s like $70,” said Daniela Donayre, 17, standing in a Topshop in Manhattan. “Or I can go to Forever 21 and buy the same shirt for $20.”

Online shopping, which has been roiling the industry for years, may play an especially pronounced role in the teenage sector, analysts say. A study of a group of teenagers released in the fall by Piper Jaffray found that more than three-fourths of young men and women said they shopped online.

Not only did teenagers grow up on the Internet, but it has shaped and accelerated fashion cycles. Things take off quickly and fade even faster, watched by teenagers who are especially sensitive to the slightest shift in the winds of a trend.

Matthew McClintock, an analyst at Barclays, pointed to Justin Bieber as an example.

“Today, if you saw that Justin Bieber got arrested drag-racing,” Mr. McClintock said, “and you saw in the picture that he had on a cool red shirt, then you can go online and find that cool red shirt and have it delivered to you in two days from some boutique in Los Angeles.

“Ten years ago, teens were dependent on going to Abercrombie & Fitch and buying from the select items that Mike Jeffries, the C.E.O., thought would be popular nine months ago.”

Read the entire story here.

Image courtesy of Google Search.

Your Toaster on the Internet

Toaster

Billions of people have access to the Internet. Now, whether a significant proportion of them do anything productive with this tremendous resource is open to debate — many prefer only to post pictures of their breakfasts or of themselves, or to watch the latest viral video hit.

Despite all these humans clogging up the Tubes of the Internets, most traffic along the information superhighway is in fact not even human. Over 60 percent of all activity comes from computer systems such as web crawlers, botnets and, increasingly, industrial control systems, ranging from security and monitoring devices to in-home devices such as your thermostat, refrigerator, smart TV, smart toilet and toaster. So, soon Google will know what you eat and when, and your fridge will tell you what you should eat (or not) based on what it knows of your body mass index (BMI) from your bathroom scales.

Jokes aside, the Internet of Things (IoT) promises to herald an even more significant information revolution over the coming decades as all our devices and machines, from home to farm to factory, are connected and inter-connected.

From Ars Technica:

If you believe what the likes of LG and Samsung have been promoting this week at CES, everything will soon be smart. We’ll be able to send messages to our washing machines, run apps on our fridges, and have TVs as powerful as computers. It may be too late to resist this movement, with smart TVs already firmly entrenched in the mid-to-high end market, but resist it we should. That’s because the “Internet of things” stands a really good chance of turning into the “Internet of unmaintained, insecure, and dangerously hackable things.”

These devices will inevitably be abandoned by their manufacturers, and the result will be lots of “smart” functionality—fridges that know what we buy and when, TVs that know what shows we watch—all connected to the Internet 24/7, all completely insecure.

While the value of smart watches or washing machines isn’t entirely clear, at least some smart devices—I think most notably phones and TVs—make sense. The utility of the smartphone, an Internet-connected computer that fits in your pocket, is obvious. The growth of streaming media services means that your antenna or cable box are no longer the sole source of televisual programming, so TVs that can directly use these streaming services similarly have some appeal.

But these smart features make the devices substantially more complex. Your smart TV is not really a TV so much as an all-in-one computer that runs Android, WebOS, or some custom operating system of the manufacturer’s invention. And where once it was purely a device for receiving data over a coax cable, it’s now equipped with bidirectional networking interfaces, exposing the Internet to the TV and the TV to the Internet.

The result is a whole lot of exposure to security problems. Even if we assume that these devices ship with no known flaws—a questionable assumption in and of itself if SOHO routers are anything to judge by—a few months or years down the line, that will no longer be the case. Flaws and insecurities will be uncovered, and the software components of these smart devices will need to be updated to address those problems. They’ll need these updates for the lifetime of the device, too. Old software is routinely vulnerable to newly discovered flaws, so there’s no point in any reasonable timeframe at which it’s OK to stop updating the software.

In addition to security, there’s also a question of utility. Netflix and Hulu may be hot today, but that may not be the case in five years’ time. New services will arrive; old ones will die out. Even if the service lineup remains the same, its underlying technology is unlikely to be static. In the future, Netflix, for example, might want to deprecate old APIs and replace them with new ones; Netflix apps will need to be updated to accommodate the changes. I can envision changes such as replacing the H.264 codec with H.265 (for reduced bandwidth and/or improved picture quality), which would similarly require updated software.

To remain useful, app platforms need up-to-date apps. As such, for your smart device to remain safe, secure, and valuable, it needs a lifetime of software fixes and updates.

A history of non-existent updates

Herein lies the problem, because if there’s one thing that companies like Samsung have demonstrated in the past, it’s a total unwillingness to provide a lifetime of software fixes and updates. Even smartphones, which are generally assumed to have a two-year lifecycle (with replacements driven by cheap or “free” contract-subsidized pricing), rarely receive updates for the full two years (Apple’s iPhone being the one notable exception).

A typical smartphone bought today will remain useful and usable for at least three years, but its system software support will tend to dry up after just 18 months.

This isn’t surprising, of course. Samsung doesn’t make any money from making your two-year-old phone better. Samsung makes its money when you buy a new Samsung phone. Improving the old phones with software updates would cost money, and that tends to limit sales of new phones. For Samsung, it’s lose-lose.

Our fridges, cars, and TVs are not even on a two-year replacement cycle. Even if you do replace your TV after it’s a couple years old, you probably won’t throw the old one away. It will just migrate from the living room to the master bedroom, and then from the master bedroom to the kids’ room. Likewise, it’s rare that a three-year-old car is simply consigned to the scrap heap. It’s given away or sold off for a second, third, or fourth “life” as someone else’s primary vehicle. Your fridge and washing machine will probably be kept until they blow up or you move houses.

These are all durable goods, kept for the long term without any equivalent to the smartphone carrier subsidy to promote premature replacement. If they’re going to be smart, software-powered devices, they’re going to need software lifecycles that are appropriate to their longevity.

That costs money, it requires a commitment to providing support, and it does little or nothing to promote sales of the latest and greatest devices. In the software world, there are companies that provide this level of support—the Microsofts and IBMs of the world—but it tends to be restricted to companies that have at least one eye on the enterprise market. In the consumer space, you’re doing well if you’re getting updates and support five years down the line. Consumer software fixes a decade later are rare, especially if there’s no system of subscriptions or other recurring payments to monetize the updates.

Of course, the companies building all these products have the perfect solution. Just replace all our stuff every 18-24 months. Fridge no longer getting updated? Not a problem. Just chuck out the still perfectly good fridge you have and buy a new one. This is, after all, the model that they already depend on for smartphones. Of course, it’s not really appropriate even to smartphones (a mid/high-end phone bought today will be just fine in three years), much less to stuff that will work well for 10 years.

These devices will be abandoned by their manufacturers, and it’s inevitable that they are abandoned long before they cease to be useful.

Superficially, this might seem to be no big deal. Sure, your TV might be insecure, but your NAT router will probably provide adequate protection, and while it wouldn’t be tremendously surprising to find that it has some passwords for online services or other personal information on it, TVs are sufficiently diverse that people are unlikely to expend too much effort targeting specific models.

Read the entire story here.

Image: A classically styled chrome two-slot automatic electric toaster. Courtesy of Wikipedia.

Playing Music, Playing Ads – Same Difference

pandora

The internet music radio service Pandora knows a lot about you and another 200 million or so registered members. If you use the service regularly, it comes to recognize your musical likes and dislikes. In this way Pandora learns to deliver more music programming that it thinks you will like, and it works rather well.

But the story does not end there, since Pandora is not just fun; it’s a business. In its quest to monetize you even more effectively, Pandora is seeking to pair personalized ads with your specific musical tastes. So, beware forthcoming ads tailored to your music preferences — metalheads, you have been warned!

From the NYT:

Pandora, the Internet radio service, is plying a new tune.

After years of customizing playlists to individual listeners by analyzing components of the songs they like, then playing them tracks with similar traits, the company has started data-mining users’ musical tastes for clues about the kinds of ads most likely to engage them.

“It’s becoming quite apparent to us that the world of playing the perfect music to people and the world of playing perfect advertising to them are strikingly similar,” says Eric Bieschke, Pandora’s chief scientist.

Consider someone who’s in an adventurous musical mood on a weekend afternoon, he says. One hypothesis is that this listener may be more likely to click on an ad for, say, adventure travel in Costa Rica than a person in an office on a Monday morning listening to familiar tunes. And that person at the office, Mr. Bieschke says, may be more inclined to respond to a more conservative travel ad for a restaurant-and-museum tour of Paris. Pandora is now testing hypotheses like these by, among other methods, measuring the frequency of ad clicks. “There are a lot of interesting things we can do on the music side that bridge the way to advertising,” says Mr. Bieschke, who led the development of Pandora’s music recommendation engine.
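The testing Bieschke describes — measuring the frequency of ad clicks across listening contexts — amounts to comparing click-through rates segment by segment. A minimal sketch of that bookkeeping (the contexts, ads and numbers here are illustrative inventions, not Pandora’s data or API):

```python
from collections import defaultdict

# Hypothetical ad-impression log: (listening_context, ad, clicked).
impressions = [
    ("adventurous-weekend", "costa-rica-travel", True),
    ("adventurous-weekend", "costa-rica-travel", True),
    ("adventurous-weekend", "costa-rica-travel", False),
    ("familiar-office", "costa-rica-travel", False),
    ("familiar-office", "costa-rica-travel", False),
    ("familiar-office", "costa-rica-travel", True),
    ("familiar-office", "paris-tour", True),
    ("familiar-office", "paris-tour", True),
]

def click_through_rates(events):
    """Return the click-through rate for each (context, ad) pair."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for context, ad, was_clicked in events:
        shown[(context, ad)] += 1
        if was_clicked:
            clicked[(context, ad)] += 1
    return {key: clicked[key] / shown[key] for key in shown}

rates = click_through_rates(impressions)
print(rates)
```

A higher rate for the adventure ad in the adventurous-listening segment than in the office segment would support the hypothesis in the example above; real systems would also test whether the difference is statistically significant before acting on it.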

A few services, like Pandora, Amazon and Netflix, were early in developing algorithms to recommend products based on an individual customer’s preferences or those of people with similar profiles. Now, some companies are trying to differentiate themselves by using their proprietary data sets to make deeper inferences about individuals and try to influence their behavior.

This online ad customization technique is known as behavioral targeting, but Pandora adds a music layer. Pandora has collected song preference and other details about more than 200 million registered users, and those people have expressed their song likes and dislikes by pressing the site’s thumbs-up and thumbs-down buttons more than 35 billion times. Because Pandora needs to understand the type of device a listener is using in order to deliver songs in a playable format, its system also knows whether people are tuning in from their cars, from iPhones or Android phones or from desktops.

So it seems only logical for the company to start seeking correlations between users’ listening habits and the kinds of ads they might be most receptive to.

“The advantage of using our own in-house data is that we have it down to the individual level, to the specific person who is using Pandora,” Mr. Bieschke says. “We take all of these signals and look at correlations that lead us to come up with magical insights about somebody.”

People’s music, movie or book choices may reveal much more than commercial likes and dislikes. Certain product or cultural preferences can give glimpses into consumers’ political beliefs, religious faith, sexual orientation or other intimate issues. That means many organizations now are not merely collecting details about where we go and what we buy, but are also making inferences about who we are.

“I would guess, looking at music choices, you could probably predict with high accuracy a person’s worldview,” says Vitaly Shmatikov, an associate professor of computer science at the University of Texas at Austin, where he studies computer security and privacy. “You might be able to predict people’s stance on issues like gun control or the environment because there are bands and music tracks that do express strong positions.”

Pandora, for one, has a political ad-targeting system that has been used in presidential and congressional campaigns, and even a few for governor. It can deconstruct users’ song preferences to predict their political party of choice. (The company does not analyze listeners’ attitudes to individual political issues like abortion or fracking.)

During the next federal election cycle, for instance, Pandora users tuning into country music acts, stand-up comedians or Christian bands might hear or see ads for Republican candidates for Congress. Others listening to hip-hop tunes, or to classical acts like the Berlin Philharmonic, might hear ads for Democrats.

Because Pandora users provide their ZIP codes when they register, Mr. Bieschke says, “we can play ads only for the specific districts political campaigns want to target,” and “we can use their music to predict users’ political affiliations.” But he cautioned that the predictions about users’ political parties are machine-generated forecasts for groups of listeners with certain similar characteristics and may not be correct for any particular listener.

Shazam, the song recognition app with 80 million unique monthly users, also plays ads based on users’ preferred music genres. “Hypothetically, a Ford F-150 pickup truck might over-index to country music listeners,” says Kevin McGurn, Shazam’s chief revenue officer. For those who prefer U2 and Coldplay, a demographic that skews to middle-age people with relatively high incomes, he says, the app might play ads for luxury cars like Jaguars.

Read the entire article here.

Image courtesy of Pandora.

What About Telecleaning?

suitable-technologies

Telepresence devices and systems made some ripples in the vast oceans of new technology at the recent CES (Consumer Electronics Show) in Las Vegas. Telepresence allows anyone armed with an internet-connected camera to beam themselves elsewhere with the aid of a remote-controlled screen on wheels. Some clinics and workplaces have experimented with the technology, allowing medical staff and workers to be virtually present in one location while physically remote. Now, a handful of innovators are experimenting with telepresence for the home market.

So, sick of being around the kids, or need to see grandma but can’t get away from the office? Or, even better, buy one for your office so you can replace yourself with a robot, work from home and never visit the workplace again. Well, a telepresence robot for a mere $1,000 may be a very sound investment.

Sounds great, but where is the robot that will tidy, clean, dust, cook, repair, mow, launder…

From Technology Review:

When Scott Hassan went to Las Vegas for the International Consumer Electronics Show last week, he was still able to get the kids up in the morning and help them make breakfast at his California home. Hassan used a remote-controlled screen on wheels to spend time with his family, and today his company, Suitable Technologies, started taking orders for Beam+, a version of the same telepresence technology aimed at home users. This summer, it will also be available via Amazon and other retailers.

Hassan thinks the Beam+, essentially a 10-inch screen and camera mounted on wheels, will be popular with other businesspeople who want to spend more time with their kids, or those with aging parents they’d like to check up on more often.

Hassan says a person “visiting” aging parents this way could check up on them less obtrusively than via phone, for example by walking around to look for signs they’d taken their medication rather than bluntly asking, or watching to check that they take their pills with their meal. “For people with dementia or Alzheimer’s, I think that being able to see and hear and walk around with a familiar face is a lot better than just a phone call,” he says. “You could also just Beam in and watch Jeopardy! with your grandmother on TV.”

The Beam+ is designed so that once installed in a home, anyone with the login credentials can bring it to life and start moving around. The operator’s interface shows the view from a camera over the screen, as well as a smaller view looking down toward the unit’s base to aid maneuvering. A user drives it by moving a mouse over their view and clicking where they want to go.

The first 1,000 units of the Beam+ can be preordered for $995, with later units expected to cost $1,995. Both prices include the charging dock to which the device must return every two hours. The exterior design of the Beam+ was created by Fred Bould, who designed the Nest thermostat, among other gadgets.

The Beam+ is a cheaper, smaller, and restyled version of the company’s first product, known as the Beam, which is aimed at corporate users (see “Beam Yourself to Work in a Remote-Controlled Body”).

Intel, IBM, and Square all use Beam’s original product to give employees an option somewhere between a conventional video chat and an in-person visit when working with colleagues in distant offices. Hassan says interest has come from more than just technology companies, though. In Vegas he sold two Beam devices to a restaurant owner planning to use them as street barkers; meanwhile, a real-estate agency in California’s Lake Tahoe has started using them to show people around luxury condos.

Several startups and large companies, such as iRobot, which created the Roomba robotic vacuum cleaner, have launched mobile telepresence devices in recent years. However, despite it being clear that many people wish they could travel more easily in their professional and personal lives, the devices have sometimes been clunky (see “The New, More Awkward You”) and remain relatively expensive.

Read the entire article here.

Image: Beam+. Courtesy of Suitable Technologies, Inc.

Zynga: Out to Pasture or Buying the Farm?

By one measure, Zynga’s FarmVille on Facebook (and MSN) is extremely successful: its dedicated, addicted players number in the millions each day. By another measure, making money, Zynga isn’t faring well at all. Despite a valuation of over $3 billion, the company is struggling to find a way to convert virtual game currency into real dollar spend.

How the internet ecosystem manages to reward the lack of real and sustainable value creation is astonishing to those on the outside — but good for those on the inside. Would that all companies could bask in the glory of venture capital and IPO bubbles on such flimsy financial foundations. Quack!

Zynga has been on company deathwatch for a while. Read on to see some of its peers that seem to be on life support.

From ars technica:

HTC

To say that 2013 was a bad year for Taiwanese handset maker HTC is probably something of an understatement. The year was capped off by the indictment of six HTC employees on a variety of charges such as taking kickbacks, falsifying expenses, and leaking company trade secrets—including elements of HTC’s new interface for Android phones. Thomas Chien, the former vice president of design for HTC, was reportedly taking the information to a group in Beijing that was planning to form a new company, according to The Wall Street Journal.

On top of that, despite positive reviews for its flagship HTC One line, the company has been struggling to sell the phone. Blame it on bad marketing, bad execution, or just bad management, but HTC has been beaten down badly by Samsung.

The investigation of Chien started in August, but it was hardly the worst news HTC had last year as the company’s executive ranks thinned and losses mounted. There was reshuffling of deck chairs at the top of the company as CEO Peter Chou handed off chunks of his operational duties to co-founder and chairwoman Cher Wang—giving her control over marketing, sales, and the company’s supply chain in the wake of a parts shortage that hampered the launch of the HTC One. The Wall Street Journal reported that HTC couldn’t get camera parts for the One because suppliers believed “it is no longer a tier one customer,” according to an unnamed executive.

That’s a pretty dramatic fall from HTC’s peak, when the company vaulted from contract manufacturer to major mobile player. Way back in the heady days of 2011, HTC was second only to Apple in US cell phone market share, and it held 9.3 percent of the global market. Now it’s in fourth place in the US, with just 6.7 percent market share based on comScore numbers—behind Google’s Motorola and just ahead of LG Electronics by a hair. Its sales in the last quarter of 2013 were down by 40 percent from last year, and revenues for 2013 were down by 28.6 percent from 2012. With a patent infringement suit from Nokia over chips in the HTC One and One Mini still hanging over its head in the United Kingdom, the company could face a ban on selling some of its phones there.

Executives insist that HTC won’t be sold, especially to a Chinese buyer—the politics of such a deal being toxic to a Taiwanese company. But ironically, the Chinese market is perhaps HTC’s best hope in the long term—the company does more than a third of its business there. The company’s best bet may be going back to manufacturing phones with someone else’s name on the faceplate and leaving the marketing to someone else.

AMD

Advanced Micro Devices is still on deathwatch. Yes, AMD reported a quarterly profit of $48 million in September thanks to a gift from the game console gods (and IBM Power’s fall from grace). But that was hardly enough to jolt the chip company out of what has been a really bad year—and AMD is trying to manage expectations for the results for the final quarter of 2013.

AMD is caught between a rock and a hard place—or more specifically, between Intel and ARM. On the bright side, it probably has nothing to fear from ARM in the low-cost Windows device market considering how horrifically Windows RT fared in 2013. AMD actually gained in market share in the x86 space thanks to the Xbox One and PS4—both of which replace non-x86 consoles. And AMD still holds a substantial chunk of the graphics processor market—and all those potential sales in Bitcoin miners to go with it.

But in the PC space, AMD’s market share declined to a mere 15.8 percent (of what is a much smaller pie than it used to be). And in a future driven increasingly by mobile and low-power devices, AMD hasn’t been able to make any gains with the two low-power chips it introduced in 2013—Kabini and Temash. Those chips were supposed to finally give AMD a competitive footing with Intel on low-cost PCs and tablets, but they ended up being middling in comparison.

All that adds up to 2014 being a very important year for AMD—one that could end with AMD essentially being a graphics and specialty processor chip designer. The company has already divorced itself from its own fabrication capability and slashed its workforce, so there isn’t much more to cut but bone if the markets demand better margins.

Read the entire article here.

Image: FarmVille logo. Courtesy of Wikipedia.

The Future Tubes of the Internets

Back in 1973, when computer scientists Vint Cerf and Robert Kahn sketched out plans to connect a handful of government networks, little did they realize the scale of their invention: TCP/IP, a standard protocol for interconnecting computer networks. Now, the two patriarchs of the Internet revolution — with no Al Gore in sight — prognosticate on the next 40 years of the internet.

From the NYT:

Will 2014 be the year that the Internet is reined in?

When Edward J. Snowden, the disaffected National Security Agency contract employee, purloined tens of thousands of classified documents from computers around the world, his actions — and their still-reverberating consequences — heightened international pressure to control the network that has increasingly become the world’s stage. At issue is the technical principle that is the basis for the Internet, its “any-to-any” connectivity. That capability has defined the technology ever since Vinton Cerf and Robert Kahn sequestered themselves in the conference room of a Palo Alto, Calif., hotel in 1973, with the task of interconnecting computer networks for an elite group of scientists, engineers and military personnel.

The two men wound up developing a simple and universal set of rules for exchanging digital information — the conventions of the modern Internet. Despite many technological changes, their work prevails.

But while the Internet’s global capability to connect anyone with anything has affected every nook and cranny of modern life — with politics, education, espionage, war, civil liberties, entertainment, sex, science, finance and manufacturing all transformed — its growth increasingly presents paradoxes.

It was, for example, the Internet’s global reach that made classified documents available to Mr. Snowden — and made it so easy for him to distribute them to news organizations.

Yet the Internet also made possible widespread surveillance, a practice that alarmed Mr. Snowden and triggered his plan to steal and publicly release the information.

With the Snowden affair starkly highlighting the issues, the new year is likely to see renewed calls to change the way the Internet is governed. In particular, governments that do not favor the free flow of information, especially if it’s through a system designed by Americans, would like to see the Internet regulated in a way that would “Balkanize” it by preventing access to certain websites.

The debate right now involves two international organizations, usually known by their acronyms, with different views: Icann, the Internet Corporation for Assigned Names and Numbers, and the I.T.U., or International Telecommunication Union.

Icann, a nonprofit that oversees the Internet’s basic functions, like the assignment of names to websites, was established in 1998 by the United States government to create an international forum for “governing” the Internet. The United States continues to favor this group.

The I.T.U., created in 1865 as the International Telegraph Convention, is the United Nations telecommunications regulatory agency. Nations like Brazil, China and Russia have been pressing the United States to switch governance of the Internet to this organization.

Dr. Cerf, 70, and Dr. Kahn, 75, have taken slightly different positions on the matter. Dr. Cerf, who was chairman of Icann from 2000-7, has become known as an informal “Internet ambassador” and a strong proponent of an Internet that remains independent of state control. He has been one of the major supporters of the idea of “network neutrality” — the principle that Internet service providers should enable access to all content and applications, regardless of the source.

Dr. Kahn has made a determined effort to stay out of the network neutrality debate. Nevertheless, he has been more willing to work with the I.T.U., particularly in attempting to build support for a system, known as Digital Object Architecture, for tracking and authenticating all content distributed through the Internet.

Both men agreed to sit down, in separate interviews, to talk about their views on the Internet’s future. The interviews were edited and condensed.

The Internet Ambassador

After serving as a program manager at the Pentagon’s Defense Advanced Research Projects Agency, Vinton Cerf joined MCI Communications Corp., an early commercial Internet company that was purchased by Verizon in 2006, to lead the development of electronic mail systems for the Internet. In 2005, he became a vice president and “Internet evangelist” for Google. Last year he became the president of the Association for Computing Machinery, a leading international educational and scientific computing society.

Q. Edward Snowden’s actions have raised a new storm of controversy about the role of the Internet. Is it a significant new challenge to an open and global Internet?

A. The answer is no, I don’t think so. There are some similar analogues in history. The French historically copied every telex or every telegram that you sent, and they shared it with businesses in order to remain competitive. And when that finally became apparent, it didn’t shut down the telegraph system.

The Snowden revelations will increase interest in end-to-end cryptography for encrypting information both in transit and at rest. For many of us, including me, who believe that is an important capacity to have, this little crisis may be the trigger that induces people to spend time and energy learning how to use it.

You’ve drawn the analogy to a road or highway system. That brings to mind the idea of requiring a driver’s license to use the Internet, which raises questions about responsibility and anonymity.

I still believe that anonymity is an important capacity, that people should have the ability to speak anonymously. It’s argued that people will be encouraged to say untrue things, harmful things, especially if they believe they are anonymous.

There is a tension there, because in some environments the only way you will be able to behave safely is to have some anonymity.

Read the entire article here.

Image: Vinton Cerf and Robert Kahn receiving the Presidential Medal of Freedom from President George W. Bush in 2005. Courtesy of Wikipedia.

What’s Up With Bitcoin?

The digital, internet currency Bitcoin seems to be garnering much attention recently from some surprising corners, and not just from speculators and computer geeks. Why?

From the Guardian:

The past weeks have seen a surprising meeting of minds between chairman of the US Federal Reserve Ben Bernanke, the Bank of England, the Olympic-rowing and Zuckerberg-bothering Winklevoss twins, and the US Department of Homeland Security. The connection? All have decided it’s time to take Bitcoin seriously.

Until now, what pundits called in a rolling-eye fashion “the new peer-to-peer cryptocurrency” had been seen just as a digital form of gold, with all the associated speculation, stake-claiming and even “mining”; perfect for the digital wild west of the internet, but no use for real transactions.

Bitcoins are mined by computers solving fiendishly hard mathematical problems. The “coin” doesn’t exist physically: it is a virtual currency that exists only as a computer file. No one computer controls the currency. A network keeps track of all transactions made using Bitcoins but it doesn’t know what they were used for – just the ID of the computer “wallet” they move from and to.
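Those “fiendishly hard mathematical problems” are proof-of-work puzzles: miners try nonce after nonce until a hash of the block data meets a difficulty condition. Here is a toy Python sketch of the idea (heavily simplified; real Bitcoin uses double SHA-256 checked against a numeric difficulty target, and the names here are illustrative only):

```python
import hashlib

def mine(block_data: bytes, difficulty_prefix: str = "000") -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 digest of
    data + nonce starts with the required run of zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce
        nonce += 1

nonce = mine(b"example block header")
print(nonce, hashlib.sha256(b"example block header" + str(nonce).encode()).hexdigest()[:12])
```

Each extra required zero digit multiplies the expected work by 16, which is how the real network keeps block times steady as more mining hardware joins.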

Right now the currency is tricky to use, both in terms of the technological nous required to actually acquire Bitcoins, and finding somewhere to spend them. To get them, you have to first set up a wallet, probably online at a site such as Blockchain.info, and then pay someone hard currency to get them to transfer the coins into that wallet.

A Bitcoin payment address is a short string of random characters, and if used carefully, it’s possible to make transactions anonymously. That’s what made it the currency of choice for sites such as the Silk Road and Black Market Reloaded, which let users buy drugs anonymously over the internet. It also makes it very hard to tax transactions, despite the best efforts of countries such as Germany, which in August declared that Bitcoin was “private money” in which transactions should be taxed as normal.

It doesn’t have all the advantages of cash, though the fact you can’t forge it is a definite plus: Bitcoin is “peer-to-peer” and every coin “spent” is authenticated with the network. Thus you can’t spend the same coin in two different places. (But nor can you spend it without an internet connection.) You don’t have to spend whole Bitcoins: each one can be split into 100m pieces (each known as a satoshi), and spent separately.
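Because of that 100m-piece divisibility, Bitcoin amounts are naturally handled as integer satoshis rather than floating-point bitcoins. A minimal sketch (the helper name is my own, not a real library API):

```python
SATOSHIS_PER_BTC = 100_000_000  # 1 bitcoin = 100 million satoshis

def btc_to_satoshis(btc: str) -> int:
    """Parse a decimal BTC string into integer satoshis, avoiding
    floating-point rounding (e.g. 0.1 is not exact as a binary float)."""
    whole, _, frac = btc.partition(".")
    frac = (frac + "00000000")[:8]  # pad/truncate to 8 decimal places
    return int(whole or 0) * SATOSHIS_PER_BTC + int(frac)

print(btc_to_satoshis("0.001"))  # 100000 satoshis
```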

Although most people have now vaguely heard of Bitcoin, you’re unlikely to find someone outside the tech community who really understands it in detail, let alone accepts it as payment. Nobody knows who invented it; its pseudonymous creator, Satoshi Nakamoto, hasn’t come forward. He or she may not even be Japanese but certainly knows a lot about cryptography, economics and computing.

It was first presented in November 2008 in an academic paper shared with a cryptography mailing list. It caught the attention of that community but took years to take off as a niche transaction tool. The first Bitcoin boom and bust came in 2011, and signalled that it had caught the attention of enough people for real money to get involved – but also posed the question of whether it could ever be more than a novelty.

The algorithm for mining Bitcoins means the number in circulation will never exceed 21m and this limit will be reached in around 2140. Already 57% of all Bitcoins have been created; by 2017, 75% will have been. If you tried to create a Bitcoin in 2141, every other computer on the network would reject it as fake because it would not have been made according to the rules of currency.
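The 21m cap and those percentages fall straight out of the issuance schedule: the per-block reward, initially 50 BTC, halves every 210,000 blocks. A quick check in Python (the constants are the real protocol parameters; the function name is my own):

```python
BLOCKS_PER_HALVING = 210_000
INITIAL_REWARD = 50 * 100_000_000  # 50 BTC per block, in satoshis

def supply_after_eras(n: int) -> int:
    """Total satoshis issued after n complete halving eras."""
    total, reward = 0, INITIAL_REWARD
    for _ in range(n):
        total += BLOCKS_PER_HALVING * reward
        reward //= 2  # integer halving, as in the real protocol
    return total

cap = supply_after_eras(64)  # the reward hits zero well before era 64
print(cap / 100_000_000)            # just under 21,000,000 BTC
print(supply_after_eras(2) / cap)   # ~0.75: three-quarters mined after two eras
```

The cap is just the geometric series 210,000 × 50 × (1 + 1/2 + 1/4 + …) → 21m BTC, with a tiny shortfall because satoshi rewards are truncated to integers.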

The number of companies taking Bitcoin payments is increasing from a small base, and a few payment processors such as Atlanta-based Bitpay are making real money from the currency. But it’s difficult to get accurate numbers on conventional transactions, and it still seems that the most popular uses of Bitcoins are buying drugs in the shadier parts of the internet, as people did on the Silk Road website, and buying the currency in the hope that in a few weeks’ time you will be able to sell it at a profit.

This is remarkable because there’s no fundamental reason why Bitcoin should have any value at all. The only reason people are willing to pay money for the currency is because other people are willing to as well. (Try not to think about it too hard.) Now, though, sensible economists are saying that Bitcoin might become part of our future economy. That’s quite a shift from October last year, when the European Central Bank said that Bitcoin was “characteristic of a Ponzi [pyramid] scheme”. This month, the Chicago Federal Reserve commented that the currency was “a remarkable conceptual and technical achievement, which may well be used by existing financial institutions (which could issue their own bitcoins) or even by governments themselves”.

It might not sound thrilling. But for a central banker, that’s like yelling “BITCOIIINNNN!” from the rooftops. And Bernanke, in a carefully dull letter to the US Senate committee on Homeland Security, said that when it came to virtual currencies (read: Bitcoin), the US Federal Reserve had “ongoing initiatives” to “identify additional areas of … concern that require heightened attention by the banking organisations we supervise”.

In other words, Bernanke is ready to make Bitcoin part of US currency regulation – the key step towards legitimacy.

Most reporting about Bitcoin until now has been of its extraordinary price ramp – from a low of $1 in 2011 to more than $900 earlier this month. That massive increase has sparked a classic speculative rush, with more and more people hoping to get a piece of the pie by buying and then selling Bitcoins. Others are investing thousands of pounds in custom “mining rigs”, computers specially built to solve the mathematical problems necessary to confirm a Bitcoin transaction.

But bubbles can burst: in 2011 it went from $33 to $1. The day after hitting that $900 high, Bitcoin’s value halved on MtGox, the biggest exchange. Then it rose again.

Speculative bubbles happen everywhere, though, from stock markets to Beanie Babies. All that’s needed is enough people who think that they are the smart money, and that everyone else is sufficiently stupid to buy from them. But the Bitcoin bubbles tell us as much about the usefulness of the currency itself as the tulip mania of 17th century Holland did about flower-arranging.

History does provide some lessons. While the Dutch were selling single tulip bulbs for 10 times a craftsman’s annual income, the British were panicking about their own economic crisis. The silver coinage that had been the basis of the national economy for centuries was rapidly becoming unfit for purpose: it was constrained in supply and too easy to forge. The economy was taking on the features of a modern capitalist state, and the currency simply couldn’t catch up.

Describing the problem Britain faced then, David Birch, a consultant specialising in electronic transactions, says: “We had a problem in matching the nature of the economy to the nature of the money we used.” Birch has been talking about electronic money for over two decades and is convinced that we find ourselves on the edge of the same shift that occurred 400 years ago.

The cause of that shift is the internet, because even though you might want to, you can’t use cash – untraceable, no-fee-charged cash – online. Existing payment systems such as PayPal and credit cards demand a cut. So for individuals looking for a digital equivalent of cash – no middleman, quick, easy – Bitcoin looks pretty good.

In 1613, as people looked for a replacement for silver, Birch says, “we might have been saying ‘the idea of tulip bulbs as an asset class looks pretty good, but this central bank nonsense will never catch on.’ We knew we needed a change, but we couldn’t tell which made sense.” Back then, the currency crisis was solved with the introduction first of Isaac Newton’s Royal Mint (“official” silver and gold) and later with the creation of the Bank of England (“official” paper money that could in theory be swapped for official silver or gold).

And now? Bitcoin offers unprecedented flexibility compared with what has gone before. “Some people in the mid-90s asked: ‘Why do we need the web when we have AOL and CompuServe?'” says Mike Hearn, who works on the programs that underpin Bitcoin. “And so now people ask the same of Bitcoin. The web came to dominate because it was flexible and open, so anyone could take part, innovate and build interesting applications like YouTube, Facebook or Wikipedia, none of which would have ever happened on the AOL platform. I think the same will be true of Bitcoin.”

For a small (but vocal) group in the US, Bitcoin represents the next best alternative to the gold standard, the 19th-century conception that money ought to be backed by precious metals rather than government printing presses and promises. This love of “hard money” is baked into Bitcoin itself, and is the reason why the owners who set computers to do the maths required to make the currency work are known as “miners”, and is why the total supply of Bitcoin is capped.

And for Tyler and Cameron Winklevoss, the twins who sued Mark Zuckerberg (claiming he stole their idea for Facebook; the case was settled out of court), it’s a handy vehicle for speculation. The two of them are setting up the “Winklevoss Bitcoin Trust”, letting conventional investors gamble on the price of the currency.

Some of the hurdles left between Bitcoin and widespread adoption can be fixed. But until and unless Bitcoin develops a fully fledged banking system, some things that we take for granted with conventional money won’t work.

Others are intrinsic to the currency. At some point in the early 22nd century, the last Bitcoin will be generated. Long before that, the creation of new coins will have dropped to near-zero. And through the next 100 or so years, it will follow an economic path laid out by “Nakamoto” in 2009 – a path that rejects the consensus view of modern economics that management by a central bank is beneficial. For some, that means Bitcoin can never achieve ubiquity. “Economies perform better when they have managed monetary policies,” the Bank of England’s chief cashier, Chris Salmon, said at an event to discuss Bitcoin last week. “As a result, it will never be more than an alternative [to state-backed money].” To macroeconomists, Bitcoin isn’t scary because it enables crime, or eases tax dodging. It’s scary because a world where it’s used for all transactions is one where the ability of a central bank to guide the economy is destroyed, by design.

Read the entire article here.

Image courtesy of Google Search.

Good, Old-Fashioned Spying

The spied-upon — and that’s most of us — must wonder how the spymasters of the NSA eavesdrop on their electronic communications. After all, we are led to believe that the agency with a voracious appetite for our personal data — phone records, financial transactions, travel reservations, texts and email conversations — gathered it all without permission. And, apparently, companies such as Google, Yahoo and AT&T, with vast data centers and sprawling interconnections between them, did not collude with the government.

So, there is growing speculation that the agency tapped into the physical cables that make up the very backbone of the Internet. It brings a whole new meaning to the phrase World Wide Web.

From the NYT:

The recent revelation that the National Security Agency was able to eavesdrop on the communications of Google and Yahoo users without breaking into either company’s data centers sounded like something pulled from a Robert Ludlum spy thriller.

How on earth, the companies asked, did the N.S.A. get their data without them knowing about it?

The most likely answer is a modern spin on a century-old eavesdropping tradition.

People knowledgeable about Google and Yahoo’s infrastructure say they believe that government spies bypassed the big Internet companies and hit them at a weak spot — the fiber-optic cables that connect data centers around the world that are owned by companies like Verizon Communications, the BT Group, the Vodafone Group and Level 3 Communications. In particular, fingers have been pointed at Level 3, the world’s largest so-called Internet backbone provider, whose cables are used by Google and Yahoo.

The Internet companies’ data centers are locked down with full-time security and state-of-the-art surveillance, including heat sensors and iris scanners. But between the data centers — on Level 3’s fiber-optic cables that connected those massive computer farms — information was unencrypted and an easier target for government intercept efforts, according to three people with knowledge of Google’s and Yahoo’s systems who spoke on the condition of anonymity.

It is impossible to say for certain how the N.S.A. managed to get Google and Yahoo’s data without the companies’ knowledge. But both companies, in response to concerns over those vulnerabilities, recently said they were now encrypting data that runs on the cables between their data centers. Microsoft is considering a similar move.

“Everyone was so focused on the N.S.A. secretly getting access to the front door that there was an assumption they weren’t going behind the companies’ backs and tapping data through the back door, too,” said Kevin Werbach, an associate professor at the Wharton School.

Data transmission lines have a long history of being tapped.

As far back as the days of the telegraph, spy agencies have located their operations in proximity to communications companies. Indeed, before the advent of the Internet, the N.S.A. and its predecessors for decades operated listening posts next to the long-distance lines of phone companies to monitor all international voice traffic.

Beginning in the 1960s, a spy operation code-named Echelon targeted the Soviet Union and its allies’ voice, fax and data traffic via satellite, microwave and fiber-optic cables.

In the 1990s, the emergence of the Internet both complicated the task of the intelligence agencies and presented powerful new spying opportunities based on the ability to process vast amounts of computer data.

In 2002, John M. Poindexter, former national security adviser under President Ronald Reagan, proposed the Total Information Awareness plan, an effort to scan the world’s electronic information — including phone calls, emails and financial and travel records. That effort was scrapped in 2003 after a public outcry over potential privacy violations.

The technologies Mr. Poindexter proposed are similar to what became reality years later in N.S.A. surveillance programs like Prism and Bullrun.

The Internet effectively mingled domestic and international communications, erasing the bright line that had been erected to protect against domestic surveillance. Although the Internet is designed to be a highly decentralized system, in practice a small group of backbone providers carry almost all of the network’s data.

The consequences of the centralization, and its value for surveillance, were revealed in 2006 by Mark Klein, an AT&T technician who described an N.S.A. listening post inside a room at an AT&T switching facility.

The agency was capturing a copy of all the data passing over the telecommunications links and then filtering it in AT&T facilities that housed systems that were able to filter data packets at high speed.

Documents taken by Edward J. Snowden and reported by The Washington Post indicate that, seven years after Mr. Klein first described the N.S.A.’s surveillance technologies, they have been refined and modernized.

Read the entire article here.

Image: fiber-optic cables. Courtesy of Daily Mail.

Gnarly Names

By most accounts the internet is home to around 650 million websites, of which around 200 million are active. About 8,000 new websites go live every hour of every day.

These are big numbers and the continued phenomenal growth means that it’s increasingly difficult to find a unique and unused domain name (think website). So, web entrepreneurs are getting creative with website and company names, with varying degrees of success.

From Wall Street Journal:

The New York cousins who started a digital sing-along storybook business have settled on the name Mibblio.

The Australian founder of a startup connecting big companies to big-data scientists has dubbed his service Kaggle.

The former toy executive behind a two-year-old mobile screen-sharing platform is going with the name Shodogg.

And the Missourian who founded a website giving customers access to local merchants and service providers? He thinks it should be called Zaarly.

Quirky names for startups first surfaced about 20 years ago in Silicon Valley, with the birth of search engines such as Yahoo, which stands for “Yet Another Hierarchical Officious Oracle,” and Google, a misspelling of googol, the almost unfathomably high number represented by a 1 followed by 100 zeroes.

By the early 2000s, the trend had spread to startups outside the Valley, including the Vancouver-based photo-sharing site Flickr and New York-based blogging platform Tumblr, to name just two.

The current crop of startups boasts even wackier spellings. The reason, they say, is that practically every new business—be it a popsicle maker or a furniture retailer—needs its own website. With about 252 million domain names currently registered across the Internet, the short, recognizable dot-com Web addresses, or URLs, have long been taken.

The only practical solution, some entrepreneurs say, is to invent words, like Mibblio, Kaggle, Shodogg and Zaarly, to avoid paying as much as $2 million for a concise, no-nonsense dot-com URL.

The rights to Investing.com, for example, sold for about $2.5 million last year.

Choosing a name that’s a made-up word also helps entrepreneurs steer clear of trademark entanglements.

The challenge is to come up with something that conveys meaning, is memorable, and isn’t just alphabet soup. Most founders don’t have the budget to hire naming advisers.

Founders tend to favor short names of five to seven letters, because they worry that potential customers might forget longer ones, according to Steve Manning, founder of Igor, a name-consulting company.

Linguistically speaking, there are only a few methods of forming new words. They include misspelling, compounding, blending and scrambling.
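For illustration only, those four methods boil down to tiny string operations. A playful sketch, with helper names and outputs invented by me rather than drawn from any real naming tool:

```python
import random

def compound(a: str, b: str) -> str:   # glue two words together
    return a + b

def blend(a: str, b: str) -> str:      # front of one word, back of the other
    return a[:len(a) // 2] + b[len(b) // 2:]

def misspell(word: str) -> str:        # drop trailing vowels, cf. Tumblr
    return word.rstrip("aeiou")

def scramble(word: str) -> str:        # shuffle the letters
    letters = list(word)
    random.shuffle(letters)
    return "".join(letters)

print(blend("mibble", "biblio"))  # "miblio"
print(misspell("tumble"))         # "tumbl"
```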

At Mibblio, the naming process was “the length of a human gestation period,” says the company’s 28-year-old co-founder David Leiberman, “but only more painful,” adds fellow co-founder Sammy Rubin, 35.

The two men made several trips back to the drawing board; early contenders included Babethoven, Yipsqueak and Canarytales, but none was a perfect fit. One they both loved, Squeakbox, was taken.

Read the entire article here.