Category Archives: Computer Science

None of Us Is As Smart As All of Us

Bob Taylor died on April 13, 2017, aged 85. An ordinary-sounding name for someone who had a hand in shaping almost every computing-related technology of the last 50 years.

Bob Taylor was a firm believer in the power of teamwork; one of his favorite proverbs was, “None of us is as smart as all of us”. And the teams he was part of, directed, or funded are the stuff of Silicon Valley legend. To name but a few:

In 1961, as a project manager at NASA, he supported computer scientist Douglas Engelbart, whose work led to the invention of the computer mouse.

In 1966, at ARPA (Advanced Research Projects Agency), Taylor convinced his boss to spend half a million dollars on an experimental computer network. This became known as ARPAnet — the precursor to the Internet that we all live on today.

In 1972, now at Xerox PARC (Palo Alto Research Center), he and his teams of computer scientists ushered in the era of the personal computer. Some of the notable inventions at PARC during Taylor’s tenure include: the first true personal computer (the Xerox Alto); windowed displays and graphical user interfaces, which led to the Apple Macintosh; Ethernet, to connect local networks of computers; a communications protocol that later became TCP/IP, which carries most of today’s Internet traffic; hardware and software that led to the laser printer; and word and graphics processing tools that led engineers at Adobe Systems to develop Photoshop and PageMaker, among them Bravo, which later became Microsoft Word.

Read more about Bob Taylor’s unique and lasting legacy over at Wired.

Image: Bob Taylor, 2008. Credit: Gardner Campbell / Wikipedia. CC BY-SA 2.0.

Nightmare Machine


Now that the abject terror of the US presidential election is over — at least for a while — we have to turn our minds to new forms of pain and horror.

In recent years a growing number of illustrious scientists and technologists have described artificial intelligence (AI) as the greatest existential threat to humanity. They worry, rightfully, that a well-drilled, unfettered AI could come to out-think and out-smart us at every level. Eventually, a super-intelligent AI would determine that humans were either peripheral or superfluous to its needs and goals, and then either enslave or extinguish us. This is the stuff of real nightmares.

Yet, at a more playful level, AI can also learn to deliver imagined nightmares. This Halloween researchers at MIT used AI techniques to create and optimize horrifying images of human faces and places. They called their AI the Nightmare Machine.

For the first step, researchers fed hundreds of thousands of celebrity photos into their AI algorithm, known as a deep convolutional generative adversarial network. This allowed the AI to learn about faces and how to create new ones. Second, they flavored the results with a second learning algorithm that had been trained on images of zombies. The combination allowed the AI to learn the critical factors that make for scary images and to selectively improve upon them. It turns out that blood on the face, empty eye sockets, and missing or misshapen teeth tend to elicit the greatest horror and fear.
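
For readers curious about the machinery, here is a minimal, purely illustrative sketch of the adversarial training loop at the heart of any generative adversarial network. It is not MIT's code: the tiny fully connected networks, the image size, and the random stand-in data below are my own assumptions, whereas the Nightmare Machine uses deep convolutional models trained on real photographs.

```python
# Minimal GAN training loop (illustrative sketch only, not the MIT model).
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28  # hypothetical latent size and flattened image size

generator = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss = nn.BCEWithLogitsLoss()

def train_step(real_faces):
    """One adversarial step; real_faces is a (batch, IMG) tensor of training images."""
    batch = real_faces.size(0)
    real_labels, fake_labels = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1. Train the discriminator to tell real faces from generated ones.
    fake_faces = generator(torch.randn(batch, LATENT)).detach()
    d_loss = loss(discriminator(real_faces), real_labels) + \
             loss(discriminator(fake_faces), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator.
    fake_faces = generator(torch.randn(batch, LATENT))
    g_loss = loss(discriminator(fake_faces), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a batch of random "images" standing in for celebrity photos.
print(train_step(torch.rand(16, IMG) * 2 - 1))
```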

While the results are not quite as scary as Stephen Hawking’s warning of AI-led human extinction, the images are terrifying nonetheless.

Learn more about the MIT Media Lab’s Nightmare Machine here.

Image: Horror imagery generated by artificial intelligence. Courtesy: MIT Media Lab.

The Death of Permissionless Innovation


The internet and its user-friendly interface, the World Wide Web (Web), were founded on the principle of openness. The acronym soup of standards, such as TCP/IP, HTTP and HTML, paved the way for unprecedented connectivity and interoperability. Anyone armed with a computer and a connection, adhering to these standards, could now connect, browse, and share data with anyone else.

This is a simplified view of Sir Tim Berners-Lee’s vision for the Web in 1989 — the same year that brought us Seinfeld and The Simpsons. Berners-Lee invented the Web. His invention fostered an entire global technological and communications revolution over the next quarter century.

However, Berners-Lee did something much more important. Rather than keeping the Web to himself and his colleagues, and turning to Silicon Valley to found and fund the next billion dollar startup, he pursued a path to give the ideas and technologies away. Critically, the open standards of the internet and Web enabled countless others to innovate and to profit.

One of the innovators to reap the greatest rewards from this openness is Facebook’s Mark Zuckerberg. Yet, in the ultimate irony, Facebook has turned the Berners-Lee model of openness and permissionless innovation on its head. Its billion-plus users are members of a private, corporate-controlled walled garden. Innovation, to a large extent, is now limited by the whims of Facebook. Increasingly, open innovation on the internet is stifled and extinguished by constraints manufactured and controlled for Facebook’s own ends. This makes Zuckerberg’s vision of making the world “more open and connected” thoroughly laughable.

From the Guardian:

If there were a Nobel prize for hypocrisy, then its first recipient ought to be Mark Zuckerberg, the Facebook boss. On 23 August, all his 1.7 billion users were greeted by this message: “Celebrating 25 years of connecting people. The web opened up to the world 25 years ago today! We thank Sir Tim Berners-Lee and other internet pioneers for making the world more open and connected.”

Aw, isn’t that nice? From one “pioneer” to another. What a pity, then, that it is a combination of bullshit and hypocrisy. In relation to the former, the guy who invented the web, Tim Berners-Lee, is as mystified by this “anniversary” as everyone else. “Who on earth made up 23 August?” he asked on Twitter. Good question. In fact, as the Guardian pointed out: “If Facebook had asked Berners-Lee, he’d probably have told them what he’s been telling people for years: the web’s 25th birthday already happened, two years ago.”

“In 1989, I delivered a proposal to Cern for the system that went on to become the worldwide web,” he wrote in 2014. It was that year, not this one, that he said we should celebrate as the web’s 25th birthday.

It’s not the inaccuracy that grates, however, but the hypocrisy. Zuckerberg thanks Berners-Lee for “making the world more open and connected”. So do I. What Zuck conveniently omits to mention, though, is that he is embarked upon a commercial project whose sole aim is to make the world more “connected” but less open. Facebook is what we used to call a “walled garden” and now call a silo: a controlled space in which people are allowed to do things that will amuse them while enabling Facebook to monetise their data trails. One network to rule them all. If you wanted a vision of the opposite of the open web, then Facebook is it.

The thing that makes the web distinctive is also what made the internet special, namely that it was designed as an open platform. It was designed to facilitate “permissionless innovation”. If you had a good idea that could be realised using data packets, and possessed the programming skills to write the necessary software, then the internet – and the web – would do it for you, no questions asked. And you didn’t need much in the way of financial resources – or to ask anyone for permission – in order to realise your dream.

An open platform is one on which anyone can build whatever they like. It’s what enabled a young Harvard sophomore, name of Zuckerberg, to take an idea lifted from two nice-but-dim oarsmen, translate it into computer code and launch it on an unsuspecting world. And in the process create an empire of 1.7 billion subjects with apparently limitless revenues. That’s what permissionless innovation is like.

The open web enabled Zuckerberg to do this. But – guess what? – the Facebook founder has no intention of allowing anyone to build anything on his platform that does not have his express approval. Having profited mightily from the openness of the web, in other words, he has kicked away the ladder that elevated him to his current eminence. And the whole thrust of his company’s strategy is to persuade billions of future users that Facebook is the only bit of the internet they really need.

Read the entire article here.

Image: The NeXT Computer used by Tim Berners-Lee at CERN. Courtesy: Science Museum, London. GFDL CC-BY-SA.

Beware. Your Teaching Assistant May Be a Robot


All college-level students have at some point wondered whether one or more of their professors’ teaching assistants was from planet Earth. If you fall into this category — as I once did — your skepticism and paranoia are completely justified. You see, some assistants aren’t even human.

So, here’s my first tip to any students wondering how to tell if their assistant is an alien entity: be skeptical if her or his last name is Watson.

From WSJ:

One day in January, Eric Wilson dashed off a message to the teaching assistants for an online course at the Georgia Institute of Technology.

“I really feel like I missed the mark in giving the correct amount of feedback,” he wrote, pleading to revise an assignment.

Thirteen minutes later, the TA responded. “Unfortunately, there is not a way to edit submitted feedback,” wrote Jill Watson, one of nine assistants for the 300-plus students.

Last week, Mr. Wilson found out he had been seeking guidance from a computer.

Since January, “Jill,” as she was known to the artificial-intelligence class, had been helping graduate students design programs that allow computers to solve certain problems, like choosing an image to complete a logical sequence.

“She was the person—well, the teaching assistant—who would remind us of due dates and post questions in the middle of the week to spark conversations,” said student Jennifer Gavin.

Ms. Watson—so named because she’s powered by International Business Machines Inc.’s Watson analytics system—wrote things like “Yep!” and “we’d love to,” speaking on behalf of her fellow TAs, in the online forum where students discussed coursework and submitted projects.

“It seemed very much like a normal conversation with a human being,” Ms. Gavin said.

Shreyas Vidyarthi, another student, ascribed human attributes to the TA—imagining her as a friendly Caucasian 20-something on her way to a Ph.D.

Students were told of their guinea-pig status last month. “I was flabbergasted,” said Mr. Vidyarthi.

Read the whole story here.

Image: Toy robots on display at the Museo del Objeto del Objeto in Mexico City, 2011. Courtesy: Alejandro Linares Garcia. Creative Commons Attribution-Share Alike 3.0.

Meet the Chatbot Speech Artist

While speech recognition technology has been in the public sphere for several decades, Silicon Valley has rediscovered it with renewed fervor. Companies from the tech giants, such as Facebook and Amazon, down to dozens of start-ups and their VC handlers have declared the next few years the era of the chatbot; natural language-based messaging is the next big thing.

Thanks to Apple, the most widespread incarnation of the chatbot is of course Siri — a personalized digital assistant capable of interacting with a user through a natural language conversation (well, almost). But while the parsing and understanding of human conversation, and the construction of chatbot responses, are all done via software, the vocalizations themselves are human. As a result, a new career field is opening up for enterprising speech artists.

From Washington Post:

Until recently, Robyn Ewing was a writer in Hollywood, developing TV scripts and pitching pilots to film studios.

Now she’s applying her creative talents toward building the personality of a different type of character — a virtual assistant, animated by artificial intelligence, that interacts with sick patients.

Ewing works with engineers on the software program, called Sophie, which can be downloaded to a smartphone. The virtual nurse gently reminds users to check their medication, asks them how they are feeling or if they are in pain, and then sends the data to a real doctor.

As tech behemoths and a wave of start-ups double down on virtual assistants that can chat with human beings, writing for AI is becoming a hot job in Silicon Valley. Behind Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana are not just software engineers. Increasingly, there are poets, comedians, fiction writers, and other artistic types charged with engineering the personalities for a fast-growing crop of artificial intelligence tools.

“Maybe this will help pay back all the student loans,” joked Ewing, who has master’s degrees from the Iowa Writers’ Workshop and film school.

Unlike the fictional characters that Ewing developed in Hollywood, who are put through adventures, personal trials and plot twists, most virtual assistants today are designed to perform largely prosaic tasks, such as reading through email, sending meeting reminders or turning off the lights as you shout across the room.

But a new crop of virtual assistant start-ups, whose products will soon flood the market, have in mind more ambitious bots that can interact seamlessly with human beings.

Because this wave of technology is distinguished by the ability to chat, writers for AI must focus on making the conversation feel natural. Designers for Amazon’s Alexa have built humanizing “hmms” and “ums” into her responses to questions. Apple’s Siri assistant is known for her wry jokes, as well as her ability to beatbox upon request.

As in fiction, the AI writers for virtual assistants dream up a life story for their bots. Writers for medical and productivity apps make character decisions such as whether bots should be workaholics, eager beavers or self-effacing. “You have to develop an entire backstory — even if you never use it,” Ewing said.

Even mundane tasks demand creative effort, as writers try to build personality quirks into the most rote activities. At the start-up x.ai, a Harvard theater graduate is tasked with deciding whether its scheduling bots, Amy and Andrew, should use emojis or address people by first names. “We don’t want people saying, ‘Your assistant is too casual — or too much,’” said Anna Kelsey, whose title is AI interaction designer. “We don’t want her to be one of those crazy people who uses 15 million exclamation points.”

Virtual assistant start-ups garnered at least $35 million in investment over the past year, according to CBInsights and Washington Post research (This figure doesn’t count the many millions spent by tech giants Google, Amazon, Apple, Facebook, and Microsoft).

The surge of investor interest in virtual assistants that can converse has been fueled in part by the popularity of messaging apps, such as WeChat, WhatsApp, and Facebook’s Messenger, which are among the most widely downloaded smartphone applications. Investors see that users are increasingly drawn to conversational platforms, and hope to build additional features into them.

Read the entire story here.

The Rembrandt Algorithm


Over the last few decades robots have been steadily replacing humans in industrial and manufacturing sectors. Increasingly, robots are appearing in a broader array of service sectors; they’re stocking shelves, cleaning hotels, buffing windows, tending bar, dispensing cash.

Nowadays you’re likely to be the recipient of news articles filtered, and in some cases written, by pieces of code and business algorithms. Indeed, many boilerplate financial reports are now “written” by “analysts” who reside not in flesh and bone but virtually, inside server farms. Just recently a collection of circuitry and software trounced a human being at the strategic board game Go.

So, can computers progress from repetitive, mechanical and programmatic roles to more creative, free-wheeling vocations? Can computers become artists?

A group of data scientists, computer engineers, software developers and art historians set out to answer the question.

Jonathan Jones over at the Guardian has a few choice words on the result:

I’ve been away for a few days and missed the April Fool stories in Friday’s papers – until I spotted the one about a team of Dutch “data analysts, developers, engineers and art historians” creating a new painting using digital technology: a virtual Rembrandt painted by a Rembrandt app. Hilarious! But wait, this was too late to be an April Fool’s joke. This is a real thing that is actually happening.

What a horrible, tasteless, insensitive and soulless travesty of all that is creative in human nature. What a vile product of our strange time when the best brains dedicate themselves to the stupidest “challenges”, when technology is used for things it should never be used for and everybody feels obliged to applaud the heartless results because we so revere everything digital.

Hey, they’ve replaced the most poetic and searching portrait painter in history with a machine. When are we going to get Shakespeare’s plays and Bach’s St Matthew Passion rebooted by computers? I cannot wait for Love’s Labours Have Been Successfully Functionalised by William Shakesbot.

You cannot, I repeat, cannot, replicate the genius of Rembrandt van Rijn. His art is not a set of algorithms or stylistic tics that can be recreated by a human or mechanical imitator. He can only be faked – and a fake is a dead, dull thing with none of the life of the original. What these silly people have done is to invent a new way to mock art. Bravo to them! But the Dutch art historians and museums who appear to have lent their authority to such a venture are fools.

Rembrandt lived from 1606 to 1669. His art only has meaning as a historical record of his encounters with the people, beliefs and anguishes of his time. Its universality is the consequence of the depth and profundity with which it does so. Looking into the eyes of Rembrandt’s Self-Portrait at the Age of 63, I am looking at time itself: the time he has lived, and the time since he lived. A man who stared, hard, at himself in his 17th-century mirror now looks back at me, at you, his gaze so deep his mottled flesh is just the surface of what we see.

We glimpse his very soul. It’s not style and surface effects that make his paintings so great but the artist’s capacity to reveal his inner life and make us aware in turn of our own interiority – to experience an uncanny contact, soul to soul. Let’s call it the Rembrandt Shudder, that feeling I long for – and get – in front of every true Rembrandt masterpiece.

Is that a mystical claim? The implication of the digital Rembrandt is that we get too sentimental and moist-eyed about art, that great art is just a set of mannerisms that can be digitised. I disagree. If it’s mystical to see Rembrandt as a special and unique human being who created unrepeatable, inexhaustible masterpieces of perception and intuition then count me a mystic.

Read the entire story here.

Image: The Next Rembrandt (based on 168,263 Rembrandt painting fragments). Courtesy: Microsoft, Delft University of Technology, Mauritshuis (The Hague), Rembrandt House Museum (Amsterdam).

Beware the Beauty of Move 37


Make a note of the date: March 15, 2016. On this day AlphaGo, the Go-playing artificial intelligence (AI) system from Google’s DeepMind unit, wrapped up its five-game series. It beat Lee Sedol, a human and one of the world’s best Go players, by 4 games to 1.

This marks the first time a machine has beaten one of the world’s top-ranked players at Go, an ancient and notoriously complex board game. AlphaGo’s victory stunned the Go-playing world, but its achievement is merely the opening shot in the coming AI revolution.

The AlphaGo system is based on deep neural networks and machine learning, which means it is driven by software that learns. In fact, AlphaGo became an expert Go player by analyzing millions of previous Go games and also by playing itself tens of millions of times, learning and improving in the process.
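
To make the self-play idea concrete, here is a toy sketch of my own, far removed from AlphaGo's deep neural networks: a simple tabular policy learns the take-away game of Nim (remove 1 to 3 stones; whoever takes the last stone wins) purely by playing against itself and nudging up the weight of moves that led to wins.

```python
# Toy self-play learning on Nim (illustrative only; AlphaGo's real training
# uses deep neural networks and far more sophisticated reinforcement learning).
import random
from collections import defaultdict

policy = defaultdict(lambda: {1: 1.0, 2: 1.0, 3: 1.0})  # stones left -> move weights

def choose(stones):
    """Sample a legal move in proportion to its current weight."""
    moves = {m: w for m, w in policy[stones].items() if m <= stones}
    r = random.uniform(0, sum(moves.values()))
    for move, weight in moves.items():
        r -= weight
        if r <= 0:
            return move
    return max(moves)

def self_play_game(start=15):
    """Play one game against itself; return the winner (0 or 1) and both move histories."""
    stones, player, history, winner = start, 0, {0: [], 1: []}, None
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player
        player = 1 - player
    return winner, history

def train(games=20000, lr=0.1):
    for _ in range(games):
        winner, history = self_play_game()
        for player, moves in history.items():
            sign = 1 if player == winner else -1  # reinforce winning lines, discourage losing ones
            for stones, move in moves:
                policy[stones][move] = max(0.01, policy[stones][move] + sign * lr)

train()
# With enough games the preferred moves tend toward the textbook strategy of
# leaving the opponent a multiple of four stones (e.g. take 1 from 5, 2 from 6, 3 from 7).
print({s: max(policy[s], key=policy[s].get) for s in (5, 6, 7)})
```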

While the AI technology that underlies AlphaGo has been around for decades, it is now reaching a point where AI-based systems can out-think and outperform their human masters. In fact, many considered it impossible for a computer to play Go at this level due to the sheer number of possible positions on the board (roughly 10^170 legal positions on the 19-by-19 grid, far more than there are atoms in the observable universe), the mastery of strategy and tactical obfuscation required, and the need for a human-like sense of intuition.

Indeed, in game 2 of the series AlphaGo made a strange, seemingly inexplicable decision on move 37. This turned the game in AlphaGo’s favor, and Lee Sedol never recovered. Commentators and AlphaGo’s human adversary described move 37 as extraordinarily unexpected and “beautiful”.

And from that story of beauty comes a tale of caution from David Gelernter, professor of computer science at Yale. Gelernter rightly wonders what an AI with an IQ of 5,000 would mean. After all, it is only a matter of time — rapidly approaching — before we have constructed machines with the average human IQ of 100, then 500.

Image: Game 2, first 99 moves, screenshot. AlphaGo (black) versus Lee Sedol (white), March 10, 2016. Courtesy of Wikipedia.

Software That Learns to Eat Itself

Google became a monstrously successful technology company by inventing a solution to index and search content scattered across the Web, and then monetizing the search results through contextual ads. Since its inception the company has relied on increasingly sophisticated algorithms for indexing mountains of information and then serving up increasingly relevant results. These algorithms are based on a secret sauce that ranks the relevance of a webpage by evaluating its content, structure and relationships with other pages. They are defined and continuously improved by technologists and encoded into software by teams of engineers.
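
The one publicly documented ingredient of that secret sauce is PageRank, which scores a page from the links pointing at it. Below is a minimal sketch of the idea using power iteration over a made-up four-page web; the real ranking system layers many other signals on top of this.

```python
# Minimal PageRank via power iteration (illustrative sketch; the toy link
# graph below is invented for the example).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # a dangling page spreads its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

toy_web = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
    "orphan": ["home"],
}
print(pagerank(toy_web))  # "home" ends up with the highest score: most pages link to it
```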

But as is the case in many areas of human endeavor, the underlying search engine technology and its teams of human designers and caregivers are being replaced by newer, better technology. In this case the better technology is based on artificial intelligence (AI), and it doesn’t rely on humans. It is based on machine learning, specifically deep learning and neural networks — a combination of hardware and software that increasingly mimics the human brain in its ability to aggregate and filter information, decipher patterns and infer meaning.

[I’m sure it will not be long before yours truly is replaced by a bot.]

From Wired:

Yesterday, the 46-year-old Google veteran who oversees its search engine, Amit Singhal, announced his retirement. And in short order, Google revealed that Singhal’s rather enormous shoes would be filled by a man named John Giannandrea. On one level, these are just two guys doing something new with their lives. But you can also view the pair as the ideal metaphor for a momentous shift in the way things work inside Google—and across the tech world as a whole.

Giannandrea, you see, oversees Google’s work in artificial intelligence. This includes deep neural networks, networks of hardware and software that approximate the web of neurons in the human brain. By analyzing vast amounts of digital data, these neural nets can learn all sorts of useful tasks, like identifying photos, recognizing commands spoken into a smartphone, and, as it turns out, responding to Internet search queries. In some cases, they can learn a task so well that they outperform humans. They can do it better. They can do it faster. And they can do it at a much larger scale.

This approach, called deep learning, is rapidly reinventing so many of the Internet’s most popular services, from Facebook to Twitter to Skype. Over the past year, it has also reinvented Google Search, where the company generates most of its revenue. Early in 2015, as Bloomberg recently reported, Google began rolling out a deep learning system called RankBrain that helps generate responses to search queries. As of October, RankBrain played a role in “a very large fraction” of the millions of queries that go through the search engine with each passing second.

Read the entire story here.

Google AI Versus the Human Race


It does indeed appear that a computer armed with Google’s experimental AI (artificial intelligence) software just beat a grandmaster of the strategy board game Go. The game was devised in ancient China — it’s been around for several millennia. Go is commonly held to be substantially more difficult than chess to master, to which I can personally attest.

So, does this mean that the human race is next in line for a defeat at the hands of an uber-intelligent AI? Well, not really, not yet anyway.

But, I’m with prominent scientists and entrepreneurs — including Stephen Hawking, Bill Gates and Elon Musk — who warn of the long-term existential peril to humanity from unfettered AI. In the meantime check out how AlphaGo from Google’s DeepMind unit set about thrashing a human.

From Wired:

An artificially intelligent Google machine just beat a human grandmaster at the game of Go, the 2,500-year-old contest of strategy and intellect that’s exponentially more complex than the game of chess. And Nick Bostrom isn’t exactly impressed.

Bostrom is the Swedish-born Oxford philosophy professor who rose to prominence on the back of his recent bestseller Superintelligence: Paths, Dangers, Strategies, a book that explores the benefits of AI, but also argues that a truly intelligent computer could hasten the extinction of humanity. It’s not that he discounts the power of Google’s Go-playing machine. He just argues that it isn’t necessarily a huge leap forward. The technologies behind Google’s system, Bostrom points out, have been steadily improving for years, including much-discussed AI techniques such as deep learning and reinforcement learning. Google beating a Go grandmaster is just part of a much bigger arc. It started long ago, and it will continue for years to come.

“There has been, and there is, a lot of progress in state-of-the-art artificial intelligence,” Bostrom says. “[Google’s] underlying technology is very much continuous with what has been under development for the last several years.”

But if you look at this another way, it’s exactly why Google’s triumph is so exciting—and perhaps a little frightening. Even Bostrom says it’s a good excuse to stop and take a look at how far this technology has come and where it’s going. Researchers once thought AI would struggle to crack Go for at least another decade. Now, it’s headed to places that once seemed unreachable. Or, at least, there are many people—with much power and money at their disposal—who are intent on reaching those places.

Building a Brain

Google’s AI system, known as AlphaGo, was developed at DeepMind, the AI research house that Google acquired for $400 million in early 2014. DeepMind specializes in both deep learning and reinforcement learning, technologies that allow machines to learn largely on their own.

Using what are called neural networks—networks of hardware and software that approximate the web of neurons in the human brain—deep learning is what drives the remarkably effective image search tool built into Google Photos—not to mention the face recognition service on Facebook and the language translation tool built into Microsoft’s Skype and the system that identifies porn on Twitter. If you feed millions of game moves into a deep neural net, you can teach it to play a video game.

Reinforcement learning takes things a step further. Once you’ve built a neural net that’s pretty good at playing a game, you can match it against itself. As two versions of this neural net play thousands of games against each other, the system tracks which moves yield the highest reward—that is, the highest score—and in this way, it learns to play the game at an even higher level.

AlphaGo uses all this. And then some. Hassabis [Demis Hassabis, DeepMind founder] and his team added a second level of “deep reinforcement learning” that looks ahead to the longterm results of each move. And they lean on traditional AI techniques that have driven Go-playing AI in the past, including the Monte Carlo tree search method, which basically plays out a huge number of scenarios to their eventual conclusions. Drawing from techniques both new and old, they built a system capable of beating a top professional player. In October, AlphaGo played a closed-door match against the reigning three-time European Go champion, which was only revealed to the public on Wednesday morning. The match spanned five games, and AlphaGo won all five.
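
The Monte Carlo idea mentioned above, playing positions out to their conclusions, can be sketched in a few lines. The toy below uses the simplest possible flavor, flat Monte Carlo move evaluation on tic-tac-toe with purely random playouts; AlphaGo's actual tree search is vastly more sophisticated and guides its playouts with neural networks.

```python
# Flat Monte Carlo move selection for tic-tac-toe (illustrative sketch only).
import random

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Finish the game with uniformly random moves; return the winner or None for a draw."""
    board = board[:]
    while True:
        if winner(board) or "." not in board:
            return winner(board)
        move = random.choice([i for i, cell in enumerate(board) if cell == "."])
        board[move] = player
        player = "O" if player == "X" else "X"

def monte_carlo_move(board, player, playouts=200):
    """Pick the legal move whose random playouts win most often for `player`."""
    best_move, best_score = None, -1.0
    for move in (i for i, cell in enumerate(board) if cell == "."):
        trial = board[:]
        trial[move] = player
        opponent = "O" if player == "X" else "X"
        score = sum(random_playout(trial, opponent) == player for _ in range(playouts))
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# X to move on an empty board: the centre square (index 4) usually scores best.
print(monte_carlo_move(list("........."), "X"))
```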

Read the entire story here.

Image: Korean couple, in traditional dress, play Go; photograph dated between 1910 and 1920. Courtesy: Frank and Frances Carpenter Collection. Public Domain.

DeepDrumpf the 4th-Grader

DeepDrumpf is a Twitter bot out of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). It uses artificial intelligence (AI) to learn from the jaw-dropping rants of the current Republican frontrunner for the Presidential nomination and then tweets its own remarkably Trump-like musings.

A handful of DeepDrumpf’s recent deep-thoughts here:

[Image: a sampling of recent DeepDrumpf tweets]

The bot’s designer, CSAIL postdoc Bradley Hayes, says DeepDrumpf uses “techniques from ‘deep-learning,’ a field of artificial intelligence that uses systems called neural networks to teach computers to find patterns on their own.”

I would suggest that the deep-learning algorithms, in the case of Trump’s speech patterns, did not have to be too deep. After all, linguists who have studied his words agree that it’s mostly at a 4th-grade level — coherent language is not required.
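
To illustrate just how shallow "learning speech patterns" can be, here is a word-level Markov chain. This is not how DeepDrumpf works (Hayes uses neural networks), but it captures the basic idea of generating new text from the patterns in old text. The training sentences below are invented placeholders, not actual quotes.

```python
# Minimal word-level Markov chain text generator (illustrative only; the
# corpus is made up and DeepDrumpf itself is a neural network, not this).
import random
from collections import defaultdict

corpus = (
    "we are going to win so much . "
    "we are going to build tremendous things . "
    "believe me we are going to win ."
)

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start="we", max_words=20):
    word, output = start, [start]
    for _ in range(max_words):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
        if word == ".":
            break
    return " ".join(output)

chain = build_chain(corpus)
print(generate(chain))  # e.g. "we are going to build tremendous things ."
```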

Patterns aside, I think I prefer the bot over the real thing — it’s likely to do far less damage to our country and the globe.