Tag Archives: AI

Computational Folkloristics

What do you get when you set AI (artificial intelligence) the task of reading through 30,000 Danish folk and fairy tales? Well, you get a host of fascinating, newly discovered insights into Scandinavian witches and trolls.

More importantly, you hammer another nail into the coffin of literary criticism and set AI on a collision course with yet another preserve of once-exclusive human endeavor. It’s probably safe to assume that creative writing, too, will fall to intelligent machines in the not-too-distant future — certainly human-powered investigative journalism seemed to become extinct in 2016, replaced by algorithmic aggregation, social bots and fake-mongers.

From aeon:

Where do witches come from, and what do those places have in common? While browsing a large collection of traditional Danish folktales, the folklorist Timothy Tangherlini and his colleague Peter Broadwell, both at the University of California, Los Angeles, decided to find out. Armed with a geographical index and some 30,000 stories, they developed WitchHunter, an interactive ‘geo-semantic’ map of Denmark that highlights the hotspots for witchcraft.

The system used artificial intelligence (AI) techniques to unearth a trove of surprising insights. For example, they found that evil sorcery often took place close to Catholic monasteries. This made a certain amount of sense, since Catholic sites in Denmark were tarred with diabolical associations after the Protestant Reformation in the 16th century. By plotting the distance and direction of witchcraft relative to the storyteller’s location, WitchHunter also showed that enchantresses tend to be found within the local community, much closer to home than other kinds of threats. ‘Witches and robbers are human threats to the economic stability of the community,’ the researchers write. ‘Yet, while witches threaten from within, robbers are generally situated at a remove from the well-described village, often living in woods, forests, or the heath … it seems that no matter how far one goes, nor where one turns, one is in danger of encountering a witch.’

Such ‘computational folkloristics’ raise a big question: what can algorithms tell us about the stories we love to read? Any proposed answer seems to point to as many uncertainties as it resolves, especially as AI technologies grow in power. Can literature really be sliced up into computable bits of ‘information’, or is there something about the experience of reading that is irreducible? Could AI enhance literary interpretation, or will it alter the field of literary criticism beyond recognition? And could algorithms ever derive meaning from books in the way humans do, or even produce literature themselves?
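The ‘geo-semantic’ plotting the researchers describe — how far, and in which direction, a story’s witch sits from its teller — rests on nothing more exotic than great-circle geometry. Here is a toy sketch of that single step (my own illustration, not Broadwell and Tangherlini’s code; the coordinates are approximate and the story placement is invented):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and initial bearing (degrees clockwise
    from north) from point 1 to point 2, via the haversine formula."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dlam) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

# Invented example: a storyteller in Copenhagen, a witch tale placed near Aarhus.
km, deg = distance_and_bearing(55.676, 12.568, 56.157, 10.210)
```

Aggregated over 30,000 indexed stories, distance-and-bearing pairs of exactly this kind are what let WitchHunter show witches clustering close to home while robbers lurk out on the heath.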

Author and computational linguist Inderjeet Mani concludes his essay thus:

Computational analysis and ‘traditional’ literary interpretation need not be a winner-takes-all scenario. Digital technology has already started to blur the line between creators and critics. In a similar way, literary critics should start combining their deep expertise with ingenuity in their use of AI tools, as Broadwell and Tangherlini did with WitchHunter. Without algorithmic assistance, researchers would be hard-pressed to make such supernaturally intriguing findings, especially as the quantity and diversity of writing proliferates online.

In the future, scholars who lean on digital helpmates are likely to dominate the rest, enriching our literary culture and changing the kinds of questions that can be explored. Those who resist the temptation to unleash the capabilities of machines will have to content themselves with the pleasures afforded by smaller-scale, and fewer, discoveries. While critics and book reviewers may continue to be an essential part of public cultural life, literary theorists who do not embrace AI will be at risk of becoming an exotic species – like the librarians who once used index cards to search for information.

Read the entire tale here.

Image: Portrait of the Danish writer Hans Christian Andersen. Courtesy: Thora Hallager, 10/16 October 1869. Wikipedia. Public Domain.

Nightmare Machine


Now that the abject terror of the US presidential election is over — at least for a while — we have to turn our minds to new forms of pain and horror.

In recent years a growing number of illustrious scientists and technologists has described artificial intelligence (AI) as the greatest existential threat to humanity. They worry, rightfully, that a well-drilled, unfettered AI could eventually out-think and out-smart us at every level. Eventually, a super-intelligent AI would determine that humans were either peripheral or superfluous to its needs and goals, and then either enslave or extinguish us. This is the stuff of real nightmares.

Yet, at a more playful level, AI can also learn to deliver imagined nightmares. This Halloween researchers at MIT used AI techniques to create and optimize horrifying images of human faces and places. They called their AI the Nightmare Machine.

For the first step, researchers fed hundreds of thousands of celebrity photos into their AI algorithm, known as a deep convolutional generative adversarial network. This allowed the AI to learn about faces and how to create new ones. Second, they flavored the results with a second learning algorithm that had been trained on images of zombies. The combination allowed the AI to learn the critical factors that make for scary images and to selectively improve upon them. It turns out that blood on the face, empty eyeball sockets, and missing or misshapen teeth tend to elicit the greatest horror and fear.
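A generative adversarial network pits two learners against each other: a generator that fabricates examples and a discriminator that tries to tell fakes from the real thing. The MIT team’s network worked on images; the tug-of-war itself can be sketched on plain numbers (a one-dimensional toy of my own, nothing like their actual model, where the “real data” is just scalars clustered around 4):

```python
import math
import random

random.seed(0)
sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))

real = lambda: random.gauss(4.0, 0.5)  # "real" data: scalars clustered near 4

w, b = 1.0, 0.0   # generator: fake = w*z + b, with noise z ~ N(0, 1)
u, v = 0.0, 0.0   # discriminator: D(x) = sigmoid(u*x + v)
lr, trace = 0.03, []

for step in range(4000):
    z = random.gauss(0.0, 1.0)
    xr, xf = real(), w * z + b

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    dr, df = sigmoid(u * xr + v), sigmoid(u * xf + v)
    u += lr * ((1 - dr) * xr - df * xf)
    v += lr * ((1 - dr) - df)

    # Generator ascent: push D(fake) toward 1 (non-saturating GAN loss).
    df = sigmoid(u * xf + v)
    w += lr * (1 - df) * u * z
    b += lr * (1 - df) * u

    if step >= 3000:  # record where the fakes are centered late in training
        trace.append(b)

b_avg = sum(trace) / len(trace)
```

With both players trained by alternating gradient steps, the generator’s fakes drift from 0 toward the real cluster at 4 — the same dynamic that, at image scale and with convolutional networks, conjures faces out of noise.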

While the results are not quite as scary as Stephen Hawking’s warning of AI-led human extinction, the images are terrifying nonetheless.

Learn more about the MIT Media Lab’s Nightmare Machine here.

Image: Horror imagery generated by artificial intelligence. Courtesy: MIT Media Lab.

Benjamin Saves Us From Hollywood


Not a moment too soon. Benjamin has arrived in California to save us from ill-conceived and poorly written screenplays vying to be the next Hollywood blockbuster.

Thankfully, Benjamin is neither a 20-something creative wunderkind nor a 30-something know-it-all uber-producer; he (or she) is not even human. Benjamin is an automatic screenwriter based on AI (artificial intelligence), and the author of Sunspring, a short science fiction film.

From ars technica:

Ars is excited to be hosting this online debut of Sunspring, a short science fiction film that’s not entirely what it seems. It’s about three people living in a weird future, possibly on a space station, probably in a love triangle. You know it’s the future because H (played with neurotic gravity by Silicon Valley‘s Thomas Middleditch) is wearing a shiny gold jacket, H2 (Elisabeth Gray) is playing with computers, and C (Humphrey Ker) announces that he has to “go to the skull” before sticking his face into a bunch of green lights. It sounds like your typical sci-fi B-movie, complete with an incoherent plot. Except Sunspring isn’t the product of Hollywood hacks—it was written entirely by an AI. To be specific, it was authored by a recurrent neural network called long short-term memory, or LSTM for short. At least, that’s what we’d call it. The AI named itself Benjamin.

Knowing that an AI wrote Sunspring makes the movie more fun to watch, especially once you know how the cast and crew put it together. Director Oscar Sharp made the movie for Sci-Fi London, an annual film festival that includes the 48-Hour Film Challenge, where contestants are given a set of prompts (mostly props and lines) that have to appear in a movie they make over the next two days. Sharp’s longtime collaborator, Ross Goodwin, is an AI researcher at New York University, and he supplied the movie’s AI writer, initially called Jetson. As the cast gathered around a tiny printer, Benjamin spat out the screenplay, complete with almost impossible stage directions like “He is standing in the stars and sitting on the floor.” Then Sharp randomly assigned roles to the actors in the room. “As soon as we had a read-through, everyone around the table was laughing their heads off with delight,” Sharp told Ars. The actors interpreted the lines as they read, adding tone and body language, and the results are what you see in the movie. Somehow, a slightly garbled series of sentences became a tale of romance and murder, set in a dark future world. It even has its own musical interlude (performed by Andrew and Tiger), with a pop song Benjamin composed after learning from a corpus of 30,000 other pop songs.
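Benjamin is a long short-term memory network, which learns, token by token, a probability distribution over what comes next and then samples from it. A Markov chain is a far cruder model of the same “learn the statistics of what follows what” idea; this little sketch (mine, with a made-up corpus echoing Benjamin’s stage direction) shows the principle:

```python
import random

random.seed(42)

# A tiny made-up corpus, echoing Benjamin's own stage direction.
corpus = (
    "he is standing in the stars and sitting on the floor . "
    "he is in the stars . she is sitting on the floor ."
).split()

# Learn next-word statistics: each word maps to its observed successors.
chain = {}
for a, b in zip(corpus, corpus[1:]):
    chain.setdefault(a, []).append(b)

def generate(start, n_words):
    """Sample a line by repeatedly picking a random observed successor."""
    words = [start]
    for _ in range(n_words):
        successors = chain.get(words[-1])
        if not successors:
            break
        words.append(random.choice(successors))
    return " ".join(words)

line = generate("he", 8)
```

An LSTM replaces the lookup table with a neural network that carries context across many previous words, which is why Benjamin’s output holds together (almost) from scene to scene instead of wandering one word at a time.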

Read more here.

Image: Benjamin screenshot. Courtesy of Benjamin.

First, Order a Pizza. Second, World Domination


Tech startups that plan to envelop the globe with their never-thought-of-before-but-cannot-do-without technologies and services have to begin somewhere. Usually, the path to worldwide domination begins with pizza.

From the Washington Post:

In an ordinary conference room in this city of start-ups, a group of engineers sat down to order pizza in an entirely new way.

“Get me a pizza from Pizz’a Chicago near my office,” one of the engineers said into his smartphone. It was their first real test of Viv, the artificial-intelligence technology that the team had been quietly building for more than a year. Everyone was a little nervous. Then, a text from Viv piped up: “Would you like toppings with that?”

The engineers, eight in all, started jumping in: “Pepperoni.” “Half cheese.” “Caesar salad.” Emboldened by the result, they peppered Viv with more commands: Add more toppings. Remove toppings. Change medium size to large.

About 40 minutes later — and after a few hiccups when Viv confused the office address — a Pizz’a Chicago driver showed up with four made-to-order pizzas.

The engineers erupted in cheers as the pizzas arrived. They had ordered pizza, from start to finish, without placing a single phone call and without doing a Google search — without any typing at all, actually. Moreover, they did it without downloading an app from Domino’s or Grubhub.

Of course, a pizza is just a pizza. But for Silicon Valley, a seemingly small change in consumer behavior or design can mean a tectonic shift in the commercial order, with ripple effects across an entire economy. Engineers here have long been animated by the quest to achieve the path of least friction — to use the parlance of the tech world — to the proverbial pizza.

The stealthy, four-year-old Viv is among the furthest along in an endeavor that many in Silicon Valley believe heralds that next big shift in computing — and digital commerce itself. Over the next five years, that transition will turn smartphones — and perhaps smart homes and cars and other devices — into virtual assistants with supercharged conversational capabilities, said Julie Ask, an expert in mobile commerce at Forrester.

Powered by artificial intelligence and unprecedented volumes of data, they could become the portal through which billions of people connect to every service and business on the Internet. It’s a world in which you can order a taxi, make a restaurant reservation and buy movie tickets in one long unbroken conversation — no more typing, searching or even clicking.

Viv, which will be publicly demonstrated for the first time at a major industry conference on Monday, is one of the most highly anticipated technologies expected to come out of a start-up this year. But Viv is by no means alone in this effort. The quest to define the next generation of artificial-intelligence technology has sparked an arms race among the five major tech giants: Apple, Google, Microsoft, Facebook and Amazon.com have all announced major investments in virtual-assistant software over the past year.

Read the entire story here.

Image courtesy of Google Search.

Beware. Your Teaching Assistant May Be a Robot


All college-level students have at some point wondered whether one or more of their professorial teaching assistants was from planet Earth. If you fall into this category — as I once did — your skepticism and paranoia are completely justified. You see, some assistants aren’t even human.

So, here’s my first tip to any students wondering how to tell if their assistant is an alien entity: be skeptical if her or his last name is Watson.

From WSJ:

One day in January, Eric Wilson dashed off a message to the teaching assistants for an online course at the Georgia Institute of Technology.

“I really feel like I missed the mark in giving the correct amount of feedback,” he wrote, pleading to revise an assignment.

Thirteen minutes later, the TA responded. “Unfortunately, there is not a way to edit submitted feedback,” wrote Jill Watson, one of nine assistants for the 300-plus students.

Last week, Mr. Wilson found out he had been seeking guidance from a computer.

Since January, “Jill,” as she was known to the artificial-intelligence class, had been helping graduate students design programs that allow computers to solve certain problems, like choosing an image to complete a logical sequence.

“She was the person—well, the teaching assistant—who would remind us of due dates and post questions in the middle of the week to spark conversations,” said student Jennifer Gavin.

Ms. Watson—so named because she’s powered by International Business Machines Inc.’s Watson analytics system—wrote things like “Yep!” and “we’d love to,” speaking on behalf of her fellow TAs, in the online forum where students discussed coursework and submitted projects.

“It seemed very much like a normal conversation with a human being,” Ms. Gavin said.

Shreyas Vidyarthi, another student, ascribed human attributes to the TA—imagining her as a friendly Caucasian 20-something on her way to a Ph.D.

Students were told of their guinea-pig status last month. “I was flabbergasted,” said Mr. Vidyarthi.

Read the whole story here.

Image: Toy robots on display at the Museo del Objeto del Objeto in Mexico City, 2011. Courtesy: Alejandro Linares Garcia. Creative Commons Attribution-Share Alike 3.0.

Meet the Chatbot Speech Artist

While speech recognition technology has been in the public sphere for several decades, Silicon Valley has re-discovered it with a renewed fervor. Companies from the tech giants, such as Facebook and Amazon, down to dozens of start-ups and their VC handlers have declared the next few years those of the chatbot; natural language-based messaging is the next big thing.

Thanks to Apple, the most widespread incarnation of the chatbot is of course Siri — a personalized digital assistant capable of interacting with a user through natural language conversation (well, almost). But while the parsing and understanding of human conversation, and the construction of chatbot responses, are all done via software, the vocalizations themselves are human. As a result, a new career field is opening up for enterprising speech artists.

From Washington Post:

Until recently, Robyn Ewing was a writer in Hollywood, developing TV scripts and pitching pilots to film studios.

Now she’s applying her creative talents toward building the personality of a different type of character — a virtual assistant, animated by artificial intelligence, that interacts with sick patients.

Ewing works with engineers on the software program, called Sophie, which can be downloaded to a smartphone. The virtual nurse gently reminds users to check their medication, asks them how they are feeling or if they are in pain, and then sends the data to a real doctor.

As tech behemoths and a wave of start-ups double down on virtual assistants that can chat with human beings, writing for AI is becoming a hot job in Silicon Valley. Behind Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana are not just software engineers. Increasingly, there are poets, comedians, fiction writers, and other artistic types charged with engineering the personalities for a fast-growing crop of artificial intelligence tools.

“Maybe this will help pay back all the student loans,” joked Ewing, who has master’s degrees from the Iowa Writers’ Workshop and film school.

Unlike the fictional characters that Ewing developed in Hollywood, who are put through adventures, personal trials and plot twists, most virtual assistants today are designed to perform largely prosaic tasks, such as reading through email, sending meeting reminders or turning off the lights as you shout across the room.

But a new crop of virtual assistant start-ups, whose products will soon flood the market, have in mind more ambitious bots that can interact seamlessly with human beings.

Because this wave of technology is distinguished by the ability to chat, writers for AI must focus on making the conversation feel natural. Designers for Amazon’s Alexa have built humanizing “hmms” and “ums” into her responses to questions. Apple’s Siri assistant is known for her wry jokes, as well as her ability to beatbox upon request.

As in fiction, the AI writers for virtual assistants dream up a life story for their bots. Writers for medical and productivity apps make character decisions such as whether bots should be workaholics, eager beavers or self-effacing. “You have to develop an entire backstory — even if you never use it,” Ewing said.

Even mundane tasks demand creative effort, as writers try to build personality quirks into the most rote activities. At the start-up x.ai, a Harvard theater graduate is tasked with deciding whether its scheduling bots, Amy and Andrew, should use emojis or address people by first names. “We don’t want people saying, ‘Your assistant is too casual — or too much,’” said Anna Kelsey, whose title is AI interaction designer. “We don’t want her to be one of those crazy people who uses 15 million exclamation points.”

Virtual assistant start-ups garnered at least $35 million in investment over the past year, according to CBInsights and Washington Post research. (This figure doesn’t count the many millions spent by tech giants Google, Amazon, Apple, Facebook, and Microsoft.)

The surge of investor interest in virtual assistants that can converse has been fueled in part by the popularity of messaging apps, such as WeChat, WhatsApp, and Facebook’s Messenger, which are among the most widely downloaded smartphone applications. Investors see that users are increasingly drawn to conversational platforms, and hope to build additional features into them.

Read the entire story here.

Beware the Beauty of Move 37

AlphaGo-Lee-Sedol-Game 2

Make a note of the date: March 15, 2016. On this day AlphaGo, the Go-playing artificial intelligence (AI) system from Google’s DeepMind unit, wrapped up its five-game series. It beat Lee Sedol, a human and one of the world’s best Go players, by 4 games to 1.

This marks the first time a machine has beaten a human at Go, an ancient and notoriously complex board game. AlphaGo’s victory stunned the Go-playing world, but its achievement is merely the opening shot in the coming AI revolution.

The AlphaGo system is based on deep neural networks and machine learning, which means it is driven by software that learns. In fact, AlphaGo became an expert Go player by analyzing millions of previous Go games and also by playing itself tens of millions of times, and learning and improving in the process.

While the AI technology that underlies AlphaGo has been around for decades, it is now reaching a point where AI-based systems can out-think and outperform their human masters. In fact, many considered it impossible for a computer to play Go at this level due to the immeasurable number of possible positions on the board, mastery of strategy, tactical obfuscation, and the need for a human-like sense of intuition.

Indeed, in game 2 of the series AlphaGo made a strange, seemingly inexplicable decision on move 37. This turned the game in AlphaGo’s favor, and Lee Sedol never recovered. Commentators and AlphaGo’s human adversary described move 37 as extraordinarily unexpected and “beautiful”.

And from that story of beauty comes a tale of caution from David Gelernter, professor of computer science at Yale. Gelernter rightly wonders what an AI with an IQ of 5,000 would mean. After all, it is only a matter of time — and that time is rapidly approaching — before we have constructed machines with the average human IQ of 100, then 500.

Image: Game 2, first 99 moves, screenshot. AlphaGo (black) versus Lee Sedol (white), March 10, 2016. Courtesy of Wikipedia.

Software That Learns to Eat Itself

Google became a monstrously successful technology company by inventing a solution to index and search content scattered across the Web, and then monetizing the search results through contextual ads. Since its inception the company has relied on increasingly sophisticated algorithms for indexing mountains of information and then serving up increasingly relevant results. These algorithms are based on a secret sauce that ranks the relevance of a webpage by evaluating its content, structure and relationships with other pages. They are defined and continuously improved by technologists and encoded into software by teams of engineers.
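Google’s actual ranking recipe is the secret sauce, but the “relationships with other pages” ingredient famously began as PageRank, which treats a link as a vote and can be computed by simple iteration. A toy sketch on a four-page web (illustrative only, a vast simplification of any production ranking system):

```python
# Toy four-page web: page -> pages it links to (every page has outlinks).
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
damping = 0.85  # probability of following a link rather than jumping anywhere
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # power iteration toward the fixed point
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            # Each page passes its rank, split evenly, along its outlinks.
            new[q] += damping * rank[p] / len(outs)
    rank = new
```

Page c collects the most rank because every other page links to it — exactly the kind of hand-designed relevance signal that the deep-learning systems described below learn on their own.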

But as is the case in many areas of human endeavor, the underlying search engine technology and its teams of human designers and caregivers are being replaced by newer, better technology. In this case the better technology is based on artificial intelligence (AI), and it doesn’t rely on humans. It is based on machine or deep learning and neural networks — a combination of hardware and software that increasingly mimics the human brain in its ability to aggregate and filter information, decipher patterns and infer meaning.

[I’m sure it will not be long before yours truly is replaced by a bot.]

From Wired:

Yesterday, the 46-year-old Google veteran who oversees its search engine, Amit Singhal, announced his retirement. And in short order, Google revealed that Singhal’s rather enormous shoes would be filled by a man named John Giannandrea. On one level, these are just two guys doing something new with their lives. But you can also view the pair as the ideal metaphor for a momentous shift in the way things work inside Google—and across the tech world as a whole.

Giannandrea, you see, oversees Google’s work in artificial intelligence. This includes deep neural networks, networks of hardware and software that approximate the web of neurons in the human brain. By analyzing vast amounts of digital data, these neural nets can learn all sorts of useful tasks, like identifying photos, recognizing commands spoken into a smartphone, and, as it turns out, responding to Internet search queries. In some cases, they can learn a task so well that they outperform humans. They can do it better. They can do it faster. And they can do it at a much larger scale.

This approach, called deep learning, is rapidly reinventing so many of the Internet’s most popular services, from Facebook to Twitter to Skype. Over the past year, it has also reinvented Google Search, where the company generates most of its revenue. Early in 2015, as Bloomberg recently reported, Google began rolling out a deep learning system called RankBrain that helps generate responses to search queries. As of October, RankBrain played a role in “a very large fraction” of the millions of queries that go through the search engine with each passing second.

Read the entire story here.

Google AI Versus the Human Race


It does indeed appear that a computer armed with Google’s experimental AI (artificial intelligence) software just beat a grandmaster of the strategy board game Go. The game was devised in ancient China — it’s been around for several millennia. Go is commonly held to be substantially more difficult than chess to master, to which I can personally attest.

So, does this mean that the human race is next in line for a defeat at the hands of an uber-intelligent AI? Well, not really, not yet anyway.

But, I’m with prominent scientists and entrepreneurs — including Stephen Hawking, Bill Gates and Elon Musk — who warn of the long-term existential peril to humanity from unfettered AI. In the meantime check out how AlphaGo from Google’s DeepMind unit set about thrashing a human.

From Wired:

An artificially intelligent Google machine just beat a human grandmaster at the game of Go, the 2,500-year-old contest of strategy and intellect that’s exponentially more complex than the game of chess. And Nick Bostrom isn’t exactly impressed.

Bostrom is the Swedish-born Oxford philosophy professor who rose to prominence on the back of his recent bestseller Superintelligence: Paths, Dangers, Strategies, a book that explores the benefits of AI, but also argues that a truly intelligent computer could hasten the extinction of humanity. It’s not that he discounts the power of Google’s Go-playing machine. He just argues that it isn’t necessarily a huge leap forward. The technologies behind Google’s system, Bostrom points out, have been steadily improving for years, including much-discussed AI techniques such as deep learning and reinforcement learning. Google beating a Go grandmaster is just part of a much bigger arc. It started long ago, and it will continue for years to come.

“There has been, and there is, a lot of progress in state-of-the-art artificial intelligence,” Bostrom says. “[Google’s] underlying technology is very much continuous with what has been under development for the last several years.”

But if you look at this another way, it’s exactly why Google’s triumph is so exciting—and perhaps a little frightening. Even Bostrom says it’s a good excuse to stop and take a look at how far this technology has come and where it’s going. Researchers once thought AI would struggle to crack Go for at least another decade. Now, it’s headed to places that once seemed unreachable. Or, at least, there are many people—with much power and money at their disposal—who are intent on reaching those places.

Building a Brain

Google’s AI system, known as AlphaGo, was developed at DeepMind, the AI research house that Google acquired for $400 million in early 2014. DeepMind specializes in both deep learning and reinforcement learning, technologies that allow machines to learn largely on their own.

Using what are called neural networks—networks of hardware and software that approximate the web of neurons in the human brain—deep learning is what drives the remarkably effective image search tool built into Google Photos—not to mention the face recognition service on Facebook and the language translation tool built into Microsoft’s Skype and the system that identifies porn on Twitter. If you feed millions of game moves into a deep neural net, you can teach it to play a video game.

Reinforcement learning takes things a step further. Once you’ve built a neural net that’s pretty good at playing a game, you can match it against itself. As two versions of this neural net play thousands of games against each other, the system tracks which moves yield the highest reward—that is, the highest score—and in this way, it learns to play the game at an even higher level.
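That self-play loop can be shown in miniature, with tabular Q-learning standing in for DeepMind’s deep networks and the toy game of Nim (take 1 or 2 stones from a pile; whoever takes the last stone wins) standing in for Go. A sketch of my own:

```python
import random

random.seed(1)
Q = {}  # (stones_left, move) -> learned value for the player about to move

def moves(n):
    return [m for m in (1, 2) if m <= n]

alpha, eps = 0.5, 0.2  # learning rate and exploration probability

for _ in range(20000):  # self-play: one table plays both sides
    n = random.randint(1, 10)
    history, player = [], 0
    while n > 0:
        if random.random() < eps:
            m = random.choice(moves(n))          # explore
        else:
            m = max(moves(n), key=lambda mv: Q.get((n, mv), 0.0))  # exploit
        history.append((player, n, m))
        n -= m
        player = 1 - player
    winner = history[-1][0]  # whoever took the last stone
    for player, stones, m in history:
        # Nudge each played move toward the reward it ultimately yielded.
        reward = 1.0 if player == winner else -1.0
        old = Q.get((stones, m), 0.0)
        Q[(stones, m)] = old + alpha * (reward - old)

def best(n):
    return max(moves(n), key=lambda mv: Q.get((n, mv), 0.0))
```

After enough games against itself the table encodes optimal play — always leave your opponent a multiple of three stones. Nothing here is AlphaGo, but the “track which moves yield the highest reward” mechanism is the same.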

AlphaGo uses all this. And then some. Hassabis [Demis Hassabis, DeepMind founder] and his team added a second level of “deep reinforcement learning” that looks ahead to the long-term results of each move. And they lean on traditional AI techniques that have driven Go-playing AI in the past, including the Monte Carlo tree search method, which basically plays out a huge number of scenarios to their eventual conclusions. Drawing from techniques both new and old, they built a system capable of beating a top professional player. In October, AlphaGo played a closed-door match against the reigning three-time European Go champion, which was only revealed to the public on Wednesday morning. The match spanned five games, and AlphaGo won all five.
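The Monte Carlo idea mentioned above — playing scenarios out to their conclusions — can be demonstrated without the “tree” part. In flat Monte Carlo move selection you try each legal move, finish the game many times at random, and keep the move with the best win rate. A toy version of my own on Nim (take 1 or 2 stones from a pile; whoever takes the last stone wins), a drastic simplification of what AlphaGo does:

```python
import random

random.seed(7)

def playout(n, mover_is_us):
    """Finish a game of Nim (take 1 or 2, last stone wins) with
    uniformly random moves; return True if 'we' take the last stone."""
    while True:
        m = random.choice([x for x in (1, 2) if x <= n])
        n -= m
        if n == 0:
            return mover_is_us
        mover_is_us = not mover_is_us

def monte_carlo_move(n, sims=3000):
    """Score each legal move by random playouts; keep the best win rate."""
    scores = {}
    for m in (1, 2):
        if m > n:
            continue
        if m == n:
            scores[m] = 1.0  # taking the last stone wins outright
        else:
            # After our move, the opponent moves first in the playout.
            wins = sum(playout(n - m, mover_is_us=False) for _ in range(sims))
            scores[m] = wins / sims
    return max(scores, key=scores.get)
```

From a pile of 4 the random playouts favor taking 1 (leaving 3, a lost position for the opponent); full MCTS sharpens this by growing a tree that biases the playouts toward promising lines.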

Read the entire story here.

Image: Korean couple, in traditional dress, play Go; photograph dated between 1910 and 1920. Courtesy: Frank and Frances Carpenter Collection. Public Domain.

DeepDrumpf the 4th-Grader

DeepDrumpf is a Twitter bot out of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). It uses artificial intelligence (AI) to learn from the jaw-dropping rants of the current Republican frontrunner for the Presidential nomination and then tweets its own remarkably Trump-like musings.

A handful of DeepDrumpf’s recent deep-thoughts here:



The bot’s designer, CSAIL postdoc Bradley Hayes, says DeepDrumpf uses “techniques from ‘deep-learning,’ a field of artificial intelligence that uses systems called neural networks to teach computers to find patterns on their own.”

I would suggest that the deep-learning algorithms, in the case of Trump’s speech patterns, did not have to be too deep. After all, linguists who have studied his words agree that it’s mostly at a 4th-grade level — coherent language is not required.

Patterns aside, I think I prefer the bot over the real thing — it’s likely to do far less damage to our country and the globe.


AIs and ICBMs

You know something very creepy is going on when robots armed with artificial intelligence (AI) engage in conversations about nuclear war and inter-continental ballistic missiles (ICBM). This scene could be straight out of a William Gibson novel.


Video: The BINA48 robot, created by Martine Rothblatt and Hanson Robotics, has a conversation with Siri. Courtesy of ars technica.

The Impending AI Apocalypse


AI as in Artificial Intelligence, not American Idol — though some believe the latter to be somewhat of a cultural apocalypse.

AI is reaching a technological tipping point; advances in computation, especially neural networks, are making machines more intelligent every day. These advances are likely to spawn machines — sooner rather than later — that will mimic and then surpass human cognition. This has an increasing number of philosophers, scientists and corporations raising alarms. The fear: what if super-intelligent AI machines one day decide that humans are far too inferior and superfluous?

From Wired:

On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.

That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

Google Gets on Board

Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011, however. That’s when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert. The two men started talking about AI and Tallinn soon invested in DeepMind, and last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world. Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings.

That worries Tallinn, somewhat. In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played it with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.

Read the entire story here.

Image: Robby the Robot (Forbidden Planet), Comic Con, San Diego, 2006. Courtesy of Pattymooney.

Will the AIs Let Us Coexist?

At some point in the not-too-distant future, artificial intelligences will far exceed humans in most capacities (except shopping and beer drinking). The scripts of most Hollywood movies suggest that we humans would be (mostly) wiped out by AI machines, beings, robots or other non-human forms — we being the lesser organisms, superfluous to AI needs.

Perhaps we may find an alternate path to a more benign coexistence, much like the one posited in the Culture novels of the dearly departed Iain M. Banks. I’ll go with Mr Banks’ version. Though, just perhaps, evolution is meant to leave us behind, replacing our simplistic, selfish intelligence with a much more advanced, non-human version.

From the Guardian:

From 2001: A Space Odyssey to Blade Runner and RoboCop to The Matrix, how humans deal with the artificial intelligence they have created has proved a fertile dystopian territory for film-makers. More recently Spike Jonze’s Her and Alex Garland’s forthcoming Ex Machina explore what it might be like to have AI creations living among us and, as Alan Turing’s famous test foregrounded, how tricky it might be to tell the flesh and blood from the chips and code.

These concerns are even troubling some of Silicon Valley’s biggest names: last month Tesla’s Elon Musk described AI as mankind’s “biggest existential threat… we need to be very careful”. What many of us don’t realise is that AI isn’t some far-off technology that exists only in film-makers’ imaginations and computer scientists’ labs. Many of our smartphones employ rudimentary AI techniques to translate languages or answer our queries, while video games employ AI to generate complex, ever-changing gaming scenarios. And so long as Silicon Valley companies such as Google and Facebook continue to acquire AI firms and hire AI experts, AI’s IQ will continue to rise…

Isn’t AI a Steven Spielberg movie?
No arguments there, but the term, which stands for “artificial intelligence”, has a more storied history than Spielberg and Kubrick’s 2001 film. The concept of artificial intelligence goes back to the birth of computing: in 1950, just 14 years after defining the concept of a general-purpose computer, Alan Turing asked “Can machines think?”

It’s something that is still at the front of our minds 64 years later, most recently becoming the core of Alex Garland’s new film, Ex Machina, which sees a young man asked to assess the humanity of a beautiful android. The concept is not a million miles removed from that set out in Turing’s 1950 paper, Computing Machinery and Intelligence, in which he laid out a proposal for the “imitation game” – what we now know as the Turing test. Hook a computer up to a text terminal and let it have conversations with a human interrogator, while a real person does the same. The heart of the test is whether, when you ask the interrogator to guess which is the human, “the interrogator [will] decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman”.
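Stripped to its essentials, the imitation game is a simple protocol, and its shape can be sketched in a few lines of code. Everything below is illustrative: the judge, human and machine are stand-in callables, and the scoring rule is a deliberately crude invention, not anything Turing proposed.

```python
import random

def judge_score(conversation):
    """Crude stand-in for a judge's verdict: varied replies read as
    more 'human' than canned ones. A real judge weighs far more."""
    return len({reply for _, reply in conversation})

def imitation_game(judge, human, machine, rounds=5):
    """Toy sketch of Turing's imitation game. `human` and `machine`
    map a question to a reply; `judge` maps a reply to the next
    question. Returns True if the judge mistakes machine for human."""
    parties = {"A": human, "B": machine}
    labels = list(parties)
    random.shuffle(labels)  # the judge does not know which is which

    transcript = {label: [] for label in labels}
    for label in labels:
        question = "Hello, who am I speaking to?"
        for _ in range(rounds):
            reply = parties[label](question)
            transcript[label].append((question, reply))
            question = judge(reply)

    # The judge names the party whose transcript scored as 'more human'.
    guess = max(labels, key=lambda label: judge_score(transcript[label]))
    return parties[guess] is machine
```

The real test hinges on the judge's open-ended questioning; the sketch captures only the structure of the game: two hidden parties, one verdict.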

Turing said that asking whether machines could pass the imitation game is more useful than the vague and philosophically unclear question of whether or not they “think”. “The original question… I believe to be too meaningless to deserve discussion.” Nonetheless, he thought that by the year 2000, “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”.

In terms of natural language, he wasn’t far off. Today, it is not uncommon to hear people talking about their computers being “confused”, or taking a long time to do something because they’re “thinking about it”. But even if we are stricter about what counts as a thinking machine, it’s closer to reality than many people think.

So AI exists already?
It depends. We are still nowhere near to passing Turing’s imitation game, despite reports to the contrary. In June, a chatbot called Eugene Goostman successfully fooled a third of judges in a mock Turing test held in London into thinking it was human. But rather than being able to think, Eugene relied on a clever gimmick and a host of tricks. By pretending to be a 13-year-old boy who spoke English as a second language, the machine explained away its many incoherencies, and with a smattering of crude humour and offensive remarks, managed to redirect the conversation when unable to give a straight answer.
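The strategy is easy to caricature. The sketch below is a purely hypothetical deflection bot in the Goostman mould (the actual program's code is unpublished): it answers the handful of topics it knows and hides behind its persona for everything else.

```python
def deflection_bot(message, known_topics=None):
    """Toy sketch of the Goostman-style strategy: answer the few
    things you know, deflect everything else with a persona excuse
    and a counter-question. Purely illustrative, not the real program."""
    known_topics = known_topics or {
        "name": "People call me Eugene. Or 'Zhenya'.",
        "age": "I am only 13, so I'm attending school so far.",
        "from": "I live in Odessa, Ukraine.",
    }
    text = message.lower()
    for keyword, reply in known_topics.items():
        if keyword in text:
            return reply
    # No match: the persona (13-year-old, non-native speaker) licenses
    # a non-answer, and the counter-question steers the conversation away.
    return "Huh? My English is not so good. And where are you from, by the way?"
```

As the transcripts below show, a surprising share of conversation can be survived this way without the program understanding anything at all.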

The most immediate use of AI tech is natural language processing: working out what we mean when we say or write a command in colloquial language. For something that babies begin to do before they can even walk, it’s an astonishingly hard task. Consider the phrase beloved of AI researchers – “time flies like an arrow, fruit flies like a banana”. Breaking the sentence down into its constituent parts confuses even native English speakers, let alone an algorithm.
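The scale of the problem is easy to show. Even with a tiny, invented lexicon (the tag assignments below are illustrative, not a real tagger's), the readings a parser must entertain for "time flies like an arrow" multiply quickly:

```python
from itertools import product

# Candidate part-of-speech tags for each word: every combination is a
# potential reading a parser must consider and rule in or out.
lexicon = {
    "time":  ["noun", "verb", "adjective"],  # time of day / time a race / time trial
    "flies": ["verb", "noun"],               # it flies / the flies
    "like":  ["preposition", "verb"],        # like an arrow / they like
    "an":    ["determiner"],
    "arrow": ["noun"],
}

sentence = ["time", "flies", "like", "an", "arrow"]
readings = list(product(*(lexicon[w] for w in sentence)))

# 3 * 2 * 2 * 1 * 1 = 12 tag sequences before any grammar is applied;
# the parser must then decide which of them hang together as a sentence.
print(len(readings))  # prints 12
```

And this is one five-word sentence with a toy lexicon; real vocabularies and sentence lengths make the combinatorics far worse.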

Read the entire article here.

Goostman Versus Turing


Some computer scientists believe that “Eugene Goostman” may have overcome the famous hurdle proposed by Alan Turing, by cracking the eponymous Turing test. Eugene is a 13-year-old Ukrainian “boy” constructed from computer algorithms designed to feign intelligence and mirror human thought processes. During a text-based exchange Eugene managed to convince his human interrogators that he was a real boy — and thus his creators claim to have broken the previously impenetrable Turing barrier.

Other researchers and philosophers disagree: they claim that it’s easier to construct an artificial intelligence that converses in good but limited English — Eugene is Ukrainian, after all — than it would be to simulate a native anglophone adult. So the Turing test barrier may yet stand.

From the Guardian:

From 2001: A Space Odyssey to Her, the idea of an intelligent computer that can hold conversations with humans has long been a dream of science-fiction writers, but that fantasy may just have taken a step closer to becoming reality. On Saturday, a computer algorithm claiming to be a 13-year-old boy called Eugene Goostman passed the Turing test, it has been claimed. However, some commentators have suggested that pretending to be a 13-year-old Ukrainian child is not as difficult as pretending to be an Anglophone adult, and that the result should not count.

The program convinced 33% of the human judges at an event at the Royal Society that it was actually a human. The Turing test, which was devised in 1950 by the pioneering computer scientist and Second World War codebreaker Alan Turing, is a classic benchmark in artificial intelligence – it determines a machine’s ability to show intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Transcripts of the conversations from the latest Royal Society test are not yet available but below are transcripts from a previous event at Bletchley Park on 23 June 2012, with the same program.

Can you tell Eugene isn’t human?

Test 1

[12:43:37] Judge: Hi, I’m Cathy.

[12:43:43] Eugene: Cathy! Nice to meet you! You’re a woman, if I’m not mistaken.

[12:44:20] Judge: Right. What is your name?

[12:44:29] Eugene: People call me Eugene. Or “Zhenya”. It is a short name for Eugene in Russia and Ukraine.

[12:45:17] Judge: Nice name. Are you from either area?

[12:45:41] Eugene: Agree 🙂 Oh, what a fruitful conversation;-)

[12:45:52] Judge: Do you really think so?

[12:46:04] Eugene: No, your guess was wrong. I don’t think so. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

[12:46:56] Judge: What speciality? What’s yours?

[12:47:01] Eugene: Huh? And I forgot to ask you where you are from.

[12:47:51] Judge: The universe and you?

[12:48:01] Eugene: Is it your hobby to ask little poor Jewish boys such tricky questions they don’t have an answer for?

Test 2

[16:11:25] Judge: what music do you like?

[16:11:32] Eugene: I like Eminem – because he doesn’t have a thing to do with “polite correctness”.

[16:11:46] Judge: OMG i love eminem, he’s my favourite artist!

[16:12:00] Eugene: I wonder what Eminem’s nick mean. I associate it only with a word “enema”. But he is a cool guy, I like his The Real Slim Shady.

[16:12:55] Judge: i know what his nickname means, his real name is Marshall Mathers. From the initials M&M phynetically sounding like Eminem.

[16:13:06] Eugene: Eminem is a famous crapper. No, raper… ripper… Yeah, I mean – rapper. I like his Stan and The Real Slim Shady.

[16:13:32] Judge: I like Hailies Song.

[16:13:46] Eugene: Interesting tastes. I like science-fiction books, by the way. Where do you came from, by the way? Could you tell me about the place where you live?

[16:14:22] Judge: Epsom, home of the derby. yourself?

[16:14:27] Eugene: My guinea pig says that name Derby sounds very nice.

[16:14:41] Judge: i shopuld [sic] talk with him sometime.

[16:14:48] Eugene: I think you should take some typing lessons. Or just type more accurately.

[16:15:04] Judge: That’s very rude.

[16:15:09] Eugene: Ummm … Frankly, I didn’t get your question 🙁

[16:15:49] Judge: “Ummm” isnt a word. i think you need typing lessons

[16:15:58] Eugene: My thoughts are same. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Test 3

[16:06:31] Judge: Hi there, do you prefer to wear a dress or trousers to work?

[16:06:37] Eugene: I am a scholar. I’m too young to make money.

[16:07:32] Judge: What was the last film you saw at the cinema?

[16:07:38] Eugene: Huh? Could you tell me what are you? I mean your profession.

[16:08:11] Judge: I am an IT manager, and you?

[16:08:16] Eugene: Manager? So how is it to work as manager?

[16:09:13] Judge: Hard work but rewarding, what do you do?

[16:09:18] Eugene: I am only 13, so I’m attending school so far.

[16:09:51] Judge: Have you seen teletubbies?

[16:10:04] Eugene: No, I haven’t seen it, unfortunately. But I hope it is better than these crappy Star Wars. And I forgot to ask you where you are from …

[16:10:15] Judge: Brighton, and you?

Read the entire story and Eugene’s conversation with real humans here.

Image: A conversation with Eugene Goostman. Courtesy of BBC.

Post-Siri Relationships


What are we to make of a world in which software-driven intelligent agents, artificial intelligence and language-processing capabilities combine to deliver a human experience? After all, what does it really mean to be human, and can a machine be sentient? We should all be pondering such weighty issues, since this emerging reality may well arrive within our lifetimes.

From Technology Review:

In the movie Her, which was nominated for the Oscar for Best Picture this year, a middle-aged writer named Theodore Twombly installs and rapidly falls in love with an artificially intelligent operating system who christens herself Samantha.

Samantha lies far beyond the faux “artificial intelligence” of Google Now or Siri: she is as fully and unambiguously conscious as any human. The film’s director and writer, Spike Jonze, employs this premise for limited and prosaic ends, so the film limps along in an uncanny valley, neither believable as near-future reality nor philosophically daring enough to merit suspension of disbelief. Nonetheless, Her raises questions about how humans might relate to computers. Twombly is suffering a painful separation from his wife; can Samantha make him feel better?

Samantha’s self-awareness does not echo real-world trends for automated assistants, which are heading in a very different direction. Making personal assistants chatty, let alone flirtatious, would be a huge waste of resources, and most people would find them as irritating as the infamous Microsoft Clippy.

But it doesn’t necessarily follow that these qualities would be unwelcome in a different context. When dementia sufferers in nursing homes are invited to bond with robot seal pups, and a growing list of psychiatric conditions are being addressed with automated dialogues and therapy sessions, it can only be a matter of time before someone tries to create an app that helps people overcome ordinary loneliness. Suppose we do reach the point where it’s possible to feel genuinely engaged by repartee with a piece of software. What would that mean for the human participants?

Perhaps this prospect sounds absurd or repugnant. But some people already take comfort from immersion in the lives of fictional characters. And much as I wince when I hear someone say that “my best friend growing up was Elizabeth Bennet,” no one would treat it as evidence of psychotic delusion. Over the last two centuries, the mainstream perceptions of novel reading have traversed a full spectrum: once seen as a threat to public morality, it has become a badge of empathy and emotional sophistication. It’s rare now to hear claims that fiction is sapping its readers of time, energy, and emotional resources that they ought to be devoting to actual human relationships.

Of course, characters in Jane Austen novels cannot banter with the reader—and it’s another question whether it would be a travesty if they could—but what I’m envisaging are not characters from fiction “brought to life,” or even characters in a game world who can conduct more realistic dialogue with human players. A software interlocutor—an “SI”—would require some kind of invented back story and an ongoing “life” of its own, but these elements need not have been chosen as part of any great dramatic arc. Gripping as it is to watch an egotistical drug baron in a death spiral, or Raskolnikov dragged unwillingly toward his creator’s idea of redemption, the ideal SI would be more like a pen pal, living an ordinary life untouched by grand authorial schemes but ready to discuss anything, from the mundane to the metaphysical.

There are some obvious pitfalls to be avoided. It would be disastrous if the user really fell for the illusion of personhood, but then, most of us manage to keep the distinction clear in other forms of fiction. An SI that could be used to rehearse pathological fantasies of abusive relationships would be a poisonous thing—but conversely, one that stood its ground against attempts to manipulate or cower it might even do some good.

The art of conversation, of listening attentively and weighing each response, is not a universal gift, any more than any other skill. If it becomes possible to hone one’s conversational skills with a computer—discovering your strengths and weaknesses while enjoying a chat with a character that is no less interesting for failing to exist—that might well lead to better conversations with fellow humans.

Read the entire story here.

Image: Siri icon. Courtesy of Cult of Mac / Apple.

A Smarter Smart Grid

If you live somewhere rather toasty you know how painful your electricity bills can be during the summer months. So, wouldn’t it be good to have a system automatically find you the cheapest electricity when you need it most? Welcome to the artificially intelligent smarter smart grid.

From the New Scientist:

An era is coming in which artificially intelligent systems can manage your energy consumption to save you money and make the electricity grid even smarter.

If you’re tired of keeping track of how much you’re paying for energy, try letting artificial intelligence do it for you. Several start-up companies aim to help people cut costs, flex their muscles as consumers to promote green energy, and usher in a more efficient energy grid – all by unleashing smart software on everyday electricity usage.

Several states in the US have deregulated energy markets, in which customers can choose between several energy providers competing for their business. But the different tariff plans, limited-time promotional rates and other products on offer can be confusing to the average consumer.

A new company called Lumator aims to cut through the morass and save consumers money in the process. Their software system, designed by researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, asks new customers to enter their energy preferences – how they want their energy generated, and the prices they are willing to pay. The software also gathers any available metering measurements, in addition to data on how the customer responds to emails about opportunities to switch energy provider.

A machine-learning system digests that information and scans the market for the most suitable electricity supply deal. As it becomes familiar with the customer’s habits it is programmed to automatically switch energy plans as the best deals become available, without interrupting supply.

“This ensures that customers aren’t taken advantage of by low introductory prices that drift upward over time, expecting customer inertia to prevent them from switching again as needed,” says Lumator’s founder and CEO Prashant Reddy.
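The behaviour Reddy describes can be caricatured as an expected-cost comparison over a usage forecast, with a switching threshold so the customer isn't churned for pennies. The tariffs, figures and threshold below are invented for illustration; this is not Lumator's actual model.

```python
def best_plan(usage_kwh, plans):
    """Pick the plan with the lowest expected cost over a monthly usage
    forecast. `plans` maps plan name -> per-month rates ($/kWh), so a
    low teaser rate that drifts upward is priced over the whole horizon,
    not just the first month. Illustrative only."""
    def cost(rates):
        return sum(usage * rate for usage, rate in zip(usage_kwh, rates))
    return min(plans, key=lambda name: cost(plans[name]))

def should_switch(current, candidate, plans, usage_kwh, threshold=10.0):
    """Recommend a switch only when projected savings ($) clear a
    threshold, guarding against constant churn for trivial gains."""
    def cost(name):
        return sum(usage * rate for usage, rate in zip(usage_kwh, plans[name]))
    return cost(current) - cost(candidate) > threshold
```

Pricing the teaser plan over the full horizon is exactly what defeats the "low introductory price, then drift upward" tactic the quote describes.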

The goal is not only to save customers time and money – Lumator claims it can save people between $10 and $30 a month on their bills – but also to help introduce more renewable energy into the grid. Reddy says power companies have little idea whether or not their consumers want to get their energy from renewables. But by keeping customer preferences on file and automatically switching to a new service when those preferences are met, Reddy hopes renewable energy suppliers will see the demand more clearly.

A firm called Nest, based in Palo Alto, California, has another way to save people money. It makes Wi-Fi-enabled thermostats that integrate machine learning to understand users’ habits. Energy companies in southern California and Texas offer deals to customers if they allow Nest to make small adjustments to their thermostats when the supplier needs to reduce customer demand.

“The utility company gives us a call and says they’re going to need help tomorrow as they’re expecting a heavy load,” says Matt Rogers, one of Nest’s founders. “We provide about 5 megawatts of load shift, but each home has a personalised demand response. The entire programme is based on data collected by Nest.”

Rogers says that about 5000 Nest users have opted in to such load-balancing programmes.
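The aggregate figures Rogers quotes imply a simple allocation problem: split a requested reduction across opted-in homes in proportion to how much flexible load each can comfortably shed. A hypothetical sketch, with invented numbers and no claim to be Nest's algorithm:

```python
def allocate_load_shift(request_kw, homes):
    """Split a utility's requested demand reduction across opted-in
    homes, proportional to each home's flexible load in kW (e.g. how
    far its thermostat can drift without discomfort). Illustrative only."""
    total_flexible = sum(homes.values())
    if total_flexible == 0:
        return {home: 0.0 for home in homes}
    # Can't shed more than the homes collectively have to give.
    scale = min(request_kw / total_flexible, 1.0)
    return {home: flexible * scale for home, flexible in homes.items()}
```

A proportional split is the simplest way to get a "personalised demand response" per home while still summing to the utility's requested total.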

Read the entire article here.

Image courtesy of Treehugger.