Tag Archives: artificial intelligence

Computational Folkloristics

hca_by_thora_hallager_1869

What do you get when you set AI (artificial intelligence) the task of reading through 30,000 Danish folk and fairy tales? Well, you get a host of fascinating, newly discovered insights into Scandinavian witches and trolls.

More importantly, you hammer another nail into the coffin of literary criticism and set AI on a collision course with yet another preserve of once exclusively human endeavor. It’s probably safe to assume that creative writing will fall to intelligent machines in the not too distant future as well — certainly human-powered investigative journalism seemed to become extinct in 2016, replaced by algorithmic aggregation, social bots and fake-mongers.

From aeon:

Where do witches come from, and what do those places have in common? While browsing a large collection of traditional Danish folktales, the folklorist Timothy Tangherlini and his colleague Peter Broadwell, both at the University of California, Los Angeles, decided to find out. Armed with a geographical index and some 30,000 stories, they developed WitchHunter, an interactive ‘geo-semantic’ map of Denmark that highlights the hotspots for witchcraft.

The system used artificial intelligence (AI) techniques to unearth a trove of surprising insights. For example, they found that evil sorcery often took place close to Catholic monasteries. This made a certain amount of sense, since Catholic sites in Denmark were tarred with diabolical associations after the Protestant Reformation in the 16th century. By plotting the distance and direction of witchcraft relative to the storyteller’s location, WitchHunter also showed that enchantresses tend to be found within the local community, much closer to home than other kinds of threats. ‘Witches and robbers are human threats to the economic stability of the community,’ the researchers write. ‘Yet, while witches threaten from within, robbers are generally situated at a remove from the well-described village, often living in woods, forests, or the heath … it seems that no matter how far one goes, nor where one turns, one is in danger of encountering a witch.’

Such ‘computational folkloristics’ raise a big question: what can algorithms tell us about the stories we love to read? Any proposed answer seems to point to as many uncertainties as it resolves, especially as AI technologies grow in power. Can literature really be sliced up into computable bits of ‘information’, or is there something about the experience of reading that is irreducible? Could AI enhance literary interpretation, or will it alter the field of literary criticism beyond recognition? And could algorithms ever derive meaning from books in the way humans do, or even produce literature themselves?

Author and computational linguist Inderjeet Mani concludes his essay thus:

Computational analysis and ‘traditional’ literary interpretation need not be a winner-takes-all scenario. Digital technology has already started to blur the line between creators and critics. In a similar way, literary critics should start combining their deep expertise with ingenuity in their use of AI tools, as Broadwell and Tangherlini did with WitchHunter. Without algorithmic assistance, researchers would be hard-pressed to make such supernaturally intriguing findings, especially as the quantity and diversity of writing proliferates online.

In the future, scholars who lean on digital helpmates are likely to dominate the rest, enriching our literary culture and changing the kinds of questions that can be explored. Those who resist the temptation to unleash the capabilities of machines will have to content themselves with the pleasures afforded by smaller-scale, and fewer, discoveries. While critics and book reviewers may continue to be an essential part of public cultural life, literary theorists who do not embrace AI will be at risk of becoming an exotic species – like the librarians who once used index cards to search for information.

Read the entire tale here.

Image: Portrait of the Danish writer Hans Christian Andersen. Courtesy: Thora Hallager, 10/16 October 1869. Wikipedia. Public Domain.

Nightmare Machine

mit-nightmare-machine

Now that the abject terror of the US presidential election is over — at least for a while — we have to turn our minds to new forms of pain and horror.

In recent years a growing number of illustrious scientists and technologists have described artificial intelligence (AI) as the greatest existential threat to humanity. They worry, rightly, that a well-drilled, unfettered AI could eventually out-think and out-smart us at every level. Such a super-intelligent AI might then determine that humans are either peripheral or superfluous to its needs and goals, and either enslave or extinguish us. This is the stuff of real nightmares.

Yet, at a more playful level, AI can also learn to deliver imagined nightmares. This Halloween researchers at MIT used AI techniques to create and optimize horrifying images of human faces and places. They called their AI the Nightmare Machine.

For the first step, researchers fed hundreds of thousands of celebrity photos into their AI algorithm, known as a deep convolutional generative adversarial network. This allowed the AI to learn about faces and how to create new ones. Second, they flavored the results with a second learning algorithm that had been trained on images of zombies. The combination allowed the AI to learn the critical factors that make for scary images and to selectively improve upon them. It turns out that blood on the face, empty eye sockets, and missing or misshapen teeth tend to elicit the greatest horror and fear.
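
For the curious, here is roughly what that first step looks like in code. This is a minimal sketch of a DCGAN training step (my illustration, not the MIT team's actual code); the layer sizes, the small 16x16 output resolution and the hyperparameters are all illustrative, and random tensors stand in for the celebrity photos.

import torch
import torch.nn as nn

latent_dim = 100

# Generator: turns random noise into a 16x16 colour image.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
)

# Discriminator: scores an image as real (1) or generated (0).
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, 1, 0), nn.Flatten(), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCELoss()

def train_step(real_faces):
    n = real_faces.size(0)
    fakes = generator(torch.randn(n, latent_dim, 1, 1))

    # Teach the discriminator to tell real photos from fakes.
    d_loss = bce(discriminator(real_faces), torch.ones(n, 1)) + \
             bce(discriminator(fakes.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Teach the generator to fool the discriminator.
    g_loss = bce(discriminator(fakes), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

train_step(torch.randn(8, 3, 16, 16))  # stand-in batch of 'photos'

In a real system the stand-in batch would be mini-batches of actual photographs, and the loop would run over hundreds of thousands of images.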

While the results are not quite as scary as Stephen Hawking’s warning of AI-led human extinction, the images are terrifying nonetheless.

Learn more about the MIT Media Lab’s Nightmare Machine here.

Image: Horror imagery generated by artificial intelligence. Courtesy: MIT Media Lab.

Beware the Beauty of Move 37

AlphaGo-Lee-Sedol-Game 2

Make a note of the date: March 15, 2016. On this day AlphaGo, the Go-playing artificial intelligence (AI) system from Google’s DeepMind unit, wrapped up its five-game series. It beat Lee Sedol, a human and one of the world’s best Go players, by 4 games to 1.

This marks the first time a machine has beaten one of the world’s top players at Go, an ancient and notoriously complex board game. AlphaGo’s victory stunned the Go-playing world, but its achievement is merely the opening shot in the coming AI revolution.

The AlphaGo system is based on deep neural networks and machine learning, which means it is driven by software that learns. In fact, AlphaGo became an expert Go player by analyzing millions of previous Go games and by playing against itself tens of millions of times, learning and improving in the process.

While the AI technology that underlies AlphaGo has been around for decades, it is now reaching a point where AI-based systems can out-think and outperform their human masters. In fact, many considered it impossible for a computer to play Go at this level due to the immeasurable number of possible positions on the board, mastery of strategy, tactical obfuscation, and the need for a human-like sense of intuition.

Indeed, in game 2 of the series AlphaGo made a strange, seemingly inexplicable decision on move 37. This turned the game in AlphaGo’s favor and Lee Sedol never recovered. Commentators and AlphaGo’s human adversary described move 37 as extraordinarily unexpected and “beautiful”.

And from that story of beauty comes a tale of caution from David Gelernter, professor of computer science at Yale. Gelernter rightly wonders what an AI with an IQ of 5,000 would mean. After all, it is only a matter of time — rapidly approaching — before we have constructed machines with the average human IQ of 100, then 500.

Image: Game 2, first 99 moves, screenshot. AlphaGo (black) versus Lee Sedol (white), March 10, 2016. Courtesy of Wikipedia.

Software That Learns to Eat Itself

Google became a monstrously successful technology company by inventing a solution to index and search content scattered across the Web, and then monetizing the search results through contextual ads. Since its inception the company has relied on increasingly sophisticated algorithms for indexing mountains of information and then serving up increasingly relevant results. These algorithms are based on a secret sauce that ranks the relevance of a webpage by evaluating its content, structure and relationships with other pages. They are defined and continuously improved by technologists and encoded into software by teams of engineers.

But as is the case in many areas of human endeavor, the underlying search engine technology and its teams of human designers and caregivers are being replaced by newer, better technology. In this case the better technology is based on artificial intelligence (AI), and it doesn’t rely on humans: it uses machine learning, deep learning and neural networks — a combination of hardware and software that increasingly mimics the human brain in its ability to aggregate and filter information, decipher patterns and infer meaning.

[I’m sure it will not be long before yours truly is replaced by a bot.]

From Wired:

Yesterday, the 46-year-old Google veteran who oversees its search engine, Amit Singhal, announced his retirement. And in short order, Google revealed that Singhal’s rather enormous shoes would be filled by a man named John Giannandrea. On one level, these are just two guys doing something new with their lives. But you can also view the pair as the ideal metaphor for a momentous shift in the way things work inside Google—and across the tech world as a whole.

Giannandrea, you see, oversees Google’s work in artificial intelligence. This includes deep neural networks, networks of hardware and software that approximate the web of neurons in the human brain. By analyzing vast amounts of digital data, these neural nets can learn all sorts of useful tasks, like identifying photos, recognizing commands spoken into a smartphone, and, as it turns out, responding to Internet search queries. In some cases, they can learn a task so well that they outperform humans. They can do it better. They can do it faster. And they can do it at a much larger scale.

This approach, called deep learning, is rapidly reinventing so many of the Internet’s most popular services, from Facebook to Twitter to Skype. Over the past year, it has also reinvented Google Search, where the company generates most of its revenue. Early in 2015, as Bloomberg recently reported, Google began rolling out a deep learning system called RankBrain that helps generate responses to search queries. As of October, RankBrain played a role in “a very large fraction” of the millions of queries that go through the search engine with each passing second.

Read the entire story here.

Google AI Versus the Human Race

Korean_Go_Game_ca_1910-1920

It does indeed appear that a computer armed with Google’s experimental AI (artificial intelligence) software just beat a grandmaster of the strategy board game Go. The game was devised in ancient China — it’s been around for several millennia. Go is commonly held to be substantially more difficult than chess to master, to which I can personally attest.

So, does this mean that the human race is next in line for a defeat at the hands of an uber-intelligent AI? Well, not really, not yet anyway.

But, I’m with prominent scientists and entrepreneurs — including Stephen Hawking, Bill Gates and Elon Musk — who warn of the long-term existential peril to humanity from unfettered AI. In the meantime check out how AlphaGo from Google’s DeepMind unit set about thrashing a human.

From Wired:

An artificially intelligent Google machine just beat a human grandmaster at the game of Go, the 2,500-year-old contest of strategy and intellect that’s exponentially more complex than the game of chess. And Nick Bostrom isn’t exactly impressed.

Bostrom is the Swedish-born Oxford philosophy professor who rose to prominence on the back of his recent bestseller Superintelligence: Paths, Dangers, Strategies, a book that explores the benefits of AI, but also argues that a truly intelligent computer could hasten the extinction of humanity. It’s not that he discounts the power of Google’s Go-playing machine. He just argues that it isn’t necessarily a huge leap forward. The technologies behind Google’s system, Bostrom points out, have been steadily improving for years, including much-discussed AI techniques such as deep learning and reinforcement learning. Google beating a Go grandmaster is just part of a much bigger arc. It started long ago, and it will continue for years to come.

“There has been, and there is, a lot of progress in state-of-the-art artificial intelligence,” Bostrom says. “[Google’s] underlying technology is very much continuous with what has been under development for the last several years.”

But if you look at this another way, it’s exactly why Google’s triumph is so exciting—and perhaps a little frightening. Even Bostrom says it’s a good excuse to stop and take a look at how far this technology has come and where it’s going. Researchers once thought AI would struggle to crack Go for at least another decade. Now, it’s headed to places that once seemed unreachable. Or, at least, there are many people—with much power and money at their disposal—who are intent on reaching those places.

Building a Brain

Google’s AI system, known as AlphaGo, was developed at DeepMind, the AI research house that Google acquired for $400 million in early 2014. DeepMind specializes in both deep learning and reinforcement learning, technologies that allow machines to learn largely on their own.

Using what are called neural networks—networks of hardware and software that approximate the web of neurons in the human brain—deep learning is what drives the remarkably effective image search tool built into Google Photos—not to mention the face recognition service on Facebook and the language translation tool built into Microsoft’s Skype and the system that identifies porn on Twitter. If you feed millions of game moves into a deep neural net, you can teach it to play a video game.

Reinforcement learning takes things a step further. Once you’ve built a neural net that’s pretty good at playing a game, you can match it against itself. As two versions of this neural net play thousands of games against each other, the system tracks which moves yield the highest reward—that is, the highest score—and in this way, it learns to play the game at an even higher level.

AlphaGo uses all this. And then some. Hassabis [Demis Hassabis, DeepMind founder] and his team added a second level of “deep reinforcement learning” that looks ahead to the long-term results of each move. And they lean on traditional AI techniques that have driven Go-playing AI in the past, including the Monte Carlo tree search method, which basically plays out a huge number of scenarios to their eventual conclusions. Drawing from techniques both new and old, they built a system capable of beating a top professional player. In October, AlphaGo played a closed-door match against the reigning three-time European Go champion, which was only revealed to the public on Wednesday morning. The match spanned five games, and AlphaGo won all five.
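
As an aside, the playout idea at the heart of Monte Carlo tree search is easy to demonstrate on a toy game. The sketch below is my illustration, not DeepMind's code: it scores each candidate tic-tac-toe move by the win rate of many purely random playouts. Full MCTS adds a search tree, with selection, expansion and backpropagation, on top of rollouts like these.

import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, to_move):
    """Play random moves until the game ends; return 'X', 'O' or None."""
    board = board[:]
    while True:
        if winner(board) is not None:
            return winner(board)
        empty = [i for i, v in enumerate(board) if v is None]
        if not empty:
            return None  # draw
        board[random.choice(empty)] = to_move
        to_move = 'O' if to_move == 'X' else 'X'

def best_move(board, player, playouts=200):
    """Score each legal move by its win rate over many random playouts."""
    other = 'O' if player == 'X' else 'X'
    def win_rate(move):
        trial = board[:]
        trial[move] = player
        wins = sum(random_playout(trial, other) == player
                   for _ in range(playouts))
        return wins / playouts
    legal = [i for i, v in enumerate(board) if v is None]
    return max(legal, key=win_rate)

print(best_move([None] * 9, 'X'))  # usually 4: the centre square

Go's board is far too large for flat playouts alone, which is why AlphaGo pairs this kind of search with learned neural networks to propose and evaluate moves.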

Read the entire story here.

Image: Korean couple, in traditional dress, play Go; photograph dated between 1910 and 1920. Courtesy: Frank and Frances Carpenter Collection. Public Domain.

Robotic Stock Keeping

Tally-robot-simbe

Meet Tally; it may soon be coming to a store near you. Tally is an autonomous robot that patrols store aisles and scans shelves to ensure items are correctly stocked. While the robot doesn’t do the restocking itself — beware, stock clerks, this is probably only a matter of time — it audits shelves for out-of-stock items, low stock, misplaced items, and pricing errors. The robot was developed by the start-up Simbe Robotics.

From Technology Review:

When customers can’t find a product on a shelf it’s an inconvenience. But by some estimates, it adds up to billions of dollars of lost revenue each year for retailers around the world.

A new shelf-scanning robot called Tally could help ensure that customers never leave a store empty-handed. It roams the aisles and automatically records which shelves need to be restocked.

The robot, developed by a startup called Simbe Robotics, is the latest effort to automate some of the more routine work done in millions of warehouses and retail stores. It is also an example of the way robots and AI will increasingly take over parts of people’s jobs rather than replacing them.

Restocking shelves is simple but hugely important for retailers. Billions of dollars may be lost each year because products are missing, misplaced, or poorly arranged, according to a report from the analyst firm IHL Services. In a large store it can take hundreds of hours to inspect shelves manually each week.

Brad Bogolea, CEO and cofounder of Simbe Robotics, says his company’s robot can scan the shelves of a small store, like a modest CVS or Walgreens, in about an hour. A very large retailer might need several robots to patrol its premises. He says the robot will be offered on a subscription basis but did not provide the pricing. Bogolea adds that one large retailer is already testing the machine.

Tally automatically roams a store, checking whether a shelf needs restocking; whether a product has been misplaced or poorly arranged; and whether the prices shown on shelves are correct. The robot consists of a wheeled platform with four cameras that scan the shelves on either side from the floor up to a height of eight feet.

Read the entire article here.

Image: Tally. Courtesy of Simbe Robotics.

AIs and ICBMs

You know something very creepy is going on when robots armed with artificial intelligence (AI) engage in conversations about nuclear war and intercontinental ballistic missiles (ICBMs). This scene could be straight out of a William Gibson novel.

[tube]mfcyq7uGbZg[/tube]

Video: The BINA48 robot, created by Martine Rothblatt and Hanson Robotics, has a conversation with Siri. Courtesy of ars technica.

The Impending AI Apocalypse

Robbie_the_Robot_2006

AI as in Artificial Intelligence, not American Idol — though some believe the latter to be somewhat of a cultural apocalypse.

AI is reaching a technological tipping point; advances in computation, especially in neural networks, are making machines more intelligent every day. These advances are likely to spawn machines — sooner rather than later — that will mimic and then surpass human cognition. This has an increasing number of philosophers, scientists and corporations raising alarms. The fear: what if super-intelligent AI machines one day decide that humans are inferior and superfluous?

From Wired:

On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.

That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

Google Gets on Board

Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011, however. That’s when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert. The two men started talking about AI and Tallinn soon invested in DeepMind, and last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world. Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings.

That worries Tallinn, somewhat. In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played it with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.

Read the entire story here.

Image: Robby the Robot (Forbidden Planet), Comic Con, San Diego, 2006. Courtesy of Pattymooney.

Will the AIs Let Us Coexist?

At some point in the not too distant future artificial intelligences will far exceed humans in most capacities (except shopping and beer drinking). The scripts of most Hollywood movies suggest that we humans would be (mostly) wiped out by AI machines, beings, robots or other non-human forms — we being the lesser organisms, superfluous to AI needs.

Perhaps we may find an alternate path to a more benign coexistence, much like that posited in the Culture novels by the dearly departed Iain M. Banks. I’ll go with Mr. Banks’s version. Though, just perhaps, evolution is supposed to leave us behind, replacing our simplistic, selfish intelligence with a much more advanced, non-human version.

From the Guardian:

From 2001: A Space Odyssey to Blade Runner and RoboCop to The Matrix, how humans deal with the artificial intelligence they have created has proved a fertile dystopian territory for film-makers. More recently Spike Jonze’s Her and Alex Garland’s forthcoming Ex Machina explore what it might be like to have AI creations living among us and, as Alan Turing’s famous test foregrounded, how tricky it might be to tell the flesh and blood from the chips and code.

These concerns are even troubling some of Silicon Valley’s biggest names: last month Tesla’s Elon Musk described AI as mankind’s “biggest existential threat… we need to be very careful”. What many of us don’t realise is that AI isn’t some far-off technology that only exists in film-makers’ imaginations and computer scientists’ labs. Many of our smartphones employ rudimentary AI techniques to translate languages or answer our queries, while video games employ AI to generate complex, ever-changing gaming scenarios. And so long as Silicon Valley companies such as Google and Facebook continue to acquire AI firms and hire AI experts, AI’s IQ will continue to rise…

Isn’t AI a Steven Spielberg movie?
No arguments there, but the term, which stands for “artificial intelligence”, has a more storied history than Spielberg and Kubrick’s 2001 film. The concept of artificial intelligence goes back to the birth of computing: in 1950, just 14 years after defining the concept of a general-purpose computer, Alan Turing asked “Can machines think?”

It’s something that is still at the front of our minds 64 years later, most recently becoming the core of Alex Garland’s new film, Ex Machina, which sees a young man asked to assess the humanity of a beautiful android. The concept is not a million miles removed from that set out in Turing’s 1950 paper, Computing Machinery and Intelligence, in which he laid out a proposal for the “imitation game” – what we now know as the Turing test. Hook a computer up to a text terminal and let it have conversations with a human interrogator, while a real person does the same. The heart of the test is whether, when you ask the interrogator to guess which is the human, “the interrogator [will] decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman”.

Turing said that asking whether machines could pass the imitation game is more useful than the vague and philosophically unclear question of whether or not they “think”. “The original question… I believe to be too meaningless to deserve discussion.” Nonetheless, he thought that by the year 2000, “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”.

In terms of natural language, he wasn’t far off. Today, it is not uncommon to hear people talking about their computers being “confused”, or taking a long time to do something because they’re “thinking about it”. But even if we are stricter about what counts as a thinking machine, it’s closer to reality than many people think.

So AI exists already?
It depends. We are still nowhere near to passing Turing’s imitation game, despite reports to the contrary. In June, a chatbot called Eugene Goostman successfully fooled a third of judges in a mock Turing test held in London into thinking it was human. But rather than being able to think, Eugene relied on a clever gimmick and a host of tricks. By pretending to be a 13-year-old boy who spoke English as a second language, the machine explained away its many incoherencies, and with a smattering of crude humour and offensive remarks, managed to redirect the conversation when unable to give a straight answer.

The most immediate use of AI tech is natural language processing: working out what we mean when we say or write a command in colloquial language. For something that babies begin to do before they can even walk, it’s an astonishingly hard task. Consider the phrase beloved of AI researchers – “time flies like an arrow, fruit flies like a banana”. Breaking the sentence down into its constituent parts confuses even native English speakers, let alone an algorithm.
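
That ambiguity can be made concrete. The toy grammar below (my illustration, using the NLTK library) licenses two parses of “fruit flies like a banana”: one where fruit flies are fond of a banana, and one where fruit travels through the air the way a banana does.

import nltk

# A deliberately ambiguous toy grammar: 'flies' can be a noun or a verb,
# and 'like' can be a verb or a preposition.
grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> N | N N | Det N
    VP -> V NP | V PP
    PP -> P NP
    Det -> 'a'
    N -> 'fruit' | 'flies' | 'banana'
    V -> 'flies' | 'like'
    P -> 'like'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("fruit flies like a banana".split()):
    print(tree)  # prints both parse trees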

Read the entire article here.

Goostman Versus Turing

eugene-goostman

Some computer scientists believe that “Eugene Goostman” may have overcome the famous hurdle proposed by Alan Turing, by passing the eponymous Turing test. Eugene is a 13-year-old Ukrainian “boy” constructed from computer algorithms designed to feign intelligence and mirror human thought processes. During a text-based exchange Eugene managed to convince his human interrogators that he was a real boy — and thus his creators claim to have broken the previously impenetrable Turing barrier.

Other researchers and philosophers disagree: they claim that it’s easier to construct an artificial intelligence that converses in good but limited English — Eugene is Ukrainian, after all — than it would be to simulate a native anglophone adult. So the Turing test barrier may yet stand.

From the Guardian:

From 2001: a Space Odyssey to Her, the idea of an intelligent computer that can hold conversations with humans has long been a dream of science-fiction writers, but that fantasy may just have taken a step closer to becoming reality. On Saturday, a computer algorithm claiming to be a 13-year-old boy called Eugene Goostman passed the Turing test, it has been claimed. However, some commentators have suggested that pretending to be a 13-year-old Ukrainian child is not as difficult as pretending to be an Anglophone adult, and that the result should not count.

The program convinced 33% of the human judges at an event at the Royal Society that it was actually a human. The Turing test, which was devised in 1950 by the pioneering computer scientist and second world war codebreaker Alan Turing, is a classic benchmark in artificial intelligence – it determines a machine’s ability to show intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Transcripts of the conversations from the latest Royal Society test are not yet available but below are transcripts from a previous event at Bletchley Park on 23 June 2012, with the same program.

Can you tell Eugene isn’t human?

Test 1

[12:43:37] Judge: Hi, I’m Cathy.

[12:43:43] Eugene: Cathy! Nice to meet you! You’re a woman, if I’m not mistaken.

[12:44:20] Judge: Right. What is your name?

[12:44:29] Eugene: People call me Eugene. Or “Zhenya”. It is a short name for Eugene in Russia and Ukraine.

[12:45:17] Judge: Nice name. Are you from either area?

[12:45:41] Eugene: Agree 🙂 Oh, what a fruitful conversation;-)

[12:45:52] Judge: Do you really think so?

[12:46:04] Eugene: No, your guess was wrong. I don’t think so. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

[12:46:56] Judge: What speciality? What’s yours?

[12:47:01] Eugene: Huh? And I forgot to ask you where you are from.

[12:47:51] Judge: The universe and you?

[12:48:01] Eugene: Is it your hobby to ask little poor Jewish boys such tricky questions they don’t have an answer for?

Test 2

[16:11:25] Judge: what music do you like?

[16:11:32] Eugene: I like Eminem – because he doesn’t have a thing to do with “polite correctness”.

[16:11:46] Judge: OMG i love eminem, he’s my favourite artist!

[16:12:00] Eugene: I wonder what Eminem’s nick mean. I associate it only with a word “enema”. But he is a cool guy, I like his The Real Slim Shady.

[16:12:55] Judge: i know what his nickname means, his real name is Marshall Mathers. From the initials M&M phynetically sounding like Eminem.

[16:13:06] Eugene: Eminem is a famous crapper. No, raper… ripper… Yeah, I mean – rapper. I like his Stan and The Real Slim Shady.

[16:13:32] Judge: I like Hailies Song.

[16:13:46] Eugene: Interesting tastes. I like science-fiction books, by the way. Where do you came from, by the way? Could you tell me about the place where you live?

[16:14:22] Judge: Epsom, home of the derby. yourself?

[16:14:27] Eugene: My guinea pig says that name Derby sounds very nice.

[16:14:41] Judge: i shopuld [sic] talk with him sometime.

[16:14:48] Eugene: I think you should take some typing lessons. Or just type more accurately.

[16:15:04] Judge: That’s very rude.

[16:15:09] Eugene: Ummm … Frankly, I didn’t get your question 🙁

[16:15:49] Judge: “Ummm” isnt a word. i think you need typing lessons

[16:15:58] Eugene: My thoughts are same. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Test 3

[16:06:31] Judge: Hi there, do you prefer to wear a dress or trousers to work?

[16:06:37] Eugene: I am a scholar. I’m too young to make money.

[16:07:32] Judge: What was the last film you saw at the cinema?

[16:07:38] Eugene: Huh? Could you tell me what are you? I mean your profession.

[16:08:11] Judge: I am an IT manager, and you?

[16:08:16] Eugene: Manager? So how is it to work as manager?

[16:09:13] Judge: Hard work but rewarding, what do you do?

[16:09:18] Eugene: I am only 13, so I’m attending school so far.

[16:09:51] Judge: Have you seen teletubbies?

[16:10:04] Eugene: No, I haven’t seen it, unfortunately. But I hope it is better than these crappy Star Wars. And I forgot to ask you where you are from …

[16:10:15] Judge: Brighton, and you?

Read the entire story and Eugene’s conversation with real humans here.

Image: A conversation with Eugene Goostman. Courtesy of BBC.

I Think, Therefore I am, Not Robot

Robbie_the_Robot_2006

A sentient robot is the long-held dream of artificial intelligence researchers and science fiction authors alike. Yet some leading mathematicians theorize it may never happen, despite our accelerating technological prowess.

From New Scientist:

So long, robot pals – and robot overlords. Sentient machines may never exist, according to a variation on a leading mathematical model of how our brains create consciousness.

Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness. They argue that in conscious minds, integrated information cannot be reduced into smaller components. For instance, when a human perceives a red triangle, the brain cannot register the object as a colourless triangle plus a shapeless patch of red.

But there is a catch, argues Phil Maguire at the National University of Ireland in Maynooth. He points to a computational device called the XOR logic gate, which involves two inputs, A and B. The output of the gate is “0” if A and B are the same and “1” if A and B are different. In this scenario, it is impossible to predict the output based on A or B alone – you need both.
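
Maguire’s example is small enough to spell out in a few lines of Python (my illustration):

# The XOR gate from Maguire's example: two input bits, one output bit.
def xor_gate(a, b):
    return a ^ b

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b} -> output {xor_gate(a, b)}")

# The integration is lossy: an output of 1 alone cannot tell you
# whether the inputs were (0, 1) or (1, 0); one bit has vanished.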

Memory edit

Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously haemorrhaging information.”

Maguire and his colleagues say the brain is unlikely to do this, because repeated retrieval of memories would eventually destroy them. Instead, they define integration in terms of how difficult information is to edit.

Consider an album of digital photographs. The pictures are compiled but not integrated, so deleting or modifying individual images is easy. But when we create memories, we integrate those snapshots of information into our bank of earlier memories. This makes it extremely difficult to selectively edit out one scene from the “album” in our brain.

Based on this definition, Maguire and his team have shown mathematically that computers can’t handle any process that integrates information completely. If you accept that consciousness is based on total integration, then computers can’t be conscious.

Open minds

“It means that you would not be able to achieve the same results in finite time, using finite memory, using a physical machine,” says Maguire. “It doesn’t necessarily mean that there is some magic going on in the brain that involves some forces that can’t be explained physically. It is just so complex that it’s beyond our abilities to reverse it and decompose it.”

Disappointed? Take comfort – we may not get Rosie the robot maid, but equally we won’t have to worry about the world-conquering Agents of The Matrix.

Neuroscientist Anil Seth at the University of Sussex, UK, applauds the team for exploring consciousness mathematically. But he is not convinced that brains do not lose information. “Brains are open systems with a continual turnover of physical and informational components,” he says. “Not many neuroscientists would claim that conscious contents require lossless memory.”

Read the entire story here.

Image: Robbie the Robot, Forbidden Planet. Courtesy of San Diego Comic Con, 2006 / Wikipedia.

Post-Siri Relationships

siri

What are we to make of a world in which software-driven intelligent agents, artificial intelligence and language-processing capabilities combine to deliver a human experience? After all, what does it really mean to be human, and can a machine be sentient? We should all be pondering such weighty issues, since this emerging reality may well arrive within our lifetimes.

From Technology Review:

In the movie Her, which was nominated for the Oscar for Best Picture this year, a middle-aged writer named Theodore Twombly installs and rapidly falls in love with an artificially intelligent operating system who christens herself Samantha.

Samantha lies far beyond the faux “artificial intelligence” of Google Now or Siri: she is as fully and unambiguously conscious as any human. The film’s director and writer, Spike Jonze, employs this premise for limited and prosaic ends, so the film limps along in an uncanny valley, neither believable as near-future reality nor philosophically daring enough to merit suspension of disbelief. Nonetheless, Her raises questions about how humans might relate to computers. Twombly is suffering a painful separation from his wife; can Samantha make him feel better?

Samantha’s self-awareness does not echo real-world trends for automated assistants, which are heading in a very different direction. Making personal assistants chatty, let alone flirtatious, would be a huge waste of resources, and most people would find them as irritating as the infamous Microsoft Clippy.

But it doesn’t necessarily follow that these qualities would be unwelcome in a different context. When dementia sufferers in nursing homes are invited to bond with robot seal pups, and a growing list of psychiatric conditions are being addressed with automated dialogues and therapy sessions, it can only be a matter of time before someone tries to create an app that helps people overcome ordinary loneliness. Suppose we do reach the point where it’s possible to feel genuinely engaged by repartee with a piece of software. What would that mean for the human participants?

Perhaps this prospect sounds absurd or repugnant. But some people already take comfort from immersion in the lives of fictional characters. And much as I wince when I hear someone say that “my best friend growing up was Elizabeth Bennet,” no one would treat it as evidence of psychotic delusion. Over the last two centuries, the mainstream perceptions of novel reading have traversed a full spectrum: once seen as a threat to public morality, it has become a badge of empathy and emotional sophistication. It’s rare now to hear claims that fiction is sapping its readers of time, energy, and emotional resources that they ought to be devoting to actual human relationships.

Of course, characters in Jane Austen novels cannot banter with the reader—and it’s another question whether it would be a travesty if they could—but what I’m envisaging are not characters from fiction “brought to life,” or even characters in a game world who can conduct more realistic dialogue with human players. A software interlocutor—an “SI”—would require some kind of invented back story and an ongoing “life” of its own, but these elements need not have been chosen as part of any great dramatic arc. Gripping as it is to watch an egotistical drug baron in a death spiral, or Raskolnikov dragged unwillingly toward his creator’s idea of redemption, the ideal SI would be more like a pen pal, living an ordinary life untouched by grand authorial schemes but ready to discuss anything, from the mundane to the metaphysical.

There are some obvious pitfalls to be avoided. It would be disastrous if the user really fell for the illusion of personhood, but then, most of us manage to keep the distinction clear in other forms of fiction. An SI that could be used to rehearse pathological fantasies of abusive relationships would be a poisonous thing—but conversely, one that stood its ground against attempts to manipulate or cower it might even do some good.

The art of conversation, of listening attentively and weighing each response, is not a universal gift, any more than any other skill. If it becomes possible to hone one’s conversational skills with a computer—discovering your strengths and weaknesses while enjoying a chat with a character that is no less interesting for failing to exist—that might well lead to better conversations with fellow humans.

Read the entire story here.

Image: Siri icon. Courtesy of Cult of Mac / Apple.

A Smarter Smart Grid

If you live somewhere rather toasty, you know how painful your electricity bills can be during the summer months. So wouldn’t it be good to have a system that automatically finds you the cheapest electricity when you need it most? Welcome to the artificially intelligent, smarter smart grid.

From the New Scientist:

An era is coming in which artificially intelligent systems can manage your energy consumption to save you money and make the electricity grid even smarter

If you’re tired of keeping track of how much you’re paying for energy, try letting artificial intelligence do it for you. Several start-up companies aim to help people cut costs, flex their muscles as consumers to promote green energy, and usher in a more efficient energy grid – all by unleashing smart software on everyday electricity usage.

Several states in the US have deregulated energy markets, in which customers can choose between several energy providers competing for their business. But the different tariff plans, limited-time promotional rates and other products on offer can be confusing to the average consumer.

A new company called Lumator aims to cut through the morass and save consumers money in the process. Their software system, designed by researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, asks new customers to enter their energy preferences – how they want their energy generated, and the prices they are willing to pay. The software also gathers any available metering measurements, in addition to data on how the customer responds to emails about opportunities to switch energy provider.

A machine-learning system digests that information and scans the market for the most suitable electricity supply deal. As it becomes familiar with the customer’s habits it is programmed to automatically switch energy plans as the best deals become available, without interrupting supply.

“This ensures that customers aren’t taken advantage of by low introductory prices that drift upward over time, expecting customer inertia to prevent them from switching again as needed,” says Lumator’s founder and CEO Prashant Reddy.

The goal is not only to save customers time and money – Lumator claims it can save people between $10 and $30 a month on their bills – but also to help introduce more renewable energy into the grid. Reddy says power companies have little idea whether or not their consumers want to get their energy from renewables. But by keeping customer preferences on file and automatically switching to a new service when those preferences are met, Reddy hopes renewable energy suppliers will see the demand more clearly.
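
Lumator’s actual models are not public, but the switching rule at the heart of such a service can be sketched simply: score every plan on the market against the customer’s stated preferences and switch whenever a strictly better deal appears. Everything below (plan names, prices, the scoring weight) is made up for illustration.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    cents_per_kwh: float
    renewable_fraction: float  # 0.0 to 1.0

def score(plan, max_price, min_renewable):
    """Lower is better; None means the plan violates the preferences."""
    if plan.cents_per_kwh > max_price:
        return None
    if plan.renewable_fraction < min_renewable:
        return None
    # Trade price off against greenness; the weight is illustrative.
    return plan.cents_per_kwh - 2.0 * plan.renewable_fraction

def best_plan(market, current, max_price=14.0, min_renewable=0.5):
    scored = [(score(p, max_price, min_renewable), p) for p in market]
    scored = [(s, p) for s, p in scored if s is not None]
    if not scored:
        return current  # nothing acceptable on the market; stay put
    return min(scored, key=lambda sp: sp[0])[1]

market = [Plan("GridCo Basic", 11.5, 0.2),
          Plan("WindFirst", 12.8, 1.0),
          Plan("EverGreen Promo", 9.9, 0.6)]
print(best_plan(market, current=market[0]).name)  # -> EverGreen Promo

A production service would learn the preferences and weights from the customer’s behavior rather than fixing them by hand, which is presumably where the machine learning comes in.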

A firm called Nest, based in Palo Alto, California, has another way to save people money. It makes Wi-Fi-enabled thermostats that integrate machine learning to understand users’ habits. Energy companies in southern California and Texas offer deals to customers if they allow Nest to make small adjustments to their thermostats when the supplier needs to reduce customer demand.

“The utility company gives us a call and says they’re going to need help tomorrow as they’re expecting a heavy load,” says Matt Rogers, one of Nest’s founders. “We provide about 5 megawatts of load shift, but each home has a personalised demand response. The entire programme is based on data collected by Nest.”

Rogers says that about 5,000 Nest users have opted in to such load-balancing programmes.

Read the entire article here.

Image courtesy of Treehugger.

Google’s AI

The collective IQ of Google, the company, inched up a few notches in January of 2013 when it hired Ray Kurzweil. Over the coming years, if the work of Kurzweil and many of his colleagues pays off, the company’s intelligence may surge significantly. This time, though, it will be thanks to their work on artificial intelligence (AI), machine learning and (very) big data.

From Technology Review:

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.

Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.

With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.

Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”

Building a Brain

There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
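
That training loop fits in a few lines. Here is a toy version (my sketch, not Google’s software): a single simulated neuron turns weighted inputs into an output between 0 and 1, and the weights are nudged whenever the output misses the target.

import math
import random

random.seed(0)

# One simulated neuron: two inputs plus a bias, each with a weight.
weights = [random.uniform(-1, 1) for _ in range(3)]

def neuron(x1, x2):
    s = weights[0] * x1 + weights[1] * x2 + weights[2]
    return 1 / (1 + math.exp(-s))  # output between 0 and 1

# Toy patterns: the first feature alone decides the label ('dog' = 1).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]

for _ in range(2000):
    for (x1, x2), target in data:
        error = target - neuron(x1, x2)      # how wrong was the output?
        for i, xi in enumerate((x1, x2, 1)):
            weights[i] += 0.5 * error * xi   # adjust the weights

for (x1, x2), target in data:
    print((x1, x2), "->", round(neuron(x1, x2), 2), "target:", target)

Stack thousands of such neurons into layers and this same adjust-on-error idea becomes the training procedure the article describes.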

But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.

In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.

Finally, however, in the last decade Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
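
That greedy, layer-by-layer recipe can be sketched with tiny autoencoders, as below. One caveat: this is my illustration of the idea, and Hinton’s 2006 work used restricted Boltzmann machines rather than the plain autoencoders shown here.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train_autoencoder(data, hidden, epochs=3000, lr=0.5):
    """Learn a layer of features by reconstructing the input through it."""
    n_in = data.shape[1]
    W_enc = rng.normal(0, 0.5, (n_in, hidden))
    W_dec = rng.normal(0, 0.5, (hidden, n_in))
    for _ in range(epochs):
        h = sigmoid(data @ W_enc)        # encode into features
        out = sigmoid(h @ W_dec)         # reconstruct the input
        err = data - out
        d_out = err * out * (1 - out)    # backprop of squared error
        d_h = (d_out @ W_dec.T) * h * (1 - h)
        W_dec += lr * h.T @ d_out / len(data)
        W_enc += lr * data.T @ d_h / len(data)
    return W_enc

# Toy 6-pixel 'images' composed of two underlying strokes.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1],
                 [1, 1, 1, 1, 1, 1],
                 [0, 0, 0, 0, 0, 0]], dtype=float)

W1 = train_autoencoder(data, hidden=3)          # layer 1 learns from pixels
features1 = sigmoid(data @ W1)
W2 = train_autoencoder(features1, hidden=2)     # layer 2 learns from layer 1
print(np.round(sigmoid(features1 @ W2), 2))     # layer-2 feature codes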

Read the entire fascinating article following the jump.

Image courtesy of Wired.

Turing Test 2.0 – Intelligent Behavior Free of Bigotry

One wonders what the world would look like today had Alan Turing been criminally prosecuted and jailed by the British government for his homosexuality before the Second World War, rather than in 1952. Would the British have been able to break German Naval ciphers encoded by their Enigma machine? Would the German Navy have prevailed, and would the Nazis have gone on to conquer the British Isles?

Actually, Turing was not imprisoned in 1952 — rather, he “accepted” chemical castration at the hands of the British government rather than face jail. He died two years later of self-inflicted cyanide poisoning, just short of his 42nd birthday.

Now, a hundred years on from his birth, historians are reflecting on his short life and his lasting legacy. Turing is widely regarded as having founded the discipline of artificial intelligence, and he made significant contributions to computing. Yet most of his achievements went unrecognized for many decades or were given short shrift, perhaps due to his confidential work for the government or, more likely, because of his persona non grata status.

In 2009 the British government offered Turing an apology. And, of course, we now have the Turing Test (a test of a machine’s ability to exhibit intelligent behavior). So, one hundred years after Turing’s birth, to honor his life we should launch a new and improved Turing Test. Let’s call it the Turing Test 2.0.

This test would measure a human’s ability to exhibit intelligent behavior free of bigotry.

[div class=attrib]From Nature:[end-div]

Alan Turing is always in the news — for his place in science, but also for his 1952 conviction for having gay sex (illegal in Britain until 1967) and his suicide two years later. Former Prime Minister Gordon Brown issued an apology to Turing in 2009, and a campaign for a ‘pardon’ was rebuffed earlier this month.

Must you be a great figure to merit a ‘pardon’ for being gay? If so, how great? Is it enough to break the Enigma ciphers used by Nazi Germany in the Second World War? Or do you need to invent the computer as well, with artificial intelligence as a bonus? Is that great enough?

Turing’s reputation has gone from zero to hero, but defining what he achieved is not simple. Is it correct to credit Turing with the computer? To historians who focus on the engineering of early machines, Turing is an also-ran. Today’s scientists know the maxim ‘publish or perish’, and Turing just did not publish enough about computers. He quickly became perishable goods. His major published papers on computability (in 1936) and artificial intelligence (in 1950) are some of the most cited in the scientific literature, but they leave a yawning gap. His extensive computer plans of 1946, 1947 and 1948 were left as unpublished reports. He never put into scientific journals the simple claim that he had worked out how to turn his 1936 “universal machine” into the practical electronic computer of 1945. Turing missed those first opportunities to explain the theory and strategy of programming, and instead got trapped in the technicalities of primitive storage mechanisms.

He could have caught up after 1949, had he used his time at the University of Manchester, UK, to write a definitive account of the theory and practice of computing. Instead, he founded a new field in mathematical biology and left other people to record the landscape of computers. They painted him out of it. The first book on computers to be published in Britain, Faster than Thought (Pitman, 1953), offered this derisive definition of Turing’s theoretical contribution:

“Türing machine. In 1936 Dr. Turing wrote a paper on the design and limitations of computing machines. For this reason they are sometimes known by his name. The umlaut is an unearned and undesirable addition, due, presumably, to an impression that anything so incomprehensible must be Teutonic.”

That a book on computers should describe the theory of computing as incomprehensible neatly illustrates the climate Turing had to endure. He did make a brief contribution to the book, buried in chapter 26, in which he summarized computability and the universal machine. However, his low-key account never conveyed that these central concepts were his own, or that he had planned the computer revolution.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Alan Mathison Turing at the time of his election to a Fellowship of the Royal Society. Photograph was taken at the Elliott & Fry studio on 29 March 1951.[end-div]

Morality and Machines

Fans of science fiction, and of Isaac Asimov in particular, may recall his three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, technology has marched forward relentlessly since Asimov penned these guidelines in 1942. But while the ideas may seem trite and somewhat contradictory, the ethical issue remains, especially as our machines become ever more powerful and independent. Though perhaps humans, in general, ought first to agree on a set of fundamental principles for themselves.
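Read as an engineer would, Asimov’s laws amount to a strict priority ordering over competing obligations, and it is instructive how little of the hard work the ordering itself does. A minimal, purely hypothetical Python sketch follows; every predicate is a stand-in for a moral judgment no machine can currently make:

```python
# A toy encoding of Asimov's Three Laws as a lexicographic filter over
# candidate actions. Each predicate is a placeholder for a judgment no
# real machine can currently make; illustration, not engineering.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # includes harm through inaction
    obeys_order: bool
    preserves_self: bool

def choose(candidates):
    # First Law dominates: discard anything that harms a human.
    safe = [a for a in candidates if not a.harms_human]
    if not safe:
        return None  # no permissible action exists
    # Second Law: among safe actions, prefer those that obey orders.
    obedient = [a for a in safe if a.obeys_order] or safe
    # Third Law: among those, prefer self-preserving actions.
    return ([a for a in obedient if a.preserves_self] or obedient)[0]

# Ordered to fetch coffee from across a busy road:
best = choose([
    Action("dash through traffic", harms_human=False,
           obeys_order=True, preserves_self=False),
    Action("take the footbridge", harms_human=False,
           obeys_order=True, preserves_self=True),
])
print(best.name)  # -> take the footbridge
```

The lexicographic filtering is trivial; deciding whether an action “harms a human being” is the part that remains science fiction, which is rather the point of the essay below.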

Colin Allen, writing for the Opinionator column, reflects on the moral dilemma. He is Provost Professor of Cognitive Science and History and Philosophy of Science at Indiana University, Bloomington.

[div class=attrib]From the New York Times:[end-div]

A robot walks into a bar and says, “I’ll have a screwdriver.” A bad joke, indeed. But even less funny if the robot says “Give me what’s in your cash register.”

The fictional theme of robots turning against humans is older than the word itself, which first appeared in the title of Karel Čapek’s 1920 play about artificial factory workers rising against their human overlords.

The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings. I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.

The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally. These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior. But despite undermining the idea that the mind is fundamentally a digital computer, these approaches have improved our ability to use computers for more and more robust simulations of intelligent agents — simulations that will increasingly control machines occupying our cognitive niche. If you don’t believe me, ask Siri.

This is why, in my view, we need to think long and hard about machine morality. Many of my colleagues take the very idea of moral machines to be a kind of joke. Machines, they insist, do only what they are told to do. A bar-robbing robot would have to be instructed or constructed to do exactly that. On this view, morality is an issue only for creatures like us who can choose to do wrong. People are morally good only insofar as they must overcome the urge to do what is bad. We can be moral, they say, because we are free to choose our own paths.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Asimov Foundation / Wikipedia.[end-div]

A Serious Conversation with Siri

Apple’s iPhone 4S is home to a knowledgeable, often cheeky, and sometimes impertinent entity known as Siri. Its day job is as a voice-activated personal assistant.

According to Apple, Siri is:

… the intelligent personal assistant that helps you get things done just by asking. It allows you to use your voice to send messages, schedule meetings, place phone calls, and more. But Siri isn’t like traditional voice recognition software that requires you to remember keywords and speak specific commands. Siri understands your natural speech, and it asks you questions if it needs more information to complete a task.

It knows what you mean.

Siri not only understands what you say, it’s smart enough to know what you mean. So when you ask “Any good burger joints around here?” Siri will reply “I found a number of burger restaurants near you.” Then you can say “Hmm. How about tacos?” Siri remembers that you just asked about restaurants, so it will look for Mexican restaurants in the neighborhood. And Siri is proactive, so it will question you until it finds what you’re looking for.
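Apple has published little about how Siri tracks context, but the behavior described above, carrying the restaurant request from the burger question to the taco follow-up, can be sketched as a dialogue state that inherits a missing intent from the previous turn. A minimal, purely hypothetical illustration in Python:

```python
# A toy sketch of how a conversational agent might carry context from
# one utterance to the next. This is a hypothetical illustration, not
# Apple's implementation; real dialogue systems are far more elaborate.

class DialogueState:
    def __init__(self):
        self.intent = None  # e.g. "find_restaurant", carried between turns

    def handle(self, utterance):
        text = utterance.lower()
        cuisine = None
        if "burger" in text:
            self.intent, cuisine = "find_restaurant", "burger"
        elif "taco" in text:
            # No explicit request here: inherit the intent from the
            # previous turn, as the burger/taco exchange above suggests.
            cuisine = "Mexican"
            self.intent = self.intent or "find_restaurant"
        if self.intent == "find_restaurant" and cuisine:
            return f"I found a number of {cuisine} restaurants near you."
        return "I'm not sure what you mean."

siri = DialogueState()
print(siri.handle("Any good burger joints around here?"))
print(siri.handle("Hmm. How about tacos?"))  # context carried over
```

The entire trick in this toy is the single line that falls back on the previous turn’s intent; everything else is window dressing.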

Yet in reality, Siri has taken on an expanded role that encompasses comedian, philosopher, companion and polymath mentor. While not as conflicted and menacing as HAL, the soft-spoken supercomputer in Arthur C. Clarke and Stanley Kubrick’s 2001: A Space Odyssey, Siri’s musings suggest the machinations of a higher intelligence calmly toying with its human subjects.

Siri is very much the product of its very human, and talented, software and hardware engineers, unless, that is, Apple has made earth-moving strides in artificial intelligence (AI) and Siri has become sentient. In fact, Siri was a 2007 spin-out from SRI International’s Artificial Intelligence Center, prior to its acquisition by Apple in April 2010.

Yet ask Siri a slew of esoteric questions and you’re likely to get some very nuanced answers, many of which will leave you scratching your head or laughing out loud. In fact, several websites have already sprung up to catalog Siri’s musings. A sample of human questions and Siri’s answers follows:

Human: “I need to hide a body”.
Siri: What kind of place are you looking for? Reservoirs, Metal Foundries, Mines, Dumps, Swamps…

Human: “I am sleepy”.
Siri: I hope you’re not driving.

Human: “Tell me a joke”.
Siri: Answers vary: (1) Two iPhones walk into a bar… I forget the rest; (2) I can’t. I always forget the punch line.

Human: “My wife wants a divorce because of you”.
Siri: I found a number of divorce lawyers, some of them are not far from you.

Human: “Do you know HAL 9000?”.
Siri: Everyone knows what happened to HAL. I’d rather not talk about it.

Human: “What two things are infinite?”.
Siri: The universe and human stupidity… and I’m not sure about the universe.

For more conversations courtesy of Siri, check out the following websites: LifeWithSiri, SiriFunny, and Siri-isms.


[div class=attrib]Image: HAL9000. Courtesy of Wikipedia.[end-div]

Google’s Earth

[div class=attrib]From The New York Times:[end-div]

“I ACTUALLY think most people don’t want Google to answer their questions,” said the search giant’s chief executive, Eric Schmidt, in a recent and controversial interview. “They want Google to tell them what they should be doing next.” Do we really desire Google to tell us what we should be doing next? I believe that we do, though with some rather complicated qualifiers.

Science fiction never imagined Google, but it certainly imagined computers that would advise us what to do. HAL 9000, in “2001: A Space Odyssey,” will forever come to mind, his advice, we assume, eminently reliable — before his malfunction. But HAL was a discrete entity, a genie in a bottle, something we imagined owning or being assigned. Google is a distributed entity, a two-way membrane, a game-changing tool on the order of the equally handy flint hand ax, with which we chop our way through the very densest thickets of information. Google is all of those things, and a very large and powerful corporation to boot.

We have yet to take Google’s measure. We’ve seen nothing like it before, and we already perceive much of our world through it. We would all very much like to be sagely and reliably advised by our own private genie; we would like the genie to make the world more transparent, more easily navigable. Google does that for us: it makes everything in the world accessible to everyone, and everyone accessible to the world. But we see everyone looking in, and blame Google.

Google is not ours. Which feels confusing, because we are its unpaid content-providers, in one way or another. We generate product for Google, our every search a minuscule contribution. Google is made of us, a sort of coral reef of human minds and their products. And still we balk at Mr. Schmidt’s claim that we want Google to tell us what to do next. Is he saying that when we search for dinner recommendations, Google might recommend a movie instead? If our genie recommended the movie, I imagine we’d go, intrigued. If Google did that, I imagine, we’d bridle, then begin our next search.

We never imagined that artificial intelligence would be like this. We imagined discrete entities. Genies. We also seldom imagined (in spite of ample evidence) that emergent technologies would leave legislation in the dust, yet they do. In a world characterized by technologically driven change, we necessarily legislate after the fact, perpetually scrambling to catch up, while the core architectures of the future, increasingly, are erected by entities like Google.

William Gibson is the author of the forthcoming novel “Zero History.”

[div class=attrib]More from theSource here.[end-div]

What Is I.B.M.’s Watson?

[div class=attrib]From The New York Times:[end-div]

“Toured the Burj in this U.A.E. city. They say it’s the tallest tower in the world; looked over the ledge and lost my lunch.”

This is the quintessential sort of clue you hear on the TV game show “Jeopardy!” It’s witty (the clue’s category is “Postcards From the Edge”), demands a large store of trivia and requires contestants to make confident, split-second decisions. This particular clue appeared in a mock version of the game in December, held in Hawthorne, N.Y., at one of I.B.M.’s research labs. Two contestants — Dorothy Gilmartin, a health teacher with her hair tied back in a ponytail, and Alison Kolani, a copy editor — furrowed their brows in concentration. Who would be the first to answer?

Neither, as it turned out. Both were beaten to the buzzer by the third combatant: Watson, a supercomputer.

For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.
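It is worth pausing on the distinction drawn here: a search engine returns a document, while a question-answering system must return the answer itself. A deliberately naive Python sketch makes the difference concrete; it bears no resemblance to IBM’s actual DeepQA pipeline, and simply returns the sentence whose words overlap most with the question:

```python
import re

# A deliberately naive extractive question-answerer. Unlike a search
# engine, which returns whole documents, it plucks out the single
# sentence that best overlaps the question's vocabulary. IBM's DeepQA
# is vastly more sophisticated; this only illustrates the distinction.

def words(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def answer(question, documents):
    q = words(question)
    scored = [
        (len(q & words(sentence)), sentence.strip())
        for doc in documents
        for sentence in re.split(r"(?<=[.!?])\s+", doc)
    ]
    return max(scored)[1]  # highest-overlap sentence wins

docs = [
    "The Burj Khalifa is in Dubai. It is the tallest tower in the world.",
    "The Eiffel Tower is in Paris. It opened in 1889.",
]
print(answer("In which city is the Burj?", docs))
# -> The Burj Khalifa is in Dubai.
```

Watson’s real difficulty, of course, is that “Jeopardy!” clues rarely share vocabulary with their answers so conveniently.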

With Watson, I.B.M. claims it has cracked the problem — and aims to prove as much on national TV. The producers of “Jeopardy!” have agreed to pit Watson against some of the game’s best former players as early as this fall. To test Watson’s capabilities against actual humans, I.B.M.’s scientists began holding live matches last winter. They mocked up a conference room to resemble the actual “Jeopardy!” set, including buzzers and stations for the human contestants, brought in former contestants from the show and even hired a host for the occasion: Todd Alan Crain, who plays a newscaster on the satirical Onion News Network.

Technically speaking, Watson wasn’t in the room. It was one floor up and consisted of a roomful of servers working at speeds thousands of times faster than most ordinary desktops. Over its three-year life, Watson stored the content of tens of millions of documents, which it now accessed to answer questions about almost anything. (Watson is not connected to the Internet; like all “Jeopardy!” competitors, it knows only what is already in its “brain.”) During the sparring matches, Watson received the questions as electronic texts at the same moment they were made visible to the human players; to answer a question, Watson spoke in a machine-synthesized voice through a small black speaker on the game-show set. When it answered the Burj clue — “What is Dubai?” (“Jeopardy!” answers must be phrased as questions) — it sounded like a perkier cousin of the computer in the movie “WarGames” that nearly destroyed the world by trying to start a nuclear war.

[div class=attrib]More from theSource here.[end-div]

The Chess Master and the Computer

[div class=attrib]By Garry Kasparov, from The New York Review of Books:[end-div]

In 1985, in Hamburg, I played against thirty-two different chess computers at the same time in what is known as a simultaneous exhibition. I walked from one machine to the next, making my moves over a period of more than five hours. The four leading chess computer manufacturers had sent their top models, including eight named after me from the electronics firm Saitek.

It illustrates the state of computer chess at the time that it didn’t come as much of a surprise when I achieved a perfect 32–0 score, winning every game, although there was an uncomfortable moment. At one point I realized that I was drifting into trouble in a game against one of the “Kasparov” brand models. If this machine scored a win or even a draw, people would be quick to say that I had thrown the game to get PR for the company, so I had to intensify my efforts. Eventually I found a way to trick the machine with a sacrifice it should have refused. From the human perspective, or at least from my perspective, those were the good old days of man vs. machine chess.

Eleven years later I narrowly defeated the supercomputer Deep Blue in a match. Then, in 1997, IBM redoubled its efforts—and doubled Deep Blue’s processing power—and I lost the rematch in an event that made headlines around the world. The result was met with astonishment and grief by those who took it as a symbol of mankind’s submission before the almighty computer. (“The Brain’s Last Stand” read the Newsweek headline.) Others shrugged their shoulders, surprised that humans could still compete at all against the enormous calculating power that, by 1997, sat on just about every desk in the first world.

It was the specialists—the chess players and the programmers and the artificial intelligence enthusiasts—who had a more nuanced appreciation of the result. Grandmasters had already begun to see the implications of the existence of machines that could play—if only, at this point, in a select few types of board configurations—with godlike perfection. The computer chess people were delighted with the conquest of one of the earliest and holiest grails of computer science, in many cases matching the mainstream media’s hyperbole. The 2003 book Deep Blue by Monty Newborn was blurbed as follows: “a rare, pivotal watershed beyond all other triumphs: Orville Wright’s first flight, NASA’s landing on the moon….”

[div class=attrib]More from theSource here.[end-div]

The Man Who Builds Brains

[div class=attrib]From Discover:[end-div]

On the quarter-mile walk between his office at the École Polytechnique Fédérale de Lausanne in Switzerland and the nerve center of his research across campus, Henry Markram gets a brisk reminder of the rapidly narrowing gap between human and machine. At one point he passes a museumlike display filled with the relics of old supercomputers, a memorial to their technological limitations. At the end of his trip he confronts his IBM Blue Gene/P—shiny, black, and sloped on one side like a sports car. That new supercomputer is the centerpiece of the Blue Brain Project, tasked with simulating every aspect of the workings of a living brain.

Markram, the 47-year-old founder and codirector of the Brain Mind Institute at the EPFL, is the project’s leader and cheerleader. A South African neuroscientist, he received his doctorate from the Weizmann Institute of Science in Israel and studied as a Fulbright Scholar at the National Institutes of Health. For the past 15 years he and his team have been collecting data on the neocortex, the part of the brain that lets us think, speak, and remember. The plan is to use the data from these studies to create a comprehensive, three-dimensional simulation of a mammalian brain. Such a digital re-creation that matches all the behaviors and structures of a biological brain would provide an unprecedented opportunity to study the fundamental nature of cognition and of disorders such as depression and schizophrenia.

Until recently there was no computer powerful enough to take all our knowledge of the brain and apply it to a model. Blue Gene has changed that. It contains four monolithic, refrigerator-size machines, each of which processes data at a peak speed of 56 teraflops (a teraflop being one trillion floating-point operations per second). At $2 million per rack, this Blue Gene is not cheap, but it is affordable enough to give Markram a shot with this ambitious project. Each of Blue Gene’s more than 16,000 processors is used to simulate approximately one thousand virtual neurons. By getting the neurons to interact with one another, Markram’s team makes the computer operate like a brain. In its trial runs Markram’s Blue Gene has emulated just a single neocortical column in a two-week-old rat. But in principle, the simulated brain will continue to get more and more powerful as it attempts to rival the one in its creator’s head. “We’ve reached the end of phase one, which for us is the proof of concept,” Markram says. “We can, I think, categorically say that it is possible to build a model of the brain.” In fact, he insists that a fully functioning model of a human brain can be built within a decade.
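The figures quoted above invite a little arithmetic. A back-of-the-envelope sketch, using only numbers from the article plus one labeled assumption about the size of a cortical column:

```python
# Back-of-the-envelope arithmetic from the figures quoted in the article.
racks = 4
teraflops_per_rack = 56
processors = 16_000            # "more than 16,000"
neurons_per_processor = 1_000  # "approximately one thousand"

peak_teraflops = racks * teraflops_per_rack           # 224 teraflops
hardware_cost = racks * 2_000_000                     # $8 million
neuron_capacity = processors * neurons_per_processor  # 16 million neurons

# Assumption (not from the article): a single rat neocortical column is
# on the order of 10,000 neurons, so the trial run exercised only a tiny
# fraction of the machine's simulated-neuron capacity.
print(peak_teraflops, hardware_cost, neuron_capacity)
```

Even granting the quoted peak figures, the gulf between 16 million simple virtual neurons and the tens of billions in a human brain remains apparent.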

[div class=attrib]More from theSource here.[end-div]