
Google’s AI

The collective IQ of Google, the company, inched up a few notches in January 2013 when it hired Ray Kurzweil. Over the coming years, if the work of Kurzweil and his many colleagues pays off, the company's intelligence may surge significantly, this time thanks to its work on artificial intelligence (AI), machine learning and (very) big data.

From Technology Review:

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.

Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.

With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.

Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”

Building a Brain

There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.
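To make that contrast concrete, here is a minimal, purely hypothetical sketch of what such a hand-coded rule might look like; the function name, threshold and patch format are invented for illustration and are not taken from any real system.

```python
# Hypothetical hand-written rule for spotting a vertical edge in a tiny
# grayscale patch: the programmer, not the data, decides what counts as
# an "edge" (here, a fixed brightness jump between neighboring pixels).
def looks_like_vertical_edge(patch, threshold=0.5):
    """patch: list of rows, each a list of pixel intensities in [0, 1]."""
    for row in patch:
        for left, right in zip(row, row[1:]):
            if abs(right - left) >= threshold:   # rule chosen by hand
                return True
    return False

print(looks_like_vertical_edge([[0.1, 0.1, 0.9],
                                [0.1, 0.2, 0.9]]))   # True
```

Rules like this are brittle: any input the programmer did not anticipate falls outside them, which is exactly the limitation with ambiguous data described above.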

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
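In code, the simulated neuron described above can be sketched in a few lines. This is a minimal illustration in Python with made-up values, not the software the article refers to.

```python
import math
import random

# One simulated neuron: random "weights" on its incoming connections and a
# logistic squashing function, so its response always lies between 0 and 1.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class Neuron:
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.bias = 0.0

    def respond(self, features):
        # features: digitized values such as pixel intensities in an image
        # or the energy at one frequency in a phoneme
        total = sum(w * f for w, f in zip(self.weights, features)) + self.bias
        return sigmoid(total)

neuron = Neuron(n_inputs=3)
print(neuron.respond([0.2, 0.9, 0.4]))   # a value strictly between 0 and 1
```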

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
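The adjust-the-weights-on-error loop can likewise be sketched with a single logistic neuron and a toy labeled dataset; the numbers are invented, and the simple gradient-style nudges shown here stand in for the more elaborate training algorithms real systems use.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy labeled examples: digitized features plus the answer a human supplies,
# e.g. 1 = "this sound is the phoneme 'd'", 0 = "it is not".
examples = [([0.9, 0.1], 1), ([0.8, 0.2], 1), ([0.1, 0.9], 0), ([0.2, 0.7], 0)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.5

for _ in range(1000):                        # blitz the network with examples
    for features, target in examples:
        output = sigmoid(sum(w * f for w, f in zip(weights, features)) + bias)
        error = target - output              # how wrong was the response?
        # nudge each weight so the same example produces less error next time
        weights = [w + learning_rate * error * f
                   for w, f in zip(weights, features)]
        bias += learning_rate * error

for features, target in examples:
    output = sigmoid(sum(w * f for w, f in zip(weights, features)) + bias)
    print(features, target, round(output, 2))   # outputs move toward the targets
```

In multi-layer networks the same idea is carried out by backpropagation, which distributes the weight adjustments across all the layers at once.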

But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.

In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.

Finally, however, in the last decade Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
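Structurally, that layer-by-layer procedure looks like the following sketch. The per-layer learning rule is left as a placeholder (in Hinton's 2006 work each layer was a restricted Boltzmann machine), and the class and function names are invented for illustration.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class Layer:
    """One layer of feature detectors; the learning rule is a placeholder."""
    def __init__(self, n_inputs, n_features):
        self.weights = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                        for _ in range(n_features)]

    def train_unsupervised(self, data):
        # Placeholder: here an RBM or autoencoder update would adjust
        # self.weights so the features capture regularities in `data`
        # that occur more often than chance.
        pass

    def transform(self, data):
        # Re-describe each example in terms of this layer's learned features.
        return [[sigmoid(sum(w * x for w, x in zip(row, example)))
                 for row in self.weights] for example in data]

def pretrain(raw_data, layer_sizes):
    layers, current = [], raw_data
    n_inputs = len(raw_data[0])
    for n_features in layer_sizes:
        layer = Layer(n_inputs, n_features)
        layer.train_unsupervised(current)    # layer 1 might learn edges,
        current = layer.transform(current)   # layer 2 corners, and so on
        layers.append(layer)
        n_inputs = n_features
    return layers

stack = pretrain([[0.1, 0.9, 0.3], [0.8, 0.2, 0.5]], layer_sizes=[4, 2])
print(len(stack))   # two layers, each trained on the output of the one below
```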

Read the entire fascinating article following the jump.

Image courtesy of Wired.

Ray Kurzweil and Living a Googol Years

By all accounts, serial entrepreneur, inventor and futurist Ray Kurzweil is Google’s most famous employee, eclipsing even co-founders Larry Page and Sergey Brin. As an inventor he can lay claim to some impressive firsts, such as the CCD flatbed scanner, omni-font optical character recognition and a music synthesizer that could convincingly reproduce a grand piano. As a futurist, for which he is now more recognized in the public consciousness, he ponders longevity, immortality and the human brain.

From the Wall Street Journal:

Ray Kurzweil must encounter his share of interviewers whose first question is: What do you hope your obituary will say?

This is a trick question. Mr. Kurzweil famously hopes an obituary won’t be necessary. And in the event of his unexpected demise, he is widely reported to have signed a deal to have himself frozen so his intelligence can be revived when technology is equipped for the job.

Mr. Kurzweil is the closest thing to a Thomas Edison of our time, an inventor known for inventing. He first came to public attention in 1965, at age 17, appearing on Steve Allen’s TV show “I’ve Got a Secret” to demonstrate a homemade computer he built to compose original music in the style of the great masters.

In the five decades since, he has invented technologies that permeate our world. To give one example, the Web would hardly be the store of human intelligence it has become without the flatbed scanner and optical character recognition, allowing printed materials from the pre-digital age to be scanned and made searchable.

If you are a musician, Mr. Kurzweil’s fame is synonymous with his line of music synthesizers (now owned by Hyundai). As in: “We’re late for the gig. Don’t forget the Kurzweil.”

If you are blind, his Kurzweil Reader relieved one of your major disabilities—the inability to read printed information, especially sensitive private information, without having to rely on somebody else.

In January, he became an employee at Google. “It’s my first job,” he deadpans, adding after a pause, “for a company I didn’t start myself.”

There is another Kurzweil, though—the one who makes seemingly unbelievable, implausible predictions about a human transformation just around the corner. This is the Kurzweil who tells me, as we’re sitting in the unostentatious offices of Kurzweil Technologies in Wellesley Hills, Mass., that he thinks his chances are pretty good of living long enough to enjoy immortality. This is the Kurzweil who, with a bit of DNA and personal papers and photos, has made clear he intends to bring back in some fashion his dead father.

Mr. Kurzweil’s frank efforts to outwit death have earned him an exaggerated reputation for solemnity, even caused some to portray him as a humorless obsessive. This is wrong. Like the best comedians, especially the best Jewish comedians, he doesn’t tell you when to laugh. Of the pushback he receives from certain theologians who insist death is necessary and ennobling, he snarks, “Oh, death, that tragic thing? That’s really a good thing.”

“People say, ‘Oh, only the rich are going to have these technologies you speak of.’ And I say, ‘Yeah, like cellphones.’”

To listen to Mr. Kurzweil or read his several books (the latest: “How to Create a Mind”) is to be flummoxed by a series of forecasts that hardly seem realizable in the next 40 years. But this is merely a flaw in my brain, he assures me. Humans are wired to expect “linear” change from their world. They have a hard time grasping the “accelerating, exponential” change that is the nature of information technology.

“A kid in Africa with a smartphone is walking around with a trillion dollars of computation circa 1970,” he says. Project that rate forward, and everything will change dramatically in the next few decades.
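As a back-of-the-envelope illustration of why linear intuition fails here, consider compounding at a Moore's-law-like rate; the 18-month doubling time below is an assumption chosen only to show the shape of the curve, not a figure from the article.

```python
# Illustrative only: compare an exponential projection of computing
# price-performance with a naive linear one over 40 years, assuming a
# doubling every 18 months (an assumed, Moore's-law-like rate).
years = 40
doubling_time_years = 1.5

exponential_gain = 2 ** (years / doubling_time_years)    # roughly 10**8-fold
linear_gain = 1 + years / doubling_time_years            # roughly 28-fold

print(f"exponential: {exponential_gain:,.0f}x    linear: {linear_gain:.0f}x")
```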

“I’m right on the cusp,” he adds. “I think some of us will make it through”—he means baby boomers, who can hope to experience practical immortality if they hang on for another 15 years.

By then, Mr. Kurzweil expects medical technology to be adding a year of life expectancy every year. We will start to outrun our own deaths. And then the wonders really begin. The little computers in our hands that now give us access to all the world’s information via the Web will become little computers in our brains giving us access to all the world’s information. Our world will become a world of near-infinite, virtual possibilities.

How will this work? Right now, says Mr. Kurzweil, our human brains consist of 300 million “pattern recognition” modules. “That’s a large number from one perspective, large enough for humans to invent language and art and science and technology. But it’s also very limiting. Maybe I’d like a billion for three seconds, or 10 billion, just the way I might need a million computers in the cloud for two seconds and can access them through Google.”

We will have vast new brainpower at our disposal; we’ll also have a vast new field in which to operate—virtual reality. “As you go out to the 2040s, now the bulk of our thinking is out in the cloud. The biological portion of our brain didn’t go away but the nonbiological portion will be much more powerful. And it will be uploaded automatically the way we back up everything now that’s digital.”

“When the hardware crashes,” he says of humanity’s current condition, “the software dies with it. We take that for granted as human beings.” But when most of our intelligence, experience and identity live in cyberspace, in some sense (vital words when thinking about Kurzweil predictions) we will become software and the hardware will be replaceable.

Read the entire article after the jump.