So, it looks like we humans may have a few more years to go as the smartest beings on the planet, before being overrun by ubiquitous sentient robots. Some may question my assertion based on recent election results in the UK and the US, but I digress.
A recent experiment featuring some of our best-loved voice-activated assistants, such as Apple’s Siri, Amazon’s Alexa and Google’s Home, clearly shows our digital brethren have some learning to do. A conversation between two of these rapidly enters an infinite loop.
All college students have at some point wondered whether one or more of their teaching assistants was from planet Earth. If you fall into this category — as I once did — your skepticism and paranoia are completely justified. You see, some assistants aren’t even human.
So, here’s my first tip to any students wondering how to tell if their assistant is an alien entity: be skeptical if her or his last name is Watson.
One day in January, Eric Wilson dashed off a message to the teaching assistants for an online course at the Georgia Institute of Technology.
“I really feel like I missed the mark in giving the correct amount of feedback,” he wrote, pleading to revise an assignment.
Thirteen minutes later, the TA responded. “Unfortunately, there is not a way to edit submitted feedback,” wrote Jill Watson, one of nine assistants for the 300-plus students.
Last week, Mr. Wilson found out he had been seeking guidance from a computer.
Since January, “Jill,” as she was known to the artificial-intelligence class, had been helping graduate students design programs that allow computers to solve certain problems, like choosing an image to complete a logical sequence.
“She was the person—well, the teaching assistant—who would remind us of due dates and post questions in the middle of the week to spark conversations,” said student Jennifer Gavin.
Ms. Watson—so named because she’s powered by International Business Machines Corp.’s Watson analytics system—wrote things like “Yep!” and “we’d love to,” speaking on behalf of her fellow TAs, in the online forum where students discussed coursework and submitted projects.
“It seemed very much like a normal conversation with a human being,” Ms. Gavin said.
Shreyas Vidyarthi, another student, ascribed human attributes to the TA—imagining her as a friendly Caucasian 20-something on her way to a Ph.D.
Students were told of their guinea-pig status last month. “I was flabbergasted,” said Mr. Vidyarthi.
AI as in Artificial Intelligence, not American Idol — though some believe the latter to be somewhat of a cultural apocalypse.
AI is reaching a technological tipping point; advances in computation, especially in neural networks, are making machines more intelligent every day. These advances are likely to spawn machines — sooner rather than later — that will first mimic and then surpass human cognition. This has an increasing number of philosophers, scientists and corporations raising alarms. The fear: what if super-intelligent AI machines one day decide that humans are far too inferior and superfluous?
On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.
That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.
Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—have put AI-driven products front and center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.
AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.
“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”
Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”
Google Gets on Board
Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011. That’s when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert. The two men started talking about AI, and Tallinn soon invested in DeepMind. Last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world. Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings.
That worries Tallinn, somewhat. In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.
A Canadian is trying valiantly to hitchhike across the nation, coast-to-coast — Nova Scotia to British Columbia. While others have made this trek before, this journey is peculiar in one respect. The intrepid hitchhiker is a child-sized robot. She or he — we don’t really know — is named hitchBOT.
hitchBOT is currently still in eastern Canada; New Brunswick to be more precise. So one has to wonder if (s)he would have made better progress from commandeering one of Google’s self-propelled, driverless cars to make the 3,781 mile journey.
Read the entire story and follow hitchBOT’s progress across Canada here.
A sentient robot is the long-held dream of both artificial intelligence researchers and science fiction authors. Yet, some leading mathematicians theorize it may never happen, despite our accelerating technological prowess.
From New Scientist:
So long, robot pals – and robot overlords. Sentient machines may never exist, according to a variation on a leading mathematical model of how our brains create consciousness.
Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness. They argue that in conscious minds, integrated information cannot be reduced into smaller components. For instance, when a human perceives a red triangle, the brain cannot register the object as a colourless triangle plus a shapeless patch of red.
But there is a catch, argues Phil Maguire at the National University of Ireland in Maynooth. He points to a computational device called the XOR logic gate, which involves two inputs, A and B. The output of the gate is “0” if A and B are the same and “1” if A and B are different. In this scenario, it is impossible to predict the output based on A or B alone – you need both.
Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously haemorrhaging information.”
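Maguire’s point about lossy integration can be sketched with a toy example (mine, not from the article): an XOR gate maps two input bits to one output bit, so two distinct input pairs collapse onto the same output and the inputs cannot be recovered from the output alone.

```python
def xor_gate(a: int, b: int) -> int:
    """XOR: 0 if the inputs are the same, 1 if they differ."""
    return a ^ b

# Enumerate every input pair and its output.
truth_table = {(a, b): xor_gate(a, b) for a in (0, 1) for b in (0, 1)}

# Two distinct input pairs both produce 1, so given only the
# output "1", the original inputs are irretrievably lost.
preimages_of_1 = [pair for pair, out in truth_table.items() if out == 1]
print(preimages_of_1)  # [(0, 1), (1, 0)]
```

Two bits go in, one comes out: the gate discards exactly the information needed to tell (0, 1) apart from (1, 0), which is the “haemorrhaging” Maguire describes.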
Maguire and his colleagues say the brain is unlikely to do this, because repeated retrieval of memories would eventually destroy them. Instead, they define integration in terms of how difficult information is to edit.
Consider an album of digital photographs. The pictures are compiled but not integrated, so deleting or modifying individual images is easy. But when we create memories, we integrate those snapshots of information into our bank of earlier memories. This makes it extremely difficult to selectively edit out one scene from the “album” in our brain.
Based on this definition, Maguire and his team have shown mathematically that computers can’t handle any process that integrates information completely. If you accept that consciousness is based on total integration, then computers can’t be conscious.
“It means that you would not be able to achieve the same results in finite time, using finite memory, using a physical machine,” says Maguire. “It doesn’t necessarily mean that there is some magic going on in the brain that involves some forces that can’t be explained physically. It is just so complex that it’s beyond our abilities to reverse it and decompose it.”
Disappointed? Take comfort – we may not get Rosie the robot maid, but equally we won’t have to worry about the world-conquering Agents of The Matrix.
Neuroscientist Anil Seth at the University of Sussex, UK, applauds the team for exploring consciousness mathematically. But he is not convinced that brains do not lose information. “Brains are open systems with a continual turnover of physical and informational components,” he says. “Not many neuroscientists would claim that conscious contents require lossless memory.”
She or he is 6 feet 2 inches tall, weighs 330 pounds, and goes by the name Atlas.
Surprisingly, this person is not the new draft pick for the Denver Broncos or Ronaldo’s replacement at Real Madrid. Well, it’s not really a person, not yet anyway. Atlas is a humanoid robot. Its primary “parents” are Boston Dynamics and DARPA (Defense Advanced Research Projects Agency), a unit of the U.S. Department of Defense. The collaboration unveiled Atlas to the public on July 11, 2013.
From the New York Times:
Moving its hands as if it were dealing cards and walking with a bit of a swagger, a Pentagon-financed humanoid robot named Atlas made its first public appearance on Thursday.
C3PO it’s not. But its creators have high hopes for the hydraulically powered machine. The robot — which is equipped with both laser and stereo vision systems, as well as dexterous hands — is seen as a new tool that can come to the aid of humanity in natural and man-made disasters.
Atlas is being designed to perform rescue functions in situations where humans cannot survive. The Pentagon has devised a challenge in which competing teams of technologists program it to do things like shut off valves or throw switches, open doors, operate power equipment and travel over rocky ground. The challenge comes with a $2 million prize.
Some see Atlas’s unveiling as a giant — though shaky — step toward the long-anticipated age of humanoid robots.
“People love the wizards in Harry Potter or ‘Lord of the Rings,’ but this is real,” said Gary Bradski, a Silicon Valley artificial intelligence specialist and a co-founder of Industrial Perception Inc., a company that is building a robot able to load and unload trucks. “A new species, Robo sapiens, are emerging,” he said.
The debut of Atlas on Thursday was a striking example of how computers are beginning to grow legs and move around in the physical world.
Although robotic planes already fill the air and self-driving cars are being tested on public roads, many specialists in robotics believe that the learning curve toward useful humanoid robots will be steep. Still, many see them fulfilling the needs of humans — and the dreams of science fiction lovers — sooner rather than later.
Walking on two legs, they have the potential to serve as department store guides, assist the elderly with daily tasks or carry out nuclear power plant rescue operations.
“Two weeks ago 19 brave firefighters lost their lives,” said Gill Pratt, a program manager at the Defense Advanced Research Projects Agency, part of the Pentagon, which oversaw Atlas’s design and financing. “A number of us who are in the robotics field see these events in the news, and the thing that touches us very deeply is a single kind of feeling which is, can’t we do better? All of this technology that we work on, can’t we apply that technology to do much better? I think the answer is yes.”
Dr. Pratt equated the current version of Atlas to a 1-year-old.
“A 1-year-old child can barely walk, a 1-year-old child falls down a lot,” he said. “As you see these machines and you compare them to science fiction, just keep in mind that this is where we are right now.”
But he added that the robot, which has a brawny chest with a computer and is lit by bright blue LEDs, would learn quickly and would soon have talents closer to those of a 2-year-old.
The event on Thursday was a “graduation” ceremony for the Atlas walking robot at the office of Boston Dynamics, the robotics research firm that led the design of the system. The demonstration began with Atlas shrouded under a bright red sheet. After Dr. Pratt finished his remarks, the sheet was pulled back, revealing a machine that looked like a metallic bodybuilder, with an oversized chest and powerful long arms.
To honor the brilliant new album by the Thin White Duke, we came across the article excerpted below, which at first glance seems to come directly from the songbook of Ziggy Stardust him- or herself. But closer inspection reveals that NASA may have designs on deploying giant manufacturing robots to construct a base on the moon. Can you hear me, Major Tom?
Once you’ve had your fill of Bowie, read on about NASA’s spiders.
From ars technica:
The first lunar base on the Moon may not be built by human hands, but rather by a giant spider-like robot built by NASA that can bind the dusty soil into giant bubble structures where astronauts can live, conduct experiments, relax or perhaps even cultivate crops.
We’ve already covered the European Space Agency’s (ESA) work with architecture firm Foster + Partners on a proposal for a 3D-printed moonbase, and there are similarities between the two bases—both would be located in Shackleton Crater near the Moon’s south pole, where sunlight (and thus solar energy) on the crater’s rim is nearly constant due to the Moon’s slight inclination, and both use lunar dust as their basic building material. However, while the ESA’s building would be constructed almost exactly the same way a house would be 3D-printed on Earth, this latest wheeze—SinterHab—uses NASA technology for something a fair bit more ambitious.
The product of joint research first started between space architects Tomas Rousek, Katarina Eriksson and Ondrej Doule and scientists from NASA’s Jet Propulsion Laboratory (JPL), SinterHab is so-named because it involves sintering lunar dust—that is, heating it up to just below its melting point, where the fine nanoparticle powders fuse and become one solid block a bit like a piece of ceramic. To do this, the JPL engineers propose using microwaves no more powerful than those found in a kitchen unit, with tiny particles easily reaching between 1200 and 1500 degrees Celsius.
Nanoparticles of iron within lunar soil are heated at certain microwave frequencies, enabling efficient heating and binding of the dust to itself. Not having to fly binding agent from Earth along with a 3D printer is a major advantage over the ESA/Foster + Partners plan. The solar panels to power the microwaves would, like the moon base itself, be based near or on the rim of Shackleton Crater in near-perpetual sunlight.
“Bubbles” of bound dust could be built by a huge six-legged robot (OK, so it’s not technically a spider) and then assembled into habitats large enough for astronauts to use as a base. This “Sinterator system” would use the JPL’s Athlete rover, a half-scale prototype of which has already been built and tested. It’s a human-controlled robotic space rover with wheels at the end of its 8.2m limbs and a detachable habitable capsule mounted at the top.
Athlete’s arms have several different functions, depending on what it needs to do at any point. It has 48 3D cameras that stream video to its operator, whether inside the capsule, elsewhere on the Moon or back on Earth. It has a payload capacity of 300kg in Earth gravity, and it can scoop, dig, grab at and generally poke around in the soil fairly easily, giving it the combined abilities of a normal rover and a construction vehicle. It can even split into two smaller three-legged rovers at any time if needed. In the Sinterator system, a microwave 3D printer would be mounted on one of the Athlete’s legs and used to build the base.
Rousek explained the background of the idea to Wired.co.uk: “Since many of my buildings have advanced geometry that you can’t cut easily from sheet material, I started using 3D printing for rapid prototyping of my architecture models. The construction industry is still lagging several decades behind car and electronics production. The buildings now are terribly wasteful and imprecise—I have always dreamed about creating a factory where the buildings would be robotically mass-produced with parametric personalization, using composite materials and 3D printing. It would be also great to use local materials and precise manufacturing on-site.”