Tag Archives: robotics

A Tale of Frolicsome Engines


From the Public Domain Review comes a fascinating tale of hydraulic automata, mechanical monkeys, automatic organs and a host of other beautiful robotic inventions predating our current technological revolution by hundreds of years. These wonderful contraptions range from the siphonic inventions of the 1st-century-AD engineer Hero of Alexandria to the speaking machines and chess-playing mechanical Turk of the Hungarian engineer Wolfgang von Kempelen in the late 1700s.

My favorite is the infamous “Defecating Duck”. Designed in the mid-18th century by the Frenchman Jacques Vaucanson, the duck was one of the first simulative automata. The mechanical duck flapped its wings and moved much like its real-world cousin, but its claim to fame was its ability to peck and swallow bits of food and excrete “droppings”.

More from Public Domain Review:

How old are the fields of robotics and artificial intelligence? Many might trace their origins to the mid-twentieth century, and the work of people such as Alan Turing, who wrote about the possibility of machine intelligence in the ‘40s and ‘50s, or the MIT engineer Norbert Wiener, a founder of cybernetics. But these fields have prehistories — traditions of machines that imitate living and intelligent processes — stretching back centuries and, depending how you count, even millennia.

The word “robot” made its first appearance in a 1920 play by the Czech writer Karel Čapek entitled R.U.R., for Rossum’s Universal Robots. Deriving his neologism from the Czech word “robota,” meaning “drudgery” or “servitude,” Čapek used “robot” to refer to a race of artificial humans who replace human workers in a futurist dystopia. (In fact, the artificial humans in the play are more like clones than what we would consider robots, grown in vats rather than built from parts.)

There was, however, an earlier word for artificial humans and animals, “automaton”, stemming from Greek roots meaning “self-moving”. This etymology was in keeping with Aristotle’s definition of living beings as those things that could move themselves at will. Self-moving machines were inanimate objects that seemed to borrow the defining feature of living creatures: self-motion. The first-century-AD engineer Hero of Alexandria described lots of automata. Many involved elaborate networks of siphons that activated various actions as the water passed through them, especially figures of birds drinking, fluttering, and chirping.

Read the entire article here.

Image: Illustration of the peacock fountain, from a 14th-century edition of Al-Jazari’s Book of Knowledge of Ingenious Mechanical Devices. Courtesy: Public Domain Review.

 


Robotic Stock Keeping


Meet Tally; it may soon be coming to a store near you. Tally is an autonomous robot that patrols store aisles and scans shelves to ensure items are correctly stocked. While the robot doesn’t do the restocking itself — beware, stock clerks, this is probably only a matter of time — it audits shelves for out-of-stock items, low-stock items, misplaced items, and pricing errors. The robot was developed by start-up Simbe Robotics.

From Technology Review:

When customers can’t find a product on a shelf it’s an inconvenience. But by some estimates, it adds up to billions of dollars of lost revenue each year for retailers around the world.

A new shelf-scanning robot called Tally could help ensure that customers never leave a store empty-handed. It roams the aisles and automatically records which shelves need to be restocked.

The robot, developed by a startup called Simbe Robotics, is the latest effort to automate some of the more routine work done in millions of warehouses and retail stores. It is also an example of the way robots and AI will increasingly take over parts of people’s jobs rather than replacing them.

Restocking shelves is simple but hugely important for retailers. Billions of dollars may be lost each year because products are missing, misplaced, or poorly arranged, according to a report from the analyst firm IHL Services. In a large store it can take hundreds of hours to inspect shelves manually each week.

Brad Bogolea, CEO and cofounder of Simbe Robotics, says his company’s robot can scan the shelves of a small store, like a modest CVS or Walgreens, in about an hour. A very large retailer might need several robots to patrol its premises. He says the robot will be offered on a subscription basis but did not provide the pricing. Bogolea adds that one large retailer is already testing the machine.

Tally automatically roams a store, checking whether a shelf needs restocking; whether a product has been misplaced or poorly arranged; and whether the prices shown on shelves are correct. The robot consists of a wheeled platform with four cameras that scan the shelves on either side from the floor up to a height of eight feet.
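The article doesn’t describe Simbe’s software, but the kind of audit Tally performs (comparing what its cameras observe against the store’s planogram, the intended shelf layout) can be sketched in a few lines. Everything below, from the function name to the data shapes, is a hypothetical illustration rather than Simbe’s implementation:

```python
# Hypothetical shelf audit: compare a camera scan against the planogram.
# Data shapes and names are invented for illustration only.
def audit_shelf(planogram, scan):
    """Both args map a shelf slot to a (product, price, count) tuple."""
    issues = []
    for slot, (product, price, count) in planogram.items():
        seen = scan.get(slot)
        if seen is None or seen[2] == 0:
            issues.append((slot, "out of stock"))
        elif seen[0] != product:
            issues.append((slot, "misplaced item"))
        elif seen[1] != price:
            issues.append((slot, "pricing error"))
        elif seen[2] < count // 2:
            issues.append((slot, "low stock"))
    return issues

planogram = {"A1": ("soap", 2.99, 10), "A2": ("shampoo", 5.49, 8)}
scan = {"A1": ("soap", 2.99, 0), "A2": ("soap", 5.49, 8)}
print(audit_shelf(planogram, scan))
# [('A1', 'out of stock'), ('A2', 'misplaced item')]
```

The real system presumably does its hard work in the computer-vision stage that produces the scan; the audit itself is just this kind of comparison.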

Read the entire article here.

Image: Tally. Courtesy of Simbe Robotics.

 


AIs and ICBMs

You know something very creepy is going on when robots armed with artificial intelligence (AI) engage in conversations about nuclear war and intercontinental ballistic missiles (ICBMs). This scene could be straight out of a William Gibson novel.

Video: The BINA48 robot, created by Martine Rothblatt and Hanson Robotics, has a conversation with Siri. Courtesy of ars technica.


PhotoMash: Tax Credit Cuts and Robots

This week’s PhotoMash comes courtesy of the Guardian online news site. Its front page carried a photo of George Osborne, UK Chancellor of the Exchequer (Treasury Secretary), pondering how to cut tax credits, next to a story concluding that robots will take over a third of all UK manual jobs by 2030.


I dare you to find the real human above.

Images courtesy of the Guardian.


I Think, Therefore I am, Not Robot


A sentient robot is the long-held dream of artificial intelligence researchers and science fiction authors alike. Yet some leading mathematicians theorize it may never happen, despite our accelerating technological prowess.

From New Scientist:

So long, robot pals – and robot overlords. Sentient machines may never exist, according to a variation on a leading mathematical model of how our brains create consciousness.

Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness. They argue that in conscious minds, integrated information cannot be reduced into smaller components. For instance, when a human perceives a red triangle, the brain cannot register the object as a colourless triangle plus a shapeless patch of red.

But there is a catch, argues Phil Maguire at the National University of Ireland in Maynooth. He points to a computational device called the XOR logic gate, which involves two inputs, A and B. The output of the gate is “0” if A and B are the same and “1” if A and B are different. In this scenario, it is impossible to predict the output based on A or B alone – you need both.
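Maguire’s XOR example is easy to verify directly. This small sketch (mine, not the researchers’) enumerates the gate’s inputs and shows that each output value sits behind two distinct input pairs, so two bits go in and only one comes out:

```python
from collections import defaultdict

# XOR gate: output 0 when the inputs match, 1 when they differ.
def xor(a, b):
    return a ^ b

# Group every input pair by the output it produces.
preimages = defaultdict(list)
for a in (0, 1):
    for b in (0, 1):
        preimages[xor(a, b)].append((a, b))

# Each output value has two possible input pairs behind it, so the
# output alone cannot tell you which pair occurred: two bits went in,
# one bit came out.
print(preimages[0])  # [(0, 0), (1, 1)]
print(preimages[1])  # [(0, 1), (1, 0)]
```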

Memory edit

Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously haemorrhaging information.”

Maguire and his colleagues say the brain is unlikely to do this, because repeated retrieval of memories would eventually destroy them. Instead, they define integration in terms of how difficult information is to edit.

Consider an album of digital photographs. The pictures are compiled but not integrated, so deleting or modifying individual images is easy. But when we create memories, we integrate those snapshots of information into our bank of earlier memories. This makes it extremely difficult to selectively edit out one scene from the “album” in our brain.
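One way to make the album analogy concrete in code is a hash chain, in which each new entry is mixed with the accumulated state of everything before it. This is only an illustration of edit difficulty, not Maguire’s formal definition of integration:

```python
import hashlib

# A plain album is easy to edit: swap one photo, nothing else changes.
# An "integrated" chain mixes each new snapshot with the state built
# from all earlier ones, so a single edit alters every later state.
def integrate(snapshots):
    state, states = b"", []
    for snap in snapshots:
        state = hashlib.sha256(state + snap.encode()).digest()
        states.append(state)
    return states

original = integrate(["beach", "birthday", "graduation"])
edited = integrate(["beach", "picnic", "graduation"])

# Only the middle snapshot changed, but every state from that point on
# differs, so the edit cannot be made "selectively".
print([o == e for o, e in zip(original, edited)])  # [True, False, False]
```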

Based on this definition, Maguire and his team have shown mathematically that computers can’t handle any process that integrates information completely. If you accept that consciousness is based on total integration, then computers can’t be conscious.

Open minds

“It means that you would not be able to achieve the same results in finite time, using finite memory, using a physical machine,” says Maguire. “It doesn’t necessarily mean that there is some magic going on in the brain that involves some forces that can’t be explained physically. It is just so complex that it’s beyond our abilities to reverse it and decompose it.”

Disappointed? Take comfort – we may not get Rosie the robot maid, but equally we won’t have to worry about the world-conquering Agents of The Matrix.

Neuroscientist Anil Seth at the University of Sussex, UK, applauds the team for exploring consciousness mathematically. But he is not convinced that brains do not lose information. “Brains are open systems with a continual turnover of physical and informational components,” he says. “Not many neuroscientists would claim that conscious contents require lossless memory.”

Read the entire story here.

Image: Robbie the Robot, Forbidden Planet. Courtesy of San Diego Comic Con, 2006 / Wikipedia.


What About Telecleaning?


Telepresence devices and systems made some ripples in the vast oceans of new technology at the recent CES (Consumer Electronics Show) in Las Vegas. Telepresence allows anyone armed with an internet-connected camera to beam themselves elsewhere with the aid of a remote controlled screen on wheels. Some clinics and workplaces have experimented with the technology, allowing medical staff and workers to be virtually present in one location while being physically remote. Now, a handful of innovators are experimenting with telepresence for the home market.

So, sick of being around the kids, or need to see grandma but can’t get away from the office? Or, even better, buy one for your office so you can replace yourself with a robot, work from home and never visit the workplace again. Well, a telepresence robot for a mere $1,000 may be a very sound investment.

Sounds great, but where is the robot that will tidy, clean, dust, cook, repair, mow, launder…

From Technology Review:

When Scott Hassan went to Las Vegas for the International Consumer Electronics Show last week, he was still able to get the kids up in the morning and help them make breakfast at his California home. Hassan used a remote-controlled screen on wheels to spend time with his family, and today his company, Suitable Technologies, started taking orders for Beam+, a version of the same telepresence technology aimed at home users. This summer, it will also be available via Amazon and other retailers.

Hassan thinks the Beam+, essentially a 10-inch screen and camera mounted on wheels, will be popular with other businesspeople who want to spend more time with their kids, or those with aging parents they’d like to check up on more often.

Hassan says a person “visiting” aging parents this way could check up on them less obtrusively than via phone, for example by walking around to look for signs they’d taken their medication rather than bluntly asking, or watching to check that they take their pills with their meal. “For people with dementia or Alzheimer’s, I think that being able to see and hear and walk around with a familiar face is a lot better than just a phone call,” he says. “You could also just Beam in and watch Jeopardy! with your grandmother on TV.”

The Beam+ is designed so that once installed in a home, anyone with the login credentials can bring it to life and start moving around. The operator’s interface shows the view from a camera over the screen, as well as a smaller view looking down toward the unit’s base to aid maneuvering. A user drives it by moving a mouse over their view and clicking where they want to go.

The first 1,000 units of the Beam+ can be preordered for $995, with later units expected to cost $1,995. Both prices include the charging dock to which the device must return every two hours. The exterior design of the Beam+ was created by Fred Bould, who designed the Nest thermostat, among other gadgets.

The Beam+ is a cheaper, smaller, and restyled version of the company’s first product, known as the Beam, which is aimed at corporate users (see “Beam Yourself to Work in a Remote-Controlled Body”).

Intel, IBM, and Square all use Beam’s original product to give employees an option somewhere between a conventional video chat and an in-person visit when working with colleagues in distant offices. Hassan says interest has come from more than just technology companies, though. In Vegas he sold two Beam devices to a restaurant owner planning to use them as street barkers; meanwhile, a real-estate agency in California’s Lake Tahoe has started using them to show people around luxury condos.

Several startups and large companies, such as iRobot, which created the Roomba robotic vacuum cleaner, have launched mobile telepresence devices in recent years. However, despite it being clear that many people wish they could travel more easily in their professional and personal lives, the devices have sometimes been clunky (see “The New, More Awkward You”) and remain relatively expensive.

Read the entire article here.

Image: Beam+. Courtesy of Suitable Technologies, Inc.


Bots That Build Themselves

Wouldn’t it be a glorious breakthrough if your next furniture purchase could assemble itself? No more sifting through stepwise Scandinavian manuals describing your next “Fjell” or “Bestå” pieces from IKEA; no more looking for a magnifying glass to decipher strange text from Asia; no more searches for an Allen wrench that fits those odd hexagonal bolts. Now, to set your expectations, recent innovations at the macro-mechanical level are not yet quite in the same league as planet-sized self-assembling spaceships (from the mind of Iain Banks). But researchers and engineers are making progress.

From ars technica:

At a certain level of complexity and obligation, sets of blocks can easily go from fun to tiresome to assemble. Legos? K’Nex? Great. Ikea furniture? Bridges? Construction scaffolding? Not so much. To make things easier, three scientists at MIT recently exhibited a system of self-assembling cubic robots that could in theory automate the process of putting complex systems together.

The blocks, dubbed M-Blocks, use a combination of magnets and an internal flywheel to move around and stick together. The flywheels, running off an internal battery, generate angular momentum that allows the blocks to flick themselves at each other, spinning them through the air. Magnets on the surfaces of the blocks allow them to click into position.

Each flywheel inside the blocks can spin at up to 20,000 rotations per minute. Motion happens when the flywheel spins and then is suddenly braked by a servo motor that tightens a belt encircling the flywheel, imparting its angular momentum to the body of the blocks. That momentum sends the block flying at a certain velocity toward its fellow blocks (if there is a lot of it) or else rolling across the ground (if there’s less of it). Watching a video of the blocks self-assembling, the effect is similar to watching Sid’s toys rally in Toy Story—a little off-putting to see so many parts moving into a whole at once, unpredictably moving together like balletic dying fish.
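The article gives the flywheel’s top speed but not its dimensions, so any momentum estimate has to assume those. Here is a rough back-of-envelope, with the flywheel and cube figures below being guesses rather than published M-Block specs:

```python
import math

# Back-of-envelope for the M-Block mechanism: a flywheel spinning at
# 20,000 RPM (from the article) is suddenly braked, dumping its angular
# momentum into the cube. Flywheel and cube dimensions are assumed.
rpm = 20_000
omega = rpm * 2 * math.pi / 60        # flywheel speed in rad/s, ~2094
m_fly, r_fly = 0.02, 0.02             # assumed: 20 g flywheel, 2 cm radius
I_fly = 0.5 * m_fly * r_fly**2        # solid-disc moment of inertia
L = I_fly * omega                     # angular momentum available (kg*m^2/s)

# If that momentum spins the cube about one edge (assumed: 140 g, 5 cm):
m_cube, a = 0.14, 0.05
I_cube = (2 / 3) * m_cube * a**2      # uniform cube about an edge
omega_cube = L / I_cube               # ~36 rad/s, a brisk flick

print(f"{omega:.0f} rad/s flywheel -> {omega_cube:.1f} rad/s cube")
```

Whether the cube rolls or flies then depends on how that spin rate compares with gravity and the magnet holding force, which is presumably where the trial-and-error tuning the researchers describe comes in.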

Each of the blocks is controlled by a 32-bit ARM microprocessor and three 3.7 volt batteries that afford each one between 20 and 100 moves before the battery life is depleted. Rolling is the least complicated motion, though the blocks can also use their flywheels to turn corners, climb over each other, or even complete a leap from ground level to three blocks high, sticking the landing on top of a column 51 percent of the time.

The blocks use 6-axis inertial measurement units, like those found on planes, ships, or spacecraft, to figure out how they are oriented in space. Each cube has an IR LED and a photodiode that the cubes use to communicate with each other.

The authors note that the cubes’ motion is not very precise yet; one cube is considered to have moved successfully if it hits its goal position within three tries. The researchers found the RPMs needed to generate momentum for different movements through trial and error.

If the individual cube movements weren’t enough, groups of the cubes can also move together in either a cluster or as a row of cubes rolling in lockstep. A set of four cubes arranged in a square attempting to roll together in a block approaches the limits of the cubes’ hardware, the authors write. The cubes can even work together to get around an obstacle, rolling over each other and stacking together World War Z-zombie style until the bump in the road has been crossed.

Read the entire article here.

Video: M-Blocks. Courtesy of ars technica.


Atlas Shrugs

He or she is 6 feet 2 inches tall, weighs 330 pounds, and goes by the name Atlas.

Surprisingly, this person is not the new draft pick for the Denver Broncos or Ronaldo’s replacement at Real Madrid. Well, it’s not really a person; not yet, anyway. Atlas is a humanoid robot. Its primary “parents” are Boston Dynamics and DARPA (Defense Advanced Research Projects Agency), a unit of the U.S. Department of Defense. The collaboration unveiled Atlas to the public on July 11, 2013.

From the New York Times:

Moving its hands as if it were dealing cards and walking with a bit of a swagger, a Pentagon-financed humanoid robot named Atlas made its first public appearance on Thursday.

C3PO it’s not. But its creators have high hopes for the hydraulically powered machine. The robot — which is equipped with both laser and stereo vision systems, as well as dexterous hands — is seen as a new tool that can come to the aid of humanity in natural and man-made disasters.

Atlas is being designed to perform rescue functions in situations where humans cannot survive. The Pentagon has devised a challenge in which competing teams of technologists program it to do things like shut off valves or throw switches, open doors, operate power equipment and travel over rocky ground. The challenge comes with a $2 million prize.

Some see Atlas’s unveiling as a giant — though shaky — step toward the long-anticipated age of humanoid robots.

“People love the wizards in Harry Potter or ‘Lord of the Rings,’ but this is real,” said Gary Bradski, a Silicon Valley artificial intelligence specialist and a co-founder of Industrial Perception Inc., a company that is building a robot able to load and unload trucks. “A new species, Robo sapiens, are emerging,” he said.

The debut of Atlas on Thursday was a striking example of how computers are beginning to grow legs and move around in the physical world.

Although robotic planes already fill the air and self-driving cars are being tested on public roads, many specialists in robotics believe that the learning curve toward useful humanoid robots will be steep. Still, many see them fulfilling the needs of humans — and the dreams of science fiction lovers — sooner rather than later.

Walking on two legs, they have the potential to serve as department store guides, assist the elderly with daily tasks or carry out nuclear power plant rescue operations.

“Two weeks ago 19 brave firefighters lost their lives,” said Gill Pratt, a program manager at the Defense Advanced Research Projects Agency, part of the Pentagon, which oversaw Atlas’s design and financing. “A number of us who are in the robotics field see these events in the news, and the thing that touches us very deeply is a single kind of feeling which is, can’t we do better? All of this technology that we work on, can’t we apply that technology to do much better? I think the answer is yes.”

Dr. Pratt equated the current version of Atlas to a 1-year-old.

“A 1-year-old child can barely walk, a 1-year-old child falls down a lot,” he said. “As you see these machines and you compare them to science fiction, just keep in mind that this is where we are right now.”

But he added that the robot, which has a brawny chest with a computer and is lit by bright blue LEDs, would learn quickly and would soon have the talents that are closer to those of a 2-year-old.

The event on Thursday was a “graduation” ceremony for the Atlas walking robot at the office of Boston Dynamics, the robotics research firm that led the design of the system. The demonstration began with Atlas shrouded under a bright red sheet. After Dr. Pratt finished his remarks, the sheet was pulled back revealing a machine that looked like a metallic bodybuilder, with an oversized chest and powerful long arms.

Read the entire article here.


Technology and Employment

Technology is altering the lives of us all. Often it is a positive influence, offering its users tremendous benefits from time-saving to life-extension. However, the relationship of technology to our employment is more complex and usually detrimental.

Many traditional forms of employment have already disappeared thanks to our technological tools, and many other jobs have changed beyond recognition, requiring new skills and knowledge. And this may be just the beginning.

From Technology Review:

Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson’s contention really is. Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

It’s a startling assertion because it threatens the faith that many economists place in technological progress. Brynjolfsson and McAfee still believe that technology boosts productivity and makes societies wealthier, but they think that it can also have a dark side: technological progress is eliminating the need for many types of jobs and leaving the typical worker worse off than before. Brynjolfsson can point to a second chart indicating that median income is failing to rise even as the gross domestic product soars. “It’s the great paradox of our era,” he says. “Productivity is at record levels, innovation has never been faster, and yet at the same time, we have a falling median income and we have fewer jobs. People are falling behind because technology is advancing so fast and our skills and organizations aren’t keeping up.”

Brynjolfsson and McAfee are not Luddites. Indeed, they are sometimes accused of being too optimistic about the extent and speed of recent digital advances. Brynjolfsson says they began writing Race Against the Machine, the 2011 book in which they laid out much of their argument, because they wanted to explain the economic benefits of these new technologies (Brynjolfsson spent much of the 1990s sniffing out evidence that information technology was boosting rates of productivity). But it became clear to them that the same technologies making many jobs safer, easier, and more productive were also reducing the demand for many types of human workers.

Anecdotal evidence that digital technologies threaten jobs is, of course, everywhere. Robots and advanced automation have been common in many types of manufacturing for decades. In the United States and China, the world’s manufacturing powerhouses, fewer people work in manufacturing today than in 1997, thanks at least in part to automation. Modern automotive plants, many of which were transformed by industrial robotics in the 1980s, routinely use machines that autonomously weld and paint body parts—tasks that were once handled by humans. Most recently, industrial robots like Rethink Robotics’ Baxter (see “The Blue-Collar Robot,” May/June 2013), more flexible and far cheaper than their predecessors, have been introduced to perform simple jobs for small manufacturers in a variety of sectors. The website of a Silicon Valley startup called Industrial Perception features a video of the robot it has designed for use in warehouses picking up and throwing boxes like a bored elephant. And such sensations as Google’s driverless car suggest what automation might be able to accomplish someday soon.

A less dramatic change, but one with a potentially far larger impact on employment, is taking place in clerical work and professional services. Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete.

It is this onslaught of digital processes, says Arthur, that primarily explains how productivity has grown without a significant increase in human labor. And, he says, “digital versions of human intelligence” are increasingly replacing even those jobs once thought to require people. “It will change every profession in ways we have barely seen yet,” he warns.

Read the entire article here.

Image: Industrial robots. Courtesy of Techjournal.


Beware! RoboBee May Be Watching You

History will probably show that humans are the cause of the mass disappearance and death of honey bees around the world.

So, while ecologists try to understand why and how to reverse bee death and colony collapse, engineers are busy building alternatives to our once nectar-loving friends. Meet RoboBee, also known as the Micro Air Vehicles Project.

From Scientific American:

We take for granted the effortless flight of insects, thinking nothing of swatting a pesky fly and crushing its wings. But this insect is a model of complexity. After 12 years of work, researchers at the Harvard School of Engineering and Applied Sciences have succeeded in creating a fly-like robot. And in early May, they announced that their tiny RoboBee (yes, it’s called a RoboBee even though it’s based on the mechanics of a fly) took flight. In the future, that could mean big things for everything from disaster relief to colony collapse disorder.

The RoboBee isn’t the only miniature flying robot in existence, but the 80-milligram, quarter-sized robot is certainly one of the smallest. “The motivations are really thinking about this as a platform to drive a host of really challenging open questions and drive new technology and engineering,” says Harvard professor Robert Wood, the engineering team lead for the project.

When Wood and his colleagues first set out to create a robotic fly, there were no off-the-shelf parts for them to use. “There were no motors small enough, no sensors that could fit on board. The microcontrollers, the microprocessors–everything had to be developed fresh,” says Wood. As a result, the RoboBee project has led to numerous innovations, including vision sensors for the bot, high power density piezoelectric actuators (ceramic strips that expand and contract when exposed to an electrical field), and a new kind of rapid manufacturing that involves layering laser-cut materials that fold like a pop-up book. The actuators assist with the bot’s wing-flapping, while the vision sensors monitor the world in relation to the RoboBee.

“Manufacturing took us quite a while. Then it was control, how do you design the thing so we can fly it around, and the next one is going to be power, how we develop and integrate power sources,” says Wood. In a paper recently published by Science, the researchers describe the RoboBee’s power quandary: it can fly for just 20 seconds–and that’s while it’s tethered to a power source. “Batteries don’t exist at the size that we would want,” explains Wood. The researchers explain further in the report: “If we implement on-board power with current technologies, we estimate no more than a few minutes of untethered, powered flight. Long duration power autonomy awaits advances in small, high-energy-density power sources.”

The RoboBees don’t last a particularly long time–Wood says the flight time is “on the order of tens of minutes”–but they can keep flapping their wings long enough for the Harvard researchers to learn everything they need to know from each successive generation of bots. For commercial applications, however, the RoboBees would need to be more durable.
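The paper’s “few minutes” estimate can be sanity-checked with simple arithmetic. Only the 80-milligram vehicle mass comes from the article; the battery fraction, energy density, and power draw below are illustrative assumptions, not RoboBee specifications:

```python
# Energy budget for an insect-scale flyer. Only the 80 mg vehicle mass
# is from the article; the rest are illustrative assumptions.
vehicle_mass_g = 0.080     # 80 mg RoboBee
battery_fraction = 0.5     # assume half the mass budget goes to battery
energy_density = 700.0     # assume ~700 J/g, lithium-polymer territory
power_draw_w = 0.1         # assume ~100 mW, including drive-electronics losses

energy_j = vehicle_mass_g * battery_fraction * energy_density
flight_time_s = energy_j / power_draw_w
print(f"~{energy_j:.0f} J on board, ~{flight_time_s / 60:.1f} min of flight")
```

With those assumptions the answer lands in the single-digit minutes, consistent with the researchers’ own estimate; better batteries or lower power draw move it, but not by orders of magnitude.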

Read the entire article here.

Image courtesy of Micro Air Vehicles Project, Harvard.


Morality and Machines

Fans of science fiction and Isaac Asimov in particular may recall his three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
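The strict precedence built into the laws, with each rule yielding to the ones above it, translates naturally into ordered checks. This is a toy sketch of that precedence, not a serious proposal for machine ethics; note that every predicate assumes the robot can already predict outcomes, which is exactly where the laws get slippery:

```python
# Toy encoding of Asimov's Three Laws as ordered checks. "action" is a
# dict of hypothetical predicted outcomes for a candidate act; in a real
# robot, producing those predictions is the genuinely hard part.
def permitted(action):
    # First Law: forbidden if it harms a human, or lets one come to harm.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False
    # Second Law: disobeying a human order is forbidden, except when
    # obeying would violate the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: failing to protect itself is forbidden, unless
    # self-preservation would break the First or Second Law.
    if action.get("destroys_self") and not action.get("survival_would_harm_or_disobey"):
        return False
    return True

print(permitted({"destroys_self": True}))                                   # False
print(permitted({"disobeys_order": True, "order_would_harm_human": True}))  # True
```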

Of course, technology has marched forward relentlessly since Asimov penned these guidelines in 1942. But while the ideas may seem trite and somewhat contradictory, the ethical issue remains – especially as our machines become ever more powerful and independent. Though perhaps humans, in general, ought first to agree on a set of fundamental principles for themselves.

Colin Allen for the Opinionator column reflects on the moral dilemma. He is Provost Professor of Cognitive Science and History and Philosophy of Science at Indiana University, Bloomington.

From the New York Times:

A robot walks into a bar and says, “I’ll have a screwdriver.” A bad joke, indeed. But even less funny if the robot says “Give me what’s in your cash register.”

The fictional theme of robots turning against humans is older than the word itself, which first appeared in the title of Karel Čapek’s 1920 play about artificial factory workers rising against their human overlords.

The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings. I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.
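For scale, the extrapolation Allen mentions compounds quickly; doubling every 18 months works out to roughly a hundredfold gain per decade:

```python
# Doubling every 18 months, compounded over ten years:
months = 10 * 12
factor = 2 ** (months / 18)
print(f"{factor:.0f}x per decade")  # 102x per decade
```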

The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally. These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior. But despite undermining the idea that the mind is fundamentally a digital computer, these approaches have improved our ability to use computers for more and more robust simulations of intelligent agents — simulations that will increasingly control machines occupying our cognitive niche. If you don’t believe me, ask Siri.

This is why, in my view, we need to think long and hard about machine morality. Many of my colleagues take the very idea of moral machines to be a kind of joke. Machines, they insist, do only what they are told to do. A bar-robbing robot would have to be instructed or constructed to do exactly that. On this view, morality is an issue only for creatures like us who can choose to do wrong. People are morally good only insofar as they must overcome the urge to do what is bad. We can be moral, they say, because we are free to choose our own paths.

Read the entire article here.

Image courtesy of Asimov Foundation / Wikipedia.
