Ethical Robotics: Threat of Ambient Artificial Intelligence?

It's fine and all to attempt to codify a set of criteria that will keep robots from harming humans, à la ["In 1981 Kenji Urada, a 37-year-old Japanese factory worker, climbed over a safety fence at a Kawasaki plant to carry out some maintenance work on a robot. In his haste, he failed to switch the robot off properly. Unable to sense him, the robot's powerful hydraulic arm kept on working and accidentally pushed the engineer into a grinding machine. His death made Urada the first recorded victim to die at the hands of a robot." economist.com/displaystory.cfm?story_id=7001829]. However, as Steve Talbott and many others are beginning to discuss, we seem to primarily want to use robots to kill other human beings. So perhaps the ethical quandary isn't about robot ethics, but about us humans. And if all our years of philosophy and talk have added up to the current state of the world, maybe we should either find a technology that will slow down our own progress until we have figured out how to discuss before acting, or ice-nine the whole thing.

An ethical code to prevent humans abusing robots, and vice versa, is being drawn up by South Korea. The Robot Ethics Charter will cover standards for users and manufacturers and will be released later in 2007. It is being put together by a five-member team of experts that includes futurists and a science fiction writer. The South Korean government has identified robotics as a key economic driver and is pumping millions of dollars into research. "The government plans to set ethical guidelines concerning the roles and functions of robots as robots are expected to develop strong intelligence in the near future," the Ministry of Commerce, Industry and Energy said. [news.bbc.co.uk/2/hi/technology/6425927.stm]

Ronald Arkin, a computer scientist at the Georgia Institute of Technology, and others say that the technology to make lethal autonomous robots is inexpensive and proliferating, and that the advent of these robots on the battlefield is only a matter of time. That means, they say, it is time for people to start talking about whether this technology is something they want to embrace. "The important thing is not to be blind to it," Dr. Arkin said. Noel Sharkey, a computer scientist at the University of Sheffield in Britain, wrote last year in the journal Innovative Technology for Computer Professionals that "this is not a 'Terminator'-style science fiction but grim reality." He said South Korea and Israel were among countries already deploying armed robot border guards. In an interview, he said there was "a headlong rush" to develop battlefield robots that make their own decisions about when to attack. [nytimes.com/2008/11/25/science/25robots.html]

"... report has provided the motivation, philosophy, formalisms, representational requirements, architectural design criteria, recommendations, and test scenarios to design and construct an autonomous robotic system architecture capable of the ethical use of lethal force. These first steps toward that goal are very preliminary and subject to major revision, but at the very least they can be viewed as the beginnings of an ethical robotic warfighter. The primary goal remains to enforce the International Laws of War in the battlefield in a manner that is believed achievable, by creating a class of robots that not only conform to International Law but outperform human soldiers in their ethical capacity." [R. Arkin, "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture," Technical Report GIT-GVU-07011.]