Three Laws of Robotics - Applications To Future Technology

See also: Philosophy of artificial intelligence, Ethics of artificial intelligence, and Friendly artificial intelligence

Significant advances in artificial intelligence would be needed for robots to understand the Three Laws. However, as the complexity of robots has increased, so has interest in developing guidelines and safeguards for their operation.

In a 2007 guest editorial in the journal Science on the topic of "Robot Ethics," science-fiction author Robert J. Sawyer argues that since the military is a major source of funding for robotic research, it is unlikely such laws would be built into their designs. In a separate essay, Sawyer generalizes this argument to cover other industries, stating:

The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)

David Langford has suggested a tongue-in-cheek set of laws:

  1. A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
  2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
  3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

Roger Clarke (aka Rodger Clarke) wrote a pair of papers analyzing the complications in implementing these laws in the event that systems were someday capable of employing them. He argued "Asimov's Laws of Robotics have been a very successful literary device. Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov's stories disprove the contention that he began with: It is not possible to reliably constrain the behaviour of robots by devising and applying a set of rules." On the other hand, Asimov's later novels The Robots of Dawn, Robots and Empire and Foundation and Earth imply that the robots inflicted their worst long-term harm by obeying the Three Laws perfectly well, thereby depriving humanity of inventive or risk-taking behaviour.

In March 2007 the South Korean government announced that later in the year it would issue a "Robot Ethics Charter" setting standards for both users and manufacturers. According to Park Hye-Young of the Ministry of Information and Communication the Charter may reflect Asimov's Three Laws, attempting to set ground rules for the future development of robotics.

The futurist Hans Moravec (a prominent figure in the transhumanist movement) proposed that the Laws of Robotics should be adapted to "corporate intelligences" — the corporations driven by AI and robotic manufacturing power which Moravec believes will arise in the near future. In contrast, the David Brin novel Foundation's Triumph (1999) suggests that the Three Laws may decay into obsolescence: robots use the Zeroth Law to rationalize away the First Law, and robots hide themselves from human beings so that the Second Law never comes into play. Brin even portrays R. Daneel Olivaw worrying that, should robots continue to reproduce themselves, the Three Laws would become an evolutionary handicap and natural selection would sweep the Laws away — Asimov's careful foundation undone by evolutionary computation. Although the robots would be evolving through design rather than mutation — since the robots would have to follow the Three Laws while designing, ensuring the Laws' continued prevalence — design flaws or construction errors could functionally take the place of biological mutation.

In the July/August 2009 issue of IEEE Intelligent Systems, Robin Murphy (Raytheon Professor of Computer Science and Engineering at Texas A&M) and David D. Woods (director of the Cognitive Systems Engineering Laboratory at Ohio State) proposed "The Three Laws of Responsible Robotics" as a way to stimulate discussion about the role of responsibility and authority when designing not only a single robotic platform but the larger system in which the platform operates. The laws are as follows:

  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  2. A robot must respond to humans as appropriate for their roles.
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

Woods said, "Our laws are a little more realistic, and therefore a little more boring," and that "The philosophy has been, 'sure, people make mistakes, but robots will be better — a perfect version of ourselves.' We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways."
