Asimov’s “three laws of robotics” and machine metaethics

Original Article · AI & SOCIETY

Abstract

Using Asimov’s “Bicentennial Man” as a springboard, this article discusses a number of metaethical issues concerning the emerging field of machine ethics. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we make a judgment about the ethical status of the machine itself, a judgment that will be particularly difficult to make. Finally, it is argued that Asimov’s “three laws of robotics” are an unsatisfactory basis for machine ethics, regardless of the status of the machine.


Notes

  1. Related to me in conversation with Isaac Asimov.

  2. A full-length novel based on the short story, The Positronic Man (Asimov and Silverberg 1992), was co-authored by Asimov with Robert Silverberg. A 1999 movie directed by Christopher Columbus, entitled Bicentennial Man, was based on the novel, with a screenplay by Nicholas Kazan (Columbus 1999). While the novel and film have broadly similar plots, both introduce many additional elements. For brevity, the present discussion is limited to issues raised by the original short story.

  3. One of the characters in “The Bicentennial Man” remarks “There have been times in history when segments of the human population fought for full human rights.”

  4. Also, only in this second case can we say that the machine is autonomous.

  5. I am indebted to Michael Anderson for making this point clear to me.

  6. Bruce McLaren has also created a program that enables a machine to act as an ethical advisor to human beings, but in his program the machine does not make ethical decisions itself. His advisor system simply informs the human user of the ethical dimensions of the dilemma, without reaching a decision (McLaren 2003).

  7. This is the reason why Anderson et al. have started with “MedEthEx”, which advises health care workers and, initially, only in one particular type of circumstance.

  8. I am assuming that one will adopt the action-based approach to ethics. For the virtue-based approach to be made precise, virtues must be spelled out in terms of actions.

  9. A prima facie duty is something that one ought to do unless it conflicts with a stronger duty, so there can be exceptions, unlike an absolute duty, for which there are no exceptions (see the first sketch following these notes).

  10. Some, who are more pessimistic than I am, would say that there might always be some dilemmas about which even experts will disagree as to what is the correct answer. Even if this turns out to be the case, the agreement that surely exists on many dilemmas will allow us to reject a completely relativistic position.

  11. The pessimists would, perhaps, say: “there are correct answers to many (or most) ethical dilemmas.”

  12. If ethical egoism is accepted as a plausible ethical theory, then the agent only needs to take him/her/itself into account, whereas all other ethical theories consider others as well as the agent, assuming that the agent has moral status.

  13. In a well-known video titled “Monkey in the Mirror,” a monkey soon realizes that the monkey it sees in a mirror is itself and it begins to enjoy making faces, etc., watching its own reflection.

  14. Christopher Grau has pointed out that Kant probably had a more robust notion of self-consciousness in mind that includes autonomy and “allows one to discern the moral law through the Categorical Imperative.” Still, even if this rules out monkeys and great apes, it also rules out very young human beings.

  15. In fact, however, it is problematic. Some would argue that Machan has set the bar too high. Two reasons could be given: (1) a number of humans (most notably very young children) would, according to his criterion, not have rights since they cannot be expected to behave morally. (2) Machan has confused “having rights” with “having duties.” It is reasonable to say that in order to have duties to others, you must be capable of behaving morally, that is, of respecting the rights of others, but to have rights requires something less than this. That is why young children can have rights, but not duties. In any case, Machan’s criterion would not justify our being speciesists because recent evidence concerning the great apes shows that they are capable of behaving morally. I have in mind Koko, the gorilla who has been raised by humans (at the Gorilla Foundation in Woodside, CA, USA) and absorbed their ethical principles as well as having been taught sign language.

  16. I say “in some sense, could have done otherwise” because philosophers have analyzed “could have done otherwise” in different ways, some compatible with Determinism and some not; but it is generally accepted that freedom in some sense is required for moral responsibility.

  17. I see no reason, however, why a robot/machine cannot be trained to take into account the suffering of others in calculating how it will act in an ethical dilemma, without its having to be emotional itself.

  18. It is important to emphasize here that I am not necessarily agreeing with Kant that robots like Andrew, and animals, should not have moral standing/rights. I am just making the hypothetical claim that if we determine that they should not, there is still a good reason, because of indirect duties to human beings, to treat them respectfully.

  19. Strictly speaking, the three laws do not entail any permissions or obligations on humans. Nevertheless, in the absence of any additional moral principles concerning robot dealings with humans or vice versa, it is natural to take the Laws as licensing a permissive attitude towards human treatment of robots (see the second sketch following these notes).
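
To suggest how a prima facie duty (note 9) might be made computable, the following minimal Python sketch treats each duty as carrying a context-dependent strength and weighs an action by the duties it honors against those it overrides. This is my illustration, not a system described in the article: the duty names, strengths, and simple additive weighing are all assumptions, not MedEthEx’s actual method.

```python
from dataclasses import dataclass

@dataclass
class Duty:
    name: str
    strength: int  # illustrative, context-dependent weight; larger = stronger claim

def net_weight(satisfied: list[Duty], violated: list[Duty]) -> int:
    """Weigh an action: prima facie duties it honors minus those it overrides."""
    return sum(d.strength for d in satisfied) - sum(d.strength for d in violated)

def permissible(violated: list[Duty], absolute_duties: set[str]) -> bool:
    """An absolute duty, by contrast, is a hard constraint: an action that
    violates one is ruled out entirely, never merely outweighed."""
    return all(d.name not in absolute_duties for d in violated)

# Example: telling a painful truth satisfies honesty but violates non-injury.
honesty = Duty("honesty", 3)
non_injury = Duty("non-injury", 5)
print(net_weight([honesty], [non_injury]))  # -2: non-injury overrides honesty here
```

On this model, an “exception” to a prima facie duty is simply a case in which the opposing duties weigh more; an absolute duty admits no such case.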
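
The one-sidedness described in note 19 is easiest to see if the Laws are read computationally, as a lexical ordering in which a higher law always dominates a lower one. This Python sketch is my own rendering of that ordering, not Asimov’s or the article’s formalization; the Action fields are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool     # violates Law 1 (injures a human, or lets one come to harm)
    disobeys_order: bool  # violates Law 2 (disobeys a human's order)
    endangers_self: bool  # violates Law 3 (fails to protect its own existence)

def law_key(action: Action) -> tuple[bool, bool, bool]:
    # Lexicographic key: a Law 1 violation outweighs any number of Law 2
    # violations, which in turn outweigh any Law 3 violation.
    return (action.harms_human, action.disobeys_order, action.endangers_self)

candidates = [
    Action("obey the order, destroying itself", False, False, True),
    Action("refuse the order, staying intact", False, True, False),
]
# Law 2 lexically dominates Law 3, so the robot obeys and sacrifices itself.
print(min(candidates, key=law_key).description)
```

Every constraint here binds the robot alone; nothing in the ordering imposes any duty on humans toward robots, which is exactly the asymmetry the note describes.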

References

  • Anderson S (1995) Being morally responsible for an action versus acting responsibly or irresponsibly. J Philos Res XX:451–462

  • Anderson M, Anderson S, Armen C (2005) MedEthEx: towards a medical ethics advisor. In: Proceedings of the AAAI fall symposium on caring machines: AI in eldercare. AAAI, Menlo Park, CA

  • Asimov I (1976) The bicentennial man. In: The bicentennial man and other stories. Doubleday, New York, 1984

  • Asimov I, Silverberg R (1992) The positronic man. Doubleday, New York

  • Bentham J (1789) An introduction to the principles of morals and legislation, chapter 17. Burns J, Hart H (eds). Clarendon Press, Oxford, 1969

  • Columbus C (Director) (1999) Bicentennial Man [movie based on Asimov and Silverberg (1992), The positronic man]. Columbia Tristar Pictures Distributors International

  • Kant I (1780) Our duties to animals. In: Lectures on ethics, Infield L (trans). Harper & Row, New York, pp 239–241

  • Kant I (1785) The groundwork of the metaphysic of morals, Paton HJ (trans.). Barnes and Noble, New York, 1948

  • Machan T (1991) Do animals have rights? Public Affairs Q 5(2):163–173

  • McLaren BM (2003) Extensionally defining principles and cases in ethics: an AI model. Artif Intell 150:145–181

  • Mill JS (1863) Utilitarianism. Parker, Son and Bourn, London

  • Ross WD (1930) The right and the good. Oxford University Press, Oxford

  • Singer P (1975) All animals are equal. In: Animal liberation: a new ethics for our treatment of animals. New York Review/Random House, New York, pp 1–22

  • Tooley M (1972) Abortion and infanticide. Philos Public Affairs 2:47–66

  • Warren MA (1997) On the moral and legal status of abortion. In: LaFollette H (ed) Ethics in practice. Blackwell, Oxford

Acknowledgments

This material is based upon work supported in part by the National Science Foundation grant number IIS-0500133.

Author information

Correspondence to Susan Leigh Anderson.

Cite this article

Anderson, S.L. Asimov’s “three laws of robotics” and machine metaethics. AI & Soc 22, 477–493 (2008). https://doi.org/10.1007/s00146-007-0094-5
