Can We Develop Artificial Agents Capable of Making Good Moral Decisions?

Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009, xi + 273 pp., ISBN 978-0-19-537404-9

Notes

  1. In Moor’s framework, an autopilot (which in Wallach and Allen’s scheme exhibits some degree of functional morality, despite its low level of sensitivity to ethical values) would qualify as an implicit ethical agent.

  2. For example, Luciano Floridi defines a moral agent in the context of information technology as “any interactive, autonomous, and adaptable transition system that can perform morally qualifiable actions” [italics Floridi’s]. And he defines a system as “autonomous” when it can “change state without direct response to interaction, that is, it can perform internal transitions to change its state”. See [2].

  3. Floridi and Sanders argue in several of their papers that AAs (and, for that matter, all “information entities”) have moral standing because they qualify as “moral patients” that deserve moral consideration from moral agents, regardless of whether or not these entities can be full-blown moral agents. See, for example [3].

  4. Consider, for example, the insight of Hans Jonas, who, in response to challenges affecting nuclear technology and its implications for the ecosystem, as well as for future generations of humans, asked whether we need a “new framework of ethics” to account for “new objects of moral consideration” that were introduced by technological developments in the twentieth century. See [6].

References

  1. Buechner, J., & Tavani, H. T. (2011). Trust and multi-agent systems: Applying the “diffuse, default model” of trust to experiments involving artificial agents. Ethics and Information Technology, 13(1), 39–53.

  2. Floridi, L. (2008). Foundations of information ethics. In K. E. Himma & H. T. Tavani (Eds.), The handbook of information and computer ethics (pp. 3–23). Hoboken, NJ: Wiley.

  3. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.

  4. Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29.

  5. Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204.

  6. Jonas, H. (1984). The imperative of responsibility: In search of an ethics for the technological age. Chicago, IL: University of Chicago Press.

  7. McDermott, D. (2008). Why ethics is a high hurdle for AI. Paper presented at the 2008 North American Conference on Computing and Philosophy, Bloomington, IN, July 12. (Note that the quoted passages above from McDermott, which are included on p. 35 in Moral Machines, are cited from McDermott’s conference paper.)

  8. Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.

  9. Royal Academy of Engineering. (2009). Autonomous systems: Social, legal and ethical issues. London. http://www.raeng.org.uk/autonomoussystems.

Author information

Correspondence to Herman T. Tavani.

Cite this article

Tavani, H.T. Can We Develop Artificial Agents Capable of Making Good Moral Decisions? Minds & Machines 21, 465–474 (2011). https://doi.org/10.1007/s11023-011-9249-8
