Notes
In Moor’s framework, an autopilot (which in Wallach and Allen’s scheme exhibits some degree of functional morality, despite its low level of sensitivity to ethical values) would qualify as an implicit ethical agent.
For example, Luciano Floridi defines a moral agent in the context of information technology as “any interactive, autonomous, and adaptable transition system that can perform morally qualifiable actions” (italics Floridi’s). He also defines a system as “autonomous” when it can “change state without direct response to interaction, that is, it can perform internal transitions to change its state”. See [2].
Floridi and Sanders argue in several of their papers that AAs (and, for that matter, all “information entities”) have moral standing because they qualify as “moral patients” deserving moral consideration from moral agents, regardless of whether these entities can be full-blown moral agents. See, for example, [3].
Consider, for example, the insight of Hans Jonas, who, in response to the challenges posed by nuclear technology and its implications for the ecosystem, as well as for future generations of humans, asked whether we need a “new framework of ethics” to account for the “new objects of moral consideration” introduced by technological developments in the twentieth century. See [6].
References
Buechner, J., & Tavani, H. T. (2011). Trust and multi-agent systems: Applying the “diffuse, default model” of trust to experiments involving artificial agents. Ethics and Information Technology, 13(1), 39–53.
Floridi, L. (2008). Foundations of information ethics. In K. E. Himma & H. T. Tavani (Eds.), The handbook of information and computer ethics (pp. 3–23). Hoboken, NJ: Wiley.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29.
Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204.
Jonas, H. (1984). The imperative of responsibility: In search of an ethics for the technological age. Chicago, IL: University of Chicago Press.
McDermott, D. (2008). Why ethics is a high hurdle for AI. Paper presented at the 2008 North American Conference on Computing and Philosophy, Bloomington, IN, July 12. (Note that the quoted passages above from McDermott, which are included on p. 35 in Moral Machines, are cited from McDermott’s conference paper.)
Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.
Royal Academy of Engineering. (2009). Autonomous systems: Social, legal and ethical issues. London: Royal Academy of Engineering. http://www.raeng.org.uk/autonomoussystems.