Out of character: on the creation of virtuous machines

Original Paper

Ethics and Information Technology

Abstract

The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken into account, prior to the implementation and development stages. Here I examine whether the creation of virtuous autonomous machines is morally permitted by the central tenets of virtue ethics. It is argued that the creation of such machines violates certain tenets of virtue ethics, and hence that the creation and use of those machines is impermissible. One upshot of this is that, although virtue ethics may have a role to play in certain near-term Machine Ethics projects (e.g. designing systems that are sensitive to ethical considerations), machine ethicists need to look elsewhere for a moral framework to implement into their autonomous artificial moral agents, Wallach and Allen’s claims notwithstanding.

Notes

  1. In addition to making sure that these robots are safe for human use and behave in an ethically sustainable manner, Machine Ethics is also concerned with other important issues, including the improvement of our understanding of (human) ethics through trying to implement moral decision-making faculties into robots.

  2. There are some noteworthy exceptions here. See for example Singer (2009) and Krishnan (2009). Also, the recent establishment of the International Committee for Robot Arms Control stems from a growing concern surrounding the development of autonomous robots used in warfare and other militarized settings.

  3. Elsewhere I have argued for this claim at length. Tonkens (2009).

  4. Wallach and Allen are not alone in making this claim. See for example Lin et al. (2008).

  5. Wallach and Allen (2009, 119) explicitly set the disagreements surrounding the intricacies of virtue ethics to one side, in favour of attending to “the computational tractability of virtue ethics.”

  6. It is important to note that Wallach and Allen (2009) do not argue that virtue ethics is the best or the only promising source for developing moral machines, and they consider other approaches as well. Moreover, it is worth noting that these authors also have some reservations about the development and use of certain kinds of robots for certain kinds of purposes. See especially their Chapter 3 and Chapter 12.

  7. Of course, this remains to be seen. Even if creating virtuous AMAs is consistent with virtue ethics, actually creating virtuous machines may prove to be quite difficult in practice.

  8. That the creation of these kinds of sophisticated autonomous machines is one of the goals of Machine Ethics is obvious from a survey of the current literature. See for example Asaro (2008), Sparrow (2007), and Wallach and Allen (2009).

  9. Wallach and Allen (2009, 26) adopt a similar understanding of machine autonomy, as does Sparrow (2007).

  10. There are other sorts of moral machines that may be developed, ones that do not have the ability to act (autonomously) out in the real world. For example, moral machines may be developed that could serve as ethical advisors to humans, but which do not themselves act in any robust sense beyond giving such advice. The worries presented here about the use of a virtue-based moral framework in the discipline of Machine Ethics do not necessarily apply to these and other similar moral machines. Thus, even if the implementation of virtue ethics into certain kinds of explicit and full AMAs turns out to be problematic, that moral framework may still have a role to play in more near-term Machine Ethics projects (e.g. building ethical sensitivity into machines). I am grateful to an anonymous reviewer for raising this point. At the same time, if we will be building up from such near-term platforms in order to develop explicit and full AMAs, then it may prove to be very helpful and prudent to start thinking about this consistency constraint presently, even when the development of explicit and full AMAs is only on the distant horizon.

  11. Much of this section has been adapted from Tonkens (2009).

  12. Some of the moral frameworks that have been examined with respect to their computability include Rossian prima facie duties (Anderson and Anderson 2007b), Utilitarianism (Grau 2006), Kantian moral theory (e.g. Powers 2006), and virtue ethics (Wallach and Allen 2009).

  13. For a much more thorough example, see Tonkens (2009).

  14. Whether we can get around this problem by simply not programming this sort of rule into combat robots is an open question. On Asimovian grounds, it would be ethically inconsistent to do so.

  15. There may not be much difference for a machine between following rules designed for some set of circumstances and modelling what a virtuous agent would do in those circumstances. Indeed, machines may be well suited for modelling what a perfectly virtuous agent would do, perhaps even more so than humans. Hursthouse’s claim seems especially pertinent for Machine Ethics, since we can program machines in ways that are much more codified (explicitly action guiding) than virtue ethics is usually thought capable of offering.

  16. This appeal to naturalism has drawn its fair share of criticism. Moreover, it is unclear how much the project of creating artificial moral agents is conducive to this way of putting things. The important aspect of Hursthouse-style naturalism for our purposes is the idea that flourishing supervenes on an entity’s purpose, presumably regardless of whether this purpose originates naturally or is manufactured in some sense.

  17. Considering that Aristotle thought that the most noble death possible was to die courageously in battle, he may have been an advocate of (just) warfare. But, given that combat robots may not be able to be courageous, and that replacing human soldiers with robots may lessen the former’s opportunity to display courage in battle (since they would be relegated to remote positions or not appear on the battlefield at all) and thus to die courageously, our drive towards automated warfare may have Aristotle rotating in his grave.

  18. I return to this issue later on. Because human moral agents are autonomous, making them do things against their will is typically morally problematic. Insofar as AMAs would be sufficiently autonomous, assigning them duties against their will or without their consent may be similarly morally problematic. We could get around this problem by simply not making AMAs autonomous, although this flies in the face of the goal of Machine Ethics, as I understand it. Thus, it seems as though the project of developing autonomous AMAs may demand that we take their autonomy (interests, rights) into account in the ways that we treat them and the roles we assign to them.

  19. Part of this may depend on our understanding of “machine”. To be sure, machines have hitherto been understood as being tools (things, objects) manufactured and used by humans for achieving human goals. If we want to continue to accept this traditional definition of “machine” unconditionally, then it seems to follow that the creation of moral machines would be a matter of programming rules of our choosing into machines, rules that would need only to satisfy the requirements of enabling the machine (tool, object) to achieve its purpose of satisfying the ends of its human designer and user. And yet, the traditional definition of “machine” is being challenged by recent advances in artificial intelligence and robotics research, and may warrant significant revision or expansion—especially given the emergence of machines that are sufficiently sophisticated so as to possess (some of) the qualities central to moral agency. To the extent that these new kinds of machines would be moral agents, they would no longer just be tools (things, objects) used for human ends, but would also be independent moral entities in their own right. It is here that considerations of their flourishing arise.

  20. Some virtue ethicists may argue that a virtuous agent could not act viciously, since through doing so she would be revealed to be a non-virtuous agent. Yet, virtuous agents certainly have the ability to act viciously; they simply do not do so.

  21. Tonkens (2009).

  22. See Guarini and Bello (2011). Arkin (2009a, 6) has argued that it may be impossible to eliminate entirely an AMA’s ability to act unethically. However, he also argues that one benefit of using AMAs in military combat is that those robots could be more ethical than human beings.

  23. There are many other virtues that are relevant in the context of Machine Ethics (e.g. practical wisdom, modesty, integrity, et cetera), discussion of which is beyond the scope of this paper.

  24. Calverley (2008) argues for a similar conclusion. These rights need not be comparable to human rights in all respects. Moreover, although we would need to do this in order to be ethically consistent, I am inclined to think that this is a point in machine development that we should be hesitant to arrive at.

  25. Moral agents and moral patients both have moral worth, and both moral agency and moral patienthood put demands on the actions of moral agents; moral agents have certain obligations towards moral patients, who, although they do not themselves have any moral obligations, nevertheless deserve to be treated with respect as such. The paradigm example here is the ‘normal’ adult human (moral agent) and the ‘normal’ human infant (moral patient). If machines reach a certain level of sophistication and autonomy, then we may no longer be able to justifiably ignore their moral patienthood, which, by the received definition of the term, necessarily accompanies moral agency. Denying them moral respect just because they are not human moral agents is anthropocentric and inconsistent.

  26. What it means to be a virtuous roboticist or engineer is an important issue, detailed discussion of which cannot be offered here. It is worth noting that the virtues associated with being a virtuous engineer may be role-specific, and do not necessarily directly correspond (in kind, in number) to the virtues associated with being a virtuous human. Yet, insofar as all engineers are human, it may be the case that (a) role-specific virtues ought not to conflict with being (able to be) a virtuous human being, and (b) no human vices should be considered virtues associated with that role (Oakley and Cocking 2001). For example, even if considerations of justice are outside the scope of what it means to be a virtuous engineer, the fact that the virtues associated with justice are human virtues demands that the behaviour of engineers not be inimical to them.

  27. See Hursthouse (1997, 240). For example, although women may have a right to abortion (founded on their rights to security of the person and bodily integrity), there may be cases where exercising that right is morally wrong, i.e. when doing so is callous or irresponsible, and violates specific virtues (e.g. courage, humility, self-confidence).

  28. Certain low-level sex robots are currently on the market (see for example http://www.truecompanion.com). If Roxxxy’s successors are ever developed to the point where they could wilfully say “No” to any given proposed sex act, certain interesting issues would undoubtedly arise (for instance, what the moral and legal standing of robotic rape is). However, not giving those (otherwise autonomous) robots sufficient autonomy to say “No” may be to violate their rights, resulting in nothing less than forced concubinage.

  29. This problem does not arise until after the autonomous machine has been given its autonomy (moral agency), since, prior to this, the machine would have no autonomy (moral agency) that could be violated or disrespected.

  30. Whether we can overcome this worry by giving the robot a say in deciding its purpose is an open question, but doing so would certainly serve to respect their autonomy and any accompanying rights they may have. Presumably, however, this is not something that developers of autonomous AMAs have any intention to do.

  31. See McMahan (2009).

  32. Our understanding of the ethics of warfare may need to be re-evaluated; these regulations were drafted in order to protect the rights of human beings and to outline the criteria for just human warfare rather than machine warfare. See Asaro (2008) for a relevant discussion. However, we also need to ask whether we are allowed to amend the Laws of War and Rules of Engagement in order to make room for autonomous military AMAs, and whether the tenets of just war theory—the normative theory that these machines would be guided by—allow for the development and use of these sorts of machines in the first place.

  33. This may turn out to be a false dichotomy; there is no indication at present that it is impossible to develop moral machines that meet the ethical consistency constraint described herein.

References

  • Allen, C., Wallach, W., & Smit, I. (2006). Why machine ethics? IEEE Intelligent Systems, 21(4), 12–17.

  • Anderson, S. (2008). Asimov’s ‘three laws of robotics’ and machine metaethics. AI & Society, 22(4), 477–493.

  • Anderson, M., & Anderson, S. (2007a). The status of machine ethics: A report from the AAAI symposium. Minds and Machines, 17(1), 1–10.

  • Anderson, M., & Anderson, S. (2007b). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15–26.

  • Arkin, R. (2009a). Ethical robots in warfare. IEEE Technology and Society Magazine, Spring, 30–33.

  • Arkin, R. (2009b). Governing lethal behavior in autonomous robots. Dordrecht: Chapman & Hall.

  • Asaro, P. (2008). How just could a robot war be? In P. Brey, A. Briggle, & K. Waelbers (Eds.), Current issues in computing and philosophy (pp. 50–64). Amsterdam: IOS Press.

  • Calverley, D. J. (2008). Imagining a non-biological machine as a legal person. AI & Society, 22(4), 523–537.

  • Foot, P. (1977). Euthanasia. Philosophy & Public Affairs, 6(2), 85–112.

  • Grau, C. (2006). There is no ‘I’ in ‘Robot’: Robots and utilitarianism. IEEE Intelligent Systems, 21(4), 52–55.

  • Guarini, M., & Bello, P. (2011). Robotic warfare: Some challenges in moving from non-civilian to civilian theaters. In P. Lin, G. Bekey, & K. Abney (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge: MIT Press.

  • Hursthouse, R. (1997). Virtue theory and abortion. In D. Statman (Ed.), Virtue ethics: A critical reader (pp. 227–244). Washington: Georgetown University Press.

  • Hursthouse, R. (1999). On virtue ethics. Oxford: Oxford University Press.

  • Krishnan, A. (2009). Killer robots: The legality and ethicality of autonomous weapons. Farnham: Ashgate.

  • Lin, P., Bekey, G., & Abney, K. (2008). Report on autonomous military robotics: Risk, ethics, and design. Available at http://ethics.calpoly.edu/ONR_report.pdf. Retrieved February 1, 2010.

  • McMahan, J. (2009). Killing in war. Oxford: Clarendon Press.

  • Moor, J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.

  • Oakley, J., & Cocking, D. (2001). Virtue ethics and professional roles. Cambridge: Cambridge University Press.

  • Powers, T. M. (2006). Prospects for a Kantian machine. IEEE Intelligent Systems, 21(4), 46–51.

  • Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the 21st century. New York: Penguin.

  • Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.

  • Swanton, C. (2003). Virtue ethics: A pluralistic view. New York: Oxford University Press.

  • Tonkens, R. (2009). A challenge for machine ethics. Minds and Machines, 19(3), 421–438.

  • Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

Acknowledgments

Thank you to the audience in Fredericton and the anonymous reviewers of this journal for their helpful comments on earlier versions of this paper. This paper has also benefited from enlightening conversations with Ron Arkin, Verena Gottschling, Marcello Guarini, Gloria Jones-Nibetta, Patrick Lin, Steve Torrance, John Sullins, and Andreas Traut, to whom I am very grateful.

Author information

Correspondence to Ryan Tonkens.

Cite this article

Tonkens, R. Out of character: on the creation of virtuous machines. Ethics Inf Technol 14, 137–149 (2012). https://doi.org/10.1007/s10676-012-9290-1
