
Should Moral Machines be Banned? A Commentary on van Wynsberghe and Robbins “Critiquing the Reasons for Making Artificial Moral Agents”

Commentary · Science and Engineering Ethics

Abstract

In a stimulating recent article for this journal (van Wynsberghe and Robbins in Sci Eng Ethics 25(3):719–735. https://doi.org/10.1007/s11948-018-0030-8, 2019), Aimee van Wynsberghe and Scott Robbins (hereafter, vW&R) mount a serious critique of a number of reasons advanced in favor of building artificial moral agents (AMAs). In light of their critique, vW&R make two recommendations: they advocate a moratorium on the commercialization of AMAs and suggest that the argumentative burden is now shifted onto the proponents of AMAs to come up with new reasons for building them. This commentary aims to explore the implications vW&R draw from their critique. In particular, it will raise objections to the moratorium argument and propose a presumptive case for commercializing AMAs.


Notes

  1. Given the context of vW&R’s moratorium quote, one may think that they intend the safety vs. morality considerations to constitute the entirety of their justifications for the moratorium – if one takes this view, then one may ignore this commentary’s discussion of unpredictability, possible harms, and moral deskilling as irrelevant.

  2. As one reviewer points out, worries about profits and commercialization need not apply to military AMAs whose production and use could be handled mostly outside the market.

  3. This complaint appears to assume that such AMAs will in fact lack the capacity for feelings or genuine care. While this assumption is contentious, it is accepted in this commentary for the sake of argument.

  4. It is, of course, a further question whether the hypothetical public would be right to so tightly connect morality and feelings.

  5. As vW&R recognize, this also raises the question of moral patiency – machines capable of emotions could plausibly be considered as themselves requiring moral consideration. Addressing the issues arising from this observation would need much more space than is available here. For the purposes of this commentary, only the moral agency of machines will be taken into account. It is important to note, though, that any case for or against AMAs will be incomplete without dealing with patiency as well.

  6. Consider: one may pursue the fictive relationship at the expense of other genuine goods, thereby degrading one’s own, and potentially others’, quality of life; one may suffer from disappointment, or worse, when the fictive nature of the relationship is laid bare; one may even be motivated to cause harm to the other person as a result of such disappointment.

  7. One could say that reading books on happiness is less likely to cause harm than thinking that safe machines are moral. But this is not obvious – there likely are people who stake their genuine lifelong happiness on pursuing philosophically naïve advice from self-help books.

  8. It should be noted that this no longer appears to be a discussion over whether AMAs could be marketed as “moral.” Even AMAs that could truly be labeled as such would be problematic in this context.

  9. One could claim that since such machines will not have moral reasoning skills, they should be subjected to more stringent standards than ordinary moral agents, such as adult humans. However, it is not obvious that the scenarios vW&R offer license this claim. After all, it is not at all obvious that ignition locks – which, presumably, lack moral reasoning skills – should be banned because they may be faced with moral dilemmas like the one vW&R describe.

  10. As one reviewer points out, a different objection, related to deskilling, could be that abdication of moral decision-making to machines threatens to fundamentally transform human nature. This, to be sure, is a concern worthy of discussion. While fully addressing it is beyond the scope of this paper, one reason to think that such abdication need not occur is that the machines likely to be touted as superior to humans in moral reasoning would be those that can make their reasoning explicit. It would still be incumbent upon human beings to examine and discuss such reasoning, deploying their own ethical reasoning in the process. Similarly, suppose that some humans (say, philosophy professors) are better at moral reasoning than, say, the general public. This does not relieve the general public from examining and disputing the professors’ claims about ethics. (Nor does the public simply accept such claims on faith, as many comment sections under popular articles written by philosophers demonstrate.) Secondly, even assuming that human beings will simply give up on making moral decisions because of the machines’ superiority, this transformation must be weighed against other potential benefits of improved moral decision-making to humanity in general. It is not obvious how this weighting would go – and, unfortunately, it is beyond the scope of this paper to address it in a fuller manner, though an attempt, restricted to one area of human morality, is given later.

  11. This is not to claim that Vallor endorses a legal ban on machines of this sort. Her own arguments do not explicitly conclude that this would be the right way to combat moral deskilling.

  12. It is assumed here, following Vallor, that caring is a skill, and therefore subject to normative evaluation, i.e. that it can be done better or worse. Some may reject this assumption.

  13. The term has been used recently by thinkers like Dan Moller (2019) and Antti Kauppinen (forthcoming) to describe the moral outlook so neatly captured by Talbot et al.

  14. There might be independent good reasons to not embrace absolutism about rights and autonomy in the debate over AMAs. As Ryan Tonkens (2009) has argued, a more resolutely Kantian approach to building AMAs could end up being self-defeating, in that producing Kantian AMAs would violate the categorical imperative. Perhaps one could take it as a reason to prefer moderate deontology over its more absolutist variant when discussing AMAs.

  15. According to Rawls, the freedom of occupational choice is a “primary good” – a good that every citizen needs to have in order to pursue their permissible conception of a good life (see Rawls 1993: 181–182; see also Mackay 2016). One could claim that being willing to create AMAs as one’s occupation necessitates having an impermissible conception of a good life – this claim, however, is doubtful, though it is of course possible that some manufacturers could knowingly misrepresent their machines as moral in order to increase profits. Still, this does not mean that all manufacture of AMAs should therefore be made illegal.

  16. This also shows the “protecting users” angle of vW&R’s argument to be decidedly paternalistic. The users of AMAs would be protected by having their choice space restricted in a coercive manner. On Gerald Dworkin’s (2020) classification, this could count as “hard paternalism” that permits interference in others’ choices even if they are informed about the conduct they want to engage in (if users are likely to be misled, there are policies other than a ban to remedy that). While paternalism of this sort is not without defenders (Scoccia 2008), there are powerful Millian and Kantian reasons to oppose it, laid out concisely and convincingly by, for instance, Jessica Flanigan (2012: 580–582). This remark is of course not intended to show that the concern with the willing users’ welfare is automatically disqualified as a reason for legally restricting commercial AMA production, but merely that it faces additional justificatory burdens that other reasons to ban it need not.

  17. One avenue for further exploration would be the potential to transform human nature that the introduction of AMAs could occasion. There are a number of deep philosophical questions involved here: how could such a transformation occur as a result of AMAs (and is it even plausible that it will)? What aspects of human nature are susceptible to change? Are these changes to be welcomed or bemoaned? If the latter, do other advantages of AMAs justify their production? If not, is their production to be banned? However, it seems that vW&R’s case against AMAs is not primarily based on such considerations, and hence addressing these questions falls beyond the scope of this commentary.


Acknowledgements

The author wishes to thank the reviewers for this journal for many useful comments and suggestions.

Author information

Correspondence to Bartek Chomanski.

About this article

Cite this article

Chomanski, B. Should Moral Machines be Banned? A Commentary on van Wynsberghe and Robbins “Critiquing the Reasons for Making Artificial Moral Agents”. Sci Eng Ethics 26, 3469–3481 (2020). https://doi.org/10.1007/s11948-020-00255-9
