Responsibility for Killer Robots


Abstract

Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps and then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns the specific case of autonomous weapons systems, I take steps towards vindicating the more general idea that superiors can be morally responsible in virtue of being in command.


Notes

  1. To be clear, I expect that only some, not all, future weapons systems will be autonomous. I assume that AWS decide at least in a thin sense of “decide,” in which a driverless car, too, decides to stop when a light is about to turn red.

  2. In other words, I concentrate on the control condition for moral responsibility and set aside the epistemic condition (cf. Fischer and Ravizza 1998, p. 12).

  3. This claim pertains only to cases in which a commander has an actual choice, at least, between either deploying an AWS or not deploying it, such that the former but not the latter option carries risks of harm.

  4. This case should not be confused with a case due to Sparrow (2007), which I discuss towards the end of the paper.

  5. Some advocacy groups call it an “accountability gap.”

  6. Responsibility may lie with developers (Lokhorst and van den Hoven 2011), politicians (Steinhoff 2013), or the AWS itself (Hellström 2012; Burri 2017, p. 73). Responsibility might be shared (Schulzke 2013; Robillard 2018), or “a new kind of ... responsibility” might be required (Pagallo 2011, p. 353).

  7. Santoni de Sio and van den Hoven (2018) offer an account of meaningful human control, to which my account is an alternative, as I explain below. Lin et al. (2008) as well as Roff (2013, p. 357) focus on legal instead of moral responsibility and consider the possibility that a commander is responsible only as one among many options (next to, for example, the responsibility of developers). They do not aim to offer an argument for or against a commander’s responsibility, nor do they develop an account of why a commander would (or would not) be responsible. Nyholm (2017), similarly to my approach, suggests investigating responsibility by drawing on “hierarchical models of collaborative agency, where some agents within the collaborations are under other agents’ supervision and authority.” But Nyholm (2017, p. 1203) admits that “a fully worked-out theory is not offered” in his paper.

  8. By contrast, Hellström (2012) rests his explanation of a commander’s responsibility on the concept of autonomous power, which “denotes the amount and level of actions, interactions and decisions the considered artifact is capable of performing on its own.” Unlike control, autonomous power plays no role in existing discussions of moral or legal responsibility. Yet, the account that I propose here is compatible with that of Hellström (2012) and can be seen as spelling out an alternative way of understanding the idea of autonomous power.

  9. Shoemaker (2011, 2015), like others, distinguishes these (attributability, answerability, accountability) as different forms of responsibility. I do not take an official view as to whether there are different kinds or forms of responsibility or whether, instead, there is only one kind of responsibility that comes in different degrees. In order to remain neutral on this issue while nevertheless incorporating Shoemaker’s distinction in some form, I opt for the language of “aspects” of responsibility.

  10. We can understand “agency” in one of two ways. First, we can understand “agency” as a relation between an agent and an action, representing who did what. This is intentional agency. Second, we can understand “agency” as a predicate, representing the property of being an agent. Usages of “agency” in this predicative sense often require more than standing in the agency relation.

  11. Although some argue that some group agents might be responsible and that responsibility gaps might thereby be avoided (Pettit 2007; List and Pettit 2011, chap. 7; Duijf 2018).

  12. Robillard (2018, p. 707) observes that this assumption is widely shared, if only tacitly. In fact, a popular textbook on artificial intelligence (AI) defines AI as “the study of agents” (Russell and Norvig 2010, p. viii).

  13. For example, Sparrow (2016, p. 108) writes that “even if the machine is not a full moral agent, it is tempting to think that it might be an ‘artificial agent’ with sufficient agency, or a simulacrum of such, to problematize the ‘transmission’ of [the human operator’s] intention.”

  14. However, this understanding of “responsibility gap” seems to over-generate because it picks out actions by animals, which are another kind of merely minimal agents, as leading to responsibility gaps. This raises the question of why, if at all, responsibility gaps are morally problematic. I assume, for the sake of the argument, that responsibility gaps are morally problematic at least in the case of AWS.

  15. I want to register my hesitation in thinking that responsibility gaps are problematic as such. See note 14.

  16. For how my approach differs from these, see notes 7 and 8.

  17. I state only a sufficient condition for control because the necessary part is not needed for my argument.

  18. On the standard semantics, the first conditional is already true if a in fact gives an order and x occurs.

  19. As is standard with applications of such semantics for counterfactuals, the question of how “all relevantly similar situations” is defined must be set aside.

  20. This is because robust tracking control does not include a condition referring to the content of the order or to the descriptions of the outcomes, let alone the relation between the two.
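
Notes 17–20 discuss the counterfactual structure of the control condition without reproducing the paper’s full definition, which this preview omits. As a minimal sketch of the Nozick-style tracking conditionals at issue (the symbols a, o, and x are illustrative labels of my own, not the paper’s notation), the sufficient condition for control might be rendered roughly as follows:

\documentclass{article}
\usepackage{amsmath,amssymb}
% A hedged reconstruction of the tracking conditionals gestured at in notes 17-20.
% Assumptions: a is the commander, o the proposition that a gives the order,
% x the proposition that the outcome occurs; \cf is the counterfactual conditional.
\newcommand{\cf}{\mathbin{\Box\!\!\rightarrow}}
\begin{document}
An agent $a$ has control over an outcome $x$ if
\begin{align*}
  &\text{(i)}  && o \cf x           && \text{if $a$ were to give the order, $x$ would occur,}\\
  &\text{(ii)} && \neg o \cf \neg x && \text{if $a$ were not to give the order, $x$ would not occur.}
\end{align*}
Per note 18, (i) is already true on the standard semantics whenever $o$ and $x$
both actually obtain; per note 19, the counterfactuals are evaluated over ``all
relevantly similar situations,'' which is likewise left undefined here.
\end{document}

This rendering also makes note 20 visible: neither conditional refers to the content of the order or to the description of the outcome, only to whether the order is given and whether the outcome occurs.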

  21. Nevertheless, there are broad similarities between the account of Santoni de Sio and van den Hoven and my account. First, both accounts are concerned with the same issue: the relation that partly grounds agents’ moral responsibility. Second, both accounts formulate control as tracking, following Nozick (1981, pp. 172–85).

  22. Relatedly, the account of Santoni de Sio and van den Hoven is modelled after what Fischer and Ravizza (1998) call “guidance control,” whereas robust tracking control is modelled after what Fischer and Ravizza call “regulative control.”

  23. Fischer and Ravizza (1998) argue that instead of the relatively demanding notion of regulative control, on which robust tracking control is modelled, only the weaker notion of guidance control is necessary for responsibility.

  24. This sets aside the so-called overdetermination problem to which definitions in terms of counterfactual conditionals are notoriously susceptible.

  25. Fischer and Ravizza (1998) distinguish between guidance control and regulative control and argue that only guidance control is necessary for moral responsibility. When “control” is understood as guidance control, the commander seems to have control over outcome A. See also Santoni de Sio and van den Hoven (2018).

  26. They might argue that responsibility requires rational control. But they reject that responsibility requires volitional control, which is the notion used in the responsibility gap argument.

  27. Insofar as a proponent of a tracing theory distinguishes between direct responsibility (for things directly under an agent’s control) and derivative responsibility (for things traceable to things under an agent’s control), a version of the responsibility gap argument returns: Commanders are only derivatively but not directly responsible for what an AWS does. But if this is a problem at all, it has little to do with AWS. On a tracing theory, all responsibility is derivative responsibility. I am grateful to an anonymous referee for pressing me to clarify this point.

  28. For the purposes of this paper, I do not side with the proponents of this view. Instead, I develop an independent response that is compatible with much of what internalists contend (e.g. that investigations looking for the specific objects of responsibility are somewhat irrelevant) although my response also denies a central internalist claim (that agents are only responsible for things such as their willings, attitudes, or their quality of will).

  29. Internalists do not always accept that responsibility requires control.

  30. It depends on the semantics of such responsibility statements.

  31. A mission can be successful (its objective is achieved), unsuccessful (something results that contradicts the mission’s objective), or neither successful nor unsuccessful (in all other cases, such as the mission being aborted).

  32. Suppose the killer in Random Killing hopes to kill victim 2 but victim 1 is killed instead. The fact that the outcome contradicts the killer’s intention is not a reason against their responsibility.

  33. Although omitted in their description, the AWS is deployed in each of these.

  34. The claim is not that how things turn out makes a difference to an agent’s responsibility. In this respect my claim differs importantly from claims defended by proponents of resultant moral luck.

  35. Likewise, Sparrow (2007, p. 70) argues that the mere unpredictability of an AWS is not sufficient reason to conclude that the commander is not responsible. He writes: “If the autonomy of the weapon merely consists in the fact that its actions cannot always be reliably predicted … then [e]mploying AWS … is like using long-range artillery. … [R]esponsibility for the decision to fire remains with the commanding officer.”


Acknowledgements

I have benefitted from presentations and discussions of this paper at the London School of Economics, the Australian National University, the Graduate Reading Retreat of the Stockholm Centre for the Ethics of War and Peace, the Future of Just War conference in Monterey, the Humboldt University Berlin, the University of Sheffield, and the Frankfurt School of Finance & Management. I am also grateful for conversations with and/or comments by Gabriel Wollner, Christian List, Susanne Burri, Helen Frowe, Ying Shi, Seth Lazar, Matthew Adams, Sebastian Köhler, and Christine Tiefensee, as well as two anonymous referees for this journal.

Author information

Correspondence to Johannes Himmelreich.


Cite this article

Himmelreich, J. Responsibility for Killer Robots. Ethic Theory Moral Prac 22, 731–747 (2019). https://doi.org/10.1007/s10677-019-10007-9

