1 Introduction

The fast development of military robotics raises urgent ethical concerns that need to be addressed not only by philosophers but also by policy makers, robot designers, robotics companies, military organizations, and other parties involved. Philosophers of technology can bring a distinct perspective to the discussion, for example, by drawing on reflections on the nature and role of technologies. As philosophers of technology, we must not hesitate to ask: Is ethics of military robotics about robots?

In the media and popular literature, there is a tendency to talk about military robotics in terms of "killer robots" (see for example Card 2007): single machines sent out into the battlefield as a kind of replacement for soldiers in the service of a higher military and/or political aim. In the following pages, I hope to defuse this image by discussing various ethical questions that may be raised in response to developments in military robotics. I will defend three claims, which will support my conclusion that ethics of military robotics is not (necessarily) about robots:

  1. Robots—including military robots—are not mere means to ends, but shape these ends.

  2. Military robotics concerns not only autonomous "killer robots" but should be analyzed at various levels, including (larger) systems, networks, and swarms.

  3. No one develops military robots (or everyone develops military robots).

2 Are Robots Mere Means?

When it comes to military activities, we must ask at least the following questions regarding their ethical justification:

  1. If military action is considered as a means to reach a (non-military) aim, is that aim justified?

  2. Are the means justified? This question breaks down into further questions:

     a. Can military action be justified at all?

     b. If it can be justified in principle, is it justified in this particular case?

     c. Are these (military) means the best (most effective) way to reach the aim?

     d. Are these means proportionate to the aim?

     e. Can this particular military conduct be justified?

The latter questions are typical questions of Just War Theory, which concerns the question whether one should start a war (Jus ad bellum, my question 2b, which involves 2c and 2d) and the question concerning limitations to war conduct (Jus in bello, my question 2e). Just War Theory questions are often discussed in ethics of military robotics, but usually less attention is paid to Jus ad bellum questions, and virtually no attention is paid to questions 1 and 2a. For example, Arkin gives an overview of the principles of Just War Theory, but says about the question regarding the justification of war that "humanity has long debated the morality of warfare" and that "this has not deterred the persistent conduct of lethal conflict over millennia" (Arkin 2008, p. 121). Like many other researchers in the field, he seems to assume that since we cannot avoid war anyway, we had better focus on the question of how to avoid bad conduct in war. Therefore, war cruelties, which Arkin calls the "failings of man," are the ones to focus on. Ethicists then need to provide principles in order to ensure that these failings "need not be replicated in autonomous battlefield robots" (p. 121). Similarly, Sparrow limits his inquiry to the question "who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime" (Sparrow 2007, p. 62). In other words, this is again about Jus in bello. Neither political and policy questions nor the question regarding the justification of war in general enters into the discussion. And while Asaro discusses Jus ad bellum questions in more detail than many other researchers (he considers the role technology plays in propaganda and discusses the question "whether it is desirable to make it practically easier to go to war or not" (Asaro 2008, pp. 8–9)), the more general questions about the justification of the aims of war and of war in general remain in the background.

However, if we put military robotics into context, we must consider the larger questions about politics and policy, powers and doctrines, and also concern ourselves with the justification of war. The underlying reason why ethicists of robotics tend to put these questions aside as irrelevant has to do with the way they conceive of technology's nature and role: technology—including military technology—is assumed to be a mere means. A discussion of the ends is considered a different discussion, belonging to the domain of politics, policy, and doctrine. But is this limitation justified?

First, military robots are not used in a political vacuum: they often embody specific political and military doctrines. For example, the use of drones—unmanned aerial vehicles that can be armed and are used by the USA in both observational and offensive operations—fits within the doctrine of "preemptive" or "preventive" war. This doctrine has a long history; it became part of official US policy during the Bush administration and is still (less officially) applied in Afghanistan. It often involves so-called preemptive strikes: rather than deploying a large military apparatus, forces conduct single raids in order to "eliminate" particular targets (particular people and/or installations). Since such a doctrine is directly connected with the use of robots (i.e., drones), a discussion of the means (the robot) cannot do without a discussion of the end (preemptive war). Robots, then, may not only make it easier to start a war, as Asaro rightly argues, but may actually change our military and political doctrines and activities.

Moreover, the question regarding the justification of war in general deserves more attention—not only from pacifists. While many people today consider war unavoidable, what we mean by "war" today is a particular form of mass violence that fits particular political ideologies (for instance imperialism and state nationalism) and is neither inevitable nor a fact of human nature. While aggression and violence might well be a perennial part of the human condition, the particular form of mass violence we know today under the description of "war" is not. What we usually have in mind when we think of war is the particular form of mass violence that marked nineteenth- and twentieth-century international conflicts. But other forms of mass violence may emerge. Hence, the meaning of "war" should not simply be taken for granted or assumed to possess an enduring reality divorced from historical experience and linguistic usage. Therefore, ethicists of military robotics should not avoid the general question about the justification of war and its meaning.

However, my point here is not to defend a particular view on the justification of war or of particular political and military doctrines, but to show how means and ends are intrinsically linked and to argue that therefore we should not dismiss a discussion of ends when analyzing and evaluating military robotics.

Thus, means and ends are not only linked in the sense that a means is, by definition, a means to some end; the means (technology) also shapes and changes the end(s). This is particularly striking when we consider the history of military technology and its relation to political and military ends. What gunpowder has done for colonialism, or what nuclear technology has done to the concept of twentieth-century warfare and international politics, cannot be captured by saying that they were mere means to pre-fixed ends. Gunpowder made it possible to conquer distant lands (and oppress their people, who did not have that technology), and atomic bombs created particular power constellations at the international level (e.g., during the "Cold War"). In this way, these technologies were not mere means; they co-shaped imperial and nationalist policies. Technology is not a neutral means that can be used for any end and discussed in separation from that end; technologies as means also influence and shape their ends.

This is not only a lesson taught by philosophers of technology (from Heidegger to contemporary authors like VerbeekFootnote 1) or a far-fetched idea of purely academic interest. As most analysts of military affairs will acknowledge, it is also directly relevant to contemporary and near-future military and political practice. For example, P.W. Singer, familiar with Pentagon research, argues in Wired for War and related work that military robots will change warfare forever: since warfare can be conducted from an office desktop, moral and psychological barriers to killing will be more easily overcome. In an article in The New Atlantis, he refers to Grossman's psychological research (Grossman 1995) when saying that conducting war in the manner of a video game might "make soldiers too calm, too unaffected by killing" (Singer 2009a, b, c, p. 44). This is not science fiction; today, many armed, remote-controlled unmanned vehicles are already in use, for example by the US military.

The point is not that these technologies cause us to change our ends—military or otherwise. But they are part of the conditions of possibility that enable us to do new things and thereby encourage us to re-shape our practices. The atomic bomb changed twentieth-century international politics; it is likely that military robotics will likewise create conditions under which we will adopt other political and military ends. Therefore, it is wise to include a discussion of ends—and their relation to the means—in reflections on the ethics of military robotics.

3 Killer Robots Versus Networks and Swarms

A further reason why ethics of military robotics should not only concern itself with “robots” has to do with the precise nature of the most recent technologies under development. Considering this issue will also allow me to speculate about how military practice (considered as a means–ends continuum) might change in the near future, although this is not my main purpose here.

In the popular imagination, robotics is often about single, autonomous robots that can do things on their own. Applied to military robotics, the assumption is that this field is about single, autonomous robots that do military things on their own—in the absence of direct human control. This view is supported by reports about dronesFootnote 2 or military robots with legs,Footnote 3 which appear as lone "killer machines" (Card 2007) somewhat similar in kind to the lone human or cyborg fighting heroes we know from Hollywood films. Military robots are Terminators. This popular image is often reflected in academic literature too, for example when and insofar as it is mainly concerned with autonomous robots and their (un)ethical behavior (Arkin 2008), the idea of "killer robots" (Sparrow 2007), the design of autonomous weapon systems (Sparrow 2009), and the ethics of autonomous technology (Asaro 2008).

However, this image of military robotics is misguided in two ways. First, to my knowledge, today's military robots operate under direct human control (drones are remote controlled and follow human commands) and will probably continue to do so in the near future, given the current state of the art in artificial intelligence and robotics. Second, it is misleading as an image of the military's robotic future, since recent technological–organizational developments suggest a different picture of military robotics and encourage us to move beyond thinking in terms of a single autonomous robotic artifact (or a mere collection of such artifacts). In military technological thinking and research, atomistic ontologies are being replaced by thinking in terms of systems, networks, and swarms. In a network, (military) activity is not a matter of single, atomistic agents exercising their agency in single actions. Instead, agency (if this is still the adequate term at all) is distributed, collective, and emergent. It cannot be reduced to the level of the parts (systems metaphor), nodes (network metaphor), or—why not—"bees" (swarms metaphor). None of the parts, nodes, or bees controls the action (in this sense they are not agents); rather, the system, network, or swarm as a whole acts. To the extent that this kind of conception may be implemented in practice, a philosophy and ethics of robotics should take it seriously and modify its methodology: it should not only concern itself with "robots" but analyze and evaluate the activity of the system, network, or swarm as a whole.

In an article in Joint Force Quarterly, Singer makes an interesting distinction between two conceptions of naval warfare, which illustrates the conceptual shift relevant for this discussion. One conception is captured by the term “motherships”: decision power is centralized (the mothership), while military action (i.e., fire power) is dispersed or spread out. The alternative conception is “swarming”: a swarm consists of “independent” parts—there is no central controller—but like a swarm of bees or birds, all nodes are linked to every other node, and in this way, the whole can act in unison; it is self-organizing. (Singer 2009b, pp. 107–110) Applied to robotics, implementing this would mean that “each system would be given a few operating orders and let loose, each robot acting on its own, but also in collaboration with all the others” (p. 109). The disadvantage (at least, from a centralization point of view) is lack of control:

“Swarms may not be predictable to the enemy, but neither are they exactly controllable or predictable for the side using them, which can lead to unexpected results: (…) a swarm takes action on its own (…)”. (Singer 2009b, p. 110)
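To make the contrast with the "mothership" model concrete, consider a minimal sketch of how swarming is typically modeled in robotics and artificial life, in the spirit of boids-style flocking. This example is mine, not Singer's, and all names and parameters in it are illustrative assumptions: each unit follows a few local rules and reacts only to nearby units, no node holds central control, and any coordinated behavior emerges at the level of the whole.

```python
# Hypothetical sketch of decentralized "swarming" (not from the paper).
# Each unit only "sees" its neighbours and applies a local rule;
# there is no mothership issuing commands.

import random

NEIGHBOUR_RADIUS = 5.0   # assumed sensing range of a unit
STEP = 0.1               # assumed strength of adjustment per tick

class Unit:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def neighbours(self, swarm):
        # Only units within the sensing radius are visible to this unit.
        return [u for u in swarm
                if u is not self
                and (u.x - self.x) ** 2 + (u.y - self.y) ** 2 < NEIGHBOUR_RADIUS ** 2]

    def step(self, swarm):
        near = self.neighbours(swarm)
        if not near:
            return
        # Local rule: drift toward the average position of visible neighbours
        # (cohesion). A fuller model would add separation and alignment rules.
        avg_x = sum(u.x for u in near) / len(near)
        avg_y = sum(u.y for u in near) / len(near)
        self.x += STEP * (avg_x - self.x)
        self.y += STEP * (avg_y - self.y)

# "Let loose" a swarm: a few operating rules, no global plan, only local interactions.
swarm = [Unit(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(30)]
for tick in range(100):
    for unit in swarm:
        unit.step(swarm)
```

Even in such a toy model, the point of the quotation above is visible: the designer specifies only local rules, not the collective outcome, so the swarm's overall behavior is neither centrally planned nor fully predictable for the side deploying it.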

If these new technologies and their corresponding principles of organization are further developed and implemented, this may lead not only to a different (naval) warfare but might go hand in hand with different forms of social organization altogether. We might move towards a swarm society—with all its advantages and disadvantages. Singer's description alerts us to the fact that technology changes (human) organization, that is, it changes not only our "tools" but also how we do things. New technologies (robotics, artificial intelligence, information technology, etc.) may not only change military action and war; they might also shift our societies from closed, top-down, and centralized forms of power and organization (for example the nation state) to more open, bottom-up, and decentralized forms of power and organization.

Of course, such a societal change does not necessarily follow from the technologies or from the concept “swarm” alone; this depends on further social and material conditions. Just as the mere existence of a network does not necessarily make it more democratic, as Galloway and Thacker have argued (Galloway and Thacker 2007, p. 13), swarms are compatible with centralized control (e.g., control of a swarm or by a swarm). There is no direct, causal connection between swarms and societal organization. Furthermore, I concede that there is nothing in the logic of networks that prevents one or more nodes in the network from gaining more power than other nodes. Nevertheless, as concepts and as technologies, swarms and networks open up possibilities for more decentralized thinking and organization that are not necessarily intended by the authors of the concept or by the designers and users of the technologies in question. What exactly will happen in practice cannot be predicted, but like other technologies, swarms are bound to exert a wider, less tangible influence on our ways of thinking and doing, and decentralization is a real possibility.

This approach encourages us to avoid making a strict distinction between "technological" and "social" developments. Indeed, from the perspective of ontology, networks and swarms must not be thought of as consisting of technological artifacts only; for a full analysis and evaluation, one needs to put humans into the picture too. We should also abandon the assumption that there are "first" humans and things that "then" interact with one another. Instead, we should presume that humans and things are already connected (network metaphor) and buzzing together (swarm metaphor) in common activity before one can zoom in on particular connections and movements.

If these conceptual–ontological changes were to take shape in practice, then we would have to revise the assumptions of our traditional theories of responsibility. In our moral evaluations, we could no longer focus on human intentions, minds, and “their” actions alone, and we would have to abandon the requirement that responsibility relies on full control of actions—at least if that requirement assumes that action is undistributed and individual. We would have to evaluate the holistic behavior of the network or the swarm—which can hardly be called a “robotic” swarm given the involvement of various kinds of systems and humans. Furthermore, we would also have to move beyond an analysis of distribution of responsibility between one single human actor and a single artifact (e.g., a robot). The network is much larger, and its nature goes beyond the human–artifact distinction. Thus, if these changes were to take place, then we would not have to ask if machines can be held morally responsible (Sparrow 2007, p. 72), as opposed to humans, since in a network or swarm ontology, such a question would not make sense. What would be needed is a new theory of responsibility based on a network or swarm ontology, which would probably have to change the meaning of the term responsibility itself.

Exactly what such a new theory of responsibility would look like cannot be elaborated within the space of this paper and needs a longer work, but my guess is that such a theory would revolve around the following premises: (1) activity is distributed among, and emergent from, human and non-human nodes in the network, and (2) activity is not and cannot be centrally controlled.

From these premises, one could then proceed to argue that moral responsibility too is distributed among, and emergent from, human and non-human nodes in the network, and that it cannot simply be attributed to one or some of the nodes. The challenge then is to conceptualize this in a way that can guide transitions in our practices of responsibility.

But whatever the precise implications for theories of moral responsibility in the ethics of "robotics," we can conclude from the preceding discussion that the evaluation of "robotic" activity should not be limited to a discussion about how individual autonomous systems can make ethical decisions and whether they should be allowed to do so at all—however interesting these discussions are.Footnote 4 Instead, ethicists should try to anticipate the implementation of new technological–organizational concepts and adopt a methodology that considers a plurality of levels of analysis, including the level of networks or swarms.

4 How “Military” Are Military Robots?

Based on these concepts and future scenarios, one might well entertain the intuition that military robotics is one of the most dangerous technological developments of our time, and that it should be subject to heavy regulation and control by democratic political powers. Even if this intuition were justified (I have not argued for it here), implementing it would be problematic in at least the following ways.

The design of military robots is distributed among individuals, groups, research centers, and nations, and in spite of efforts by nation states and companies to protect what they consider to be “their” technology, there is no central control. Indeed, at a higher level of analysis, it seems that developments in robotics, like other developments in technology, are themselves moving and acting like a swarm. If this is true, it means that it is not easy to regulate “military” robotics since the responsible agents are not easily identifiable.

Military robotics shares this feature with related technological developments in information technology. Consider how difficult it is for nation states to regulate development of software. In a legal sense, not all software is open source. But in a broader, sociological sense and at a higher level of analysis, the development of knowledge in an information society is increasingly “open source” due to the very technologies it creates (Internet and related further developments), which by their very nature defy centralization of organization and power. This means that there is not one, easily identifiable source that can be held responsible and be subjected to regulation.

Furthermore, development of military robotics is in practice not always recognizable as “military,” there is no clear separation between military and non-military robotics, and given what has been said about networks and swarms, it is not only about the development of robots. Let me explain this.

It is not always recognizable as military since military robots—like all robots—are themselves systems that consist of many parts and are therefore always to a considerable extent based on systems developed by non-military organizations. Also, military robots and their components might find "civil" uses outside the military domain. Moreover, there seems to be a kind of paradox: on the one hand, a lot of robotics research is funded by the military (in the USA, and perhaps also in some other countries, it is the majority of robotics research—see for example Singer 2009a, b, c, p. 169), but there is also a sense in which everyone (or no one) is developing military robots. Robotic and autonomous systems—for military and other uses—are developed by universities, by companies, and so on, in many countries, and it is difficult to hold any individual or group of individuals responsible for their development. The relevant structure of knowledge development increasingly resembles a wiki cloud or wiki swarm rather than a tree (roots metaphor) or river (source metaphor). It is fast moving (the origin of the term "wiki"), everyone can contribute to its development (wiki concept), and there is no central control (swarm concept). Finally, military robotics is not always recognizable as robotics: even if particular hardware could be traced back to particular military centers or to non-military organizations funded by the military, relevant software development goes on everywhere, not just in military contexts, and is difficult to trace back to individuals or even groups of individuals. Therefore, if we ever have "robotic" swarms, it will not only be difficult to identify their "users" (since those users are "only" a part of the network); their development will be the result of decades of both civil and military information science and robotics engineering. Similarly, what hackers can do is not only to be credited to particular individuals; they work with software and knowledge that is generated by many people in a decentralized, wiki-type or open source way.

In so far as ethics of robotics is an engineering ethics, then, it should have a realistic understanding of the way robotics know-how is created and draw conclusions for the ascription of responsibility. With a little exaggeration, one can say that the "robotic" and "military" networks and swarms under consideration here are not designed at all: they are not the (mere) implementation of a pre-fixed plan. To use a computer science metaphor: these networks and swarms and their activities are not the execution of a program, a code. There is no blueprint (to use a metaphor from an older information technology). They rather emerge or grow out of dispersed technological (material) and organizational (human) developments, without a (human or non-human) center of control. This makes it difficult, if not impossible, to apply models of responsibility that rely on tracing something back to its origin, source, or authorship. There is no "first mover" that initiated the system; in particular situations, one may find that the network has grown in ways no one could have predicted. If it can be called "military" at all, it is not military because there is a clearly identifiable actor—a military organization—that created the network by drawing up a model or plan which then got implemented. The network or swarm is "military" in terms of how we view its activities, but it is also "non-military" in origin; it is nowhere and everywhere. The technology is engineered, for sure, but there are many hands—military and non-military. Moreover, at a higher level of analysis, the network can be considered as a self-organizing whole that is not necessarily itself designed and intended.

Note that other military technologies may share the features discussed here to some extent. But information technologies and related technologies seem to be particularly “fluid” when it comes to pinning down agency and responsibility with regard to their design and employment, which makes regulation more difficult and less effective. The development and use of nuclear technology, by contrast, appear to be more connected to centralized forms of (political) power (i.e., the power of the nation state), which makes it easier for national and international regulators to identify its origin and the user(s). However, even in the case of nuclear technology, it is not always easy to separate military from non-military development and use of the technology, as problems in the field of international nuclear regulation show.

5 Conclusion: How to Live with the Swarm

In ethics, putting things into context is sometimes associated with relativism and the refusal to make any general claim. Sometimes this accusation is justified. But in this paper, I have done quite the opposite: I have made general claims about the meaning and nature of military robotics, and I have offered some very general suggestions and speculations about the trouble this may generate for standard ethics (i.e., Just War Theory and standard theory of moral responsibility) and for its epistemological assumptions. My purpose was not to foretell the future (I leave this to others), but to contribute to a better understanding of what ethics and philosophy of military robotics is about and what it can do. I conclude that military robotics is not about robots as mere means to military and political aims, but that developments in this field must be analyzed and evaluated in the context of their ends, taking into account that these ends are re-shaped by the means. I also conclude that military robotics is not just about single, autonomous "killer" robots or about interaction with such robots, but that we should also consider a different level of analysis: particular systems and artifacts should be put into the context of the network or swarm they are part of—a network or swarm that contains human and non-human elements, whose developments and activities are largely unpredictable, and which can influence societal organization. Finally, I conclude that the context of robotics and information technology is not exclusively military and cannot be understood by means of "engineering" ethics, at least if and in so far as such an ethics starts from atomistic and authorship assumptions.Footnote 5

The positive claims in this paper, that is, my suggestions about an alternative approach to military robotics and its ethical problems, are still underdeveloped and need more work. For example, if some of the characteristics of swarms are distributed and emergent activity, lack of control, and lack of predictability, it remains unclear what this implies for our practices of responsibility.Footnote 6 However, this paper has contributed to the exploration of a perspective on military robotics and responsibility that may still be marginal in mainstream ethics of (military) robotics and in moral philosophy, but which deserves further attention in the light of concrete military and political developments.

Perhaps some of my readers are getting rather pessimistic about the possibilities for an effective ethics of military robotics if they survey the moral–technological landscape sketched in this paper. My suggestions concerning possible moral, social, and epistemic implications of Singer’s swarm conception,Footnote 7 for instance, may be perceived as casting a dark shadow over the future. Lack of control and lack of predictability are the horror par excellence to the modern mind. However, it might be a consolation to consider that ethical and philosophical reflections (academic or not, probably hybrid) may develop into swarms too: initially invisible, but potentially powerful networks of people and ideas that can help us to better understand what awaits us, to find unexpected corridors for change, and to open up different windows of possibility.