Killer Robots

Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62–77

Abstract

The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime. A number of possible loci of responsibility for robot war crimes are canvassed: the persons who designed or programmed the system, the commanding officer who ordered its use, and the machine itself. I argue that none of these is ultimately satisfactory. Yet it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur in the course of the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would therefore be unethical to deploy such systems in warfare.


Author's Profile

Robert Sparrow
Monash University
