Autonomous Killer Drones
Abstract
In this paper, I address the question of whether drones, which may soon possess the ability to make autonomous choices, should be allowed to make life-and-death decisions and act on them. To this end, I examine an argument proposed by Rob Sparrow, who rejects the ethical permissibility of deploying what he calls “killer robots”. If successful, his conclusion would extend to the use of what I call autonomous killer drones, which are a special kind of killer robot. In Sparrow’s reasoning, considerations of responsibility occupy centre stage. Though I reject his argument, I agree both with his conclusion and with his basic contention that the idea of responsibility should play an important role in investigating the problem at hand. I therefore propose a different argument to show that we should not allow autonomous robots, and more specifically autonomous drones, to make life-and-death decisions. This argument also invokes the concept of responsibility, but it does so in a way that differs from Sparrow’s use and is congenial to an account of responsibility that I favour. It rests on the simple principle that morally significant choices should only be made by subjects who are capable of responsibility, a principle that can be reconciled with both a deontological and a consequentialist view of morality. Since killer robots, and in particular autonomous killer drones, appear to lack that capacity, it follows that they should not be entrusted with making life-and-death decisions and acting on them.