Abstract
Due to advances in military technology, there has been an outpouring of research on what are known as autonomous weapon systems (AWS). However, arguments in this literature are often made without first making clear exactly which definitions are being employed, with the detrimental effect that authors may speak past one another or even miss the targets of their arguments. In this article I examine the U.S. Department of Defense and International Committee of the Red Cross definitions of AWS, showing that these definitions are far broader than some recognize and that they therefore classify a much larger set of weapons as AWS. I then show that these broader views of AWS have implications for the moral and legal rules we might argue should apply to such systems. I conclude by arguing that there is a greater need for precision and clarity within AWS debates, to ensure that researchers are discussing the same weapon systems (autonomous or otherwise) when they argue for or against particular points. The purpose of this article is not to defend any specific view of AWS, nor to offer a general endorsement of or objection to such systems, but rather to show the importance of argumentative clarity in this debate.