Abstract
The present debate over the creation and potential deployment of lethal autonomous weapons, or ‘killer robots’, is garnering increasing attention. Much of the argument revolves around whether such machines would be able to uphold the principle of noncombatant immunity. However, much of the current debate fails to take into consideration the practical realities of contemporary armed conflict, particularly the generation of military objectives and adherence to a targeting process. This paper argues that we must look to the targeting process if we are to gain a fuller picture of the consequences of creating or fielding lethal autonomous robots. Once we examine how militaries actually create military objectives, and thus identify potential targets, we face an additional problem: the Strategic Robot Problem. The ability to create targeting lists using military doctrine and targeting processes is inherently strategic, and handing this capability over to a machine undermines existing comman..