The ethics of warfare and military leadership must attend to the rapidly increasing use of artificial intelligence and machines. Who is responsible for the decisions made by a machine? Do machines make decisions at all? May they make them? These questions are of particular interest in the context of Lethal Autonomous Weapon Systems. Are such systems autonomous or merely automated? Do they violate international humanitarian law, which requires that humans always be responsible for the use of lethal force and for the assessment that civilian casualties are proportionate to the military goals? The article analyses relevant documents, opinions, government positions, and commentaries using the methods of applied ethics.

The main conceptual finding is that the definition of autonomy depends on what the party presenting it seeks to support. Those who want to use lethal autonomous weapon systems call them by another name, say, automated rather than autonomous, and impose standards on autonomy that machines do not meet, such as moral agency. Those who wish to ban lethal autonomous weapon systems define them much less demandingly and require little more of them than that they form a self-standing part of the causal chain.

The article's argument is that the question of responsibility is best approached by setting aside the most controversial philosophical considerations and simply stating that an individual or a group of people is always responsible for the equipment they produce and use. This does not mean that those who press the button, or their immediate superiors, are to blame; they are doing their jobs within a system. Those responsible are more plausibly found in higher military leadership, in the political decision-makers who dictate their goals, and, at least in democracies, in the citizens who have chosen those political decision-makers.
DOI 10.5840/ijap2021326145

Similar books and articles

Lethal Autonomy of Weapons is Designed and/or Recessive.Nyagudi Nyagudi Musandu - 2016-12-09 - Doctor of Philosophy Thesis in Military Informatics, OpenPhD (Wikiversity), https://en.wikiversity.org/wiki/Doctor_of_Philosophy.
Feasible Precautions in Attack and Autonomous Weapons.Jeffrey S. Thurnher - 2018 - In Wolff Heintschel von Heinegg, Robert Frau & Tassilo Singer (eds.), Dehumanization of Warfare: Legal Implications of New Weapon Technologies. Springer Verlag. pp. 99-117.
Autonomous Weapon Systems: Failing the Principle of Discrimination.Ariel Guersenzvaig - 2018 - IEEE Technology and Society Magazine 37 (1):55-61.
Meaningful Human Control – and the Politics of International Law.Thilo Marauhn - 2018 - In Wolff Heintschel von Heinegg, Robert Frau & Tassilo Singer (eds.), Dehumanization of Warfare: Legal Implications of New Weapon Technologies. Springer Verlag. pp. 207-218.

