Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons

Abstract
Criticisms of autonomous weapons were recently presented in a video in which an AI-powered drone kills a person. However, some commentators argued that this video is a distraction from the real risk of AI: the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. We identify the following conditions under which autonomous weapons become a global catastrophic risk: 1) development of Artificial General Intelligence (AGI) is delayed relative to progress in narrow AI and manufacturing; 2) drones become very cheap to manufacture, at prices below 1 USD each; 3) anti-drone defense capabilities lag behind offensive development; 4) a global military posture encourages the development of drone swarms as strategic offensive weapons capable of killing civilians. We conclude that while drone swarms alone are unlikely to become an existential risk, lethal autonomous weapons could contribute to civilizational collapse in the event of a new world war.
Keywords: slaughterbots, lethal autonomous weapons, artificial intelligence, existential risk


Similar books and articles

Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
Lethal Autonomy of Weapons is Designed and/or Recessive. Nyagudi Nyagudi Musandu - 2016 - Doctoral dissertation in Military Informatics, OpenPhD, https://en.wikiversity.org/wiki/Doctor_of_Philosophy
Framing Robot Arms Control. Wendell Wallach & Colin Allen - 2013 - Ethics and Information Technology 15 (2):125-135.
Theoretical Foundations for the Responsibility of Autonomous Agents. Jaap Hage - 2017 - Artificial Intelligence and Law 25 (3):255-271.
The Strategic Robot Problem: Lethal Autonomous Weapons in War. Heather M. Roff - 2014 - Journal of Military Ethics 13 (3):211-227.
Saying 'No!' to Lethal Autonomous Targeting. Noel Sharkey - 2010 - Journal of Military Ethics 9 (4):369-383.
The Ethics of Autonomous Military Robots. Jason Borenstein - 2008 - Studies in Ethics, Law, and Technology 2 (1).
The Ethics of Autonomous Military Robots. Jason Borenstein - 2008 - Law and Ethics of Human Rights 2 (1).
