Safety Engineering for Artificial General Intelligence

Topoi 32 (2):217-226 (2012)

Abstract

Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge the scientific community to develop intelligent systems that have human-friendly values that they provably retain, even under recursive self-improvement.


Author's Profile

Roman Yampolskiy
University of Louisville
