Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence

Authors
Abstract
As there are currently no obvious ways to create a safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control. We explore here ways to create the safest and simplest form of AI that could work as an AI Nanny: a global surveillance state powered by a Narrow AI, or AI Police. A similar but more limited system has already been implemented in China for the prevention of ordinary crime. AI Police would be able to predict the actions of potential terrorists and bad actors and stop them in advance. Implementation of such AI Police would probably consist of two steps: first, a decisive strategic advantage achieved via Narrow AI created by the intelligence services of a nuclear superpower, and then ubiquitous control over potentially dangerous agents who could create unauthorized artificial general intelligence capable of evolving into superintelligence.
Keywords: artificial intelligence, existential risks, surveillance, NSA, world government

Similar books and articles

Kaj Sotala & Lukas Gloor (2017). Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Informatica: An International Journal of Computing and Informatics 41 (4): 389-400.
Vincent C. Müller (2014). Editorial: Risks of General Artificial Intelligence. Journal of Experimental and Theoretical Artificial Intelligence 26 (3): 297-301.
Steve Petersen (2017). Superintelligence as Superethical. In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0. New York, USA: Oxford University Press, pp. 322-337.
Vincent C. Müller (ed.) (2014). Risks of Artificial General Intelligence. Taylor & Francis (JETAI).
