Global Solutions vs. Local Solutions for the AI Safety Problem

Big Data Cogn. Comput. 3 (1) (2019)

Abstract
There are two types of solutions to the problem of artificial general intelligence (AGI) safety: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific artificial intelligence (AI), but not how to prevent the creation of dangerous AI elsewhere. Global solutions are those that ensure no AI on Earth is dangerous. Far fewer global solutions have been suggested than local ones. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created among many AIs, so that they evolve as a network and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented by, or become part of, AI. We explore many ideas, both old and new, regarding global solutions for AI safety. These include changing the number of AI teams, different forms of “AI Nanny” (a non-self-improving global control AI system able to prevent the creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales to a global solution, and not every one that does scales ethically and safely. The choice of the best local solution should therefore take into account how it will be scaled up. Human-AI teams, or a superintelligent AI Service as suggested by Drexler, may be examples of ethically scalable local solutions, but the final choice depends on unknown variables such as the speed of AI progress.
Keywords: AI safety, existential risk, AI alignment, superintelligence, AI arms race

Similar books and articles

Locally Global Planning. John L. Pollock - 2011 - Thinking About Acting.
Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
Pogge on Global Poverty. Juha Räikkä - 2006 - Journal of Global Ethics 2 (1):111-118.
Safety, Risk Acceptability, and Morality. James A. E. Macpherson - 2008 - Science and Engineering Ethics 14 (3):377-390.
Existential Risks: Exploring a Robust Risk Reduction Strategy. Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
Safety is More Than the Antonym of Risk. Sven Ove Hansson & Niklas Möller - 2006 - Journal of Applied Philosophy 23 (4):419-432.
