Assessing the future plausibility of catastrophically dangerous AI

Futures (2018)
Abstract
In AI safety research, the median timing of AGI creation is often taken as a reference point; various polls predict it will happen in the second half of the 21st century. For maximum safety, however, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be narrow AI facilitating research into potentially dangerous technologies such as biotech; AGI capable of acting completely independently in the real world; or an AI capable of starting unlimited self-improvement. In this article, I present arguments that place the earliest timing of dangerous AI in the coming 10–20 years, using several partly independent sources of information: 1. Polls, which show around a 10 percent probability of the early creation of artificial general intelligence in the next 10–15 years. 2. The fact that artificial neural network (ANN) performance and other characteristics, like the number of “neurons”, are doubling every year; extrapolating this tendency suggests that roughly human-level performance will be reached in less than a decade. 3. The acceleration of the hardware performance available for AI research, which outpaces Moore’s law thanks to advances in specialized AI hardware, better integration of such hardware in larger computers, cloud computing, and larger budgets. 4. Hyperbolic growth extrapolations of big-history models.
Keywords: artificial intelligence, existential risk, future studies, robots
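The abstract's second argument is a simple doubling extrapolation. A minimal sketch of that arithmetic, using an assumed capability gap (the factor of 1,000 below is illustrative, not a figure from the paper):

```python
import math

def years_to_parity(gap_factor, doubling_time_years=1.0):
    """Years until a capability gap of `gap_factor` closes,
    assuming the metric doubles every `doubling_time_years`."""
    return math.log2(gap_factor) * doubling_time_years

# Assumed (not from the paper): current ANN performance trails
# human-level performance by roughly a factor of 1,000 on some metric.
# With yearly doubling, the gap closes in about ten years.
print(round(years_to_parity(1e3), 1))
```

The horizon is sensitive to the assumed gap: log2 of the gap factor gives the number of doublings required, so a hundredfold gap closes in about seven years and a millionfold gap in about twenty, which is why the abstract hedges with "roughly" and "less than a decade".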



