Assessing the future plausibility of catastrophically dangerous AI

Futures (2018)

Abstract

In AI safety research, the median predicted timing of AGI creation is often taken as a reference point; various polls place it in the second half of the 21st century. For maximum safety, however, we should determine the earliest plausible arrival time of dangerous AI and define a minimum acceptable level of AI risk. Such dangerous AI could be a narrow AI that facilitates research into potentially dangerous technologies such as biotech, an AGI capable of acting completely independently in the real world, or an AI capable of starting unlimited self-improvement. In this article, I present arguments that place the earliest timing of dangerous AI within the coming 10–20 years, drawing on several partly independent sources of information: 1. Polls, which assign around a 10 percent probability to the creation of artificial general intelligence within the next 10–15 years. 2. The observation that artificial neural network (ANN) performance and other characteristics, such as the number of “neurons”, are doubling every year; extrapolating this trend suggests that roughly human-level performance will be reached in less than a decade. 3. The acceleration of the hardware performance available for AI research, which outpaces Moore’s law thanks to advances in specialized AI hardware, better integration of such hardware into larger computers, cloud computing, and larger budgets. 4. Hyperbolic growth extrapolations from big-history models.
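
The extrapolation in point 2 reduces to simple doubling-time arithmetic. The sketch below (in Python; the three-orders-of-magnitude capability gap and the one-year doubling time are illustrative assumptions, not figures taken from the paper) shows why annual doubling would close such a gap in roughly a decade.

    # Illustrative sketch only: the doubling-time arithmetic behind point 2.
    # All numbers below are hypothetical assumptions, not figures from the paper.
    import math

    def years_to_parity(current_level, human_level, doubling_time_years=1.0):
        # Time for a quantity doubling every `doubling_time_years` years
        # to grow from current_level to human_level.
        return doubling_time_years * math.log2(human_level / current_level)

    # Assuming a three-orders-of-magnitude gap and annual doubling:
    print(round(years_to_parity(1.0, 1000.0), 1))  # -> 10.0 years

The result is driven entirely by the assumed starting gap and doubling time; shorter doubling times or a smaller gap pull the estimate inside the decade the abstract mentions.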
