Don't Worry about Superintelligence

Journal of Evolution and Technology 26 (1):73-82 (2016)

Abstract

This paper responds to Nick Bostrom’s suggestion that the threat of a human-unfriendly superintelligence should lead us to delay or rethink progress in AI. I allow that progress in AI presents problems that we are currently unable to solve. However, we should distinguish between currently unsolved problems for which there are rational expectations of solutions and currently unsolved problems for which no such expectation is appropriate. The problem of a human-unfriendly superintelligence belongs to the first category. It is rational to proceed on the assumption that we will solve it. These observations do not reduce the existential threat from superintelligence to zero, but we should not permit fear of very improbable negative outcomes to delay the arrival of the expected benefits of AI.

Links

PhilArchive

Analytics

Added to PP
2019-01-12

Downloads
21 (#762,792)

6 months
21 (#133,716)


Author's Profile

Nick Agar
Victoria University of Wellington

References found in this work

Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Turing, Alan M. (1950). Computing Machinery and Intelligence. Mind 59 (October): 433–60.
Goertzel, Ben & Pitt, Joel (2011). Nine Ways to Bias Open-Source AGI Toward Friendliness. Journal of Evolution and Technology 22 (1): 116–131.
