The Singularity: A Philosophical Analysis
David J. Chalmers
Journal of Consciousness Studies 17 (9-10):7-65 (2010)

Abstract
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the "singularity". The basic argument here was set out by the statistician I.J. Good in his 1965 article "Speculations Concerning the First Ultraintelligent Machine":

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion", and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

The key idea is that a machine that is more intelligent than humans will be better than humans at designing machines. So it will be capable of designing a machine more intelligent than the most intelligent machine that humans can design. So if it is itself designed by humans, it will be capable of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself. If every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines.

This intelligence explosion is sometimes combined with another idea, which we might call the "speed explosion". The argument for a speed explosion starts from the familiar observation that computer processing speed doubles at regular intervals. Suppose that speed doubles every two years and will do so indefinitely. Now suppose that we have human-level artificial intelligence designing new processors. Then faster processing will lead to faster designers and an ever-faster design cycle, leading to a limit point soon afterwards. The argument for a speed explosion was set out by the artificial intelligence researcher Ray Solomonoff in his 1985 article "The Time Scale of Artificial Intelligence". Eliezer Yudkowsky gives a succinct version of the argument in his 1996 article "Staring at the Singularity": "Computing speed doubles every two subjective years of work…"
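As a rough illustration of why the speed-explosion argument points to a limit in finite time: the two-year doubling figure below is taken from the abstract, but the summation is an illustrative sketch rather than text from the paper. If each doubling of speed requires two subjective years of design work, and the designers themselves run at the current hardware speed, then successive doublings take 2 objective years, then 1, then 1/2, and so on, so the total objective time is a convergent geometric series:

\[
\sum_{n=0}^{\infty} \frac{2\ \text{years}}{2^{n}} \;=\; 2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots \;=\; 4\ \text{years},
\]

so unboundedly many doublings would be completed within about four objective years, the "limit point" mentioned above. A parallel sketch of the intelligence-explosion argument, assuming purely for illustration a scalar intelligence measure \(I\): if \(I(M_0)\) exceeds human-level intelligence and every above-human machine can design a machine more intelligent than itself, then by induction there is a sequence \(M_0, M_1, M_2, \ldots\) with \(I(M_{n+1}) > I(M_n)\), that is, ever more intelligent machines; unlike the speed case, nothing in this sketch fixes how fast or how far the sequence grows.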
References found in this work
The Conscious Mind: In Search of a Fundamental Theory. David J. Chalmers - 1996 - Oxford University Press.
Reasons and Persons. Derek Parfit - 1984 - Oxford University Press.
Citations of this work
Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
Making Moral Machines: Why We Need Artificial Moral Agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW] Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
An Argument for the Impossibility of Machine Intelligence (Preprint). Jobst Landgrebe & Barry Smith - 2021 - arXiv.
The Problem of Superintelligence: Political, Not Technological. Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
Similar books and articles
Universal Intelligence: A Definition of Machine Intelligence. Shane Legg & Marcus Hutter - 2007 - Minds and Machines 17 (4):391-444.
What Does the Turing Test Really Mean? And How Many Human Beings (Including Turing) Could Pass? Tyler Cowen & Michelle Dawson - unknown
Intelligence is Not Enough: On the Socialization of Talking Machines. [REVIEW] E. Ronald & Moshe Sipper - 2001 - Minds and Machines 11 (4):567-576.
Nano-Enabled AI: Some Philosophical Issues. J. Storrs Hall - 2006 - International Journal of Applied Philosophy 20 (2):247-261.
New Mathematical Foundations for AI and Alife: Are the Necessary Conditions for Animal Consciousness Sufficient for the Design of Intelligent Machines? Rodrick Wallace - 2006
Computing Machines Can't Be Intelligent (...And Turing Said So). Peter Kugel - 2002 - Minds and Machines 12 (4):563-579.
Analytics
Added to PP index: 2010-04-08
Total views: 2,016 (#2,161 of 2,498,951)
Recent downloads (6 months): 87 (#8,464 of 2,498,951)