Self-improving AI: an Analysis [Book Review]

Minds and Machines 17 (3):249-259 (2007)
Self-improvement was one of the aspects of AI proposed for study at the 1956 Dartmouth conference. Turing proposed a “child machine” that could be taught, in the human manner, to attain adult human-level intelligence. In later years, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible and that, if implemented, it would produce a feedback loop leading to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.
Keywords: Artificial intelligence, Autogeny, Bootstrap fallacy, Complexity barrier, Learning, Self-improving
DOI: 10.1007/s11023-007-9065-3
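
The “feedback loop” claim in the abstract can be made concrete with a worked equation. This is only an illustrative sketch of the optimists' argument, under the assumption (not stated in the abstract itself) that the rate of self-improvement is proportional to the current level of intelligence $I(t)$, with a constant $k > 0$:

\[
\frac{dI}{dt} = k\, I(t) \quad\Longrightarrow\quad I(t) = I_0\, e^{kt}, \qquad k > 0.
\]

Under this assumption each gain in capability accelerates further gains, yielding the “rapid exponential increase” the optimists describe; the bootstrap-fallacy position, as the abstract notes, is that no implemented system has sustained such a loop in half a century of attempts.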