Self-improving AI: an Analysis [Book Review]

Minds and Machines 17 (3):249-259 (2007)
Abstract
Self-improvement was one of the aspects of AI proposed for study at the 1956 Dartmouth conference. Turing proposed a “child machine” that could be taught in the human manner to attain adult human-level intelligence. More recently, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible and that, if implemented, it would produce a feedback loop leading to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.
Keywords: Artificial intelligence; Autogeny; Bootstrap fallacy; Complexity barrier; Learning; Self-improving