Self-improving AI: an Analysis

Minds and Machines

Abstract

Self-improvement was one of the aspects of AI proposed for study at the 1956 Dartmouth conference. Turing proposed a “child machine” which could be taught in the human manner to attain adult human-level intelligence. More recently, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible, and that, if implemented, it would produce a feedback loop leading to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.


Notes

  1. There is a similar informal concept often referred to as “AI-completeness” (see Wikipedia: “AI-complete”). However, it is always assumed that humans are AI-complete, which we do not assume, so we will avoid that usage here.

  2. The demise of Moore’s law has been predicted so often since the 1970s that we feel confident in asserting that the burden of proof lies entirely with any critic.

  3. Feigenbaum et al. (1982) Vol II, pp. 295–379.

  4. Lou Steinberg, in a private communication, analyses the problem thus: understanding program specifications that are substantially simpler than a higher-level-language implementation of the program requires arbitrary real-world knowledge.

  5. Baum (2004) p. 316, for example, espouses the view that inductive bias limits the possibility of a general learning mechanism.

  6. R. J. Solomonoff, “Complexity-Based Induction Systems: Comparisons and Convergence Theorems,” IEEE Trans. Information Theory, IT-24(4), 422–432 (1978).

  7. If it were computable, we could disprove its completeness by a construction reminiscent of the ones used in Gödel’s Theorem and the Halting Problem. The construction essentially forces the machine to try to outwit itself.

  8. 100,000. Note that some estimates of the wild chimpanzee population range up to twice that.

  9. Jared Diamond, The Third Chimpanzee, Harper, New York, 1992, p. 35.

  10. Mellars, Paul, et al., “A new radiocarbon revolution and the dispersal of modern humans in Eurasia”, Nature, 439, 931–935 (2006).

  11. “Neanderthals in Europe Killed Off Earlier”, Associated Press, Feb 23, 2006.

  12. Diamond, p. 44. Note, however, that although it dates from as recently as 1992, Diamond’s information on Neanderthals is already significantly out of date.

  13. See http://www.audiblox.com/iq_scores.htm

  14. NSF Science Resources Statistics Infobrief, http://www.nsf.gov/statistics/infbrief/nsf02325/. While it is tempting to believe that scientists and engineers form the brightest one percent of the population, we can make the slightly less dubious assumption that S&E below the 99th percentile and others above it cancel out.

  15. IQ tests, of course, don’t actually measure innovative ability. They measure cognitive skills that have been found to correlate with innovative ability, so the implications are purely statistical.

  16. Parts of this section adapted from Hall, J: “Nano-enabled AI: Some Philosophical Issues” forthcoming in International Journal of Applied Philosophy, Fall 2006 Special Issue on Nanoethics.

References

  • Baum, E. (2004). What is thought. Cambridge: MIT.


  • Bostrom, N. (2003). Ethical issues in advanced artificial intelligence. In I. Smit et al. (Eds.), Cognitive, emotive and ethical aspects of decision making in humans and in artificial intelligence (Vol. 2). Int. Institute of Advanced Studies in Systems Research and Cybernetics, pp. 12–17.

  • Feigenbaum, E. A., et al. (Eds.) (1981–82). The handbook of artificial intelligence (3 vols.). Los Altos: Morgan Kaufmann.

  • Lenat, D. B., & Brown, J. S. (1983). Why AM and Eurisko appear to work. Artificial Intelligence, 23, 269–294.


  • McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. See http://www.formal.stanford.edu/jmc/history/dartmouth.html

  • McCarthy, J. (1969). Programs with common sense. In Minsky, M. (Ed.), Semantic information processing (pp. 403–418). Cambridge: MIT.


  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.


  • Vinge, V. (1993). The Coming Technological Singularity: How to Survive in the Post-Human Era. in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace. NASA Conf. Pub., 10129:11–22.

  • von Neumann, J. (1966). Theory of self-reproducing automata. Urbana: University of Illinois Press.



Author information

Corresponding author

Correspondence to John Storrs Hall.

Additional information

This paper is based on Chapter 7 of the author’s forthcoming book Beyond AI: Creating the Conscience of the Machine (Amherst, NY: Prometheus, May 2007), which was in turn based on the paper delivered at AI@50.

Cite this article

Hall, J.S. Self-improving AI: an Analysis. Minds & Machines 17, 249–259 (2007). https://doi.org/10.1007/s11023-007-9065-3
