
Consistency, Turing Computability and Gödel’s First Incompleteness Theorem

Minds and Machines

Abstract

It is well understood and appreciated that Gödel’s Incompleteness Theorems apply to sufficiently strong, formal deductive systems. In particular, the theorems apply to systems which are adequate for conventional number theory. Less well known is that there exist algorithms which can be applied to such a system to generate a gödel-sentence for that system. Although the generation of a sentence is not equivalent to proving its truth, the present paper argues that the existence of these algorithms, when conjoined with Gödel’s results and accepted theorems of recursion theory, does provide the basis for an apparent paradox. The difficulty arises when such an algorithm is embedded within a computer program of sufficient arithmetic power. The required computer program (an AI system) is described herein, and the paradox is derived. A solution to the paradox is proposed, which, it is argued, illuminates the truth status of axioms in formal models of programs and Turing machines.


Notes

  1. I will be using ‘paradox’ in its dictionary sense, which does not imply unsolvability.

  2. I use the expression ‘gödel-sentence’ just as the reader would surmise, i.e., to refer to a sentence which is true (on the standard arithmetic interpretation) and representable in the given system, but not derivable in that system.

  3. In recent personal communication, one logician has casually suggested that this last claim might be false in the case where the formal model (that serves as input) is inductively presented to the machine. However, I doubt that this suggestion will bear fruit. No matter how a formal system is presented to a machine, the machine will need to store at least something in its memory. Once this occurs, the machine’s memory contents will change, and this must be reflected in any formal model of the changed machine. Moreover, the output of a machine applied to its argument would be derivable within a formal model of the machine that includes the specified input, but such output would not be derivable in any formal model of the machine that omits a specification of its argument.

  4. Feferman’s algorithm is a primitive recursive function. Thus, it is certainly Turing computable and programmable in any standard computer language.

  5. It is straightforward to encode any Turing machine table in pure LISP.

  6. Documentation for the Otter and PTTP systems can be found, respectively, at these websites: http://www-unix.mcs.anl.gov/AR/otter/description and http://www.ai.sri.com/~stickel/pttp.html

References

  • Boolos, G. S., & Jeffrey, R. C. (1980). Computability and logic (2nd ed.). Cambridge, UK: Cambridge University Press.

  • Church, A. (1936). An unsolvable problem of elementary number theory. American Journal of Mathematics, 58, 345–363.

  • Davis, M. (1958). Computability and unsolvability. New York: McGraw-Hill Book Company.

  • Feferman, S. (1960). Arithmetization of metamathematics in a general setting. Fundamenta Mathematicae, 49, 35–92.

  • Fisher, M. J. (1993). Lambda-calculus schemata. Lisp and Symbolic Computation, 6, 259–288.

  • Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38, 173–198.

  • Hadley, R. F. (1987). Gödel, Lucas, and mechanical models of the mind. Computational Intelligence, 3, 57–63.

  • Kleene, S. C. (1935). λ-definability and recursiveness. Bulletin of the American Mathematical Society, 41, 490.

  • Kleene, S. C. (1936). λ-definability and recursiveness. Duke Mathematical Journal, 2, 340–353.

  • Kleene, S. C. (1967). Mathematical logic. New York: John Wiley & Sons, Inc.

  • Lewis, D. (1969). Lucas against mechanism. Philosophy, 44, 231–233.

  • Löwenheim, L. (1915). Über Möglichkeiten im Relativkalkül. Mathematische Annalen, 76, 447–470.

  • Lucas, J. R. (1961). Minds, machines, and Gödel. Philosophy, 36, 112–117.

  • Penrose, R. (1989). The emperor’s new mind. Oxford: Oxford University Press.

  • Penrose, R. (1994). Shadows of the mind. Oxford: Oxford University Press.

  • Smullyan, R. M. (1994). Diagonalization and self-reference. Oxford: Clarendon Press.

  • Turing, A. M. (1936–1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42, 230–265. A correction is given in Vol. 43, pp. 544–546.

  • Turing, A. M. (1937). Computability and λ-definability. Journal of Symbolic Logic, 2, 153–163.

  • Webb, J. (1980). Mechanism, mentalism, and metamathematics. Hingham, MA: Reidel.

Acknowledgements

I am very grateful to Wilfried Sieg for his penetrating comments on a precursor of this paper, and to Alistair Lachlan and Warren Burton for some technical remarks. I also wish to thank Eugenia Ternovska for her helpful comments on degenerate cases of recursive functions, and David Mitchell for a stimulating theoretical discussion. Thanks also to Mark Stickel for useful comments on the properties of the automated theorem provers, PTTP and OTTER and to Amin Sharifi for initial references to those systems.

Author information

Correspondence to Robert F. Hadley.

Appendix A

For those not accustomed to dealing with axiomatic models of Turing machines, the following remarks should be helpful. It is widely recognized within computer science that programs written in modern programming languages can always be designed to accommodate a previously specified input format. Moreover, it is common knowledge among researchers in the logic programming community that, corresponding to any working computer program, there is an equivalent program written in Prolog (the most widely used logic programming language). Prolog, like all other modern high-level programming languages, has the computational power of a universal Turing machine.

Now, any Prolog program consists of a series of axioms in the form of Horn clauses, most of which are syntactic variants of axioms written in a first-order predicate calculus. When input data are to be supplied to a Prolog program, they must first be converted, either by humans or by supplementary program code, into axioms that have the form of atomic sentences. For the Prolog program to succeed, it must be designed in a fashion adapted to some previously determined, axiomatic representation of the input data. Happily, this can always be arranged: if need be, the main program’s axioms can be augmented with special axioms that perform a bridging function, so that the entire axiom set, including the “input data axioms”, is deductively harmonized, as the sketch below illustrates.
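To make this concrete, here is a minimal, hypothetical sketch (not drawn from the paper; all predicate names are illustrative) of a Prolog axiom set in which the input data appear as atomic sentences and a bridging axiom connects them to the predicates of the main program:

```prolog
% Input data axiom, supplied as an atomic sentence in a previously
% agreed format: the numeral 2 in successor notation.
input_number(s(s(0))).

% Bridging axiom: connects the input-data predicate to the predicate
% over which the main program's axioms are stated.
numeral(N) :- input_number(N).

% Main program: addition over numerals, written as Horn clauses.
% Read classically: for all Y, add(0,Y,Y); and
% for all X,Y,Z, add(X,Y,Z) -> add(s(X),Y,s(Z)).
add(0, Y, Y).
add(s(X), Y, s(Z)) :- add(X, Y, Z).

% A query against the harmonized axiom set:
% ?- numeral(N), add(N, s(0), R).
% R = s(s(s(0))).
```

The answer returned by the query is itself a sentence derivable from the combined axiom set, which is the sense in which the input data axioms are deductively harmonized with the main program.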

Now, admittedly, the axioms of Prolog programs do not always correspond perfectly to sentences written in classical first-order logic. However, there are programming environments, based upon first-order theorem provers, which do permit the required Horn-clause axioms to be written entirely within classical first-order logic. Examples of such programming environments are Otter and PTTP (Prolog Technology Theorem Prover), both of which include powerful first-order theorem provers (see note 6). Otter, in particular, is a very powerful, inferentially complete theorem prover that permits the full range of predicate calculus syntax to be employed in the programs it executes. A remarkable characteristic of the programs which Otter and PTTP accept is that the axiom set comprising each such program can serve as the program’s own equivalent formal deductive model.
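The following toy fragment (again hypothetical, and written in Prolog-style notation rather than Otter’s or PTTP’s concrete input syntax) illustrates that dual role: each clause is executable by the system and, at the same time, is merely a notational variant of the classical first-order axiom given in the accompanying comment.

```prolog
% Executable Horn clauses            % Classical first-order reading
parent(abraham, isaac).              % Parent(abraham, isaac)
ancestor(X, Y) :- parent(X, Y).      % forall x,y (Parent(x,y) -> Ancestor(x,y))
ancestor(X, Z) :-                    % forall x,y,z (Parent(x,y) & Ancestor(y,z)
    parent(X, Y), ancestor(Y, Z).    %               -> Ancestor(x,z))
```

Whatever such a program outputs (e.g., the answer to the query ?- ancestor(abraham, isaac).) is exactly what is derivable from these axioms by first-order deduction, so the axiom set doubles as a formal deductive model of the program’s behaviour.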

It is also noteworthy that, for any Turing machine, there exists a corresponding set of first-order axioms that is functionally equivalent to that machine (see Davis 1958, Chapters 1 and 6, for a discussion of Post’s method, whereby any Turing machine can be presented axiomatically). Moreover, the axiom sets that constitute such programs can be designed to harmonize deductively with their “input data axioms” in a fashion strongly analogous to the way in which an actual Turing machine is designed to accommodate a predetermined presentation of input data upon its machine tape.
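As a rough illustration of this analogy (a hypothetical sketch only, and not Post’s construction as presented in Davis 1958), a machine’s transition table can be written as Horn-clause axioms playing the role of the main program, with the initial tape contents supplied as an input data axiom:

```prolog
% Transition table of a toy machine as facts:
% delta(State, ReadSym, WriteSym, Move, NextState).
delta(q0, 1, 1, right, q0).
delta(q0, b, 1, stay,  halt).    % on blank, write 1 and halt

% Input data axiom: the initial configuration, with the tape split into
% the (reversed) portion left of the head and the portion from the head on.
init(config(q0, [], [1, 1, b])).

% One computation step, derived from the table.
step(config(Q, L, [S|R]), config(Q2, [W|L], R)) :- delta(Q, S, W, right, Q2).
step(config(Q, L, [S|R]), config(Q2, L, [W|R])) :- delta(Q, S, W, stay,  Q2).

% Reachable configurations.
reaches(C, C).
reaches(C, C2) :- step(C, C1), reaches(C1, C2).

% ?- init(C), reaches(C, config(halt, L, R)).
% L = [1, 1], R = [1].
```

What the machine computes is then exactly what is derivable about the reaches relation from the combined axiom set, mirroring the way a physical Turing machine consumes the tape presentation it was designed for.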

Cite this article

Hadley, R.F. Consistency, Turing Computability and Gödel’s First Incompleteness Theorem. Minds & Machines 18, 1–15 (2008). https://doi.org/10.1007/s11023-007-9082-2
