This book contains an introduction to symbolic logic and a thorough discussion of mechanical theorem proving and its applications. The book consists of three major parts. Chapters 2 and 3 constitute an introduction to symbolic logic. Chapters 4–9 introduce several techniques in mechanical theorem proving, and Chapters 10 and 11 show how theorem proving can be applied to various areas such as question answering, problem solving, program analysis, and program synthesis.
Examples from the history of Automated Theorem Proving are given in order to show that even a seemingly ‘mechanical’ activity, such as deductive inference drawing, involves special cultural features and tacit knowledge. Mechanisation of reasoning is thus regarded as a complex undertaking in ‘cultural pruning’ of human-oriented reasoning. Sociological counterparts of this passage from human- to machine-oriented reasoning are discussed by focusing on problems of man-machine interaction in the area of computer-assisted proof processing.
The automatic verification of large parts of mathematics has been an aim of many mathematicians from Leibniz to Hilbert. While Gödel's first incompleteness theorem showed that no computer program could automatically prove certain true theorems in mathematics, the advent of electronic computers and sophisticated software means in practice there are many quite effective systems for automated reasoning that can be used for checking mathematical proofs. This book describes the use of a computer program to check the proofs of several celebrated theorems in metamathematics including those of Gödel and Church-Rosser. The computer verification using the Boyer-Moore theorem prover yields precise and rigorous proofs of these difficult theorems. It also demonstrates the range and power of automated proof checking technology. The mechanization of metamathematics itself has important implications for automated reasoning, because metatheorems can be applied as labor-saving devices to simplify proof construction.
We combine state-of-the-art techniques from computational linguistics and theorem proving to build an engine for playing text adventures, computer games with which the player interacts purely through natural language. The system employs a parser for dependency grammar and a generation system based on TAG, and has components for resolving and generating referring expressions. Most of these modules make heavy use of inferences offered by a modern theorem prover for description logic. Our game engine solves some problems inherent in classical text adventures, and is an interesting test case for the interaction between natural language processing and inference.
The history of building automated theorem provers for higher-order logic is almost as old as the field of deduction systems itself. The first successful attempts to mechanize and implement higher-order logic were those of Huet and of Jensen and Pietrzykowski. They combine the resolution principle for higher-order logic (first studied in ) with higher-order unification. The unification problem in typed λ-calculi is much more complex than that for first-order terms, since it has to take the theory of αβη-equality into account. As a consequence, the higher-order unification problem is undecidable, and sets of solutions need not even have most general elements that represent them. Thus the mentioned calculi for higher-order logic have to take special measures to circumvent the problems posed by the theoretical complexity of higher-order unification. In this paper, we will exemplify the methods and proof- and model-theoretic tools needed for extending first-order automated theorem proving to higher-order logic. For the sake of simplicity we take the tableau method as a basis (for a general introduction to first-order tableaux see part I.1) and discuss the higher-order tableau calculi HT and HTE first presented in . The methods in this paper also apply to higher-order resolution calculi [1, 13, 6] or the higher-order matings method of Peter , which extend their first-order counterparts in much the same way. Since higher-order calculi cannot be complete for the standard semantics by Gödel's incompleteness theorem, only the weaker notion of Henkin models leads to a meaningful notion of completeness in higher-order logic. It turns out that the calculi in [1, 13, 3, 19] are not Henkin-complete, since they fail to capture the extensionality principles of higher-order logic. We will characterize the deductive power of our calculus HT (which is roughly equivalent to these calculi) by the semantics of functional Σ-models.
To arrive at a calculus that is complete with respect to Henkin models, we build on ideas from  and augment HT with tableau construction rules that use the extensionality principles in a goal-oriented way.
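The undecidability of higher-order unification is easiest to appreciate against the first-order baseline, where syntactic unification is decidable and yields most general unifiers. A minimal Robinson-style sketch in Python (the term encoding and function names are our own illustration, not from the paper):

```python
# Terms: a string is a variable; a tuple ('f', t1, ..., tn) applies
# function symbol 'f' (0-ary tuples act as constants).

def is_var(t):
    return isinstance(t, str)

def walk(t, s):
    # Follow variable bindings in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    # Occurs check: does variable v appear in t under s?
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s=None):
    # Return a most general unifier extending s, or None if none exists.
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return unify(b, a, s)
    if a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None
```

The occurs check is exactly what keeps first-order unification terminating; in typed λ-calculi no analogous simple test exists, which is where the undecidability discussed above enters.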
THINKER is an automated natural deduction first-order theorem-proving program. This paper reports on how it was adapted so as to prove theorems in modal logic. The method employed is an indirect semantic method, obtained by considering the semantic conditions involved in being a valid argument in these modal logics. The method is extended from propositional modal logic to predicate modal logic, and issues concerning the domain of quantification and existence in a world's domain are discussed. Finally, we look at the very interesting issues involved with adding identity to the theorem prover in the realm of modal predicate logic. Various alternatives are discussed.
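The "indirect semantic method" works from the model-theoretic conditions for modal validity. As a rough illustration of those conditions (our own encoding, not THINKER's internals), here is evaluation of propositional modal formulas at a world of a finite Kripke model:

```python
# Evaluate propositional modal formulas in a finite Kripke model.
# Formula encoding (illustrative): atom strings, ('not', f), ('and', f, g),
# ('box', f), ('dia', f).

def holds(model, w, f):
    R, V = model  # R: world -> set of successor worlds; V: world -> true atoms
    if isinstance(f, str):
        return f in V[w]
    op = f[0]
    if op == 'not':
        return not holds(model, w, f[1])
    if op == 'and':
        return holds(model, w, f[1]) and holds(model, w, f[2])
    if op == 'box':   # true at w iff true at every successor
        return all(holds(model, v, f[1]) for v in R[w])
    if op == 'dia':   # true at w iff true at some successor
        return any(holds(model, v, f[1]) for v in R[w])
    raise ValueError(op)
```

A validity check over all worlds of all (small) models is what the semantic conditions reduce to; the prover itself works with syntactic counterparts of these clauses.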
Demonstrates the validity of the measure presented in "Estimating Semantic Content" on textbook examples using (binary) resolution [a generalization of disjunctive syllogism] theorem proving; the measure is based on logical probability and is the mirror image of logical form; it dates to Popper.
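Binary resolution at the propositional level, with disjunctive syllogism as the special case of resolving a unit clause against a two-literal clause; a small saturation-based refutation checker (an illustrative sketch, not the paper's system):

```python
from itertools import combinations

def neg(lit):
    # '~p' is the negation of 'p'.
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    # All binary resolvents of two clauses (frozensets of literals).
    return {frozenset((c1 - {l}) | (c2 - {neg(l)}))
            for l in c1 if neg(l) in c2}

def unsat(clauses):
    # Saturate under resolution; unsatisfiable iff the empty clause appears.
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for a, b in combinations(clauses, 2):
            for r in resolvents(a, b):
                if not r:          # empty clause derived
                    return True
                new.add(r)
        if new <= clauses:         # fixpoint reached, no refutation
            return False
        clauses |= new
```

Resolving {p, q} against {~p} yields {q}: that single step is disjunctive syllogism, and general binary resolution allows both parent clauses to carry extra literals.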
We present a method for integrating rippling-based rewriting into matrix-based theorem proving as a means for automating inductive specification proofs. The selection of connections in an inductive matrix proof is guided by symmetries between induction hypothesis and induction conclusion. Unification is extended by decision procedures and a rippling/reverse-rippling heuristic. Conditional substitutions are generated whenever a uniform substitution is impossible. We illustrate the integrated method by discussing several inductive proofs for the integer square root problem as well as the algorithms extracted from these proofs.
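The integer square root specification asks, for a natural number n, for the r with r² ≤ n < (r+1)². One program meeting that specification (a plain binary-search version for orientation, not necessarily the algorithm extracted in the paper):

```python
def isqrt(n):
    # Integer square root: the unique r with r*r <= n < (r+1)*(r+1).
    if n < 0:
        raise ValueError("n must be non-negative")
    lo, hi = 0, n + 1          # invariant: lo*lo <= n < hi*hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid
    return lo
```

The loop invariant mirrors the specification directly, which is why this problem is a natural benchmark for proof-based program extraction.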
Uniform proof systems have recently been proposed [Mil91] as a proof-theoretic foundation and generalization of logic programming. In [Mom92a] an extension with constructive negation is presented which preserves the nature of an abstract logic programming language. Here we adapt this approach to provide a complete theorem-proving technique for minimal, intuitionistic and classical logic, which is totally goal-oriented and does not require any form of ancestry resolution. The key idea is to use the Gödel-Gentzen translation to embed those logics in the syntax of Hereditary Harrop formulae, for which uniform proofs are complete. We discuss some preliminary implementation issues.
This paper is concerned with decision procedures for the ℵ0-valued Łukasiewicz logics. It is shown how linear algebra can be used to construct an automated theorem checker. Two decision procedures are described which depend on a linear programming package. An algorithm is given for the verification of consequence relations, and a connection is made between theorem checking in two-valued logic and theorem checking in these logics, which implies that determining of a -free formula whether it takes the value one is an NP-complete problem.
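In the infinite-valued Łukasiewicz semantics underlying such decision procedures, truth values range over [0, 1], with v(¬A) = 1 − v(A) and v(A → B) = min(1, 1 − v(A) + v(B)); a theorem is a formula that takes value 1 under every valuation. A small exact evaluator over rationals (illustrative only; the paper's linear-programming machinery is not reproduced):

```python
from fractions import Fraction

def ev(f, v):
    # Łukasiewicz valuation over exact rationals.
    # f: atom string, ('not', a) or ('imp', a, b); v: atom -> Fraction in [0, 1].
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return 1 - ev(f[1], v)
    if f[0] == 'imp':
        return min(Fraction(1), 1 - ev(f[1], v) + ev(f[2], v))
    raise ValueError(f[0])
```

Because the connectives are piecewise-linear in the atom values, asking whether some valuation pushes a formula below 1 is naturally a linear programming question, which is the connection the paper exploits.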
Free-variable semantic tableaux are a well-established technique for first-order theorem proving where free variables act as a meta-linguistic device for tracking the eigenvariables used during proof search. We present the theoretical foundations to extend this technique to propositional modal logics, including non-trivial rigorous proofs of soundness and completeness, and also present various techniques that improve the efficiency of the basic naive method for such tableaux.
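At the propositional core, the naive tableau method expands a conjunction on one branch and a disjunction on two, and closes a branch containing complementary literals; a formula is valid iff every branch for its negation closes. A compact sketch over {¬, ∧, ∨} (free variables only matter at the first-order and modal levels, which this toy omits):

```python
def is_lit(f):
    # Atom or negated atom.
    return isinstance(f, str) or (f[0] == 'not' and isinstance(f[1], str))

def sat(fs):
    # Can the branch fs be extended to an open (consistent) branch?
    fs = list(fs)
    for i, f in enumerate(fs):
        if is_lit(f):
            continue
        rest = fs[:i] + fs[i + 1:]
        if f[0] == 'not':
            g = f[1]
            if g[0] == 'not':                              # ¬¬A
                return sat(rest + [g[1]])
            if g[0] == 'and':                              # ¬(A∧B): two branches
                return sat(rest + [('not', g[1])]) or sat(rest + [('not', g[2])])
            if g[0] == 'or':                               # ¬(A∨B): one branch
                return sat(rest + [('not', g[1]), ('not', g[2])])
        if f[0] == 'and':
            return sat(rest + [f[1], f[2]])
        if f[0] == 'or':
            return sat(rest + [f[1]]) or sat(rest + [f[2]])
    pos = {f for f in fs if isinstance(f, str)}
    neg = {f[1] for f in fs if not isinstance(f, str)}
    return not (pos & neg)      # open iff no complementary pair

def valid(f):
    return not sat([('not', f)])
```

The free-variable refinement of the paper replaces the blind instantiation a first-order version of this procedure would need with unification-driven closure; none of that machinery appears in this propositional toy.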
Theorems in automated theorem proving are usually proved by formal logical proofs. However, there is a subset of problems which humans can prove by the use of geometric operations on diagrams, so-called diagrammatic proofs. Insight is often more clearly perceived in these proofs than in the corresponding algebraic proofs; they capture an intuitive notion of truthfulness that humans find easy to see and understand. We are investigating and automating such diagrammatic reasoning about mathematical theorems. Concrete, rather than general, diagrams are used to prove particular concrete instances of the universally quantified theorem. The diagrammatic proof is captured by the use of geometric operations on the diagram. These operations are the inference steps of the proof. An abstracted schematic proof of the universally quantified theorem is induced from these proof instances. The constructive ω-rule provides the mathematical basis for this step from schematic proofs to theoremhood. In this way we avoid the difficulty of treating a general case in a diagram. One method of confirming that the abstraction of the schematic proof from the proof instances is sound is proving the correctness of schematic proofs in the meta-theory of diagrams. These ideas have been implemented in the system, called Diamond, which is presented here.
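A classic example of such a diagrammatic proof is n² = 1 + 3 + ... + (2n−1): an n × n square of dots is split into nested L-shaped borders ("ells"), each containing an odd number of dots. The geometric operation on a concrete instance can be mimicked directly (our own re-creation for illustration, not Diamond's code):

```python
def peel_ells(n):
    # Split an n x n square of dots into nested L-shaped borders,
    # recording how many dots each border removes.
    dots = {(i, j) for i in range(n) for j in range(n)}
    counts = []
    for k in range(n, 0, -1):
        ell = {(i, j) for (i, j) in dots if i == k - 1 or j == k - 1}
        counts.append(len(ell))
        dots -= ell
    return counts          # e.g. [7, 5, 3, 1] for n = 4
```

Each run of this operation is one concrete proof instance; abstracting the common pattern of peels over several values of n is the "schematic proof" step that the constructive ω-rule then justifies.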
This paper contains the full code of a prototype implementation in Haskell, in ‘literate programming’ style, of the tableau-based calculus and proof procedure for hybrid logic presented in .
We investigate several approaches to resolution-based automated theorem proving in classical higher-order logic (based on Church's simply typed λ-calculus) and discuss their requirements with respect to Henkin completeness and full extensionality. In particular we focus on Andrews' higher-order resolution (Andrews 1971), Huet's constrained resolution (Huet 1972), higher-order E-resolution, and extensional higher-order resolution (Benzmüller and Kohlhase 1997). With the help of examples we illustrate the parallels and differences of the extensionality treatment of these approaches and demonstrate that extensional higher-order resolution is the sole approach that can completely avoid additional extensionality axioms.
In 1870 Jordan proved that the composition factors of two composition series of a group are the same. Almost 20 years later Hölder (1889) was able to extend this result by showing that the factor groups, which are quotient groups corresponding to the composition factors, are isomorphic. This result, nowadays called the Jordan-Hölder Theorem, is one of the fundamental theorems in the theory of groups. The fact that Jordan, who was working in the framework of substitution groups, was able to prove only a part of this theorem is often used to emphasize the importance and even the necessity of the abstract conception of groups, which was employed by Hölder. However, as a little-known paper from 1873 reveals, Jordan had all the necessary ingredients to prove the Jordan-Hölder Theorem at his disposal (namely, composition series, quotient groups, and isomorphisms), and he also noted a connection between composition factors and corresponding quotient groups. Thus, I argue that the answer to the question posed in the title is “Yes.” It was not the lack of the abstract notion of groups which prevented Jordan from proving the Jordan-Hölder Theorem, but the fact that he did not ask the right research question that would have led him to this result. In addition, I suggest some reasons why this has been overlooked in the historiography of algebra, and I argue that, by hiding computational and cognitive complexities, abstraction has important pragmatic advantages.
The position of Polish informatics, in research as well as in teaching, has its roots in the achievements of Polish mathematicians of the Warsaw School and logicians of the Lvov-Warsaw School. Jan Łukasiewicz is regarded in the world of computer science as the most famous Polish logician. The parenthesis-free notation invented by him is known as PN (Polish Notation) and RPN (Reverse Polish Notation). Łukasiewicz created many-valued logic as a separate subject. The idea of many-valuedness is applied in hardware design (many-valued or fuzzy switching, analog computers). The many-valued approach to vague notions and commonsense reasoning is used in expert systems, databases and knowledge-based systems. Stanisław Jaśkowski's system of natural deduction is the basis of systems of automatic deduction and theorem proving. He also created a system of paraconsistent logic; such logics are used in AI. Kazimierz Ajdukiewicz, with his categorial grammar, participated in the development of formal grammars, a field significant for programming languages. Andrzej Grzegorczyk made an important contribution to the development of the theory of recursiveness.
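Reverse Polish Notation needs no parentheses because every operator's operands are already on a stack when the operator arrives; the textbook stack evaluator makes this concrete:

```python
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def rpn(expr):
    # Evaluate a space-separated RPN expression, e.g. '3 4 + 2 *'.
    stack = []
    for tok in expr.split():
        if tok in OPS:
            b, a = stack.pop(), stack.pop()   # right operand popped first
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    (result,) = stack      # a well-formed expression leaves exactly one value
    return result
```

For example, `'3 4 + 2 *'` is the RPN form of (3 + 4) × 2; the evaluation order is fully determined by token order, which is why RPN suited early calculators and stack machines so well.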
We show that Smullyan's analytic tableaux cannot p-simulate the truth-tables. We identify the cause of this computational breakdown and relate it to an underlying semantic difficulty which is common to the whole tradition originating in Gentzen's sequent calculus, namely the dissonance between cut-free proofs and the Principle of Bivalence. Finally we discuss some ways in which this principle can be built into a tableau-like method without affecting its analytic nature.
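The truth-table method that tableaux fail to p-simulate is itself trivial to implement: enumerate all 2ⁿ assignments to the n atoms. A sketch for orientation (the hard formula families witnessing the separation are in the paper, not reproduced here):

```python
from itertools import product

def atoms(f):
    # Collect the atom names occurring in a formula.
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(g) for g in f[1:]))

def ev(f, v):
    # Classical two-valued evaluation over {¬, ∧, ∨}.
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == 'not':
        return not ev(f[1], v)
    if op == 'and':
        return ev(f[1], v) and ev(f[2], v)
    if op == 'or':
        return ev(f[1], v) or ev(f[2], v)
    raise ValueError(op)

def tautology(f):
    # Check all 2^n rows of the truth table.
    names = sorted(atoms(f))
    return all(ev(f, dict(zip(names, row)))
               for row in product([False, True], repeat=len(names)))
```

The point of the p-simulation result is that on certain formulas the smallest closed analytic tableau is superpolynomially larger than this 2ⁿ-row table, so the table, crude as it is, can be the more economical proof system.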
Partial functions can be easily represented in set theory as certain sets of ordered pairs. However, classical set theory provides no special machinery for reasoning about partial functions. For instance, there is no direct way of handling the application of a function to an argument outside its domain as in partial logic. There is also no utilization of lambda-notation and sorts or types as in type theory. This paper introduces a version of von Neumann-Bernays-Gödel set theory for reasoning about sets, proper classes, and partial functions represented as classes of ordered pairs. The underlying logic of the system is a partial first-order logic, so class-valued terms may be nondenoting. Functions can be specified using lambda-notation, and reasoning about the application of functions to arguments is facilitated using sorts similar to those employed in the logic of the IMPS Interactive Mathematical Proof System. The set theory is intended to serve as a foundation for mechanized mathematics systems.
Dependencies are identified in two recently proposed first-order axiom systems for plane hyperbolic geometry. Since the dependencies do not specifically concern hyperbolic geometry, our results yield two simpler axiom systems for absolute geometry.
By a fragment of a natural language, we understand a collection of sentences forming a naturally delineated subset of that language and equipped with a semantics commanding the general assent of its native speakers. By the semantic complexity of such a fragment, we understand the computational complexity of deciding whether any given set of sentences in that fragment represents a logically possible situation. In earlier papers by the first author, the semantic complexity of various fragments of English involving at most transitive verbs was investigated. The present paper considers various fragments of English involving ditransitive verbs and determines their semantic complexity.
The Gödelian symphony -- Foundations and paradoxes -- This sentence is false -- The liar and Gödel -- Language and metalanguage -- The axiomatic method or how to get the non-obvious out of the obvious -- Peano's axioms -- And the unsatisfied logicists, Frege and Russell -- Bits of set theory -- The abstraction principle -- Bytes of set theory -- Properties, relations, functions, that is, sets again -- Calculating, computing, enumerating, that is, the notion of algorithm -- Taking numbers as sets of sets -- It's raining paradoxes -- Cantor's diagonal argument -- Self-reference and paradoxes -- Hilbert -- Strings of symbols -- In mathematics there is no ignorabimus -- Gödel on stage -- Our first encounter with the incompleteness theorem -- And some provisos -- Gödelization, or say it with numbers! -- TNT -- The arithmetical axioms of TNT and the standard model N -- The fundamental property of formal systems -- The Gödel numbering -- And the arithmetization of syntax -- Bits of recursive arithmetic -- Making algorithms precise -- Bits of recursion theory -- Church's thesis -- The recursiveness of predicates, sets, properties, and relations -- And how it is represented in typographical number theory -- Introspection and representation -- The representability of properties, relations, and functions -- And the Gödelian loop -- I am not provable -- Proof pairs -- The property of being a theorem of TNT (is not recursive!) -- Arithmetizing substitution -- How can a TNT sentence refer to itself? -- Fixed point -- Consistency and omega-consistency -- Proving G1 -- Rosser's proof -- The unprovability of consistency and the immediate consequences of G1 and G2 -- Technical interlude -- Immediate consequences of G1 and G2 -- Undecidable 1 and undecidable 2 -- Essential incompleteness, or the syndicate of mathematicians -- Robinson arithmetic -- How general are Gödel's results? -- Bits of Turing machines -- G1 and G2 in general -- Unexpected fish in the formal net -- Supernatural numbers -- The culpability of the induction scheme -- Bits of truth (not too much of it, though) -- The world after Gödel -- Bourgeois mathematicians! : the postmodern interpretations -- What is postmodernism? -- From Gödel to Lenin -- Is biblical proof decidable? -- Speaking of the totality -- Bourgeois teachers! -- (Un)interesting bifurcations -- A footnote to Plato -- Explorers in the realm of numbers -- The essence of a life -- The philosophical prejudices of our times -- From Gödel to Tarski -- Human, too human -- Mathematical faith -- I'm not crazy! -- Qualified doubts -- From Gentzen to the dialectica interpretation -- Mathematicians are people of faith -- Mind versus computer : Gödel and artificial intelligence -- Is mind (just) a program? -- Seeing the truth and going outside the system -- The basic mistake -- In the haze of the transfinite -- Know thyself : Socrates and the inexhaustibility of mathematics -- Gödel versus Wittgenstein and the paraconsistent interpretation -- When geniuses meet -- The implausible Wittgenstein -- There is no metamathematics -- Proof and prose -- The single argument -- But how can arithmetic be inconsistent? -- The costs and benefits of making Wittgenstein plausible.
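The Gödel numbering in the chapter list above can be made concrete with the classic prime-power coding: a sequence of positive symbol codes c₁, …, cₖ becomes 2^c₁ · 3^c₂ · 5^c₃ · …, and unique factorization makes the coding reversible. A minimal sketch:

```python
def primes():
    # Naive prime generator (trial division) -- fine for short sequences.
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(codes):
    # codes: sequence of positive integers (a code of 0 would truncate decoding).
    g = 1
    for p, c in zip(primes(), codes):
        g *= p ** c
    return g

def decode(g):
    # Recover the code sequence by reading off prime exponents.
    codes = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        codes.append(e)
    return codes
```

With formulas coded as numbers, syntactic properties ("is an axiom", "is a proof of") become arithmetical properties of numbers, which is the arithmetization of syntax the book walks through.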
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrovian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow's theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow's theorem, our main aim is to identify the analogue of Arrow's theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
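The tension such impossibility theorems capture already shows up in the three-judge "discursive dilemma": propositionwise majority voting on p, q, and p∧q can output a collectively inconsistent judgment set even though every individual judge is consistent. A quick check:

```python
# Each judge holds a consistent judgment set on p, q and their conjunction.
judges = [
    {'p': True,  'q': True,  'p&q': True},
    {'p': True,  'q': False, 'p&q': False},
    {'p': False, 'q': True,  'p&q': False},
]

def majority(prop):
    # Propositionwise majority over the profile of judges.
    return sum(j[prop] for j in judges) > len(judges) / 2

collective = {prop: majority(prop) for prop in ['p', 'q', 'p&q']}

# Every individual is consistent ...
assert all(j['p&q'] == (j['p'] and j['q']) for j in judges)
# ... but the propositionwise majority accepts p and q while rejecting p∧q.
assert collective == {'p': True, 'q': True, 'p&q': False}
```

The impossibility theorems generalize this observation: under conditions such as systematicity or independence, no aggregation rule can guarantee consistent collective judgments for all consistent profiles.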
Alan Turing anticipated many areas of current research in computer and cognitive science. This article outlines his contributions to Artificial Intelligence, connectionism, hypercomputation, and Artificial Life, and also describes Turing's pioneering role in the development of electronic stored-program digital computers. It locates the origins of Artificial Intelligence in postwar Britain. It examines the intellectual connections between the work of Turing and of Wittgenstein in respect of their views on cognition, on machine intelligence, and on the relation between provability and truth. We criticise widespread and influential misunderstandings of the Church–Turing thesis and of the halting theorem. We also explore the idea of hypercomputation, outlining a number of notional machines that compute the uncomputable.
Various sources in the literature claim that the deduction theorem does not hold for normal modal or epistemic logic, whereas others present versions of the deduction theorem for several normal modal systems. It is shown here that the apparent problem arises from an objectionable notion of derivability from assumptions in an axiomatic system. When a traditional Hilbert-type system of axiomatic logic is generalized into a system for derivations from assumptions, the necessitation rule has to be modified in a way that restricts its use to cases in which the premiss does not depend on assumptions. This restriction is entirely analogous to the restriction of the rule of universal generalization of first-order logic. A necessitation rule with this restriction permits a proof of the deduction theorem in its usual formulation. Other suggestions presented in the literature to deal with the problem are reviewed, and the present solution is argued to be preferable to the other alternatives. A contraction- and cut-free sequent calculus equivalent to the Hilbert system for basic modal logic shows the standard failure argument untenable by proving the underivability of □A from A.
Conway and Kochen have presented a “free will theorem” [4, 6] which they claim shows that “if indeed we humans have free will, then [so do] elementary particles.” In a more precise fashion, they claim it shows that for certain quantum experiments in which the experimenters can choose between several options, no deterministic or stochastic model can account for the observed outcomes without violating a condition “MIN” motivated by relativistic symmetry. We point out that for stochastic models this conclusion is not correct, while for deterministic models it is not new. In the way the free will theorem is formulated and proved, it only concerns deterministic models. But Conway and Kochen have argued [4, 5, 6, 7] that “randomness can’t help,” meaning that stochastic models are excluded as well if we insist on the conditions “SPIN”, “TWIN”, and “MIN”. We point out a mistake in their argument. Namely, the theorem is of the form “deterministic model with SPIN & TWIN & MIN ⇒ contradiction” (1), and in order to derive the further claim, which is of the form “stochastic model with SPIN & TWIN & MIN ⇒ contradiction” (2), Conway and Kochen propose a method for converting any stochastic model into a deterministic one.
Thus, despite the difficulty of higher-order automated theorem proving, which has to deal with problems like the undecidability of higher-order unification (HOU) and the need for primitive substitution, there are proof problems which lie beyond the capabilities of first-order theorem provers but can be solved easily by a higher-order theorem prover (HOATP) like Leo. This is due to the expressiveness of higher-order logic and, in the special case of Leo, due to an appropriate handling of the extensionality principles (functional extensionality and extensionality on truth values).