Many powerful logics exist today for reasoning about multi-agent systems, but in most of them it is hard to reason about an infinite or indeterminate number of agents. Moreover, the naming schemes used in these logics often lack the expressiveness to name agents in an intuitive way. To obtain a more expressive language for multi-agent reasoning and a better naming scheme for agents, we introduce a family of logics called term-modal logics. A main feature of our logics is the use of modal operators indexed by the terms of the logic. Thus, one can quantify over variables occurring in modal operators. In term-modal logics, agents can be represented by terms, and knowledge of agents is expressed with formulas within the scope of modal operators.
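As a purely illustrative sketch (the concrete syntax is our assumption, not taken from the abstract), a term-modal formula quantifying into a modal operator might be written:

```latex
% "Every agent knows that some agent knows phi": the epistemic operator
% \Box_x is indexed by the term x, which is bound by a quantifier outside it.
\forall x\,\bigl(\mathrm{Agent}(x) \to \Box_x\, \exists y\,(\mathrm{Agent}(y) \wedge \Box_y\, \varphi)\bigr)
```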
Soviet work and family kollektives were the substance of official public life in Soviet Russia. Beginning in the late 1950s, the informal public sphere gradually emerged from both private and privatized official settings, differentiating itself from them: the sphere of social practices regulated by the unwritten codes of everyday moral economy.
The question that is the subject of this article is not intended to be a sociological or statistical question about the practice of today’s mathematicians, but a philosophical question about the nature of mathematics, and specifically the method of mathematics. Since antiquity, saying that mathematics is problem solving has been an expression of the view that the method of mathematics is the analytic method, while saying that mathematics is theorem proving has been an expression of the view that the method of mathematics is the axiomatic method. In this article it is argued that these two views of the mathematical method are really opposed. In order to answer the question whether mathematics is problem solving or theorem proving, the article retraces the Greek origins of the question and Hilbert’s answer. Then it argues that, by Gödel’s incompleteness results and for other reasons, only the view that mathematics is problem solving is tenable.
It is quite common to object to an argument by saying that it “proves too much.” In this paper, I argue that the “proving too much” charge can be understood in at least three different ways. I explain these three interpretations of the “proving too much” charge. I urge anyone who is inclined to level the “proving too much” charge against an argument to think about which interpretation of that charge one has in mind.
In this chapter we introduce concepts for analyzing proofs, and for analyzing undergraduate and beginning graduate mathematics students’ proving abilities. We discuss how coordination of these two analyses can be used to improve students’ ability to construct proofs. For this purpose, we need a richer framework for keeping track of students’ progress than the everyday one used by mathematicians. We need to know more than that a particular student can, or cannot, prove theorems by induction or contradiction, or can, or cannot, prove certain theorems in beginning set theory or analysis. It is more useful to describe a student’s work in terms of a finer-grained framework that includes various smaller abilities that contribute to proving and that can be learned in differing ways and at differing periods of a student’s development.
This book contains an introduction to symbolic logic and a thorough discussion of mechanical theorem proving and its applications. The book consists of three major parts. Chapters 2 and 3 constitute an introduction to symbolic logic. Chapters 4–9 introduce several techniques in mechanical theorem proving, and Chapters 10 and 11 show how theorem proving can be applied to various areas such as question answering, problem solving, program analysis, and program synthesis.
Examples in the history of Automated Theorem Proving are given, in order to show that even a seemingly ‘mechanical’ activity, such as deductive inference drawing, involves special cultural features and tacit knowledge. Mechanisation of reasoning is thus regarded as a complex undertaking in ‘cultural pruning’ of human-oriented reasoning. Sociological counterparts of this passage from human- to machine-oriented reasoning are discussed, by focusing on problems of man-machine interaction in the area of computer-assisted proof processing.
Kurt Gödel wrote (1964, p. 272), after he had read Husserl, that the notion of objectivity raises a question: “the question of the objective existence of the objects of mathematical intuition (which, incidentally, is an exact replica of the question of the objective existence of the outer world)”. This “exact replica” brings to mind the close analogy Husserl saw between our intuition of essences in Wesensschau and of physical objects in perception. What is it like to experience a mathematical proving process? What is the ontological status of a mathematical proof? Can computer-assisted provers output a proof? Taking a naturalized world account, I will assess the relationship between mathematics, the physical world and consciousness by introducing a significant conceptual distinction between proving and proof. I will propose that proving is a phenomenological conscious experience. This experience involves a combination of what Kurt Gödel called intuition, and what Husserl called intentionality. In contrast, proof is a function of that process — the mathematical phenomenon — that objectively self-presents a property in the world, and that results from a spatiotemporal unity being subject to the exact laws of nature. In this essay, I apply phenomenology to mathematical proving as a performance of consciousness, that is, a lived experience expressed and formalized in language, in which there is the possibility of formulating intersubjectively shareable meanings.
A new method for obtaining lower bounds on the computational complexity of logical theories is presented. It extends widely used techniques for proving the undecidability of theories by interpreting models of a theory already known to be undecidable. New inseparability results related to the well known inseparability result of Trakhtenbrot and Vaught are the foundation of the method. Their use yields hereditary lower bounds. By means of interpretations, lower bounds can be transferred from one theory to another. Complicated machine codings are replaced by much simpler definability considerations, viz., the kinds of binary relations definable with short formulas on large finite sets. Numerous examples are given, including new proofs of essentially all previously known lower bounds for theories, and lower bounds for various theories of finite trees, which turn out to be particularly useful.
The history of building automated theorem provers for higher-order logic is almost as old as the field of deduction systems itself. The first successful attempts to mechanize and implement higher-order logic were those of Huet and of Jensen and Pietrzykowski. They combine the resolution principle for higher-order logic with higher-order unification. The unification problem in typed λ-calculi is much more complex than that for first-order terms, since it has to take the theory of αβη-equality into account. As a consequence, the higher-order unification problem is undecidable, and sets of solutions need not even have most general elements that represent them. Thus the mentioned calculi for higher-order logic have had to take special measures to circumvent the problems posed by the theoretical complexity of higher-order unification. In this paper, we exemplify the methods and the proof- and model-theoretic tools needed for extending first-order automated theorem proving to higher-order logic. For the sake of simplicity we take the tableau method as a basis (for a general introduction to first-order tableaux see part I.1) and discuss the higher-order tableau calculi HT and HTE. The methods in this paper also apply to higher-order resolution calculi [1, 13, 6] or the higher-order matings method of Peter, which extend their first-order counterparts in much the same way. Since higher-order calculi cannot be complete for the standard semantics by Gödel’s incompleteness theorem, only the weaker notion of Henkin models leads to a meaningful notion of completeness in higher-order logic. It turns out that the calculi in [1, 13, 3, 19] are not Henkin-complete, since they fail to capture the extensionality principles of higher-order logic. We characterize the deductive power of our calculus HT (which is roughly equivalent to these calculi) by the semantics of functional Σ-models. To arrive at a calculus that is complete with respect to Henkin models, we build on earlier ideas and augment HT with tableau construction rules that use the extensionality principles in a goal-oriented way.
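The lack of most general unifiers mentioned above can be illustrated with a textbook example (not taken from this paper):

```latex
% The flex-rigid unification problem
F\,a \;=_{\alpha\beta\eta}\; a
% is solved, among others, by the two incomparable substitutions
F \mapsto \lambda x.\,x \qquad\text{and}\qquad F \mapsto \lambda x.\,a,
% neither of which is an instance of the other, so no single most
% general unifier represents the solution set.
```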
The Web may critically transform the way we understand the activity of proving. The Web as a collaborative medium allows the active participation of people with different backgrounds, interests, viewpoints, and styles. Mathematical formal proofs are inadequate for capturing Web-based proofs. This article claims that Web provings can be studied as a particular type of Goguen's proof-events. Web-based proof-events have a social component, communication medium, prover-interpreter interaction, interpretation process, understanding and validation, historical component, and styles. To demonstrate its claim, the article discusses the Kumo and Polymath projects, both of which employ Web-based communication as part of proving. Web proving is a novel type of proving activity that may have a serious impact on the change in mathematical practices, despite the fact that it is not currently a universally acceptable methodology.
Kant claims that Aristotle's logic is complete. I explain the historical and philosophical considerations that commit him to proving the completeness claim, and sketch the proof based on materials from his logic corpus. The proof will turn out to be an integral part of Kant’s larger reform of formal logic in response to a foundational crisis facing it.
Reviewed Works: Reuben Hersh, Proving is Convincing and Explaining. Philip J. Davis, Visual Theorems. Gila Hanna, H. Niels Jahnke, Proof and Application. Daniel Chazan, High School Geometry Students' Justification for Their Views of Empirical Evidence and Mathematical Proof.
This paper introduces and describes new protocols for proving knowledge of secrets without giving them away: if the verifier does not know the secret, he does not learn it. This can all be done using only one-way hash functions. If the use of encryption is also allowed, these goals can be reached in a more efficient way. We extend and use the GNY authentication logic to prove the correctness of these protocols.
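The one-way-hash-function ingredient can be illustrated with a toy commitment scheme. This is emphatically not the paper's protocol (whose details are not given here), only a minimal sketch, and the function names `commit` and `open_commitment` are ours:

```python
import hashlib
import os

# Toy hash commitment: the prover binds herself to a secret value without
# revealing it, and can later open the commitment for the verifier.
# Illustrative sketch only -- NOT the protocols described in the paper.

def commit(secret: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce). The random nonce blinds the secret so a
    verifier cannot brute-force low-entropy secrets from the hash alone."""
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + secret).digest()
    return digest, nonce

def open_commitment(commitment: bytes, nonce: bytes, secret: bytes) -> bool:
    """Verifier recomputes the hash over the opened values and compares."""
    return hashlib.sha256(nonce + secret).digest() == commitment

c, n = commit(b"my secret")
assert open_commitment(c, n, b"my secret")         # honest opening verifies
assert not open_commitment(c, n, b"other secret")  # binding: cannot reopen differently
```

The hiding property rests on the one-wayness of the hash, the binding property on its collision resistance, which is the general flavor of hash-only constructions like those the abstract mentions.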
In this paper we focus on theorem proving for conditional logics. First, we give a detailed description of CondLean, a theorem prover for some standard conditional logics. CondLean is a SICStus Prolog implementation of some recently introduced labeled sequent calculi for conditional logics. It is inspired by the so-called “lean” methodology, even if it does not fit this style in a rigorous manner. CondLean also comprises a graphical interface written in Java. Furthermore, we introduce a goal-directed proof search mechanism, derived from the above-mentioned sequent calculi and based on the notion of uniform proofs. Finally, we describe GOALDUCK, a simple SICStus Prolog implementation of the goal-directed calculus mentioned here. Both programs, CondLean and GOALDUCK, together with their source code, are available for free download at.
The aim of this study is to investigate students’ conceptions of proof in mathematics and mathematics teaching. A five-point Likert-type questionnaire was administered in order to gather data. The sample of the study included 33 first-year secondary school mathematics students. The data collected were analysed and interpreted using the methods of qualitative and quantitative analysis. The results reveal that the students think that mathematical proof has an important place in mathematics and mathematics education. The students’ studying methods for exams are based on imitative reasoning, which can be described as a type of reasoning built on copying a proof, for example by looking at a proof in a textbook or course notes, or by remembering a proof algorithm. Moreover, they pointed to the differences between mathematics taught in high school and at university as the main cause of their difficulties in proof and proving.
ABSTRACT We are concerned with theorem proving in annotated logics. Using annotated polynomials to express knowledge, we develop an inference rule, superposition. A proof procedure is then presented, and an improvement named M-strategy is described. This proof procedure uses single overlaps instead of multiple overlaps, and, above all, both the proof procedure and M-strategy are refutationally complete.
In Faith, Reason and the Existence of God, Denys Turner defends the possibility of proving God’s existence on Christian and philosophical grounds. He responds to Kantian objections by developing a theory of reason derived from Thomas Aquinas. Turner’s work shifts the debate about God’s existence to the problem of determining which concept of reason is correct. I argue that this problem is extremely difficult and perhaps insoluble, because it requires using reason to resolve a dispute about reason. Consequently, Turner’s claims cannot be accepted without reservation.
Accessing knowledge of a single knowledge source with different client applications often requires the help of mediator systems as middleware components. In the domain of theorem proving, large efforts have been made to formalize knowledge for mathematics and verification issues, and to structure it in databases. But these databases are either specialized for a single client, or, if the knowledge is stored in a general database, the services this database can provide are usually limited and hard to adjust for a particular theorem prover. Only recently have there been initiatives to flexibly connect existing theorem proving systems into networked environments that contain large knowledge bases. An intermediate layer containing both search and proving functionality can be used to mediate between the two. In this paper we motivate the need and discuss the requirements for mediators between mathematical knowledge bases and theorem proving systems. We also present an attempt at a concurrent mediator between a knowledge base and a proof planning system.
Demonstrates the validity of the measure presented in "Estimating Semantic Content" on textbook examples using (binary) resolution [a generalization of disjunctive syllogism] theorem proving; the measure is based on logical probability and is the mirror image of logical form; it dates to Popper.
We combine state-of-the-art techniques from computational linguistics and theorem proving to build an engine for playing text adventures, computer games with which the player interacts purely through natural language. The system employs a parser for dependency grammar and a generation system based on TAG, and has components for resolving and generating referring expressions. Most of these modules make heavy use of inferences offered by a modern theorem prover for description logic. Our game engine solves some problems inherent in classical text adventures, and is an interesting test case for the interaction between natural language processing and inference.
We present a method for integrating rippling-based rewriting into matrix-based theorem proving as a means for automating inductive specification proofs. The selection of connections in an inductive matrix proof is guided by symmetries between induction hypothesis and induction conclusion. Unification is extended by decision procedures and a rippling/reverse-rippling heuristic. Conditional substitutions are generated whenever a uniform substitution is impossible. We illustrate the integrated method by discussing several inductive proofs for the integer square root problem as well as the algorithms extracted from these proofs.
THINKER is an automated natural deduction first-order theorem proving program. This paper reports on how it was adapted so as to prove theorems in modal logic. The method employed is an indirect semantic method, obtained by considering the semantic conditions involved in being a valid argument in these modal logics. The method is extended from propositional modal logic to predicate modal logic, and issues concerning the domain of quantification and existence in a world's domain are discussed. Finally, we look at the very interesting issues involved with adding identity to the theorem prover in the realm of modal predicate logic. Various alternatives are discussed.
Formal verification involves the use of logical and computational methods to establish claims that are expressed in precise mathematical terms. These can include ordinary mathematical theorems, as well as claims that pieces of hardware or software, network protocols, and mechanical and hybrid systems meet their specifications. In practice, there is not a sharp distinction between verifying a piece of mathematics and verifying the correctness of a system: formal verification requires describing hardware and software systems in mathematical terms, at which point establishing claims as to their correctness becomes a form of theorem proving. Conversely, the proof of a mathematical theorem may require a lengthy computation, in which case verifying the truth of the theorem requires verifying that the computation does what it is supposed to do. The gold standard for supporting a mathematical claim is to provide a proof, and twentieth-century developments in logic show that most if not all conventional proof methods can be reduced to a small set of axioms and rules in any of a number of foundational systems. With this reduction, there are two ways that a computer can help establish a claim: it can help find a proof in the first place, and it can help verify that a purported proof is correct. Automated theorem proving focuses on the “finding” aspect. Resolution theorem provers, tableau theorem provers, fast satisfiability solvers, and so on provide means of establishing the validity of formulas in propositional and first-order logic. Other systems provide search procedures and decision procedures for specific languages and domains, such as linear or nonlinear expressions over the integers or the real numbers. Architectures like SMT combine domain-general search methods with domain-specific procedures.
Computer algebra systems and specialized mathematical software packages provide means of carrying out mathematical computations, establishing mathematical bounds, or finding mathematical objects. A calculation can be viewed as a proof as well, and these systems, too, help establish mathematical claims. Automated reasoning systems strive for power and efficiency, often at the expense of guaranteed soundness. Such systems can have bugs, and it can be difficult to ensure that the results they deliver are correct. In contrast, interactive theorem proving focuses on the “verification” aspect of theorem proving, requiring that every claim is supported by a proof in a suitable axiomatic foundation. This sets a very high standard: every rule of inference and every step of a calculation has to be justified by appealing to prior definitions and theorems, all the way down to basic axioms and rules. In fact, most such systems provide fully elaborated “proof objects” that can be communicated to other systems and checked independently. Constructing such proofs typically requires much more input and interaction from users, but it allows us to obtain deeper and more complex proofs. The Lean Theorem Prover aims to bridge the gap between interactive and automated theorem proving, by situating automated tools and methods in a framework that supports user interaction and the construction of fully specified axiomatic proofs. The goal is to support both mathematical reasoning and reasoning about complex systems, and to verify claims in both domains.
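The resolution method mentioned above can be sketched in a few lines. This is a generic textbook refutation prover, not any particular system discussed in the text; the encoding of literals as signed integers is our own convention:

```python
# Minimal propositional resolution refutation prover (illustrative sketch).
# Literals are nonzero ints; negation is arithmetic negation; a clause is a
# frozenset of literals. To prove C from premises, refute premises + {not C}.

def resolve(c1, c2):
    """All resolvents of two clauses: for each complementary literal pair,
    merge what remains of both clauses."""
    out = []
    for lit in c1:
        if -lit in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {-lit})))
    return out

def unsatisfiable(clauses):
    """Saturate the clause set under resolution; True iff the empty clause
    (a contradiction) is derived."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:
                            return True   # empty clause: refutation found
                        new.add(r)
        if new <= clauses:
            return False                  # fixpoint without a contradiction
        clauses |= new

# Disjunctive syllogism as resolution: from (p or q) and (not p), infer q.
premises = [{1, 2}, {-1}]                       # p∨q, ¬p  (1 = p, 2 = q)
assert unsatisfiable(premises + [{-2}])         # adding ¬q yields a refutation, so q follows
assert not unsatisfiable(premises + [{2}])      # adding q does not; ¬q does not follow
```

Real resolution provers add subsumption, clause indexing, and first-order unification on top of this saturation loop, but the refutation principle is the same.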
This book is an essay in systematic metaphysics. Its ambitious aim is to present an a posteriori causal proof for the existence of God. In addition to its metaphysical assault on the problem of proving God's existence, it contains forays into epistemology, philosophy of language, philosophy of logic and philosophy of science. As a result, the book is a rich and complex tapestry of argumentation that well illustrates its author's contention that philosophy is a "seamless web". Braine has evidently been influenced by Gilson's interpretation of Aquinas. But though his argument bears a family resemblance to Aquinas's Prime Mover Argument, it is a distinct and original philosophical construction.
A Prolog implementation of a new theorem-prover for first-order classical logic is described. The prover is based on the calculus KE and the rules used for analysing quantifiers in free variable semantic tableaux. A formal specification of the rules used in the implementation is described, for which soundness and completeness is straightforwardly verified. The prover has been tested on the first 47 problems of the Pelletier set, and its performance compared with a state of the art semantic tableaux theorem-prover. It has also been applied to model building in a prototype system for logical animation, a technique for symbolic execution which can be used for validation. The interest of these experiments is that they demonstrate the value of certain “characteristics” of the KE calculus, such as the significant space-saving in theorem-proving, the mutual inconsistency of open branches in KE trees, and the relation of the KE rules to “traditional” forms of reasoning.
Uniform proof systems have recently been proposed [Mil91] as a proof-theoretic foundation and generalization of logic programming. In [Mom92a] an extension with constructive negation is presented, preserving the nature of an abstract logic programming language. Here we adapt this approach to provide a complete theorem proving technique for minimal, intuitionistic and classical logic, which is totally goal-oriented and does not require any form of ancestry resolution. The key idea is to use the Gödel-Gentzen translation to embed those logics in the syntax of Hereditary Harrop formulae, for which uniform proofs are complete. We discuss some preliminary implementation issues.
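For reference, the Gödel-Gentzen negative translation is usually given by clauses like the following (the paper's exact variant is not shown in the abstract, so take these as the standard textbook formulation):

```latex
\begin{aligned}
P^{g} &= \neg\neg P \;\; (P \text{ atomic}) & (\bot)^{g} &= \bot\\
(A \land B)^{g} &= A^{g} \land B^{g} & (A \to B)^{g} &= A^{g} \to B^{g}\\
(A \lor B)^{g} &= \neg(\neg A^{g} \land \neg B^{g}) & (\forall x\, A)^{g} &= \forall x\, A^{g}\\
(\exists x\, A)^{g} &= \neg \forall x\, \neg A^{g}
\end{aligned}
```

A first-order formula $A$ is classically provable iff $A^{g}$ is intuitionistically provable, which is what makes such embeddings usable for goal-directed proof search in the target syntax.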
A new technique for proving realisability results is presented, and is illustrated in detail for the simple case of arithmetic minus induction. CL is a Gentzen formulation of classical logic. CPQ is CL minus the Cut Rule. The basic proof theory and model theory of CPQ and CL is developed. For the semantics presented CPQ is a paraconsistent logic, i.e. there are non-trivial CPQ models in which some sentences are both true and false. Two systems of arithmetic minus induction are introduced, CL-A and CPQ-A, based on CL and CPQ, respectively. The realisability theorem for CPQ-A is proved: It is shown constructively that to each theorem A of CPQ-A there is a formula A *, a so-called “realised disjunctive form of A ”, such that variables bound by essentially existential quantifiers in A * can be written as recursive functions of free variables and variables bound by essentially universal quantifiers. Realisability is then applied to prove the consistency of CL-A, making use of certain finite non-trivial inconsistent models of CPQ-A.
The theory PA(aa), which is Peano Arithmetic in the context of stationary logic, is shown to be consistent. Moreover, the first-order theory of the class of finitely determinate models of PA(aa) is characterized.
After the First World War, mathematics and the organisation of ballistic computations at Aberdeen Proving Ground changed considerably. This was the basis for the development of a number of computing aids that were constructed and used during the years 1920 to 1950. This article looks at how the computational organisation forms and changes the instruments of calculation. After the differential analyzer, relay-based machines were built by Bell Labs and, finally, the ENIAC, one of the first electronic computers, was built to satisfy the need for computational power in ballistics during the Second World War.