This paper discusses the general problem of translation functions between logics, given in axiomatic form, and in particular, the problem of determining when two such logics are "synonymous" or "translationally equivalent." We discuss a proposed formal definition of translational equivalence, show why it is reasonable, and also discuss its relation to earlier definitions in the literature. We also give a simple criterion for showing that two modal logics are not translationally equivalent, and apply this to well-known examples. Some philosophical morals are drawn concerning the possibility of having two logical systems that are "empirically distinct" but are both translationally equivalent to a common logic.
A notion of feasible function of finite type based on the typed lambda calculus is introduced which generalizes the familiar type 1 polynomial-time functions. An intuitionistic theory IPVω is presented for reasoning about these functions. Interpretations for IPVω are developed both in the style of Kreisel's modified realizability and Gödel's Dialectica interpretation. Applications include alternative proofs for Buss's results concerning the classical first-order system S¹₂ and its intuitionistic counterpart IS¹₂, as well as proofs of some of Buss's conjectures concerning IS¹₂, and a proof that IS¹₂ cannot prove that extended Frege systems are not polynomially bounded.
Michael Kremer defines fixed-point logics of truth based on Saul Kripke’s fixed-point semantics for languages expressing their own truth concepts. Kremer axiomatizes the strong Kleene fixed-point logic of truth and the weak Kleene fixed-point logic of truth, but leaves the axiomatizability question open for the supervaluation fixed-point logic of truth and its variants. We show that the principal supervaluation fixed-point logic of truth, when thought of as a consequence relation, is highly complex: it is not even analytic. We also consider variants, engendered by a stronger notion of ‘fixed point’, and by variant supervaluation schemes. A ‘logic’ is often thought of, not as a consequence relation, but as a set of sentences – the sentences true on each interpretation. We axiomatize the supervaluation fixed-point logics so conceived.
I discuss a collection of problems in relevance logic. The main problems discussed are: the decidability of the positive semilattice system, decidability of the fragments of R in a restricted number of variables, and the complexity of the decision problem for the implicational fragment of R. Some related problems are discussed along the way.
The lattices of the title generalize the concept of a De Morgan lattice. A representation in terms of ordered topological spaces is described. This topological duality is applied to describe homomorphisms, congruences, and subdirectly irreducible and free lattices in the category. In addition, certain equational subclasses are described in detail.
Craig's interpolation theorem fails for the propositional logics E of entailment, R of relevant implication and T of ticket entailment, as well as in a large class of related logics. This result is proved by a geometrical construction, using the fact that a non-Arguesian projective plane cannot be imbedded in a three-dimensional projective space. The same construction shows failure of the amalgamation property in many varieties of distributive lattice-ordered monoids.
This paper defines a category of bounded distributive lattice-ordered groupoids with a left-residual operation that corresponds to a weak system in the family of relevant logics. Algebras corresponding to stronger systems are obtained by adding further postulates. A duality theory piggy-backed on the Priestley duality theory for distributive lattices is developed for these algebras. The duality theory is then applied in providing characterizations of the dual spaces corresponding to stronger relevant logics.
Propositional proof complexity is the study of the sizes of propositional proofs, and more generally, the resources necessary to certify propositional tautologies. Questions about proof sizes have connections with computational complexity, theories of arithmetic, and satisfiability algorithms. This article includes a broad survey of the field, and a technical exposition of some recently developed techniques for proving lower bounds on proof sizes.
Quine has argued that modal logic began with the sin of confusing use and mention. Anderson and Belnap, on the other hand, have offered us a way out through a strategy of nominalization. This paper reviews the history of Lewis's early work in modal logic, and then proves some results about the system in which "A is necessary" is interpreted as "A is a classical tautology."
What I wish to propose in the present paper is a new form of “career induction” for ambitious young logicians. The basic problem is this: if we look at the n-variable fragments of relevant propositional logics, at what point does undecidability begin? Focus, to be definite, on the logic R. John Slaney showed that the 0-variable fragment of R contains exactly 3088 non-equivalent propositions, and so is clearly decidable. In the opposite direction, I claimed in my paper of 1984 that the five-variable fragment of R is undecidable. The proof given there was sketchy, and a close examination reveals that although the result claimed is true, the proof given is incorrect. In the present paper, I give a detailed and correct proof that the four-variable fragments of the principal relevant logics are undecidable. This leaves open the question of the decidability of the n-variable fragments for n = 1, 2, 3. At what point does undecidability set in?
We show that short bounded-depth Frege proofs of matrix identities, such as PQ=I⊃QP=I (over the field of two elements), imply short bounded-depth Frege proofs of the pigeonhole principle. Since the latter principle is known to require exponential-size bounded-depth Frege proofs, it follows that the propositional version of the matrix principle also requires bounded-depth Frege proofs of exponential size.
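The matrix principle at issue can be checked concretely in small dimensions. As a minimal illustration (this sketch and all names in it are ours, not from the paper), the following brute-force search over all sixteen 2×2 matrices with entries in the two-element field confirms that every left inverse is also a right inverse:

```python
import itertools

def mul(A, B):
    """Multiply 2x2 matrices with entries in GF(2)."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2 for j in range(2))
        for i in range(2)
    )

I2 = ((1, 0), (0, 1))  # the 2x2 identity matrix

# All 16 matrices over the field of two elements.
matrices = [((a, b), (c, d))
            for a, b, c, d in itertools.product((0, 1), repeat=4)]

# Search for a pair with PQ = I but QP != I; none should exist.
counterexamples = [(P, Q) for P in matrices for Q in matrices
                   if mul(P, Q) == I2 and mul(Q, P) != I2]
```

Of course, such finite checks only illustrate the principle; the paper concerns the size of *propositional proofs* of its instances, not its truth.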
The first part of the paper is devoted to surveying the remarks that philosophers and mathematicians such as Maddy, Hardy, Gowers, and Zeilberger have made about mathematical depth. The second part is devoted to the question of whether we can make the notion precise by a more formal proof-theoretical approach. The idea of measuring depth by the depth and bushiness of the proof is considered, and compared to the related notion of the depth of a chess combination.
An Ockham lattice is defined to be a distributive lattice with 0 and 1 which is equipped with a dual homomorphic operation. In this paper we prove: (1) The lattice of all equational classes of Ockham lattices is isomorphic to a lattice of easily described first-order theories and is uncountable, (2) every such equational class is generated by its finite members. In the proof of (2) a characterization of orderings of ω with respect to which the successor function is decreasing is given.
Henry M. Sheffer is well known to logicians for the discovery (or rather, the rediscovery) of the ‘Sheffer stroke’ of propositional logic. But what else did Sheffer contribute to logic? He published very little, though he is known to have been carrying on a rather mysterious research program in logic; the only substantial result of this research was the unpublished monograph The General Theory of Notational Relativity. The main aim of this paper is to explain, as far as possible (given the scanty evidence), the nature of Sheffer's program, and the reasons for its failure. The paper concludes with a discussion of Sheffer's only true logical descendant, C. H. Langford, and his contributions to model theory.
Hans Herzberger as a philosopher and logician has shown deep interest both in the philosophy of Gottlob Frege, and in the topic of the inexpressible and the ineffable. In the fall of 1982, he taught at the University of Toronto, together with André Gombay, a course on Frege's metaphysics, philosophy of language, and foundations of arithmetic. Again, in the fall of 1986, he taught a seminar on the philosophy of language that dealt with 'the limits of discursive symbolism in several domains of human experience.' The course description continues by saying: 'Special attention will be given to the paradoxes underlying various doctrines of the inexpressible and the tensions inherent in those paradoxes. Some doctrines of ...
Around 1989, a striking letter written in March 1956 from Kurt Gödel to John von Neumann came to light. It poses some problems about the complexity of algorithms; in particular, it asks a question that can be seen as the first formulation of the P=?NP question. This paper discusses some of the background to this letter, including von Neumann's own ideas on complexity theory. Von Neumann had already raised explicit questions about the complexity of Tarski's decision procedure for elementary algebra and geometry in a letter of 1949 to J. C. C. McKinsey. The paper concludes with a discussion of why theoretical computer science did not emerge as a separate discipline until the 1960s.
The method of analytic tableaux is employed in many introductory texts and has also been used quite extensively as a basis for automated theorem proving. In this paper, we discuss the complexity of the system as a method for refuting contradictory sets of clauses, and resolve several open questions. We discuss the three forms of analytic tableaux: clausal tableaux, generalized clausal tableaux, and binary tableaux. We resolve the relative complexity of these three forms of tableaux proofs and also resolve the relative complexity of analytic tableaux versus resolution. We show that there is a quasi-polynomial simulation of tree resolution by analytic tableaux; this simulation is close to optimal, since we give a matching lower bound that is tight to within a polynomial.
This paper investigates the depth of resolution proofs, that is to say, the length of the longest path in the proof from an input clause to the conclusion. An abstract characterization of the measure is given, as well as a discussion of its relation to other measures of space complexity for resolution proofs.