The standard relation of logical consequence allows for non-standard interpretations of logical constants, as was shown early on by Carnap. But then how can we learn the interpretations of logical constants, if not from the rules which govern their use? Answers in the literature have mostly consisted in devising clever rule formats going beyond the familiar 'what follows from what'. A more conservative answer is possible. We may be able to learn the correct interpretations from the standard rules, because the space of possible interpretations is a priori restricted by universal semantic principles. We show that this is indeed the case. The principles are familiar from modern formal semantics: compositionality, supplemented, for quantifiers, with topic-neutrality.
Starting from the familiar observation that no straightforward treatment of pure quotation can be compositional in the standard (homomorphism) sense, we introduce general compositionality, which can be described as compositionality that takes linguistic context into account. A formal notion of linguistic context type is developed, allowing the context type of a complex expression to be distinct from those of its constituents. We formulate natural conditions under which an ordinary meaning assignment can be non-trivially extended to one that is sensitive to context types and satisfies general compositionality. As our main example we work out a Fregean treatment of pure quotation, but we also indicate that the method applies to other kinds of context, e.g. intensional contexts.
The standard semantic definition of consequence with respect to a selected set X of symbols, in terms of truth preservation under replacement (Bolzano) or reinterpretation (Tarski) of symbols outside X, yields a function mapping X to a consequence relation ⇒_X. We investigate a function going in the other direction, thus extracting the constants of a given consequence relation, and we show that this function (a) retrieves the usual logical constants from the usual logical consequence relations, and (b) is an inverse to the Bolzano-Tarski function; more precisely, the two form a Galois connection.
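A toy sketch of my own (not from the paper) may make the Bolzano-Tarski definition concrete. In a tiny propositional language whose only symbols are the atoms p, q and conjunction, take X = {'and'}: the symbols outside X (the atoms) are freely reinterpreted by valuations, and Γ ⇒_X φ holds iff truth is preserved under all such reinterpretations. All names here (evalf, consequence) are my own illustrative choices:

```python
from itertools import product

ATOMS = ["p", "q"]

def evalf(phi, v):
    """Evaluate a formula under valuation v. Atoms are strings and get
    reinterpreted; 'and' is held fixed, since it belongs to X."""
    if isinstance(phi, str):
        return v[phi]
    op, left, right = phi
    assert op == "and"
    return evalf(left, v) and evalf(right, v)

def consequence(premises, phi):
    """Bolzano-style consequence w.r.t. X = {'and'}: truth is preserved
    under every reinterpretation (valuation) of the symbols outside X."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(evalf(p, v) for p in premises) and not evalf(phi, v):
            return False
    return True

print(consequence([("and", "p", "q")], "p"))  # True: p ∧ q ⇒ p
print(consequence(["p"], ("and", "p", "q")))  # False: p does not entail p ∧ q
```

Enlarging X shrinks the class of admissible reinterpretations and so enlarges the consequence relation, which is the monotonicity underlying the Galois connection mentioned in the abstract.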
Quantification is a topic which brings together linguistics, logic, and philosophy. Quantifiers are the essential tools with which, in language or logic, we refer to quantity of things or amount of stuff. In English they include such expressions as no, some, all, both, many. Peters and Westerstahl present the definitive interdisciplinary exploration of how they work: their syntax, semantics, and inferential role.
Bolzano’s definition of consequence in effect associates with each set X of symbols (in a given interpreted language) a consequence relation ⇒_X. We present this in a precise and abstract form, in particular studying minimal sets of symbols generating ⇒_X. Then we present a method for going in the other direction: extracting from an arbitrary consequence relation its associated set C of constants. We show that this returns the expected logical constants from familiar consequence relations, and that, restricting attention to sets of symbols satisfying a strong minimality condition, there is an isomorphism between the set of strongly minimal sets of symbols and the set of corresponding consequence relations (both ordered under inclusion).
This note explains the circumstances under which a type ⟨1⟩ quantifier can be decomposed into a type ⟨1,1⟩ quantifier and a set, by fixing the first argument of the former to the latter. The motivation comes from the semantics of Noun Phrases (also called Determiner Phrases) in natural languages, but in this article, I focus on the logical facts. However, my examples are drawn from quantifiers appearing in natural languages, and at the end, I sketch two more principled linguistic applications.
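The decomposition can be sketched on finite universes (a minimal illustration of mine, not the paper's formalism): a type ⟨1,1⟩ determiner D applied to a fixed restriction set A yields the type ⟨1⟩ quantifier B ↦ D(A, B), as when 'every' plus the set of students gives the NP meaning 'every student'. The names freeze, students, runners are hypothetical:

```python
def every(A, B):
    """Type <1,1> determiner 'every': every A is a B, i.e. A is a subset of B."""
    return A <= B

def freeze(D, A):
    """Fix the first (restriction) argument of a type <1,1> quantifier D
    to the set A, yielding a type <1> quantifier on the universe."""
    return lambda B: D(A, B)

students = {"ann", "bo"}
runners = {"ann", "bo", "cy"}

every_student = freeze(every, students)  # a type <1> quantifier (an NP meaning)
print(every_student(runners))  # True: every student runs
```

The paper's question, in these terms, is when a given type ⟨1⟩ quantifier arises as freeze(D, A) for some determiner D and set A.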
The paper elaborates two points: (i) there is no principled opposition between predicate logic and adherence to subject-predicate form; (ii) Aristotle's treatment of quantifiers fits well into a modern study of generalized quantifiers.
This is a reply to H. Ben-Yami, 'Generalized quantifiers, and beyond' (this journal, 2009), where he argues that standard GQ theory does not explain why natural language quantifiers have a restricted domain of quantification. I argue, on the other hand, that although GQ theory gives no deep explanation of this fact, it does give a sort of explanation, whereas Ben-Yami's suggested alternative is no improvement.
We study generalized quantifiers on finite structures. With every function f : ℕ → ℕ we associate a quantifier Q_f by letting Q_f x φ say that there are at least f(n) elements x satisfying φ, where n is the size of the universe. This is the general form of what is known as a monotone quantifier of type ⟨1⟩. We study so-called polyadic lifts of such quantifiers. The particular lifts we consider are Ramseyfication, branching and resumption. In each case we get exact criteria for definability of the lift in terms of simpler quantifiers.
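On a finite universe the base notion is easy to make executable (a sketch of my own, with f(n) = ⌈n/2⌉ giving the quantifier 'at least half'; the names Q_f and at_least_half are illustrative):

```python
import math

def Q_f(f, universe, phi):
    """Type <1> monotone quantifier: true iff at least f(n) elements of the
    universe satisfy phi, where n = |universe|."""
    n = len(universe)
    return sum(1 for x in universe if phi(x)) >= f(n)

at_least_half = lambda n: math.ceil(n / 2)  # f(n) = ceil(n/2)

universe = {1, 2, 3, 4, 5}
print(Q_f(at_least_half, universe, lambda x: x % 2 == 1))  # True: 3 odds >= 3
print(Q_f(at_least_half, universe, lambda x: x % 2 == 0))  # False: 2 evens < 3
```

The polyadic lifts studied in the paper (Ramseyfication, branching, resumption) build quantifiers of higher type out of such Q_f, which this sketch does not attempt to cover.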
We study definability in terms of monotone generalized quantifiers satisfying Isomorphism Closure, Conservativity and Extension. Among the quantifiers with the latter three properties - here called CE quantifiers - one finds the interpretations of determiner phrases in natural languages. The property of monotonicity is also linguistically ubiquitous, though some determiners like an even number of are highly non-monotone. They are nevertheless definable in terms of monotone CE quantifiers: we give a necessary and sufficient condition for such definability. We further identify a stronger form of monotonicity, called smoothness, which also has linguistic relevance, and we extend our considerations to smooth quantifiers. The results lead us to propose two tentative universals concerning monotonicity and natural language quantification. The notions involved as well as our proofs are presented using a graphical representation of quantifiers in the so-called number triangle.
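A small sketch of the number-triangle view (my own illustration, not the paper's notation): on finite universes a CE quantifier is determined by the pair (|A − B|, |A ∩ B|), and right upward monotonicity amounts to truth spreading along a row of the triangle, since enlarging B turns (a, b) into (a − 1, b + 1). The function name and bound max_n are illustrative assumptions:

```python
def is_upward_monotone(Q, max_n=8):
    """Q(a, b) reads: the quantifier holds when |A - B| = a and |A ∩ B| = b.
    Right upward monotone: if Q holds at (a, b) with a >= 1, it must also
    hold at (a - 1, b + 1). Checked exhaustively up to universe size max_n."""
    for n in range(max_n + 1):
        for a in range(1, n + 1):
            b = n - a
            if Q(a, b) and not Q(a - 1, b + 1):
                return False
    return True

every = lambda a, b: a == 0               # every A is B: A - B is empty
an_even_number = lambda a, b: b % 2 == 0  # an even number of A are B

print(is_upward_monotone(every))           # True
print(is_upward_monotone(an_even_number))  # False: highly non-monotone
```

This makes concrete why a determiner like 'an even number of' fails monotonicity: truth and falsity alternate along every row of the triangle.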
A semantics may be compositional and yet partial, in the sense that not all well-formed expressions are assigned meanings by it. Examples come from both natural and formal languages. When can such a semantics be extended to a total one, preserving compositionality? This sort of extension problem was formulated by Hodges, and solved there in a particular case, in which the total extension respects a precise version of the Fregean dictum that the meaning of an expression is the contribution it makes to the meanings of complex phrases of which it is a part. Hodges' result presupposes the so-called Husserl property, which says roughly that synonymous expressions must have the same category. Here I solve a different version of the compositional extension problem, corresponding to another type of linguistic situation in which we only have a partial semantics, and without assuming the Husserl property. I also briefly compare Hodges' framework for grammars in terms of partial algebras with more familiar ones, going back to Montague, which use many-sorted algebras instead.
A new formalism for predicate logic is introduced, with a non-standard method of binding variables, which allows a compositional formalization of certain anaphoric constructions, including donkey sentences and cross-sentential anaphora. A proof system in natural deduction format is provided, and the formalism is compared with other accounts of this type of anaphora, in particular Dynamic Predicate Logic.
A common misunderstanding is that there is something logically amiss with the classical square of opposition, and that the problem is related to Aristotle’s and medieval philosophers’ rejection of empty terms. But [Parsons 2004] convincingly shows that most of these philosophers did not in fact reject empty terms, and that, when properly understood, there are no logical problems with the classical square. Instead, the classical square, compared to its modern version, raises the issue of the existential import of words like all, a semantic issue. I argue that the modern square is more interesting than Parsons allows, because it presents, in contrast with the classical square, notions of negation that are ubiquitous in natural languages. This is an indirect logical argument against interpreting all with existential import. I also discuss some linguistic matters bearing on the latter issue.
It is not unreasonable to think that the dispute between classical and intuitionistic mathematics might be unresolvable or 'faultless', in the sense of there being no objective way to settle it. If so, we would have a pretty case of relativism. In this note I argue, however, that there is in fact not even disagreement in any interesting sense, let alone a faultless one, in spite of appearances and claims to the contrary. A position I call classical pluralism is sketched, intended to provide a coherent methodological stance towards the issue. Some reasons to recommend this stance are given, as well as some speculations as to why not everyone might want to follow the recommendation.