
Cognition

Volume 86, Issue 3, January 2003, Pages 223-251

Reasoning with quantifiers

https://doi.org/10.1016/S0010-0277(02)00180-4

Abstract

In the semantics of natural language, quantification may have received more attention than any other subject, and one of the main topics in psychological studies on deductive reasoning is syllogistic inference, which is just a restricted form of reasoning with quantifiers. But thus far the semantical and psychological enterprises have remained disconnected. This paper aims to show how our understanding of syllogistic reasoning may benefit from semantical research on quantification. I present a very simple logic that pivots on the monotonicity properties of quantified statements – properties that are known to be crucial not only to quantification but to a much wider range of semantical phenomena. This logic is shown to account for the experimental evidence available in the literature as well as for the data from a new experiment with cardinal quantifiers (“at least n” and “at most n”), which cannot be explained by any other theory of syllogistic reasoning.

Introduction

In logic, inference and interpretation are always closely tied together. Consider, for example, the standard inference rules associated with conjunctive sentences:

&-introduction allows a sentence of the form “ϕ & ψ” to be derived whenever ϕ and ψ are given, and &-exploitation licenses the derivation of either conjunct of “ϕ & ψ”. Of course, this is what one should expect in view of the meaning of “&”, which is that “ϕ & ψ” is false unless ϕ is true and ψ is true. In logic, the search for a system of inference is usually guided by a (possibly informal) construal of a set of logical constants, and inference rules are judged by the constraints they impose on the interpretation of such logical vocabulary as they involve. Not that such customs are particularly remarkable, for there clearly must be an intimate connection between the meaning of an expression and valid arguments which make essential use of that expression. What is remarkable is that such connections have not played an equally central part in the psychological study of deductive reasoning, and especially of syllogistic reasoning.
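As an illustration of how tightly these rules track the meaning of "&", they can be sketched as operations on formulas. This is a minimal Python sketch of my own, not from the paper, using a tuple encoding of conjunctions:

```python
# Minimal sketch of the two conjunction rules. A conjunction is
# represented as the tuple ("&", phi, psi); atomic sentences are
# plain strings. The representation is illustrative only.

def and_intro(phi, psi):
    """&-introduction: from phi and psi, derive "phi & psi"."""
    return ("&", phi, psi)

def and_exploit(conj):
    """&-exploitation: from "phi & psi", derive both conjuncts."""
    op, phi, psi = conj
    assert op == "&", "rule applies to conjunctions only"
    return phi, psi

conj = and_intro("it rains", "it pours")
left, right = and_exploit(conj)
```

The rules are inverses of one another, which is exactly what the truth-functional meaning of "&" demands: whatever information goes into a conjunction can be recovered from it.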

In the past two or three decades, the semantics of natural language has come into its own, and quantification may have received more attention than any other semantic topic. During the same period, the psychological study of deduction made great advances, too, and one of its central topics is syllogistic inference, which is just a restricted form of reasoning with quantifiers. Strangely enough, these two enterprises have remained disconnected so far. All current approaches to syllogistic reasoning are based on first-order mental representations, which encode quantified statements in terms of individuals. Such representations are unsuitable for dealing with many quantified statements (e.g. “Most A are B”, “At least three A are B”, etc.), but semanticists have developed a general framework which overcomes these problems, and it will be argued that this framework should be adopted in the psychology of reasoning, too.

The plan for this paper is as follows. I start out with a survey of the central facts concerning syllogistic reasoning, and then go on to discuss the main approaches to deductive inference, arguing that each is flawed in the same way: they all employ representational schemes that are inadequate in principle for dealing with natural-language quantification, and in this sense they are all ad hoc. I then turn to the interpretation of quantified expressions, and sketch the outlines of a general framework for dealing with quantification that is widely accepted in the field of natural-language semantics. Research within this framework has shown that certain logical properties are especially important to natural systems of quantification, and I contend that the very same properties go a long way to explain the peculiarities of syllogistic reasoning.

It bears emphasizing, perhaps, that the general view on syllogistic reasoning adopted here is not original with me. Indeed, the key ideas have a venerable ancestry and can be traced back partly to medieval times and partly to the founder of syllogistic logic, Aristotle. More recent developments in semantic theory have systematized these ideas and incorporated them in a much broader framework. Therefore, my objective is a modest one: to show that this view on quantification is relevant to the psychology of syllogistic inference, too.

Section snippets

Syllogistic reasoning

The syllogistic language is confined to four sentence types, or “moods”:

All A are B: universal affirmative (A)
Some A are B: particular affirmative (I)
No A are B: universal negative (E)
Some A are not B: particular negative (O)
Although the scholastic labels A, I, E, O (from Latin “AffIrmo” and “nEgO”) have all but ceased to be mnemonic, they are still widely used, and I will use them here, too. Most psychological studies on syllogistic reasoning have adopted the traditional definition according to
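On the standard modern reading, the four moods express relations between the sets denoted by A and B. A small Python sketch (the helper names are mine, not the paper's; I set aside the question of existential import):

```python
# The four syllogistic moods as relations between sets A and B,
# under the standard modern truth conditions (illustrative sketch).

def mood_A(A, B):
    """All A are B: universal affirmative."""
    return A <= B          # A is a subset of B

def mood_I(A, B):
    """Some A are B: particular affirmative."""
    return bool(A & B)     # A and B overlap

def mood_E(A, B):
    """No A are B: universal negative."""
    return not (A & B)     # A and B are disjoint

def mood_O(A, B):
    """Some A are not B: particular negative."""
    return bool(A - B)     # A has members outside B

A, B = {1, 2}, {1, 2, 3}
# mood_A(A, B) is True; mood_O(A, B) is False
```

Note that on this reading A and O are contradictories, as are E and I: each negative mood is true exactly when its affirmative counterpart is false.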

Psychological theories of syllogistic reasoning

Over the years, many theories about syllogistic reasoning have been proposed, the large majority of which fall into one of three families: logic-based approaches, mental-model theories, and heuristic theories. The theory to be presented below belongs to the first family. Existing accounts in the logic-based tradition are mostly based on natural deduction, which is a species of proof theory developed by Jaśkowski and Gentzen in the 1930s.6

Interpreting quantifier expressions

In the field of natural-language semantics, expressions like “all”, “most”, “some”, etc. are analyzed as denoting relations between sets, or generalized quantifiers.9
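This set-relational analysis extends directly to quantifiers that first-order representations cannot handle, such as "most" and the cardinal quantifiers. A hypothetical Python sketch (function names are mine):

```python
# Generalized quantifiers as relations between a restrictor set A
# and a scope set B (illustrative sketch).

def all_(A, B):
    return A <= B

def some(A, B):
    return bool(A & B)

def most(A, B):
    # "Most A are B": the As that are B outnumber the As that are not.
    return len(A & B) > len(A - B)

def at_least(n):
    # "At least n A are B": the cardinal quantifiers from the experiment.
    return lambda A, B: len(A & B) >= n

A = {"a", "b", "c"}
B = {"a", "b"}
# most(A, B) is True: two members of A are in B, one is not
```

The point is that "most" and "at least n" are no harder to state in this framework than "all" or "some", whereas they resist any encoding in terms of individual instances.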

A monotonicity-based model of reasoning with quantifiers

In this section I present a very simple logic which builds on the observations made in the foregoing. In this logic all valid classical syllogisms are provable, but it goes far beyond traditional syllogistic logic in that it renders many other arguments valid, as well. The logic has three rules of inference, which follow directly from the interpretation of the quantifiers and negation. The logic's workhorse is monotonicity, which turns out to be implicated in every valid syllogistic argument.
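To give a feel for the monotonicity idea (my own sketch, not the paper's formal system): "all" is downward monotone in its first argument and upward monotone in its second, so given "All A are B" we may shrink A or grow B while preserving truth.

```python
# Monotonicity sketch: given "All flowers are plants", we may replace
# "flowers" by any subset (downward) and "plants" by any superset
# (upward), and the statement remains true. Sets are illustrative.

def all_(A, B):
    return A <= B

flowers = {"rose", "tulip", "daisy"}
roses   = {"rose"}                  # roses is a subset of flowers
plants  = flowers | {"oak"}         # every flower is a plant

# Premise: All flowers are plants.
assert all_(flowers, plants)

# Downward monotone in the first argument:
# roses <= flowers, hence "All roses are plants".
assert all_(roses, plants)

# Upward monotone in the second argument:
# plants <= plants | {"fern"}, hence "All flowers are plants or ferns".
assert all_(flowers, plants | {"fern"})
```

Inferences of this shape, replacing a term by a "smaller" or "larger" one in a licensed position, are the workhorse of the logic described above.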

Concluding remarks

One popular way of characterizing logical inference is that a conclusion ϕ follows logically from a set of premisses ψ1 … ψn if the meanings of ϕ and ψ1 … ψn alone guarantee that ϕ is true if ψ1 … ψn are. It is not the facts but the meanings of its component propositions that render an argument valid or invalid. Hence, in order to understand logical inference we must understand how arguments are interpreted: no inference without interpretation. I have endeavoured to demonstrate that this slogan

Acknowledgements

I am greatly indebted to Rob van der Sandt and three anonymous readers of Cognition for their elaborate and very constructive comments on earlier versions of this paper, and to Frans van der Slik for his statistical advice and for carrying out the analyses reported in Section 5.

References (36)

  • C.L. Smith, Quantifiers and question answering in young children, Journal of Experimental Child Psychology (1980)
  • J. Barwise et al., Generalized quantifiers and natural language, Linguistics and Philosophy (1981)
  • M.D.S. Braine, Steps towards a mental-predicate logic
  • M.D.S. Braine et al., Some empirical justification for a theory of natural propositional logic
  • L. Cosmides et al., Cognitive adaptations for social exchange
  • L.S. Dickstein, The effect of figure on syllogistic reasoning, Memory and Cognition (1978)
  • L.S. Dickstein, Conversion and possibility in syllogistic reasoning, Bulletin of the Psychonomic Society (1981)
  • D. Dowty, The role of negative polarity and concord marking in natural language reasoning
