This article studies the monotonicity behavior of plural determiners that quantify over collections. Following previous work, we describe the collective interpretation of determiners such as all, some and most using generalized quantifiers of a higher type that are obtained systematically by applying a type shifting operator to the standard meanings of determiners in Generalized Quantifier Theory. Two processes of counting and existential quantification that appear with plural quantifiers are unified into a single determiner fitting operator, which, unlike previous proposals, both captures existential quantification with plural determiners and respects their monotonicity properties. However, some previously unnoticed facts indicate that monotonicity of plural determiners is not always preserved when they apply to collective predicates. We show that the proposed operator describes this behavior correctly, and characterize the monotonicity of the collective determiners it derives. It is proved that determiner fitting always preserves monotonicity properties of determiners in their second argument, but monotonicity in the first argument of a determiner is preserved if and only if it is monotonic in the same direction in the second argument. We argue that this asymmetry follows from the conservativity of generalized quantifiers in natural language.
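The monotonicity profiles at issue can be checked mechanically for the standard (non-collective) relational meanings of determiners in Generalized Quantifier Theory. The following brute-force sketch over a small finite universe is my own illustration of the standard notions, not the article's type-shifted collective quantifiers:

```python
from itertools import chain, combinations

def subsets(universe):
    """All subsets of a finite universe, as frozensets."""
    s = list(universe)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

# Standard relational meanings of determiners in Generalized Quantifier
# Theory: det(A, B), with A the noun set and B the predicate set.
DETERMINERS = {
    'all':  lambda A, B: A <= B,
    'some': lambda A, B: bool(A & B),
    'most': lambda A, B: len(A & B) > len(A - B),
}

def right_monotone_up(det, universe):
    """MON-up in the second argument: det(A, B) and B <= B2 imply det(A, B2)."""
    U = subsets(universe)
    return all(det(A, B2)
               for A in U for B in U if det(A, B)
               for B2 in U if B <= B2)

def left_monotone_up(det, universe):
    """MON-up in the first argument: det(A, B) and A <= A2 imply det(A2, B)."""
    U = subsets(universe)
    return all(det(A2, B)
               for A in U for B in U if det(A, B)
               for A2 in U if A <= A2)

U = {1, 2, 3}
```

Over this universe the check confirms that all three determiners are upward monotone in their second argument, while only some is upward monotone in its first.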
In voting theory, monotonicity is the axiom that an improvement in the ranking of a candidate by voters cannot cause a candidate who would otherwise win to lose. The participation axiom states that the sincere report of a voter’s preferences cannot cause an outcome that the voter regards as less attractive than the one that would result from the voter’s non-participation. This article identifies three binary distinctions in the types of circumstances in which failures of monotonicity or participation can occur. Two of the three distinctions apply to monotonicity, while one of those and the third apply to participation. The distinction that is unique to monotonicity is whether the voters whose changed rankings demonstrate non-monotonicity are better off or worse off. The distinction that is unique to participation is whether the marginally participating voter causes his first choice to lose or his last choice to win. The overlapping distinction is whether the profile of voters’ rankings has a Condorcet winner or a cycle at the top. This article traces the occurrence of all of the resulting combinations of characteristics in the voting methods that can exhibit failures of monotonicity.
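A concrete monotonicity failure is easy to exhibit for instant-runoff voting. The 21-voter profile below is a small hypothetical example of my own construction, not one taken from the article: raising candidate A on two ballots, changing nothing else, turns A from winner into loser.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the candidate with the fewest
    first-place votes among those remaining (no ties arise in this example)."""
    remaining = set(c for b in ballots for c in b)
    while len(remaining) > 1:
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        loser = min(remaining, key=lambda c: firsts[c])
        remaining.discard(loser)
    return remaining.pop()

# 21 voters over candidates A, B, C; a ballot 'ABC' means A > B > C.
original = ['ABC'] * 8 + ['BAC'] * 2 + ['BCA'] * 5 + ['CAB'] * 6
# Round 1: A=8, B=7, C=6 -> C eliminated; C's ballots transfer to A -> A wins.
assert irv_winner(original) == 'A'

# Two B>A>C voters now rank A first (strictly raising A, all else unchanged).
improved = ['ABC'] * 10 + ['BCA'] * 5 + ['CAB'] * 6
# Round 1: A=10, B=5, C=6 -> B eliminated; B's ballots transfer to C -> C wins.
assert irv_winner(improved) == 'C'   # raising A made A lose: non-monotonic
```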
Classic deductive logic entails that once a conclusion is sustained by a valid argument, the argument can never be invalidated, no matter how many new premises are added. This derived property of deductive reasoning is known as monotonicity. Monotonicity is thought to conflict with the defeasibility of reasoning in natural language, where the discovery of new information often leads us to reject conclusions that we once accepted. This perceived failure of monotonic reasoning to observe the defeasibility of natural-language arguments has led some philosophers to abandon deduction itself (!), often in favor of new, non-monotonic systems of inference known as 'default logics'. But these radical logics (e.g., Ray Reiter's default logic) introduce their desired defeasibility at the expense of other, equally important intuitions about natural-language reasoning. And, as a matter of fact, if we recognize that monotonicity is a property of the form of a deductive argument and not its content (i.e., the claims in the premise(s) and conclusion), we can see how the common-sense notion of defeasibility can actually be captured by a purely deductive system.
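The defeasibility at issue can be made concrete with the stock example behind Reiter-style default logic ("birds normally fly"). The sketch below is a deliberate caricature for illustration only, not the deductive reconstruction the paper argues for:

```python
def flies(facts, x):
    """Default rule: conclude flies(x) from bird(x) unless abnormality is
    provable. Adding premises can therefore defeat an accepted conclusion."""
    return ('bird', x) in facts and ('abnormal', x) not in facts

kb = {('bird', 'tweety')}
assert flies(kb, 'tweety')              # conclusion accepted

kb = kb | {('abnormal', 'tweety')}      # new premise: Tweety is, say, a penguin
assert not flies(kb, 'tweety')          # old conclusion withdrawn: non-monotonic
# Classical consequence can never behave this way: for deduction,
# Cn(G) is a subset of Cn(G + D) for any set of extra premises D.
```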
Power indices are commonly required to assign at least as much power to a player endowed with some given voting weight as to any player of the same game with smaller weight. This local monotonicity and a related global property are, however, frequently and for good reasons violated when indices take account of a priori unions amongst subsets of players (reflecting, e.g., ideological proximity). This paper introduces adaptations of the conventional monotonicity notions that are suitable for voting games with an exogenous coalition structure. A taxonomy of old and new monotonicity concepts is provided, and different coalitional versions of the Banzhaf and Shapley–Shubik power indices are compared accordingly.
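Local monotonicity is straightforward to verify for the conventional (non-coalitional) Banzhaf swing count; the coalitional variants the paper compares add an a priori union structure on top of this. A sketch for a hypothetical three-player weighted game [5; 4, 2, 1]:

```python
from itertools import combinations

def banzhaf_counts(quota, weights):
    """Raw Banzhaf swing counts: for each player, the number of winning
    coalitions that would lose without that player."""
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:
                for i in coalition:
                    if total - weights[i] < quota:
                        swings[i] += 1
    return swings

weights = [4, 2, 1]                 # hypothetical weighted game [5; 4, 2, 1]
counts = banzhaf_counts(5, weights)
# Local monotonicity: a player with more weight never has fewer swings.
assert all(counts[i] >= counts[j]
           for i in range(3) for j in range(3) if weights[i] >= weights[j])
```

Once players are partitioned into a priori unions, the analogous comparison can fail, which is what motivates the paper's adapted monotonicity notions.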
We identify a new monotonicity condition (called cover monotonicity) for tournament solutions which allows a discrimination among main tournament solutions: The top-cycle, the iterated uncovered set, the minimal covering set, and the bipartisan set are cover monotonic while the uncovered set, the Banks set, the Copeland rule, and the Slater rule fail to be so. As cover monotonic tournament solutions induce social choice rules which are Nash implementable in certain non-standard frameworks (such as those set by Bochet and Maniquet (CORE Discussion Paper No. 2006/84, 2006) or Özkal-Sanver and Sanver (Social Choice and Welfare, 26(3), 607–623, 2006)), the discrimination generated by cover monotonicity becomes particularly notable when implementability is a concern.
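For readers unfamiliar with these solutions, the covering relation behind the uncovered set can be computed directly on a small tournament. The sketch below is my illustration, using one standard variant of covering (y covers x iff y beats x and y beats everything x beats), for a hypothetical four-alternative tournament with a three-cycle on top:

```python
def uncovered_set(alts, beats):
    """Uncovered set of a tournament: x is uncovered iff no y covers x,
    where y covers x iff y beats x and y beats everything that x beats."""
    def covers(y, x):
        return beats[y][x] and all(beats[y][z] for z in alts if beats[x][z])
    return {x for x in alts if not any(covers(y, x) for y in alts if y != x)}

ALTS = 'abcd'
# Hypothetical tournament: a > b > c > a form a cycle; d loses to everyone.
WINS = {'a': 'bd', 'b': 'cd', 'c': 'ad', 'd': ''}
beats = {x: {y: y in WINS[x] for y in ALTS} for x in ALTS}

# d is covered by each of a, b, c (vacuously: d beats nothing), so the
# uncovered set coincides here with the top cycle {a, b, c}.
assert uncovered_set(ALTS, beats) == {'a', 'b', 'c'}
```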
Monotonicity is commonly considered an essential requirement for power measures; violation of local monotonicity or related postulates supposedly disqualifies an index as a valid yardstick for measuring power. This paper questions if such claims are really warranted. In the light of features of real-world collective decision making such as coalition formation processes, ideological affinities, a priori unions, and strategic interaction, standard notions of monotonicity are too narrowly defined. A power measure should be able to indicate that power is non-monotonic in a given dimension of players' resources if – given a decision environment and plausible assumptions about behaviour – it is non-monotonic.
In this paper we present an embedding of abstract argumentation systems into the framework of Barwise and Seligman's logic of information flow. We show that, taking P. M. Dung's characterization of argument systems, a local logic over states of a deliberation may be constructed. In this structure, the key feature of non-monotonicity of commonsense reasoning obtains as the transition from one local logic to another, due to a change in certain background conditions. Each of Dung's extensions of argument systems leads to a corresponding ordering of background conditions. The relations among extensions become relations among partial orderings of background conditions. This introduces a conceptual innovation in Barwise and Seligman's representation of commonsense reasoning.
In this paper we will discuss constraints on the number of (non-dummy) players and on the distribution of votes such that local monotonicity is satisfied for the Public Good Index. These results are compared to properties which are related to constraints on the redistribution of votes (such as those implied by global monotonicity). The discussion shows that monotonicity is not a straightforward criterion of classification for power measures.
It is not unusual in real life that one has to choose among finitely many alternatives when the merit of each alternative is not perfectly known. Instead of observing the actual utilities of the alternatives at hand, one typically observes more or less precise signals that are positively correlated with these utilities. In addition, the decision-maker may, at some cost or disutility of effort, choose to increase the precision of these signals, for example by way of a careful study or the hiring of expertise. We here develop a model of such decision problems. We begin by showing that a version of the monotone likelihood-ratio property is sufficient, and also essentially necessary, for the optimality of the heuristic decision rule to always choose the alternative with the highest signal. Second, we show that it is not always advantageous to face alternatives with higher utilities, a non-monotonicity result that holds even if the decision-maker optimally chooses the signal precision. We finally establish an operational first-order condition for the optimal precision level in a canonical class of decision problems, and we show that the optimal precision level may be discontinuous in the precision cost.
There has recently been some literature on the properties of a Health-Related Social Welfare Function (HRSWF). The aim of this article is to contribute to the analysis of the different properties of a HRSWF, paying particular attention to the monotonicity principle. For monotonicity to be fulfilled, any increase in individual health—other things equal—should result in an increase in social welfare. We elicit public preferences concerning trade-offs between the total level of health (concern for efficiency) and its distribution (concern for equality), under different hypothetical scenarios through face-to-face interviews. Of key interest are: the distinction between non-monotonic preferences and Rawlsian preferences; symmetry of the HRSWF; and the extent of inequality neutral preferences. The results indicate strong support for non-monotonic preferences over Rawlsian preferences. Furthermore, the majority of those surveyed had preferences that were consistent with a symmetric and inequality averse HRSWF.
In the context of indivisible public objects problems (e.g., candidate selection or qualification) with “separable” preferences, unanimity rule accepts each object if and only if the object is in everyone’s top set. We establish two axiomatizations of unanimity rule. The main axiom is resource monotonicity, saying that resource increase should affect all agents in the same direction. This axiom is considered in combination with simple Pareto (there is no Pareto improvement by addition or subtraction of a single object), independence of irrelevant alternatives, and either path independence or strategy-proofness.
We provide necessary and sufficient conditions determining how monotonicity of some classes of reducible quantifiers depends on the monotonicity of simpler quantifiers of iterations to which they are equivalent.
The distribution of the focus particle even is constrained: if it is adjoined at surface structure to an expression that is entailed by its focus alternatives, as in even once, it must be appropriately embedded to be acceptable. This paper focuses on the context-dependent distribution of such occurrences of even in the scope of non-monotone quantifiers. We show that it is explained on the assumption that even can move at LF (Syntax and Semantics, 1979). The analysis is subsequently extended to occurrences of negative polarity items in these environments, which mirror the abovementioned distribution of even and which invalidate standard characterizations of NPI licensing conditions in terms of downward-entailingness. The idea behind the extension is that NPIs denote weak elements that are associates of covert even. The paper concludes by discussing two comprehensive theories of NPI licensing and how our proposal relates to them.
A proposal by Ferguson [2003, Argumentation 17, 335–346] for a fully monotonic argument form allowing for the expression of defeasible generalizations is critically examined and rejected as a general solution. It is argued that (i) his proposal achieves less than the default-logician's solution allows, e.g., the monotonically derived conclusion is one-sided and itself not defeasible; and (ii) when applied to a suitable example, his proposal derives the wrong conclusion. Unsuccessful remedies are discussed.
Starting out from the assumption that monotonicity plays a central role in interpretation and inference, we derive a number of predictions about the complexity of processing quantified sentences. A quantifier may be upward entailing (i.e. license inferences from subsets to supersets) or downward entailing (i.e. license inferences from supersets to subsets). Our main predictions are the following: If the monotonicity profiles of two quantifying expressions are the same, they should be equally easy or hard to process, ceteris paribus. Sentences containing both upward and downward entailing quantifiers are more difficult than sentences with upward entailing quantifiers only. Downward-entailing quantifiers built from cardinals, like ‘at most three’, are more difficult than others. Inferences from subsets to supersets are easier than inferences in the opposite direction. We present experimental evidence confirming these predictions.
This paper focuses on the concept of collective essence: that some truths are essential to many items taken together. For example, that it is essential to conjunction and negation that they are truth-functionally complete. The concept of collective essence is one of the main innovations of recent work on the theory of essence. In a sense, this innovation is natural, since we make all sorts of plural predications. It stands to reason that there should be a distinction between essential and accidental plural predications if there is a distinction among singular predications. In this paper I defend the view that the concept of collective essence is governed by the principle of Monotonicity: that something is essential to some items only if it is essential to any items to which they belong.
In this paper, I show that the availability of what some authors have called the weak reading and the strong reading of donkey sentences with relative clauses is systematically related to monotonicity properties of the determiner. The correlation is different from what has been observed in the literature in that it concerns not only right monotonicity, but also left monotonicity (persistence/antipersistence). I claim that the reading selected by a donkey sentence with a double monotone determiner is in fact the one that validates inference based on the left monotonicity of the determiner. This accounts for the lack of strong reading in donkey sentences with MON determiners, which have been neglected in the literature. I consider the relevance of other natural forms of inference as well, but also suggest how monotonicity inference might play a central role in the actual process of interpretation. The formal theory is couched in dynamic predicate logic with generalized quantifiers.
Predicate approaches to modality have been a topic of increased interest in recent intensional logic. Halbach and Welch (2009) have proposed a new formal technique to reduce the necessity predicate to an operator, demonstrating that predicate and operator methods are ultimately compatible. This article concerns the question of whether Halbach and Welch’s approach can provide a uniform formal treatment for intensionality. I show that the monotonicity constraint in Halbach and Welch’s proof for necessity fails for almost all possible-worlds theories of knowledge. The nonmonotonicity results demonstrate that the most obvious way of emulating Halbach and Welch’s rapprochement of the predicate and operator fails in the epistemic setting.
This paper presents a generalization of the standard notions of left monotonicity (on the nominal argument of a determiner) and right monotonicity (on the VP argument of a determiner). Determiners such as “more than/at least as many as” or “fewer than/at most as many as”, which occur in so-called propositional comparison, are shown to be monotone with respect to two nominal arguments and two VP-arguments. In addition, it is argued that the standard Generalized Quantifier analysis of numerical determiners such as “more than three/at least three” is a simplification which ignores the fundamental parallelism with the propositional comparatives. Furthermore, the symmetric monotonicity configurations of the existential “some” and “no” are shown to be straightforwardly related to those of numerical comparatives with the limit numeral zero, whereas the asymmetric configurations of universal “all” and “not all” involve the extra complicating mechanism of polarity reversal, which is related to Keenan's notion of co-intersectivity (as opposed to intersectivity). A second aim of the paper is to investigate some of the factors reducing the inferential potential of determiners. Different types of comparative determiners and their modification will be considered in detail. In addition, a systematic interaction will be revealed between the monotonicity properties of the determiners in propositional comparatives and the different types of ellipsis in the “than”-complement. Different degrees of ellipsis are defined in terms of informational dependencies between the “than”-complement and the main clause. A general balancing mechanism is observed by virtue of which an increase in informational dependency is compensated by a reduction of the inferential potential of the comparative determiner. The structure of the paper is as follows. Part two deals with propositional comparison and introduces the two basic monotonicity configurations.
Part three distinguishes three types of informational dependencies or ellipsis between the “than”-complement and the main clause. In Section 3.1 the “than”-complement only contains a VP constituent, whereas in Section 3.2 it only consists of a nominal constituent. Section 3.3 considers more complex instances of multiple dependencies. Section 3.4 then looks at the interaction of these three types with the modifier “proportionally”. Part four presents the two basic monotonicity configurations for numerical comparison, whereas part five deals with more complex numerical determiners. Bounding determiners, such as “between five and ten”, and other boolean combinations are discussed in Section 5.1, whereas Section 5.2 goes into approximative determiners involving modifiers such as “only” or “almost”. Section 5.3 is devoted to proportional determiners such as “more than two out of three”. Part six then considers the standard existential and universal quantifiers. Section 6.1 reformulates the standard Generalized Quantifier analysis in comparative terms, whereas Section 6.2 deals with exception determiners of the form “all but five”.
The note addresses the problem of how utilitarianism and other finitely additive theories of value should evaluate infinitely long utility streams. We use the axiomatic approach and show that finite anonymity does not apply in an infinite framework. A stronger anonymity demand (fixed step anonymity) is proposed and motivated. Finally, we construct an ordering criterion that combines fixed step anonymity and strong monotonicity.
This is the handout of my comments on E. Zimmermann's paper "Monotonicity in Opaque Verbs", which I prepared for the workshop on Intensional Verbs and Non-Referential Terms held at IHPST on January 14, 2006.
Peter Gärdenfors proved a theorem purporting to show that it is impossible to adjoin to the AGM postulates for belief revision a principle of monotonicity for revisions. The principle of monotonicity in question is implied by the Ramsey test for conditionals.
Syllogistics reduces to only two rules of inference: monotonicity and symmetry, plus a third if one wants to take existential import into account. We give an implementation that uses only the monotonicity and symmetry rules, with an addendum for the treatment of existential import. Soundness follows from the monotonicity properties and symmetry properties of the Aristotelian quantifiers, while completeness for syllogistic theory is proved by direct inspection of the valid syllogisms. Next, the valid syllogisms are decomposed in terms of the rules they involve. The implementation uses Haskell, and is given in 'literate programming' style.
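The paper's Haskell implementation derives the valid syllogisms from monotonicity and symmetry rules; a blunter sanity check (my sketch, not the paper's method) simply enumerates small set-theoretic models in Python. A universe of four elements is enough to expose the invalid forms shown here:

```python
from itertools import chain, combinations

def powerset(universe):
    s = list(universe)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def valid(check, size=4):
    """check(S, M, P) -> (premises_hold, conclusion_holds). A syllogism is
    valid iff the conclusion holds in every model where the premises do."""
    U = powerset(range(size))
    return all(concl
               for S in U for M in U for P in U
               for prem, concl in [check(S, M, P)]
               if prem)

# Barbara: All M are P, All S are M |- All S are P   (valid)
assert valid(lambda S, M, P: (M <= P and S <= M, S <= P))
# Undistributed middle: All S are M, All P are M |- All S are P   (invalid)
assert not valid(lambda S, M, P: (S <= M and P <= M, S <= P))
```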
We introduce two new belief revision axioms: partial monotonicity and consequence correctness. We show that partial monotonicity is consistent with but independent of the full set of axioms for a Gärdenfors belief revision system. In contrast to the Gärdenfors inconsistency results for certain monotonicity principles, we use partial monotonicity to inform a consistent formalization of the Ramsey test within a belief revision system extended by a conditional operator. We take this to be a technical dissolution of the well-known Gärdenfors dilemma. In addition, we present the consequence correctness axiom as a new measure of minimal revision in terms of the deductive core of a proposition whose support we wish to excise. We survey several syntactic and semantic belief revision systems and evaluate them according to both the Gärdenfors axioms and our new axioms. Furthermore, our algebraic characterization of semantic revision systems provides a useful technical device for analysis and comparison, which we illustrate with several new proofs.
In this paper I give conditions under which a matrix characterisation of validity is correct for first order logics where quantifications are restricted by statements from a theory. Unfortunately the usual definition of path closure in a matrix is unsuitable and a less pleasant definition must be used. I derive the matrix theorem from syntactic analysis of a suitable tableau system, but by choosing a tableau system for restricted quantification I generalise Wallen's earlier work on modal logics. The tableau system is only correct if a new condition I call alphabetical monotonicity holds. I sketch how the result can be applied to a wide range of logics such as first order variants of many standard modal logics, including non-serial modal logics.
The paper is about the interpretation of opaque verbs like “seek”, “owe”, and “resemble” which allow for unspecific readings of their (indefinite) objects. It is shown that the following two observations create a problem for semantic analysis: (a) The opaque position is upward monotone: “John seeks a unicorn” implies “John seeks an animal”, given that “unicorn” is more specific than “animal”. (b) Indefinite objects of opaque verbs allow for higher-order, or “underspecific”, readings: “Jones is looking for something Smith is looking for” can express that there is something unspecific that both Jones and Smith are looking for. Given (a) and (b), it would seem that the following inference is hard to escape, if the premisses are construed unspecifically and the conclusion is taken on its underspecific reading: Jones is looking for a sweater. Smith is looking for a pen. Therefore, Smith is looking for something Jones is looking for.
Dp-minimality is a common generalization of weak minimality and weak o-minimality. If T is a weakly o-minimal theory then it is dp-minimal (Fact 2.2), but there are dp-minimal densely ordered groups that are not weakly o-minimal. We introduce the even more general notion of inp-minimality and prove that in an inp-minimal densely ordered group, every definable unary function is a union of finitely many continuous locally monotonic functions (Theorem 3.2).
The theory of Generalized Quantifiers has facilitated progress in the study of negation in natural language. In particular it has permitted the formulation of a DeMorgan taxonomy of logical strength of negative Noun Phrases (Zwarts 1996a,b). It has permitted the formulation of broad semantical generalizations to explain grammatical phenomena, e.g. the distribution of Negative Polarity Items (Ladusaw 1980; Linebarger 1981, 1987, 1991; Hoeksema 1986, 1995; Zwarts 1996a,b; Horn 1992, 1996b). In the midst of this theorizing Jaap Hoepelman invited me to lecture in Stuttgart about Focus, and I took the opportunity to talk about a seminal paper on ‘only Proper Name’ and ‘even Proper Name’ by Larry Horn (1969), a paper that I had admired but that had nagged at me for years. The result of Hoepelman's invitation was Atlas (1991, 1993), in which I believed that I had discerned difficulties for the formal semantics of Negative Polarity Item sentences: ‘only Proper Name’ sentences licensed Zwarts’s “weak” Negative Polarity Items, e.g. ‘ever’, ‘any’, but ‘only Proper Name’ was not a downwards monotonic quantifier, thus refuting the broad semantical generalization that any NPI licenser was a downward monotonic quantifier. In fact ‘only Proper Name’ was the first of a new category of generalized quantifier: the pseudo-anti-additive quantifier. Though I have explained and defended the introduction of this new category in this paper, a particular interest of my analysis is that it opens up the theory of Negative Polarity Items for further development; it permits the formulation of entirely new questions for research (see ‘Open Questions’, Appendix 1). Along the way I was also trying to present a correct account of the formal semantics and implicatures of ‘Only a is F’, a subject of theoretical investigation for the last 700 years, but without, in my view, any theory ever arriving at the truth.
There had to be something wrong with our theoretical methods or theoretical bias towards the data. So I (Atlas 1991, 1993) have tried to break out of this logjam by introducing new constraints on the acceptability of logical forms (first introduced in Atlas & Levinson 1981 for the analysis of clefts, and in Atlas 1988 for the analysis of negative existence statements). The earlier theories ignored conversational implicatures entirely; it seemed of theoretical interest to examine statements containing focal particles like ‘Only’ for their implicatures, especially as the correct prediction of implicatures tells one something about the truth-conditions and logical form of the statement itself (Atlas 1991, 1993). In this paper I review and modify my earlier theory of the logical form, semantical properties, and pragmatic properties of ‘Only a is F’. I also provide the correct generalization to the case of ‘Only G is F’. And I respond to the criticisms in Horn (1992, 1996b).
This article reveals a tension between a fairly standard response to "liar sentences," of which (L) "Sentence (L) is not true" is an instance, and some features of our natural language determiners (e.g., 'every,' 'some,' 'no,' etc.) that have been established by formal linguists. The fairly standard response to liar sentences, which has been voiced by a number of philosophers who work directly on the Liar paradox (e.g., Parsons, Kripke, Burge, Goldstein [1985, 2009], Gaifman [1992, 2000], Glanzberg, Azzouni, and others), but can also be heard from philosophers who do not work directly on that paradox, is that liar sentences do not express propositions. Call this the "No Proposition View" (hereafter NPV). Evidently, the belief that liar sentences do not express propositions is a deeply held intuition. As the previously mentioned tension will reveal, there is reason to worry about whether this deeply held intuition can be sustained.
In this paper, it is argued that Ferguson’s (2003, Argumentation 17, 335–346) recent proposal to reconcile monotonic logic with defeasibility has three counterintuitive consequences. First, the conclusions that can be derived from his new rule of inference are vacuous, a point that was already made against default logics when there are conflicting defaults. Second, his proposal requires a procedural “hack” to break the symmetry between the disjuncts of the tautological conclusions to which his proposal leads. Third, Ferguson’s proposal amounts to arguing that all everyday inferences are sound by definition. It is concluded that the informal logic response to defeasibility, that an account of the context in which inferences are sound or unsound is required, still stands. It is also observed that another possible response is given by Bayesian probability theory (Oaksford and Chater, in press, Bayesian Rationality: The Probabilistic Approach to Human Reasoning, Oxford University Press, Oxford, UK; Hahn and Oaksford, in press, Synthese).
This paper is a comparison of how first-order Kyburgian Evidential Probability (EP), second-order EP, and objective Bayesian epistemology compare as to the KLM system-P rules for consequence relations and the monotonic / non-monotonic divide.
We investigate why similar extensions of first-order logic using operators corresponding to NP-complete decision problems apparently differ in expressibility: the logics capture either NP or LNP. It had been conjectured that the complexity class captured is NP if and only if the operator is monotone. We show that this conjecture is false. However, we provide evidence supporting a revised conjecture involving finite variations of monotone problems.
One of the great successes of the application of generalized quantifiers to natural language has been the ability to formulate robust semantic universals. When such a universal is attested, the question arises as to the source of the universal. In this paper, we explore the hypothesis that many semantic universals arise because expressions satisfying the universal are easier to learn than those that do not. While the idea that learnability explains universals is not new, explicit accounts of learning that can make good on this hypothesis are few and far between. We propose a model of learning — back-propagation through a recurrent neural network — which can make good on this promise. In particular, we discuss the universals of monotonicity, quantity, and conservativity and perform computational experiments of training such a network to learn to verify quantifiers. Our results are able to explain monotonicity and quantity quite well. We suggest that conservativity may have a different source than the other universals.
In the semantics of natural language, quantification may have received more attention than any other subject, and one of the main topics in psychological studies on deductive reasoning is syllogistic inference, which is just a restricted form of reasoning with quantifiers. But thus far the semantical and psychological enterprises have remained disconnected. This paper aims to show how our understanding of syllogistic reasoning may benefit from semantical research on quantification. I present a very simple logic that pivots on the monotonicity properties of quantified statements - properties that are known to be crucial not only to quantification but to a much wider range of semantical phenomena. This logic is shown to account for the experimental evidence available in the literature as well as for the data from a new experiment with cardinal quantifiers ("at least n" and "at most n"), which cannot be explained by any other theory of syllogistic reasoning.
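The monotonicity inferences such a logic pivots on can be spelled out for the cardinal quantifiers. For instance, "at least n" is upward monotone in its second argument, which licenses the step from "At least n A are B" and "All B are C" to "At least n A are C"; "at most n" reverses the direction. A brute-force confirmation over a small universe (my illustration, not the paper's calculus):

```python
from itertools import chain, combinations

def powerset(universe):
    s = list(universe)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def at_least(n):
    return lambda A, B: len(A & B) >= n

U = powerset(range(4))

# At least 2 A are B, All B are C  =>  At least 2 A are C  (upward monotone)
assert all(at_least(2)(A, C)
           for A in U for B in U for C in U
           if at_least(2)(A, B) and B <= C)

# The same step is invalid for 'at most 2': enlarging B to C can add
# new members of A to the intersection.
assert not all(len(A & C) <= 2
               for A in U for B in U for C in U
               if len(A & B) <= 2 and B <= C)
```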
Linguists often sharply distinguish the different modules that support linguistic competence, e.g., syntax, semantics, pragmatics. However, recent work has identified phenomena in syntax (polarity sensitivity) and pragmatics (implicatures), which seem to rely on semantic properties (monotonicity). We propose to investigate these phenomena and their connections as a window into the modularity of our linguistic knowledge. We conducted a series of experiments to gather the relevant syntactic, semantic and pragmatic judgments within a single paradigm. The comparison between these quantitative data leads us to four main results. (i) Our results support a departure from one element of the classical Gricean approach, thus helping to clarify and settle an empirical debate. This first outcome also confirms the soundness of the methodology, as the results align with standard contemporary accounts of scalar implicature (SI). (ii) We confirm that the formal semantic notion of monotonicity underlies negative polarity item (NPI) syntactic acceptability, but (iii) our results indicate that the notion needed is perceived monotonicity. We see results (ii) and (iii) as the main contribution of this study: (ii) provides an empirical interpretation and confirmation of one of the insights of the model-theoretic approach to semantics, while (iii) calls for an incremental, cognitive implementation of the current generalizations. (iv) Finally, our results do not indicate that the relationship between NPI acceptability and monotonicity is mediated by pragmatic features related to SIs: this tells against elegant attempts to unify polarity sensitivity and SIs (pioneered by Krifka and Chierchia). These results illustrate a new methodology for integrating theoretically rigorous work in formal semantics with an experimentally-grounded cognitively-oriented view of linguistic competence.