Starting out from the assumption that monotonicity plays a central role in interpretation and inference, we derive a number of predictions about the complexity of processing quantified sentences. A quantifier may be upward entailing (i.e. license inferences from subsets to supersets) or downward entailing (i.e. license inferences from supersets to subsets). Our main predictions are the following: If the monotonicity profiles of two quantifying expressions are the same, they should be equally easy or hard to process, ceteris paribus. Sentences containing both upward and downward entailing quantifiers are more difficult than sentences with upward entailing quantifiers only. Downward-entailing quantifiers built from cardinals, like ‘at most three’, are more difficult than others. Inferences from subsets to supersets are easier than inferences in the opposite direction. We present experimental evidence confirming these predictions.
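To make the upward/downward profiles concrete, here is a minimal sketch (mine, not the authors'; the quantifier definitions and the four-element universe are illustrative assumptions) that brute-forces the monotonicity profile of a quantifier's scope argument over a small finite universe:

```python
from itertools import chain, combinations

UNIVERSE = frozenset(range(4))

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

# Illustrative quantifiers, modeled as functions of (restrictor, scope):
QUANTIFIERS = {
    "some":          lambda a, b: len(a & b) > 0,
    "every":         lambda a, b: a <= b,
    "no":            lambda a, b: len(a & b) == 0,
    "at most three": lambda a, b: len(a & b) <= 3,
}

def scope_profile(q):
    """Return '+' (upward entailing), '-' (downward entailing), or '?'."""
    up = down = True
    for a in subsets(UNIVERSE):
        for b in subsets(UNIVERSE):
            for b2 in subsets(UNIVERSE):
                if b <= b2 and q(a, b) and not q(a, b2):
                    up = False     # enlarging the scope broke truth
                if b2 <= b and q(a, b) and not q(a, b2):
                    down = False   # shrinking the scope broke truth
    return "+" if up else ("-" if down else "?")

for name, q in QUANTIFIERS.items():
    print(f"{name!r} is {scope_profile(q)} in its scope argument")
```

Run over this toy universe, the check classifies 'some' and 'every' as upward entailing, and 'no' and 'at most three' as downward entailing, in their scope.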
This article studies the monotonicity behavior of plural determiners that quantify over collections. Following previous work, we describe the collective interpretation of determiners such as all, some and most using generalized quantifiers of a higher type that are obtained systematically by applying a type shifting operator to the standard meanings of determiners in Generalized Quantifier Theory. Two processes of counting and existential quantification that appear with plural quantifiers are unified into a single determiner fitting operator, which, unlike previous proposals, both captures existential quantification with plural determiners and respects their monotonicity properties. However, some previously unnoticed facts indicate that monotonicity of plural determiners is not always preserved when they apply to collective predicates. We show that the proposed operator describes this behavior correctly, and characterize the monotonicity of the collective determiners it derives. It is proved that determiner fitting always preserves monotonicity properties of determiners in their second argument, but monotonicity in the first argument of a determiner is preserved if and only if it is monotonic in the same direction in the second argument. We argue that this asymmetry follows from the conservativity of generalized quantifiers in natural language.
The distribution of the focus particle even is constrained: if it is adjoined at surface structure to an expression that is entailed by its focus alternatives, as in even once, it must be appropriately embedded to be acceptable. This paper focuses on the context-dependent distribution of such occurrences of even in the scope of non-monotone quantifiers. We show that it is explained on the assumption that even can move at LF (Syntax and Semantics, 1979). The analysis is subsequently extended to occurrences of negative polarity items in these environments, which mirror the above-mentioned distribution of even and which invalidate standard characterizations of NPI licensing conditions in terms of downward-entailingness. The idea behind the extension is that NPIs denote weak elements that are associates of covert even. The paper concludes by discussing two comprehensive theories of NPI licensing and how our proposal relates to them.
This paper focuses on the concept of collective essence: that some truths are essential to many items taken together. For example, that it is essential to conjunction and negation that they are truth-functionally complete. The concept of collective essence is one of the main innovations of recent work on the theory of essence. In a sense, this innovation is natural, since we make all sorts of plural predications. It stands to reason that there should be a distinction between essential and accidental plural predications if there is a distinction among singular predications. In this paper I defend the view that the concept of collective essence is governed by the principle of Monotonicity: that something is essential to some items only if it is essential to any items to which they belong.
The paper is about the interpretation of opaque verbs like “seek”, “owe”, and “resemble”, which allow for unspecific readings of their (indefinite) objects. It is shown that the following two observations create a problem for semantic analysis: (a) The opaque position is upward monotone: “John seeks a unicorn” implies “John seeks an animal”, given that “unicorn” is more specific than “animal”. (b) Indefinite objects of opaque verbs allow for higher-order, or “underspecific”, readings: “Jones is looking for something Smith is looking for” can express that there is something unspecific that both Jones and Smith are looking for. Given (a) and (b), it would seem that the following inference is hard to escape, if the premisses are construed unspecifically and the conclusion is taken on its underspecific reading: Jones is looking for a sweater. Smith is looking for a pen. Hence: Smith is looking for something Jones is looking for.
Classic deductive logic entails that once a conclusion is sustained by a valid argument, the argument can never be invalidated, no matter how many new premises are added. This derived property of deductive reasoning is known as monotonicity. Monotonicity is thought to conflict with the defeasibility of reasoning in natural language, where the discovery of new information often leads us to reject conclusions that we once accepted. This perceived failure of monotonic reasoning to observe the defeasibility of natural-language arguments has led some philosophers to abandon deduction itself (!), often in favor of new, non-monotonic systems of inference known as ‘default logics’. But these radical logics (e.g., Ray Reiter's default logic) introduce their desired defeasibility at the expense of other, equally important intuitions about natural-language reasoning. And, as a matter of fact, if we recognize that monotonicity is a property of the form of a deductive argument and not its content (i.e., the claims in the premise(s) and conclusion), we can see how the common-sense notion of defeasibility can actually be captured by a purely deductive system.
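As a small illustration of the formal point (my sketch, not the author's; the bird/penguin atoms are the usual toy example, and "birds fly" is given its classical material reading), a brute-force truth-table check confirms that adding a premise never invalidates a classically valid argument:

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Brute-force propositional entailment: formulas are functions
    from valuations (dicts of atom -> bool) to booleans."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

bird    = lambda v: v["bird"]                       # Tweety is a bird
rule    = lambda v: not v["bird"] or v["flies"]     # birds fly (material)
flies   = lambda v: v["flies"]                      # Tweety flies
penguin = lambda v: v["penguin"]                    # new information

atoms = ["bird", "flies", "penguin"]
print(entails([bird, rule], flies, atoms))           # True
print(entails([bird, rule, penguin], flies, atoms))  # still True: monotone
```

Any defeat of the conclusion has to come from revising a premise (the rule), not from the mere addition of information, which is the author's point about form versus content.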
Power indices are commonly required to assign at least as much power to a player endowed with some given voting weight as to any player of the same game with smaller weight. This local monotonicity and a related global property, however, are frequently, and for good reasons, violated when indices take account of a priori unions amongst subsets of players (reflecting, e.g., ideological proximity). This paper introduces adaptations of the conventional monotonicity notions that are suitable for voting games with an exogenous coalition structure. A taxonomy of old and new monotonicity concepts is provided, and different coalitional versions of the Banzhaf and Shapley–Shubik power indices are compared accordingly.
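For readers unfamiliar with local monotonicity, here is a minimal sketch (mine; the quota and weights are illustrative, and no a priori unions are modeled) that computes the normalized Banzhaf index of a weighted voting game and checks the property:

```python
from itertools import combinations

def banzhaf(quota, weights):
    """Normalized Banzhaf index: count, for each player, the coalitions
    in which that player is a swing (winning with, losing without)."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            for i in coalition:
                if total >= quota and total - weights[i] < quota:
                    swings[i] += 1
    s = sum(swings)
    return [x / s for x in swings] if s else swings

def locally_monotonic(quota, weights):
    power = banzhaf(quota, weights)
    return all(not (weights[i] > weights[j] and power[i] < power[j])
               for i in range(len(weights)) for j in range(len(weights)))

# Illustrative game: quota 6, weights 4, 3, 2, 1.
print(banzhaf(6, [4, 3, 2, 1]))            # [5/12, 3/12, 3/12, 1/12]
print(locally_monotonic(6, [4, 3, 2, 1]))  # True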
Monotonicity is commonly considered an essential requirement for power measures; violation of local monotonicity or related postulates supposedly disqualifies an index as a valid yardstick for measuring power. This paper questions whether such claims are really warranted. In the light of features of real-world collective decision making such as coalition formation processes, ideological affinities, a priori unions, and strategic interaction, standard notions of monotonicity are too narrowly defined. A power measure should be able to indicate that power is non-monotonic in a given dimension of players' resources if – given a decision environment and plausible assumptions about behaviour – it is non-monotonic.
This paper presents a generalization of the standard notions of left monotonicity (on the nominal argument of a determiner) and right monotonicity (on the VP argument of a determiner). Determiners such as “more than/at least as many as” or “fewer than/at most as many as”, which occur in so-called propositional comparison, are shown to be monotone with respect to two nominal arguments and two VP-arguments. In addition, it is argued that the standard Generalized Quantifier analysis of numerical determiners such as “more than three/at least three” is a simplification which ignores the fundamental parallelism with the propositional comparatives. Furthermore, the symmetric monotonicity configurations of the existential “some” and “no” are shown to be straightforwardly related to those of numerical comparatives with the limit numeral zero, whereas the asymmetric configurations of universal “all” and “not all” involve the extra complicating mechanism of polarity reversal, which is related to Keenan's notion of co-intersectivity (as opposed to intersectivity). A second aim of the paper is to investigate some of the factors reducing the inferential potential of determiners. Different types of comparative determiners and their modification will be considered in detail. In addition, a systematic interaction will be revealed between the monotonicity properties of the determiners in propositional comparatives and the different types of ellipsis in the “than”-complement. Different degrees of ellipsis are defined in terms of informational dependencies between the “than”-complement and the main clause. A general balancing mechanism is observed by virtue of which an increase in informational dependency is compensated by a reduction of the inferential potential of the comparative determiner. The structure of the paper is as follows. Part two deals with propositional comparison and introduces the two basic monotonicity configurations. Part three distinguishes three types of informational dependencies or ellipsis between the “than”-complement and the main clause. In Section 3.1 the “than”-complement only contains a VP constituent, whereas in Section 3.2 it only consists of a nominal constituent. Section 3.3 considers more complex instances of multiple dependencies. Section 3.4 then looks at the interaction of these three types with the modifier “proportionally”. Part four presents the two basic monotonicity configurations for numerical comparison, whereas part five deals with more complex numerical determiners. Bounding determiners, such as “between five and ten”, and other boolean combinations are discussed in Section 5.1, whereas Section 5.2 goes into approximative determiners involving modifiers such as “only” or “almost”. Section 5.3 is devoted to proportional determiners such as “more than two out of three”. Part six then considers the standard existential and universal quantifiers. Section 6.1 reformulates the standard Generalized Quantifier analysis in comparative terms, whereas Section 6.2 deals with exception determiners of the form “all but five”.
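As a toy illustration of the left/right distinction for numerical determiners (my sketch; the concrete sets are arbitrary, and the asserts only spot-check instances rather than proving the general property): "more than three" is upward entailing in both arguments, since enlarging either set can only enlarge the intersection, while "fewer than three" is downward entailing in both.

```python
def more_than(n, a, b):
    return len(a & b) > n     # upward entailing in a and in b

def fewer_than(n, a, b):
    return len(a & b) < n     # downward entailing in a and in b

A = {1, 2, 3, 4}; A2 = A | {5}       # A is a subset of A2 (left enlargement)
B = {1, 2, 3, 4}; B2 = B | {9}       # B is a subset of B2 (right enlargement)

assert more_than(3, A, B)                              # premise
assert more_than(3, A2, B) and more_than(3, A, B2)     # both inferences hold
assert fewer_than(3, A - {1, 2}, B) and not fewer_than(3, A, B)
print("more than: upward in both arguments; fewer than: downward in both")
```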
Dp-minimality is a common generalization of weak minimality and weak o-minimality. If T is a weakly o-minimal theory then it is dp-minimal (Fact 2.2), but there are dp-minimal densely ordered groups that are not weakly o-minimal. We introduce the even more general notion of inp-minimality and prove that in an inp-minimal densely ordered group, every definable unary function is a union of finitely many continuous locally monotonic functions (Theorem 3.2).
In voting theory, monotonicity is the axiom that an improvement in the ranking of a candidate by voters cannot cause a candidate who would otherwise win to lose. The participation axiom states that the sincere report of a voter’s preferences cannot cause an outcome that the voter regards as less attractive than the one that would result from the voter’s non-participation. This article identifies three binary distinctions in the types of circumstances in which failures of monotonicity or participation can occur. Two of the three distinctions apply to monotonicity, while one of those and the third apply to participation. The distinction that is unique to monotonicity is whether the voters whose changed rankings demonstrate non-monotonicity are better off or worse off. The distinction that is unique to participation is whether the marginally participating voter causes his first choice to lose or his last choice to win. The overlapping distinction is whether the profile of voters’ rankings has a Condorcet winner or a cycle at the top. This article traces the occurrence of all of the resulting combinations of characteristics in the voting methods that can exhibit failures of monotonicity.
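One concrete instance of a monotonicity failure may help (my sketch; instant-runoff voting and this textbook-style 100-voter profile are illustrative choices, not taken from the article). Raising A on ten ballots turns A from winner into loser:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the candidate with the fewest
    continuing first choices until someone holds a majority. Ties and
    candidates with zero first-choice votes are ignored for this sketch."""
    candidates = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in candidates) for b in ballots)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        candidates.discard(min(tally, key=tally.get))

before = [("A","B","C")]*39 + [("B","C","A")]*35 + [("C","A","B")]*26
# Ten of the B>C>A voters move A to the top of their ballots:
after  = [("A","B","C")]*49 + [("B","C","A")]*25 + [("C","A","B")]*26

print(irv_winner(before))  # A wins (C eliminated, transfers to A)
print(irv_winner(after))   # C wins, although A was only ranked higher
```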
Syllogistics reduces to only two rules of inference: monotonicity and symmetry, plus a third if one wants to take existential import into account. We give an implementation that uses only the monotonicity and symmetry rules, with an addendum for the treatment of existential import. Soundness follows from the monotonicity and symmetry properties of the Aristotelean quantifiers, while completeness for syllogistic theory is proved by direct inspection of the valid syllogisms. Next, the valid syllogisms are decomposed in terms of the rules they involve. The implementation uses Haskell [8], and is given in ‘literate programming’ style [9].
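The paper's implementation is in Haskell; as a rough re-sketch of the idea (mine — the encoding of statements as triples and the forward-chaining closure are my assumptions, not the authors' code), monotonicity plus symmetry already derive valid syllogisms such as Barbara and Darii:

```python
def closure(facts):
    """Close a set of (quantifier, term, term) facts under symmetry and
    monotonicity. 'All' facts supply the subset relations that drive the
    monotonicity replacements; 'Some' and 'No' are symmetric."""
    facts = set(facts)
    while True:
        new = set()
        subset = {(a, b) for (q, a, b) in facts if q == "All"}
        for (q, a, b) in facts:
            if q in ("Some", "No"):                          # symmetry
                new.add((q, b, a))
            if q == "Some":  # upward monotone in both arguments
                new |= {("Some", a, c) for (x, c) in subset if x == b}
                new |= {("Some", c, b) for (x, c) in subset if x == a}
            if q == "No":    # downward monotone (second argument; first via symmetry)
                new |= {("No", a, c) for (c, x) in subset if x == b}
            if q == "All":   # downward in the first, upward in the second
                new |= {("All", a, c) for (x, c) in subset if x == b}
                new |= {("All", c, b) for (c, x) in subset if x == a}
        if new <= facts:
            return facts
        facts |= new

# Barbara: All A B, All B C |- All A C
print(("All", "A", "C") in closure({("All", "A", "B"), ("All", "B", "C")}))
# Darii: All B C, Some A B |- Some A C (upward monotonicity of 'Some')
print(("Some", "A", "C") in closure({("All", "B", "C"), ("Some", "A", "B")}))
```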
We introduce two new belief revision axioms: partial monotonicity and consequence correctness. We show that partial monotonicity is consistent with but independent of the full set of axioms for a Gärdenfors belief revision system. In contrast to the Gärdenfors inconsistency results for certain monotonicity principles, we use partial monotonicity to inform a consistent formalization of the Ramsey test within a belief revision system extended by a conditional operator. We take this to be a technical dissolution of the well-known Gärdenfors dilemma. In addition, we present the consequence correctness axiom as a new measure of minimal revision in terms of the deductive core of a proposition whose support we wish to excise. We survey several syntactic and semantic belief revision systems and evaluate them according to both the Gärdenfors axioms and our new axioms. Furthermore, our algebraic characterization of semantic revision systems provides a useful technical device for analysis and comparison, which we illustrate with several new proofs.
This article reveals a tension between a fairly standard response to "liar sentences," of which (L) "Sentence (L) is not true" is an instance, and some features of our natural language determiners (e.g., 'every,' 'some,' 'no,' etc.) that have been established by formal linguists. The fairly standard response to liar sentences, which has been voiced by a number of philosophers who work directly on the Liar paradox (e.g., Parsons [1974], Kripke [1975], Burge [1979], Goldstein [1985, 2009], Gaifman [1992, 2000], Glanzberg [2004], Azzouni [2006], and others), but can also be heard from philosophers who do not work directly on that paradox, is that liar sentences do not express propositions. Call this the "No Proposition View" (hereafter NPV). Evidently, the belief that liar sentences do not express propositions is a deeply held intuition. As the previously mentioned tension will reveal, there is reason to worry about whether this deeply held intuition can be sustained.
A proposal by Ferguson [2003, Argumentation 17, 335–346] for a fully monotonic argument form allowing for the expression of defeasible generalizations is critically examined and rejected as a general solution. It is argued that (i) his proposal achieves less than the default-logician's solution allows, e.g., the monotonically derived conclusion is one-sided and itself not defeasible; and (ii) when applied to a suitable example, his proposal derives the wrong conclusion. Unsuccessful remedies are discussed.
The aim of this paper is to study the monotonicity properties, with respect to the probability distribution of the state processes, of optimal decisions in bandit decision problems. Orderings of dynamic discrete projects are provided by extending the notion of stochastic dominance to stochastic processes.
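For reference, here is a minimal sketch (mine) of the static notion being extended: first-order stochastic dominance for finite distributions on a common ordered support.

```python
def fosd(p, q):
    """True iff p first-order stochastically dominates q: p's CDF lies
    at or below q's everywhere (both on the same ordered support)."""
    cp = cq = 0.0
    for pi, qi in zip(p, q):
        cp, cq = cp + pi, cq + qi
        if cp > cq + 1e-12:
            return False
    return True

# p shifts mass toward higher outcomes relative to q (toy numbers):
print(fosd([0.1, 0.3, 0.6], [0.3, 0.3, 0.4]))  # True
```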
We identify a new monotonicity condition (called cover monotonicity) for tournament solutions which allows a discrimination among main tournament solutions: the top-cycle, the iterated uncovered set, the minimal covering set, and the bipartisan set are cover monotonic, while the uncovered set, the Banks set, the Copeland rule, and the Slater rule fail to be so. As cover monotonic tournament solutions induce social choice rules which are Nash implementable in certain non-standard frameworks (such as those set by Bochet and Maniquet (CORE Discussion Paper No. 2006/84, 2006) or Özkal-Sanver and Sanver (Social Choice and Welfare, 26(3), 607–623, 2006)), the discrimination generated by cover monotonicity becomes particularly notable when implementability is a concern.
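For orientation, here is a minimal sketch (mine; it uses one common definition of covering — x covers y iff x beats y and beats everything y beats — and an arbitrary four-alternative tournament) computing the uncovered set:

```python
def covers(beats, x, y):
    n = len(beats)
    return beats[x][y] and all(beats[x][z] for z in range(n) if beats[y][z])

def uncovered_set(beats):
    n = len(beats)
    return [y for y in range(n)
            if not any(covers(beats, x, y) for x in range(n) if x != y)]

# Illustrative tournament: 0 beats 1, 1 beats 2, 2 beats 0 (a top cycle),
# and all of 0, 1, 2 beat 3; beats[x][y] is True iff x defeats y.
beats = [
    [False, True,  False, True ],
    [False, False, True,  True ],
    [True,  False, False, True ],
    [False, False, False, False],
]
print(uncovered_set(beats))  # [0, 1, 2]: 3 is covered, the cycle is not
```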
In this paper, it is argued that Ferguson's (2003, Argumentation 17, 335–346) recent proposal to reconcile monotonic logic with defeasibility has three counterintuitive consequences. First, the conclusions that can be derived from his new rule of inference are vacuous, a point that was already made against default logics when there are conflicting defaults. Second, his proposal requires a procedural “hack” to break the symmetry between the disjuncts of the tautological conclusions to which his proposal leads. Third, Ferguson's proposal amounts to arguing that all everyday inferences are sound by definition. It is concluded that the informal logic response to defeasibility, that an account of the context in which inferences are sound or unsound is required, still stands. It is also observed that another possible response is given by Bayesian probability theory (Oaksford and Chater, in press, Bayesian Rationality: The Probabilistic Approach to Human Reasoning, Oxford University Press, Oxford, UK; Hahn and Oaksford, in press, Synthese).
In this paper we present an embedding of abstract argumentation systems into the framework of Barwise and Seligman's logic of information flow. We show that, taking P. M. Dung's characterization of argument systems, a local logic over states of a deliberation may be constructed. In this structure, the key feature of non-monotonicity of commonsense reasoning obtains as the transition from one local logic to another, due to a change in certain background conditions. Each of Dung's extensions of argument systems leads to a corresponding ordering of background conditions. The relation among extensions becomes a relation among partial orderings of background conditions. This introduces a conceptual innovation in Barwise and Seligman's representation of commonsense reasoning.
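For readers new to Dung-style argumentation, a minimal sketch (mine; the three-argument framework is illustrative, and the construction is the standard textbook one, not the paper's channel-theoretic embedding): the grounded extension is the least fixed point of a monotone characteristic function, reached by iteration from the empty set.

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of F(S) = {a : every attacker of a is attacked
    by some member of S}, for a finite argumentation framework."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    s = set()
    while True:
        f = {a for a in arguments
             if all(any((c, b) in attacks for c in s) for b in attackers[a])}
        if f == s:
            return s
        s = f

args = {"a", "b", "c"}
atk = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c
print(grounded_extension(args, atk))  # {'a', 'c'}: a is unattacked, c defended
```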
In this paper we will discuss constraints on the number of (non-dummy) players and on the distribution of votes such that local monotonicity is satisfied for the Public Good Index. These results are compared to properties which are related to constraints on the redistribution of votes (such as those implied by global monotonicity). The discussion shows that monotonicity is not a straightforward criterion of classification for power measures.
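A minimal sketch (mine; the game [51; 35, 20, 15, 15, 15] is a standard style of example, not taken from the paper) of the Public Good (Holler) Index, which credits players only for minimal winning coalitions, together with a scan for local monotonicity violations:

```python
from itertools import combinations

def pgi(quota, weights):
    """Public Good Index: count each player's minimal winning coalitions
    (winning, but losing after any single removal), then normalize."""
    n = len(weights)
    wins = lambda c: sum(weights[i] for i in c) >= quota
    counts = [0] * n
    for r in range(1, n + 1):
        for c in combinations(range(n), r):
            if wins(c) and all(not wins(tuple(j for j in c if j != i))
                               for i in c):
                for i in c:
                    counts[i] += 1
    total = sum(counts)
    return [x / total for x in counts] if total else counts

def lm_violations(quota, weights):
    h = pgi(quota, weights)
    return [(i, j) for i in range(len(weights)) for j in range(len(weights))
            if weights[i] > weights[j] and h[i] < h[j]]

q, w = 51, [35, 20, 15, 15, 15]
print(pgi(q, w))            # [4/15, 2/15, 3/15, 3/15, 3/15]
print(lm_violations(q, w))  # the weight-20 player gets less than weight 15
```

For this game the scan finds genuine violations: the player with weight 20 belongs to fewer minimal winning coalitions (2) than each weight-15 player (3), illustrating why local monotonicity is a nontrivial constraint for this index.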
In this paper, I show that the availability of what some authors have called the weak reading and the strong reading of donkey sentences with relative clauses is systematically related to monotonicity properties of the determiner. The correlation is different from what has been observed in the literature in that it concerns not only right monotonicity, but also left monotonicity (persistence/antipersistence). I claim that the reading selected by a donkey sentence with a double monotone determiner is in fact the one that validates inference based on the left monotonicity of the determiner. This accounts for the lack of strong reading in donkey sentences with MON determiners, which have been neglected in the literature. I consider the relevance of other natural forms of inference as well, but also suggest how monotonicity inference might play a central role in the actual process of interpretation. The formal theory is couched in dynamic predicate logic with generalized quantifiers.
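To fix terminology, here is a minimal model-theoretic sketch (mine; the two-farmer model is arbitrary) of the weak versus strong truth conditions for "every farmer who owns a donkey beats a donkey he owns":

```python
farmers = {"f1", "f2"}
own  = {("f1", "d1"), ("f1", "d2"), ("f2", "d3")}   # (farmer, donkey) pairs
beat = {("f1", "d1"), ("f2", "d3")}

def weak_reading():
    # every donkey-owning farmer beats AT LEAST ONE donkey he owns
    return all(any((f, d) in beat for (g, d) in own if g == f)
               for f in farmers if any(g == f for (g, _) in own))

def strong_reading():
    # every donkey-owning farmer beats EVERY donkey he owns
    return all((f, d) in beat for (f, d) in own)

print(weak_reading(), strong_reading())  # True False: f1 spares d2
```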
We investigate why similar extensions of first-order logic using operators corresponding to NP-complete decision problems apparently differ in expressibility: the logics capture either NP or LNP. It had been conjectured that the complexity class captured is NP if and only if the operator is monotone. We show that this conjecture is false. However, we provide evidence supporting a revised conjecture involving finite variations of monotone problems.
We provide necessary and sufficient conditions determining how the monotonicity of some classes of reducible quantifiers depends on the monotonicity of the simpler quantifiers in the iterations to which they are equivalent.
In the context of indivisible public objects problems (e.g., candidate selection or qualification) with “separable” preferences, unanimity rule accepts each object if and only if the object is in everyone's top set. We establish two axiomatizations of unanimity rule. The main axiom is resource monotonicity, saying that resource increase should affect all agents in the same direction. This axiom is considered in combination with simple Pareto (there is no Pareto improvement by addition or subtraction of a single object), independence of irrelevant alternatives, and either path independence or strategy-proofness.
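A minimal sketch of the rule as defined in the abstract (representing each agent's separable preference simply by the reported "top set" of objects is my simplification):

```python
def unanimity_rule(top_sets):
    """Accept exactly the objects that appear in every agent's top set."""
    sets = [set(s) for s in top_sets]
    return set.intersection(*sets) if sets else set()

# Three agents report which candidate objects they want (toy data):
print(unanimity_rule([{"a", "b"}, {"a", "c"}, {"a", "b", "c"}]))  # {'a'}
```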
The theory of Generalized Quantifiers has facilitated progress in the study of negation in natural language. In particular it has permitted the formulation of a DeMorgan taxonomy of logical strength of negative Noun Phrases (Zwarts 1996a,b). It has permitted the formulation of broad semantical generalizations to explain grammatical phenomena, e.g. the distribution of Negative Polarity Items (Ladusaw 1980; Linebarger 1981, 1987, 1991; Hoeksema 1986, 1995; Zwarts 1996a,b; Horn 1992, 1996b). In the midst of this theorizing Jaap Hoepelman invited me to lecture in Stuttgart about Focus, and I took the opportunity to talk about a seminal paper on ‘only Proper Name’ and ‘even Proper Name’ by Larry Horn (1969), a paper that I had admired but that had nagged at me for years. The result of Hoepelman's invitation was Atlas (1991, 1993), in which I believed that I had discerned difficulties for the formal semantics of Negative Polarity Item sentences: ‘only Proper Name’ sentences licensed Zwarts's “weak” Negative Polarity Items, e.g. ‘ever’, ‘any’, but ‘only Proper Name’ was not a downwards monotonic quantifier, thus refuting the broad semantical generalization that any NPI licenser was a downward monotonic quantifier. In fact ‘only Proper Name’ was the first of a new category of generalized quantifier: the pseudo-anti-additive quantifier. Though I have explained and defended the introduction of this new category in this paper, a particular interest of my analysis is that it opens up the theory of Negative Polarity Items for further development; it permits the formulation of entirely new questions for research (see ‘Open Questions’, Appendix 1). Along the way I was also trying to present a correct account of the formal semantics and implicatures of ‘Only a is F’, a subject of theoretical investigation for the last 700 years, but without, in my view, any theory ever arriving at the truth. There had to be something wrong with our theoretical methods or theoretical bias towards the data. So I (Atlas 1991, 1993) have tried to break out of this logjam by introducing new constraints on the acceptability of logical forms (first introduced in Atlas & Levinson 1981 for the analysis of clefts, and in Atlas 1988 for the analysis of negative existence statements). The earlier theories ignored conversational implicatures entirely; it seemed of theoretical interest to examine statements containing focal particles like ‘Only’ for their implicatures, especially as the correct prediction of implicatures tells one something about the truth-conditions and logical form of the statement itself (Atlas 1991, 1993). In this paper I review and modify my earlier theory of the logical form, semantical properties, and pragmatic properties of ‘Only a is F’. I also provide the correct generalization to the case of ‘Only G is F’. And I respond to the criticisms in Horn (1992, 1996b).
The note addresses the problem of how utilitarianism and other finitely additive theories of value should evaluate infinitely long utility streams. We use the axiomatic approach and show that finite anonymity does not apply in an infinite framework. A stronger anonymity demand (fixed step anonymity) is proposed and motivated. Finally, we construct an ordering criterion that combines fixed step anonymity and strong monotonicity.
Suppose that we are interested in the average causal effect of a binary treatment on an outcome when this relationship is confounded by a binary confounder. Suppose that the confounder is unobserved but a nondifferential proxy of it is observed. We show that, under a certain monotonicity assumption that is empirically verifiable, adjusting for the proxy produces a measure of the effect that is between the unadjusted and the true measures.
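A minimal numeric sketch of the phenomenon (all parameters are mine and purely illustrative; the outcome probabilities are monotone in the confounder, in the spirit of the assumption the abstract mentions). It enumerates the exact joint distribution and compares the crude, proxy-adjusted, and fully adjusted risk differences:

```python
from itertools import product

pU = {0: 0.5, 1: 0.5}                     # P(U = u)
pX = {0: 0.3, 1: 0.7}                     # P(X = 1 | U = u)
pY = {(0, 0): 0.2, (1, 0): 0.5,           # P(Y = 1 | X = x, U = u),
      (0, 1): 0.5, (1, 1): 0.8}           # increasing in u (monotone)
pP = {0: 0.2, 1: 0.8}                     # P(P = 1 | U = u), nondifferential

def joint(u, p, x, y):
    return (pU[u] * (pP[u] if p else 1 - pP[u])
                  * (pX[u] if x else 1 - pX[u])
                  * (pY[(x, u)] if y else 1 - pY[(x, u)]))

def risk_difference(adjust_for=None):     # adjust_for in {None, "u", "p"}
    def p_y(x, s=None):
        keep = lambda u, p: s is None or {"u": u, "p": p}[adjust_for] == s
        num = sum(joint(u, p, x, 1) for u, p in product((0, 1), repeat=2)
                  if keep(u, p))
        den = sum(joint(u, p, x, y) for u, p, y in product((0, 1), repeat=3)
                  if keep(u, p))
        return num / den
    if adjust_for is None:
        return p_y(1) - p_y(0)
    weight = {s: sum(joint(u, p, x, y) for u, p, x, y
                     in product((0, 1), repeat=4)
                     if {"u": u, "p": p}[adjust_for] == s) for s in (0, 1)}
    return sum(weight[s] * (p_y(1, s) - p_y(0, s)) for s in (0, 1))

print(round(risk_difference(None), 3))  # crude:          0.42
print(round(risk_difference("p"), 3))   # proxy-adjusted: 0.381 (in between)
print(round(risk_difference("u"), 3))   # true:           0.3
```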
Predicate approaches to modality have been a topic of increased interest in recent intensional logic. Halbach and Welch (2009: 71–100) have proposed a new formal technique to reduce the necessity predicate to an operator, demonstrating that predicate and operator methods are ultimately compatible. This article concerns the question of whether Halbach and Welch's approach can provide a uniform formal treatment for intensionality. I show that the monotonicity constraint in Halbach and Welch's proof for necessity fails for almost all possible-worlds theories of knowledge. The nonmonotonicity results demonstrate that the most obvious way of emulating Halbach and Welch's rapprochement of the predicate and operator fails in the epistemic setting.
Do causes necessitate their effects? Causal necessitarianism (CN) is the view that they do. One major objection—the “monotonicity objection”—runs roughly as follows. For many particular causal relations, we can easily find a possible “blocker”—an additional causal factor that, had it also been there, would have prevented the cause from producing its effect. However—the objection goes on—if the cause really necessitated its effect in the first place, it would have produced it anyway—despite the blocker. Thus, CN must be false. Though different from Hume's famous attacks against CN, the monotonicity objection is no less important. In one form or another, it has actually been invoked by various opponents to CN, past and present. And indeed, its intuitive appeal is quite powerful. Yet, this paper argues that, once carefully analysed, the objection can be resisted—and should be. First, I show how its success depends on three implicit assumptions concerning, respectively, the notion of cause, the composition of causal factors, and the relation of necessitation. Second, I present general motivations for rejecting at least one of those assumptions: appropriate variants of them threaten views that even opponents to CN would want to preserve—in particular, the popular thesis of grounding necessitarianism. Finally, I argue that the assumption we should reject is the one concerning how causes should be understood: causes, I suggest, include an element of completeness that excludes blockers. In particular, I propose a way of understanding causal completeness that avoids common difficulties.
In the classic Miners case, an agent subjectively ought to do what they know is objectively wrong. This case shows that the subjective and objective ‘oughts’ are somewhat independent. But there remains a powerful intuition that the guidance of objective ‘oughts’ is more authoritative—so long as we know what they tell us. We argue that this intuition must be given up in light of a monotonicity principle, which undercuts the rationale for saying that objective ‘oughts’ are an authoritative guide for agents and advisors.
Violations of expected utility theory are sometimes attributed to imprecise preferences interacting with a lack of learning opportunity in the experimental laboratory. This paper reports an experimental test of whether a learning opportunity which engenders accurate probability assessments, by enhancing understanding of the meaning of stated probability information, causes anomalous behaviour to diminish. The data show that whilst in some cases expected utility maximising behaviour increases with the learning opportunity, so too do systematic violations. Therefore, there should be no presumption that anomalous behaviour under risk is transient and that discovered preferences will be appropriately described by expected utility theory.
It is not unusual in real life that one has to choose among finitely many alternatives when the merit of each alternative is not perfectly known. Instead of observing the actual utilities of the alternatives at hand, one typically observes more or less precise signals that are positively correlated with these utilities. In addition, the decision-maker may, at some cost or disutility of effort, choose to increase the precision of these signals, for example by way of a careful study or the hiring of expertise. We here develop a model of such decision problems. We begin by showing that a version of the monotone likelihood-ratio property is sufficient, and also essentially necessary, for the optimality of the heuristic decision rule to always choose the alternative with the highest signal. Second, we show that it is not always advantageous to face alternatives with higher utilities, a non-monotonicity result that holds even if the decision-maker optimally chooses the signal precision. We finally establish an operational first-order condition for the optimal precision level in a canonical class of decision problems, and we show that the optimal precision level may be discontinuous in the precision cost.
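A minimal sketch (mine; the toy densities are assumptions) of the condition in question for discrete signals: the likelihood ratio of the high-utility alternative's signal density to the low-utility one's must be nondecreasing in the signal.

```python
def has_mlrp(f_high, f_low):
    """Check the monotone likelihood-ratio property for two strictly
    positive discrete signal densities on a common ordered support."""
    ratios = [h / l for h, l in zip(f_high, f_low)]
    return all(a <= b + 1e-12 for a, b in zip(ratios, ratios[1:]))

# Better alternatives tend to produce higher signals (toy numbers):
print(has_mlrp([0.1, 0.3, 0.6], [0.5, 0.3, 0.2]))  # True: ratios 0.2, 1, 3
```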
There has recently been some literature on the properties of a Health-Related Social Welfare Function (HRSWF). The aim of this article is to contribute to the analysis of the different properties of a HRSWF, paying particular attention to the monotonicity principle. For monotonicity to be fulfilled, any increase in individual health—other things equal—should result in an increase in social welfare. We elicit public preferences concerning trade-offs between the total level of health (concern for efficiency) and its distribution (concern for equality), under different hypothetical scenarios through face-to-face interviews. Of key interest are: the distinction between non-monotonic preferences and Rawlsian preferences; symmetry of the HRSWF; and the extent of inequality-neutral preferences. The results indicate strong support for non-monotonic preferences over Rawlsian preferences. Furthermore, the majority of those surveyed had preferences that were consistent with a symmetric and inequality-averse HRSWF.
In the framework of transferable utility games, we modify the 2-person Davis–Maschler reduced game to ensure non-emptiness of the imputation set of the adapted 2-person reduced game. Based on the modification, we propose two new axioms: reduced game monotonicity (RGM) and reduced dominance (RD). Using RGM, RD, non-emptiness (NE), covariance under strategic equivalence, the equal treatment property, and Pareto optimality, we are able to characterize the kernel.
In instrumental variables (IV) estimation, the effect of an instrument on an endogenous variable may vary across the sample. In this case, IV produces a local average treatment effect (LATE), and if monotonicity does not hold, then no effect of interest is identified. In this paper, I calculate the weighted average of treatment effects that is identified under general first-stage effect heterogeneity, which is generally not the average treatment effect among those affected by the instrument. I then describe a simple set of data-driven approaches to modeling variation in the effect of the instrument. These approaches identify a Super-Local Average Treatment Effect that weights treatment effects by the corresponding instrument effect more heavily than LATE. Even when first-stage heterogeneity is poorly modeled, these approaches considerably reduce the impact of small-sample bias compared to standard IV and unbiased weak-instrument IV methods, and can also make results more robust to violations of monotonicity. In application to a published study with a strong instrument, the preferred approach reduces error by about 19% in small subsamples, and by about 13% in larger subsamples.
Peter Gärdenfors proved a theorem purporting to show that it is impossible to adjoin to the AGM postulates for belief revision a principle of monotonicity for revisions. The principle of monotonicity in question is implied by the Ramsey test for conditionals. So Gärdenfors's theorem appears to rule out adding the Ramsey test to the AGM postulates.
This is the handout of my comments on E. Zimmermann's paper "Monotonicity in Opaque Verbs", which I prepared for the workshop on Intensional Verbs and Non-Referential Terms held at IHPST on January 14, 2006.
In this paper I give conditions under which a matrix characterisation of validity is correct for first order logics where quantifications are restricted by statements from a theory. Unfortunately the usual definition of path closure in a matrix is unsuitable and a less pleasant definition must be used. I derive the matrix theorem from syntactic analysis of a suitable tableau system, but by choosing a tableau system for restricted quantification I generalise Wallen's earlier work on modal logics. The tableau system is only correct if a new condition I call alphabetical monotonicity holds. I sketch how the result can be applied to a wide range of logics such as first order variants of many standard modal logics, including non-serial modal logics.