This paper develops a trivalent semantics for indicative conditionals and extends it to a probabilistic theory of valid inference and inductive learning with conditionals. On this account, (i) all complex conditionals can be rephrased as simple conditionals, connecting our account to Adams's theory of p-valid inference; (ii) we obtain Stalnaker's Thesis as a theorem while avoiding the well-known triviality results; (iii) we generalize Bayesian conditionalization to an updating principle for conditional sentences. The final result is a unified semantic and probabilistic theory of conditionals with attractive results and predictions.
This paper explores trivalent truth conditions for indicative conditionals, examining the “defective” truth table proposed by de Finetti and Reichenbach. On their approach, a conditional takes the value of its consequent whenever its antecedent is true, and the value Indeterminate otherwise. Here we deal with the problem of selecting an adequate notion of validity for this conditional. We show that all standard validity schemes based on de Finetti’s table come with some problems, and highlight two ways out of the predicament: one pairs de Finetti’s conditional with validity as the preservation of non-false values, but at the expense of Modus Ponens; the other modifies de Finetti’s table to restore Modus Ponens. In Part I of this paper, we present both alternatives, with specific attention to a variant of de Finetti’s table proposed by Cooper and Cantwell. In Part II, we give an in-depth treatment of the proof theory of the resulting logics, DF/TT and CC/TT: both are connexive logics, but with significantly different algebraic properties.
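The table and the validity scheme described in this abstract can be rendered as a small computational check. The following is a minimal sketch (the encoding of truth values and the function names are mine, not the authors'), showing how Modus Ponens fails when validity is taken as preservation of non-false values:

```python
# Truth values: 1 = True, 0 = False, None = Indeterminate.

def de_finetti(a, c):
    """De Finetti's table: the value of the consequent if the
    antecedent is true, Indeterminate otherwise."""
    return c if a == 1 else None

VALUES = (1, 0, None)

def non_false(v):
    return v != 0

def tt_valid(premises, conclusion):
    """Validity as preservation of non-false values, checked over
    all valuations of two atoms a and c."""
    for a in VALUES:
        for c in VALUES:
            if all(non_false(p(a, c)) for p in premises):
                if not non_false(conclusion(a, c)):
                    return False
    return True

# Modus Ponens: from A and (if A then C), infer C.
mp_valid = tt_valid([lambda a, c: a,
                     lambda a, c: de_finetti(a, c)],
                    lambda a, c: c)
print(mp_valid)  # False
```

The counterexample found by the search is the valuation where A is Indeterminate and C is false: both premises are then non-false, but the conclusion is false.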
Adams’ thesis is generally agreed to be linguistically compelling for simple conditionals with factual antecedent and consequent. We propose a derivation of Adams’ thesis from the Lewis-Kratzer analysis of if-clauses as domain restrictors, applied to probability operators. We argue that Lewis’s triviality result may be seen as a result of inexpressibility of the kind familiar in generalized quantifier theory. Some implications of the Lewis-Kratzer analysis are presented concerning the assignment of probabilities to compounds of conditionals.
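Adams' thesis identifies the probability of a simple conditional "if A then C" with the conditional probability P(C | A). A toy illustration over a hypothetical four-world probability space (the numbers are invented for the example):

```python
from fractions import Fraction as F

# A hypothetical probability space over the factual propositions A and C.
worlds = {
    (True, True): F(3, 10),
    (True, False): F(1, 10),
    (False, True): F(1, 5),
    (False, False): F(2, 5),
}

def prob(pred):
    """Total probability of the worlds satisfying pred."""
    return sum(p for w, p in worlds.items() if pred(w))

# Adams' thesis: P(if A then C) = P(C | A) = P(A and C) / P(A).
p_if_a_then_c = prob(lambda w: w[0] and w[1]) / prob(lambda w: w[0])
print(p_if_a_then_c)  # 3/4
```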
We present the results of four experiments concerning the evaluation people make of sentences involving “many”, showing that two sentences of the form “many As are Bs” vs. “many As are Cs” need not be equivalent when evaluated relative to a background in which B and C have the same cardinality and proportion to A, but in which B and C are predicates with opposite semantic and affective values. The data provide evidence that subjects lower the standard relevant to ascribe “many” for the more negative predicate, and that judgments involving “many” are sensitive to moral considerations, namely to expectations involving a representation of the desirability as opposed to the mere probability of an outcome. We relate the results to similar semantic asymmetries discussed in the psychological literature, in particular to the Knobe effect and to framing effects.
Why is ordinary language vague? We argue that in contexts in which a cooperative speaker is not perfectly informed about the world, the use of vague expressions can offer an optimal tradeoff between truthfulness (Gricean Quality) and informativeness (Gricean Quantity). Focusing on expressions of approximation such as “around”, which are semantically vague, we show that they allow the speaker to convey indirect probabilistic information, in a way that can give the listener a more accurate representation of the information available to the speaker than any more precise expression would (intervals of the form “between”). That is, vague sentences can be _more informative_ than their precise counterparts. We give a probabilistic treatment of the interpretation of “around”, and offer a model for the interpretation and use of “around”-statements within the Rational Speech Act (RSA) framework. In our account the shape of the speaker’s distribution matters in ways not predicted by the Lexical Uncertainty model standardly used in the RSA framework for vague predicates. We use our approach to draw further lessons concerning the semantic flexibility of vague expressions and their irreducibility to more precise meanings.
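The Rational Speech Act framework mentioned above models interpretation as recursive reasoning between a literal listener, a speaker, and a pragmatic listener. The following is a generic RSA sketch, not the paper's model of “around”: the states, utterances, semantics, and rationality parameter are all toy choices.

```python
import math

states = [1, 2, 3]
utterances = {
    "exactly 2": lambda s: s == 2,
    "around 2": lambda s: abs(s - 2) <= 1,
}
prior = {s: 1 / 3 for s in states}
ALPHA = 1.0  # speaker rationality (toy value)

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def literal_listener(u):
    # L0: the prior restricted to states where the utterance is true.
    return normalize({s: prior[s] * utterances[u](s) for s in states})

def speaker(s):
    # S1: soft-max choice among the utterances true of state s.
    scores = {u: math.exp(ALPHA * math.log(literal_listener(u)[s]))
              for u in utterances if utterances[u](s)}
    return normalize(scores)

def pragmatic_listener(u):
    # L1: Bayesian inversion of the speaker model.
    return normalize({s: prior[s] * speaker(s).get(u, 0) for s in states})

print(pragmatic_listener("around 2"))
```

Because a speaker in state 2 would have preferred the more precise utterance, the pragmatic listener hearing "around 2" shifts probability mass toward states 1 and 3.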
This paper develops a trivalent semantics for the truth conditions and the probability of the natural language indicative conditional. Our framework rests on trivalent truth conditions first proposed by Cooper (1968) and Belnap (1973), and it yields two logics of conditional reasoning: (i) a logic C of inference from certain premises; and (ii) a logic U of inference from uncertain premises. But whereas the conditional is monotonic in C, it is non-monotonic in U, and whereas it obeys Modus Ponens in C, it does not in U without restrictions. We show systematic correspondences between trivalent and probabilistic representations of inferences in either framework, and we use the distinction between the two systems to cast light on the validity of inferences such as Modus Ponens, Or-To-If, and Conditional Excluded Middle. The result is a unified account of the semantics and epistemology of indicative conditionals that can be fruitfully applied to analyzing the validity of conditional inferences.
Editorial: Objects and Sound Perception. Review of Philosophy and Psychology, Volume 1, Number 1, pages 5-17. DOI: 10.1007/s13164-009-0006-3. Online ISSN 1878-5166, Print ISSN 1878-5158. Authors: Nicolas J. Bullot (École des Hautes Études en Sciences Sociales, Centre de Recherches sur les Arts et le Langage, CRAL/CNRS, 96 Bd Raspail, 75006 Paris, France) and Paul Égré (Institut Jean-Nicod, ENS/EHESS/CNRS, Département d’Etudes Cognitives de l’ENS, 29 rue d’Ulm, 75005 Paris, France).
This paper examines a hypothesis put forward by Pettit and Knobe (2009) to account for the Knobe effect. According to Pettit and Knobe, one should look at the semantics of the adjective “intentional” on a par with that of other gradable adjectives such as “warm”, “rich” or “expensive”. What Pettit and Knobe’s analogy suggests is that the Knobe effect might be an instance of a much broader phenomenon which concerns the context-dependence of normative standards relevant for the application of gradable expressions. I adduce further evidence in favor of this view and go on to examine the predictions one obtains when assuming that “intentional” involves a two-dimensional scale, which implies evaluating how much an action or outcome is desired on the one hand, and how much it can be foreseen as a consequence of one’s actions on the other.
Attitude verbs fall in different categories depending on the kind of sentential complements which they can embed. In English, a verb like know takes both declarative and interrogative complements. By contrast, believe takes only declarative complements and wonder takes only interrogative complements. The present paper examines the hypothesis, originally put forward by Hintikka (1975), that the only verbs that can take both that-complements and whether-complements are the factive verbs. I argue that at least one half of the hypothesis is empirically correct, namely that all veridical attitude verbs taking that-complements take whether-complements. I distinguish veridical verbs from factive verbs, and present one way of deriving the generalization. Counterexamples to both directions of the factivity hypothesis are discussed, in particular the case of emotive factive verbs like regret, and the case of non-veridical verbs that license whether-complements, in particular tell, guess, decide and agree. Alternative accounts are discussed along the way, in particular Zuber (1982), Ginzburg (1995) and Saebø (2007).
Is knowledge definable as justified true belief? We argue that one can legitimately answer positively or negatively, depending on whether or not one’s true belief is justified by what we call adequate reasons. To facilitate our argument we introduce a simple propositional logic of reason-based belief, and give an axiomatic characterization of the notion of adequacy for reasons. We show that this logic is sufficiently flexible to accommodate various useful features, including quantification over reasons. We use our framework to contrast two notions of JTB: one internalist, the other externalist. We argue that Gettier cases essentially challenge the internalist notion but not the externalist one. Our approach commits us to a form of infallibilism about knowledge, but it also leaves us with a puzzle, namely whether knowledge involves the possession of only adequate reasons, or leaves room for some inadequate reasons. We favor the latter position, which reflects a milder and more realistic version of infallibilism.
In 1907 Borel published a remarkable essay on the paradox of the Heap (“Un paradoxe économique: le sophisme du tas de blé et les vérités statistiques”), in which he proposes what is likely the first statistical account of vagueness ever written, and discusses the practical implications of the sorites paradox, including in economics. Borel’s paper was integrated into his book Le Hasard, published in 1914, but has gone mostly unnoticed since its publication. One of the originalities of Borel’s essay is that it puts forward a model of vagueness as imprecision, making particular use of the Gaussian law of measurement errors to model categorization. The aim of our paper is to give a presentation of the historical context of Borel’s essay, to spell out the mathematical details of his model, and to provide a critical assessment of his theory. Three aspects of Borel’s account are particularly discussed: the first concerns the comparison between Borel’s statistical account and later degree-theoretic accounts of vagueness. The second concerns the anti-epistemicist flavor of Borel’s approach, whereby the idea of statistical fluctuation is used to undermine the notion of sharp boundary for vague predicates. The third concerns the problematic link between Borel’s model of vagueness as imprecision and the notion of semantic indeterminacy. An English translation of Borel’s original essay is appended to this paper (Erkenntnis, this issue).
The tolerance principle, the idea that vague predicates are insensitive to sufficiently small changes, remains the main bone of contention between theories of vagueness. In this paper I examine three sources behind our ordinary belief in the tolerance principle, to establish whether any of them might give us a good reason to revise classical logic. First, I compare our understanding of tolerance in the case of precise predicates and in the case of vague predicates. While tolerance in the case of precise predicates results from approximation, tolerance in the case of vague predicates appears to originate from two more specific sources: semantic indeterminacy on the one hand, and epistemic indiscriminability on the other. Both give us good and coherent grounds to revise classical logic. Epistemic indiscriminability, it is argued, may be more fundamental than semantic indeterminacy to justify the intuition that vague predicates are tolerant.
This paper proposes an experimental investigation of the use of vague predicates in dynamic sorites. We present the results of two studies in which subjects had to categorize colored squares at the borderline between two color categories (Green vs. Blue, Yellow vs. Orange). Our main aim was to probe for hysteresis in the ordered transitions between the respective colors, namely for the longer persistence of the initial category. Our main finding is a reverse phenomenon of enhanced contrast (i.e. negative hysteresis), present in two different tasks, a comparative task involving two color names, and a yes/no task involving a single color name, but not found in a corresponding color matching task. We propose an optimality-theoretic explanation of this effect in terms of the strict-tolerant framework of Cobreros et al. (J Philos Log 1–39, 2012), in which borderline cases are characterized in a dual manner in terms of overlap between tolerant extensions, and underlap between strict extensions.
This paper explores the idea that vague predicates like “tall”, “loud” or “expensive” are applied based on a process of analog magnitude representation, whereby magnitudes are represented with noise. I present a probabilistic account of vague judgment, inspired by early remarks from E. Borel on vagueness, and use it to model judgments about borderline cases. The model involves two main components: probabilistic magnitude representation on the one hand, and a notion of subjective criterion. The framework is used to represent judgments of the form “x is clearly tall” versus “x is tall”, as involving a shift of one’s criterion, and then to derive observed patterns of acceptance for borderline contradictions, namely sentences of the form “x is tall and not tall”, relative to the acceptance of their conjuncts.
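The two components of the model, noisy magnitude representation and a subjective criterion, can be sketched as follows. All numerical choices below (noise level, criterion, criterion shift for “clearly”) are illustrative assumptions, not the paper's parameters:

```python
from statistics import NormalDist

SIGMA = 3.0        # noise of the magnitude representation, in cm (illustrative)
CRITERION = 180.0  # subjective criterion for "tall", in cm (illustrative)
SHIFT = 4.0        # extra criterion shift for "clearly tall" (illustrative)

def p_tall(height, criterion=CRITERION):
    """Probability that the noisy representation of height exceeds the criterion."""
    return 1 - NormalDist(mu=height, sigma=SIGMA).cdf(criterion)

def p_clearly_tall(height):
    """Same judgment with the criterion shifted upward."""
    return p_tall(height, CRITERION + SHIFT)

# A borderline case sits at the criterion: "tall" is endorsed half the
# time, and "clearly tall" strictly less often.
print(p_tall(180.0))                          # 0.5
print(p_clearly_tall(180.0) < p_tall(180.0))  # True
```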
In this paper we compare different models of vagueness viewed as a specific form of subjective uncertainty in situations of imperfect discrimination. Our focus is on the logic of the operator “clearly” and on the problem of higher-order vagueness. We first examine the consequences of the notion of intransitivity of indiscriminability for higher-order vagueness, and compare several accounts of vagueness as inexact or imprecise knowledge, namely Williamson’s margin for error semantics, Halpern’s two-dimensional semantics, and the system we call Centered semantics. We then propose a semantics of degrees of clarity, inspired by the signal detection theory model, and outline a view of higher-order vagueness in which the notions of subjective clarity and unclarity are handled asymmetrically at higher orders, namely such that the clarity of clarity is compatible with the unclarity of unclarity.
In Part I of this paper, we identified and compared various schemes for trivalent truth conditions for indicative conditionals, most notably the proposals by de Finetti and Reichenbach on the one hand, and by Cooper and Cantwell on the other. Here we provide the proof theory for the resulting logics DF/TT and CC/TT, using tableau calculi and sequent calculi, and proving soundness and completeness results. Then we turn to the algebraic semantics, where both logics have substantive limitations: DF/TT allows for algebraic completeness, but not for the construction of a canonical model, while CC/TT fails the construction of a Lindenbaum-Tarski algebra. With these results in mind, we draw up the balance and sketch future research projects.
This paper propounds a systematic examination of the link between the Knower Paradox and provability interpretations of modal logic. The aim of the paper is threefold: to give a streamlined presentation of the Knower Paradox and related results; to clarify the notion of a syntactical treatment of modalities; finally, to discuss the kind of solution that modal provability logic provides to the Paradox. I discuss the respective strength of different versions of the Knower Paradox, both in the framework of first-order arithmetic and in that of modal logic with fixed point operators. It is shown that the notion of a syntactical treatment of modalities is ambiguous between a self-referential treatment and a metalinguistic treatment of modalities, and that these two notions are independent. I survey and compare the provability interpretations of modality respectively given by Skyrms, B. (1978, The Journal of Philosophy 75: 368–387), Anderson, C.A. (1983, The Journal of Philosophy 80: 338–355), and Solovay, R. (1976, Israel Journal of Mathematics 25: 287–304). I examine how these interpretations enable us to bypass the limitations imposed by the Knower Paradox while preserving the laws of classical logic, each time by appeal to a distinct form of hierarchy.
Practices of concept-revision among scientists seem to indicate that concepts can be improved. In 2006, the International Astronomical Union revised the concept "Planet" so that it excluded Pluto, and insisted that the result was an improvement. But what could it mean for one concept or conceptual scheme to be better than another? Here we draw on the theory of epistemic utility to address this question. We show how the plausibility and informativeness of beliefs, two features that contribute to their utility, have direct correlates in our concepts. These are how inclusive a concept is, or how many objects in an environment it applies to, and how homogeneous it is, or how similar the objects that fall under the concept are. We provide ways to measure these values, and argue that in combination they can provide us with a single principle of concept utility. The resulting principle can be used to decide how best to categorize an environment, and can rationalize practices of concept revision.
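The two quantities described above, inclusiveness and homogeneity, lend themselves to a toy implementation. The similarity measure and the weighting rule below are hypothetical choices made for illustration, not the paper's principle:

```python
from itertools import combinations

def inclusiveness(members, environment):
    """Share of the environment the concept applies to."""
    return len(members) / len(environment)

def homogeneity(members, similarity):
    """Average pairwise similarity of the concept's instances."""
    pairs = list(combinations(members, 2))
    if not pairs:
        return 1.0
    return sum(similarity(x, y) for x, y in pairs) / len(pairs)

def concept_utility(members, environment, similarity, weight=0.3):
    """A hypothetical weighted combination of the two components."""
    return (weight * inclusiveness(members, environment)
            + (1 - weight) * homogeneity(members, similarity))

# Toy environment of orbital radii; similarity decays with distance.
environment = [1, 2, 3, 30]
sim = lambda x, y: 1 / (1 + abs(x - y))

narrow = [1, 2, 3]   # excludes the outlier (cf. Pluto's demotion)
broad = environment  # includes it

print(concept_utility(narrow, environment, sim) >
      concept_utility(broad, environment, sim))  # True
```

With this weighting, excluding the outlier costs some inclusiveness but gains enough homogeneity to raise overall utility, mirroring the rationale the abstract attributes to the Pluto decision.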
The aim of the present paper is to understand what the notions of explanation and prediction in contemporary linguistics mean, and to compare various aspects that the notion of explanation encompasses in that domain. The paper is structured around an opposition between three main styles of explanation in linguistics, which I propose to call ‘grammatical’, ‘functional’, and ‘historical’. Most of this paper is a comparison between these different styles of explanations and their relations. A second, more methodological aspect this paper seeks to clarify concerns the extent to which linguistic explanations can be viewed as predictive, rather than merely descriptive, and the problem of whether linguistic explanations ought to be causal, rather than noncausal. I argue that the notion of prediction is as applicable in linguistics as in other empirical sciences. The extent to which the computational model of generative syntax can be viewed as providing a causal or psychologically realist model of language is more controversial.
Preface: The Review of Philosophy and Psychology. Review of Philosophy and Psychology, Volume 1, Number 1, pages 1-3. DOI: 10.1007/s13164-010-0024-1. Authors: Dario Taraborelli (University of Surrey, Centre for Research in Social Simulation, Guilford GU2 7XH, United Kingdom), Roberto Casati (Institut Jean Nicod, Ecole Normale Supérieure, 29 rue d’Ulm, 75005 Paris, France), Paul Egré (Institut Jean Nicod, Ecole Normale Supérieure, Paris, France), and Christophe Heintz (Central European University, Budapest, Hungary).
We say that a sentence A is a permissive consequence of a set X of premises whenever, if all the premises of X hold up to some standard, then A holds to some weaker standard. In this paper, we focus on a three-valued version of this notion, which we call strict-to-tolerant consequence, and discuss its fruitfulness toward a unified treatment of the paradoxes of vagueness and self-referential truth. For vagueness, st-consequence supports the principle of tolerance; for truth, it supports the requisite of transparency. Permissive consequence is non-transitive, but this feature is argued to be an essential component of the understanding of paradoxical reasoning in cases involving vagueness or self-reference.
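The strict-to-tolerant scheme can be made concrete in a strong Kleene setting. The following is a minimal sketch (the numeric encoding of the three values is mine): premises are required to take the strict value 1, while the conclusion need only take a value of at least 0.5.

```python
from itertools import product

# Strong Kleene values: 1 (true), 0.5 (indeterminate), 0 (false).
def v_not(a):
    return 1 - a

def v_or(a, b):
    return max(a, b)

def st_valid(premises, conclusion, n_atoms):
    """Strict-to-tolerant validity: whenever every premise takes the
    strict value 1, the conclusion takes at least the tolerant 0.5."""
    for vals in product((0, 0.5, 1), repeat=n_atoms):
        if all(p(*vals) == 1 for p in premises):
            if conclusion(*vals) < 0.5:
                return False
    return True

# Excluded middle is st-valid, since p or not-p is never false,
# even though it fails when the conclusion is held to the strict standard.
print(st_valid([], lambda p: v_or(p, v_not(p)), 1))  # True
```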
In “Facts: Particulars or Information Units?”, Kratzer proposed a causal analysis of knowledge in which knowledge is defined as a form of de re belief of facts. In support of Kratzer’s view, I show that a certain articulation of the de re/de dicto distinction can be used to fully account for the original pair of Gettier cases. In contrast to Kratzer, however, I think such an account does not fundamentally require a distinction between facts and true propositions. I then discuss whether this account might be generalized and whether it can give us a reductive analysis of knowledge as de re true belief. Like Kratzer, I think it will not; in particular, the distinction appears inadequate to account for Ginet-Goldman cases of causally connected but unreliable belief. Nevertheless, I argue that the de re belief analysis allows us to account for a distinction Starmans and Friedman recently introduced between apparent evidence and authentic evidence in their empirical study of Gettier cases, in a way that questions their claim that a causal disconnect is not operative in the contrasts they found.
Moral considerations and our normative expectations influence not only our judgments about intentional action or causation but also our judgments about exact probabilities and quantities. Whereas those cases support the competence theory proposed by Knobe in his paper, they remain compatible with a modular conception of the interaction between moral and nonmoral cognitive faculties in each of those domains.
I discuss the problem of whether true contradictions of the form “x is P and not P” might be the expression of an implicit relativization to distinct respects of application of one and the same predicate P. Priest rightly claims that one should not mistake true contradictions for an expression of lexical ambiguity. However, he primarily targets cases of homophony for which lexical meanings do not overlap. There exist more subtle forms of equivocation, such as the relation of privative opposition singled out by Zwicky and Sadock in their study of ambiguity. I argue that this relation, which is basically a relation of general to more specific, underlies the logical form of true contradictions. The generalization appears to be that all true contradictions really mean “x is P in some respects/to some extent, but not in all respects/not to the full extent”. I relate this to the strict-tolerant account of vague predicates and outline a variant of the account to cover one-dimensional and multidimensional predicates.
This paper supersedes an earlier version, entitled "A Non-Standard Semantics for Inexact Knowledge with Introspection", which appeared in the Proceedings of "Rationality and Knowledge". The definition of token semantics, in particular, has been modified, both for the single- and the multi-agent case.
This paper revisits Buridan’s Bridge paradox (Sophismata, chapter 8, Sophism 17), itself close kin to the Liar paradox, a version of which also appears in Bradwardine’s Insolubilia. Prompted by the occurrence of the paradox in Cervantes’s Don Quixote, I discuss and compare four distinct solutions to the problem, namely Bradwardine’s “just false” conception, Buridan’s “contingently true/false” theory, Cervantes’s “both true and false” view, and then the “neither true simpliciter nor false simpliciter” account proposed more recently by Jacquette. All four solutions accept that the Bridge expresses a truth-apt proposition, but only the latter three endorse the transparency of truth. Against some previous commentaries I first show that Buridan’s solution is fully compliant with an account of the paradox within classical logic. I then argue that Cervantes’s insights, as well as Jacquette’s treatment, are both supportive of a dialetheist account, and Jacquette’s in particular of the strict-tolerant account of truth. I defend dialetheist intuitions (whether in LP or ST guise) against two objections: one concerning the future, the other concerning the alleged simplicity of the Bridge compared to the Liar.
Our paper addresses the following question: Is there a general characterization, for all predicates P that take both declarative and interrogative complements, of the meaning of the P-interrogative clause construction in terms of the meaning of the P-declarative clause construction? On our account, if P is a responsive predicate and Q a question embedded under P, then the meaning of ‘P + Q’ is, informally, “to be in the relation expressed by P to some potential complete answer to Q”. We show that this rule allows us to derive veridical and non-veridical readings of embedded questions, depending on whether the embedding verb is veridical or not, and provide novel empirical evidence supporting the generalization. We then enrich our basic proposal to account for the presuppositions induced by the embedding verbs, as well as for the generation of intermediate exhaustive readings of embedded questions.
This is the handout of my comments on E. Zimmermann's paper "Monotonicity in Opaque Verbs", which I prepared for the workshop on Intensional Verbs and Non-Referential Terms held at IHPST on January 14, 2006.
Given a consequence relation in many-valued logic, what connectives can be defined? For instance, does there always exist a conditional operator internalizing the consequence relation, and which form should it take? In this paper, we pose this question in a multi-premise multi-conclusion setting for the class of so-called intersective mixed consequence relations, which extends the class of Tarskian relations. Using computer-aided methods, we answer extensively for 3-valued and 4-valued logics, focusing not only on conditional operators, but also on what we call Gentzen-regular connectives. For arbitrary N-valued logics, we state necessary and sufficient conditions for the existence of such connectives in a multi-premise multi-conclusion setting. The results show that mixed consequence relations admit all classical connectives, and among them pure consequence relations are those that admit no other Gentzen-regular connectives. Conditionals can also be found for a broader class of intersective mixed consequence relations, but with the exclusion of order-theoretic consequence relations.
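The scale of such computer-aided searches is easy to convey: the space of binary 3-valued connectives (3^9 = 19683 truth tables) is small enough to enumerate exhaustively. A toy illustration follows; the constraint checked here (agreement with the classical material conditional on classical inputs) is far weaker than the Gentzen-regularity and internalization conditions studied in the paper.

```python
from itertools import product

VALUES = (0, 0.5, 1)
PAIRS = list(product(VALUES, repeat=2))  # the 9 input cells of a binary table

# Cells fixed by the classical material conditional on classical inputs.
classical = {(1, 1): 1, (1, 0): 0, (0, 1): 1, (0, 0): 1}

count = 0
for row in product(VALUES, repeat=len(PAIRS)):  # all 3**9 tables
    table = dict(zip(PAIRS, row))
    if all(table[k] == v for k, v in classical.items()):
        count += 1
print(count)  # 243, i.e. 3**5: the five non-classical cells remain free
```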
In chapter 5 of Knowledge and its Limits, T. Williamson formulates an argument against the principle (KK) of epistemic transparency, or luminosity of knowledge, namely “that if one knows something, then one knows that one knows it”. Williamson’s argument proceeds by reductio: from the description of a situation of approximate knowledge, he shows that a contradiction can be derived on the basis of principle (KK) and additional epistemic principles that he claims are better grounded. One of them is a reflective form of the margin for error principle defended by Williamson in his account of knowledge. We argue that Williamson’s reductio rests on the inappropriate identification of distinct forms of knowledge. More specifically, an important distinction between perceptual knowledge and non-perceptual knowledge is wanting in his statement and analysis of the puzzle. We present an alternative account of this puzzle, based on a modular conception of knowledge: the (KK) principle and the margin for error principle can coexist, provided their domain of application is restricted to the right sort of knowledge.
The paper examines the logic and semantics of knowledge attributions of the form “s knows whether A or B”. We analyze these constructions in an epistemic logic with alternative questions, and propose an account of the context-sensitivity of the corresponding sentences and of their presuppositions.
We discuss the 'problem of convergent knowledge', an argument presented by J. Schaffer in favour of contextualism about knowledge attributions, and against the idea that knowledge-wh can be simply reduced to knowledge of the proposition answering the question. Schaffer's argument centrally involves alternative questions of the form 'whether A or B'. We propose an analysis of these on which the problem of convergent knowledge does not arise. While alternative questions can contextually restrict the possibilities relevant for knowledge attributions, what Schaffer's puzzle reveals is a pragmatic ambiguity in what 'knowing the answer' means: in his problematic cases, the subject knows only a partial answer to the question. This partial knowledge can be counted as adequate only on externalist grounds.
Are all truths knowable? A negative answer to this question follows from a logical argument related to the Moore Paradox and due to F. Fitch, also known as Fitch's Knowability Paradox. Fitch's paradox is widely considered to be an obstacle to the antirealist conception of truth, but even for a realist, it appears to threaten the positivist faith in the accessibility of all truths to our minds. In this paper, I first review different strategies to circumvent the paradox, each of them inspired by a different form of antirealism. I then compare these approaches to a realist version of positivism, discussed recently by Burgess, which would only postulate that all necessary truths are knowable. On that view, some contingent truths are indeed unknowable, but the idea that all necessary truths are within our reach remains sufficient to maintain the positivist faith in scientific knowledge.
In this paper we investigate a semantics for first-order logic originally proposed by R. van Rooij to account for the idea that vague predicates are tolerant, that is, for the principle that if x is P, then y should be P whenever y is similar enough to x. The semantics, which makes use of indifference relations to model similarity, rests on the interaction of three notions of truth: the classical notion, and two dual notions simultaneously defined in terms of it, which we call tolerant truth and strict truth. We characterize the space of consequence relations definable in terms of those and discuss the kind of solution this gives to the sorites paradox. We discuss some applications of the framework to the pragmatics and psycholinguistics of vague predicates, in particular regarding judgments about borderline cases.
Nicholas Smith argues that an adequate account of vagueness must involve degrees of truth. The basic idea of degrees of truth is that while some sentences are true and some are false, others possess intermediate truth values: they are truer than the false sentences, but not as true as the true ones. This idea is immediately appealing in the context of vagueness, yet it has fallen on hard times in the philosophical literature, with existing degree-theoretic treatments of vagueness facing apparently insuperable objections. Smith seeks to turn the tide in favor of a degree-theoretic treatment of vagueness, by motivating and defending the basic idea that truth can come in degrees, by arguing that no theory of vagueness that does not countenance degrees of truth can be correct, and by developing a new degree-theoretic treatment of vagueness, fuzzy plurivaluationism, that solves the problems plaguing earlier degree theories.