Vagueness provides the first comprehensive examination of a topic of increasing importance in metaphysics and the philosophy of logic and language. Timothy Williamson traces the history of this philosophical problem from discussions of the heap paradox in classical Greece to modern formal approaches such as fuzzy logic. He illustrates the problems with views holding that standard logic and formal semantics do not apply to vague language, and defends the controversial realist view that vagueness is a kind of ignorance--that there really is a grain of sand whose removal turns a heap into a non-heap, but we cannot know which one it is.
This paper investigates the way that linguistic expressions influence vagueness, focusing on the interpretation of the positive (unmarked) form of gradable adjectives. I begin by developing a semantic analysis of the positive form of ‘relative’ gradable adjectives, expanding on previous proposals by further motivating a semantic basis for vagueness and by precisely identifying and characterizing the division of labor between the compositional and contextual aspects of its interpretation. I then introduce a challenge to the analysis from the class of ‘absolute’ gradable adjectives: adjectives that are demonstrably gradable, but which have positive forms that relate objects to maximal or minimal degrees, and do not give rise to vagueness. I argue that the truth conditional difference between relative and absolute adjectives in the positive form stems from the interaction of lexical semantic properties of gradable adjectives—the structure of the scales they use—and a general constraint on interpretive economy that requires truth conditions to be computed on the basis of conventional meaning to the extent possible, allowing for context dependent truth conditions only as a last resort.
Roy Sorensen offers a unique exploration of an ancient problem: vagueness. Did Buddha become a fat man in one second? Is there a tallest short giraffe? According to Sorensen's epistemicist approach, the answers are yes! Although vagueness abounds in the way the world is divided, Sorensen argues that the divisions are sharp; yet we often do not know where they are. Written in Sorensen's usual inventive and amusing style, this book offers original insight on language and logic, the way the world is, and our understanding of it.
This paper discusses Fara's so-called 'Paradox of Higher-Order Vagueness' concerning supervaluationism. In the paper I argue that supervaluationism is not committed to global validity, as is largely assumed in the literature, but to a weaker notion of logical consequence I call 'regional validity'. I then show how the supervaluationist might solve Fara's paradox by making use of this weaker notion of logical consequence. The paper is discussed by Delia Fara in the same volume.
Stewart Shapiro's ambition in Vagueness in Context is to develop a comprehensive account of the meaning, function, and logic of vague terms in an idealized version of a natural language like English. It is a commonplace that the extensions of vague terms vary according to their context: a person can be tall with respect to male accountants and not tall (even short) with respect to professional basketball players. The key feature of Shapiro's account is that the extensions of vague terms also vary in the course of conversations and that, in some cases, a competent speaker can go either way without sinning against the meaning of the words or the non-linguistic facts. As Shapiro sees it, vagueness is a linguistic phenomenon, due to the kinds of languages that humans speak; but vagueness is also due to the world we find ourselves in, as we try to communicate features of it to each other.
In this paper we focus mainly on a kind of contextualist theory of vagueness according to which the context dependence has its source in the variation of our practical interests. We concentrate on Fara's version of the theory, but our observations work at different levels of generality, some relevant only to the specifics of Fara's theory, others relevant to all contextualist theories of a certain type.
Scott Soames has recently argued that the fact that lawmakers and other legal practitioners regard vagueness as having a valuable power-delegating function gives us good reason to favor one theory of vagueness over another. If Soames is right, then facts about legal practice can in an important sense adjudicate between rival theories of vagueness. I argue that due to what I call the “Gappiness Problem” – raised by recent critics of the “communicative-content theory of law” – we have to give up the one premise of Soames’s argument that he seems to take to be uncontroversial: that the legal content of a statute or constitutional clause is identical with, or constituted by, its communicative content. I provide a sketch of my own account of legal content and show how it provides a response to the Gappiness Problem. This account, however, does not suffice to vindicate Soames’s argument. I conclude by arguing that my point about Soames’s argument is generalizable.
Vague expressions are omnipresent in natural language. As such, their use in legal texts is virtually inevitable. If a law contains vague terms, the question whether it applies to a particular case often lacks a clear answer. One of the fundamental pillars of the rule of law is legal certainty. The determinacy of the law enables people to use it as a guide and places judges in the position to decide impartially. Vagueness poses a threat to these ideals. In borderline cases, the law seems to be indeterminate and thus incapable of serving its core rule of law value.

In the philosophy of language, vagueness has become one of the hottest topics of the last two decades. Linguists and philosophers have investigated what distinguishes "soritical" vagueness from other kinds of linguistic indeterminacy, such as ambiguity, generality, open texture, and family resemblance concepts. There is a vast literature that discusses the logical, semantic, pragmatic, and epistemic aspects of these phenomena. Legal theory has hitherto paid little attention to the differences between the various kinds of linguistic indeterminacy that are grouped under the heading of "vagueness", let alone to the various theories that try to account for these phenomena.

The paper is an introduction to a book of the same title. Bringing together leading scholars working on the topic of vagueness in philosophy and in law, the book fosters a dialogue between philosophers and legal scholars by examining how philosophers conceive of legal ambiguity from their theoretical perspective and how legal theorists make use of philosophical theories of vagueness.
In psychiatry there is no sharp boundary between the normal and the pathological. Although clear cases abound, it is often indeterminate whether a particular condition does or does not qualify as a mental disorder. For example, definitions of ‘subthreshold disorders’ and of the ‘prodromal stages’ of diseases are notoriously contentious. Philosophers and linguists call concepts that lack sharp boundaries, and thus admit of borderline cases, ‘vague’. This overview chapter reviews current debates about demarcation in psychiatry against the backdrop of key issues within the philosophical discussion of vagueness: Are there various kinds of vagueness? Is all vagueness representational? How does vagueness relate to epistemic uncertainty? What is the value of vagueness? Given the immense social, moral, and legal importance of demarcating the normal from the pathological in psychiatry, what are the pros and cons of gradualist approaches to mental disorders, that is, of construing boundaries as matters of degree?
How does vagueness interact with metaphysical modality and with restrictions of it, such as nomological modality? In particular, how do definiteness, necessity (understood as restricted in some way or not), and actuality interact? This paper proposes a model-theoretic framework for investigating the logic and semantics of that interaction. The framework is put forward in an ecumenical spirit: it is intended to be applicable to all theories of vagueness that express vagueness using a definiteness (or: determinacy) operator. We will show how epistemicists, supervaluationists, and theorists of metaphysical vagueness like Barnes and Williams (2010) can interpret the framework. We will also present a complete axiomatization of the logic we recommend to both epistemicists and local supervaluationists.
In this paper I explore the implications of moral vagueness (viz., the vagueness of moral predicates) for non-naturalist metaethical theories like those recently championed by Shafer-Landau, Parfit, and others. I characterise non-naturalism in terms of its commitment to seven theses: Cognitivism, Correspondence, Atomism, Objectivism, Supervenience, Non-reductivism, and Rationalism. I start by offering a number of reasons for thinking that moral predicates are vague in the same way in which ‘red’, ‘tall’, and ‘heap’ are said to be. I then argue that the moral non-naturalist seeking to countenance moral vagueness faces a dilemma: are moral properties vague, or perfectly sharp? On either horn of the dilemma, serious problems arise for some of the central tenets of non-naturalism: vague properties seem to threaten Objectivism, Supervenience, and Non-reductivism; on the other hand, sharp properties raise problems for Supervenience and Rationalism. The difficulties on each horn of the dilemma are real, and while they may not be insuperable, they do, at the very least, drastically limit the things non-naturalists can consistently say about moral properties, facts, and reasons.
The conceptual spaces approach has recently emerged as a novel account of concepts. Its guiding idea is that concepts can be represented geometrically, by means of metrical spaces. While it is generally recognized that many of our concepts are vague, the question of how to model vagueness in the conceptual spaces approach has not been addressed so far, even though the answer is far from straightforward. The present paper aims to fill this lacuna.
Vagueness is currently the subject of vigorous debate in the philosophy of logic and language. Vague terms -- such as 'tall', 'red', 'bald', and 'tadpole' -- have borderline cases, and they lack well-defined extensions. The phenomenon of vagueness poses a fundamental challenge to classical logic and semantics, which assume that propositions are either true or false and that extensions are determinate. This anthology collects for the first time the most important papers in the field. After a substantial introduction that surveys the field, the essays form four groups, starting with some historically notable pieces. The 1970s saw an explosion of interest in vagueness, and the second group of essays reprints classic papers from this period. The third group of papers represents the best recent work on the logic and semantics of vagueness. The essays in the final group are contributions to the continuing debate about vague objects and vague identity.
The term “vagueness” describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to Zeno's sorites paradox. We will discuss the psychology of vagueness, especially experiments investigating the judgment of borderline cases and contradictions. In the theoretical part, we will propose a probabilistic model that describes the quantitative characteristics of the experimental findings and extends Alxatib and Pelletier's theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing an adequate coverage of the relevant empirical results. In the final part, we will argue that a substantial modification of the analysis put forward by Alxatib and Pelletier and its probabilistic pendant is needed. The proposed modification replaces the standard notion of probabilities by quantum probabilities. The crucial phenomenon of borderline contradictions can then be explained as a quantum interference phenomenon.
Most descriptions of higher-order vagueness in terms of traditional modal logic generate so-called higher-order vagueness paradoxes. The one that doesn't is problematic otherwise. Consequently, the present trend is toward more complex, non-standard theories. However, there is no need for this. In this paper I introduce a theory of higher-order vagueness that is paradox-free and can be expressed in the first-order extension of a normal modal system that is complete with respect to single-domain Kripke-frame semantics. This is the system QS4M+BF+FIN. It corresponds to the class of transitive, reflexive and final frames. With borderlineness defined logically as usual, it then follows that something is borderline precisely when it is higher-order borderline, and that a predicate is vague precisely when it is higher-order vague. Like Williamson's, the theory proposed here has no clear borderline cases in Sorites sequences. I argue that objections that there must be clear borderline cases ensue from the confusion of two notions of borderlineness—one associated with genuine higher-order vagueness, the other employed to sort objects into categories—and that the higher-order vagueness paradoxes result from superimposing the second notion onto the first. Lastly, I address some further potential objections.
This paper is an expanded written version of my reply to Rosanna Keefe’s paper ‘Modelling higher-order vagueness: columns, borderlines and boundaries’ (Keefe 2015), which in turn is a reply to my paper ‘Columnar higher-order vagueness, or Vagueness is higher-order vagueness’ (Bobzien 2015). Both papers were presented at the Joint Session of the Aristotelian Society and the Mind Association in July 2015. At the Joint Session meeting, there was insufficient time to present all of my points in response to Keefe’s paper. In addition, the audio of the session, which is available online, becomes inaudible at the beginning of my reply to Keefe’s comments due to a technical defect. The following is a full version of my remarks.
ABSTRACT: This paper argues that the so-called paradoxes of higher-order vagueness are the result of a confusion between higher-order vagueness and the distribution of the objects of a Sorites series into extensionally non-overlapping non-empty classes.
One of the hardest problems in philosophy, one that has been around for over two thousand years without generating any significant consensus on its solution, involves the concept of vagueness: a word or concept that doesn’t have a perfectly precise meaning. There is an argument that seems to show that the word or concept simply must have a perfectly precise meaning, as violently counterintuitive as that is. Unfortunately, the argument is usually so compressed that it is difficult to see why exactly the problem is so hard to solve. In this essay I attempt to explain just why it is that the problem—the sorites paradox—is so intractable.
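The structure of the sorites can be made concrete with a small mechanical check (my own sketch, not from the essay; the cutoff of 1000 grains is a hypothetical choice for illustration): any sharply bounded "heap" predicate falsifies the tolerance premise "if n grains make a heap, so do n - 1 grains" at exactly one point, and it is the violent counterintuitiveness of such a sharp point that makes the paradox so hard.

```python
# Sorites sketch: give "heap" a sharp, hypothetical cutoff of 1000 grains,
# then search for counterexamples to the tolerance premise
# "if n grains make a heap, then n - 1 grains make a heap".
def is_heap(grains: int) -> bool:
    return grains >= 1000  # arbitrary sharp boundary, for illustration only

failures = [n for n in range(1, 10_001) if is_heap(n) and not is_heap(n - 1)]
print(failures)  # [1000] -- tolerance fails exactly once, at the cutoff
```

Whatever sharp cutoff is chosen, the list of failures is non-empty; that is the classical logician's way of saying that tolerance and a sharp boundary cannot both hold.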
Supervaluationism is a well-known theory of vagueness. Subvaluationism is a less well-known theory of vagueness. But these theories cannot be taken apart, for they are in a relation of duality that can be made precise. This paper provides an introduction to the subvaluationist theory of vagueness in connection with its dual, supervaluationism. A survey of the supervaluationist theory can be found in Keefe's Compass paper (2008); our presentation of the theory in this paper will be brief, so that we can move quickly to the logical issues. This paper is relatively self-contained. A modest background in propositional modal logic is, though not strictly necessary, advisable. The reader may find useful the Compass papers Kracht (2011) and Negri (2011) (though these papers cover issues of more complexity than is demanded to follow this paper).
The goal of this paper is a comprehensive analysis of basic reasoning patterns that are characteristic of vague predicates. The analysis leads to rigorous reconstructions of the phenomena within formal systems. Two basic features are dealt with. One is tolerance: the insensitivity of predicates to small changes in the objects of predication (a one-increment of a walking distance is a walking distance). The other is the existence of borderline cases. The paper shows why these should be treated as different, though related phenomena. Tolerance is formally reconstructed within a proposed framework of contextual logic, leading to a solution of the Sorites paradox. Borderline-vagueness is reconstructed using certain modality operators; the set-up provides an analysis of higher order vagueness and a derivation of scales of degrees for the property in question.
Vague predicates, those that exhibit borderline cases, pose a persistent problem for philosophers and logicians. Although they are ubiquitous in natural language, when used in a logical context, vague predicates lead to contradiction. This paper will address a question that is intimately related to this problem. Given their inherent imprecision, why do vague predicates arise in the first place? I discuss a variation of the signaling game where the state space is treated as contiguous, i.e., endowed with a metric that captures a similarity relation over states. This added structure is manifested in payoffs that reward approximate coordination between sender and receiver as well as perfect coordination. I evolve these games using a variation of Herrnstein reinforcement learning that better reflects the generalizing learning strategies real-world actors use in situations where states of the world are similar. In these simulations, signaling can develop very quickly, and the signals are vague in much the way ordinary language predicates are vague—they each exclusively apply to certain items, but for some transition period both signals apply to varying degrees. Moreover, I show that under certain parameter values, in particular when state spaces are large and time is limited, learning generalization of this sort yields strategies with higher payoffs than standard Herrnstein reinforcement learning. These models may then help explain why the phenomenon of vagueness arises in natural language: the learning strategies that allow actors to quickly and effectively develop signaling conventions in contiguous state spaces make it unavoidable.
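A minimal model of this kind can be sketched in code. This is an illustrative reconstruction under assumed parameters (ten linearly ordered states, two signals, a similarity-graded payoff, and a simple smearing of reinforcement across neighbouring states), not the paper's actual simulation:

```python
import random

N_STATES, N_SIGNALS = 10, 2  # assumed parameters, not taken from the paper

def payoff(state: int, action: int, width: float = 3.0) -> float:
    """Similarity-based payoff: approximate coordination earns partial reward."""
    return max(0.0, 1.0 - abs(state - action) / width)

def draw(weights, rng):
    """Draw an index with probability proportional to its urn weight."""
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def train(rounds: int = 20_000, seed: int = 0, spread: float = 2.0):
    rng = random.Random(seed)
    sender = [[1.0] * N_SIGNALS for _ in range(N_STATES)]    # state -> signal urn
    receiver = [[1.0] * N_STATES for _ in range(N_SIGNALS)]  # signal -> action urn
    for _ in range(rounds):
        state = rng.randrange(N_STATES)
        signal = draw(sender[state], rng)
        action = draw(receiver[signal], rng)
        reward = payoff(state, action)
        if reward > 0:
            # Generalized (smeared) reinforcement: similar states also get
            # credit for the signal, so neighbours converge on shared usage.
            for s in range(N_STATES):
                sender[s][signal] += reward * max(0.0, 1.0 - abs(s - state) / spread)
            receiver[signal][action] += reward
    return sender, receiver

sender, receiver = train()
for s in range(N_STATES):
    print(f"state {s}: P(signal 0) = {sender[s][0] / sum(sender[s]):.2f}")
```

In runs like this, extreme states tend to settle firmly on one signal while intermediate states keep non-negligible weight on both, which is the vague transition region the abstract describes.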
The main goal of Sider’s book, Four-Dimensionalism: An Ontology of Persistence and Time, is to show why his version of four-dimensionalism, the stage-theory, on balance, should be preferred over its main competitors: it is, in his view, the theory which presents the best unified treatment of a wide range of central metaphysical puzzles; the theory which has, on balance, “the most important advantages and the least serious drawbacks” (ibid., p. 140). I argue in this paper that, when we add up all the evidence for and against the stage-theory, a different assessment of the dialectical situation recommends itself. As it turns out, everything depends on the argument from vagueness, the dialectical fulcrum of Sider’s book. If it were not for the argument from vagueness (so I suggest in outline in Section 2 of this paper), the situation would be relatively even-handed between the three-dimensionalist and the four-dimensionalist. But the argument from vagueness (as I show in more detail in Section 3) suffers from a crucial, and arguably fatal, weakness: no independent, non-question-begging justification has been provided for its most controversial premise, the non-vagueness of mereological composition.
The purpose of this paper is to challenge some widespread assumptions about the role of the modal axiom 4 in a theory of vagueness. In the context of vagueness, axiom 4 usually appears as the principle ‘If it is clear (determinate, definite) that A, then it is clear (determinate, definite) that it is clear (determinate, definite) that A’, or, more formally, CA → CCA. We show how in the debate over axiom 4 two different notions of clarity are in play (Williamson-style "luminosity" or self-revealing clarity, and concealable clarity) and what their respective functions are in accounts of higher-order vagueness. On this basis, we argue first that, contrary to common opinion, higher-order vagueness and S4 are perfectly compatible. This is in response to claims like Williamson's that, if vagueness is defined with the help of a clarity operator that obeys axiom 4, higher-order vagueness disappears. Second, we argue that, contrary to common opinion, (i) bivalence-preservers (e.g. epistemicists) can without contradiction condone axiom 4 (by adopting what elsewhere we call columnar higher-order vagueness), and (ii) bivalence-discarders (e.g. open-texture theorists, supervaluationists) can without contradiction reject axiom 4. Third, we rebut a number of arguments that have been produced by opponents of axiom 4, in particular those by Williamson. (The paper is pitched towards graduate students with basic knowledge of modal logic.)
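As modal-logical background (a standard textbook fact, not a result of the paper): axiom 4 is valid on a Kripke frame exactly when the accessibility relation is transitive. A brute-force check over toy frames makes this concrete:

```python
from itertools import product

def axiom4_valid(worlds, R):
    """Check box p -> box box p (i.e. CA -> CCA) at every world,
    under every possible valuation of the atom p."""
    for bits in product([False, True], repeat=len(worlds)):
        p = {w for w, b in zip(worlds, bits) if b}
        box_p = {w for w in worlds if all(v in p for v in R[w])}
        box_box_p = {w for w in worlds if all(v in box_p for v in R[w])}
        if not box_p <= box_box_p:
            return False  # found a countermodel to axiom 4
    return True

worlds = ["a", "b", "c"]
transitive = {"a": {"b", "c"}, "b": {"c"}, "c": set()}    # a->b, b->c, a->c
intransitive = {"a": {"b"}, "b": {"c"}, "c": set()}       # a->b, b->c, but not a->c
print(axiom4_valid(worlds, transitive))    # True
print(axiom4_valid(worlds, intransitive))  # False
```

Read with C as the box, the check shows why adopting or rejecting axiom 4 amounts to a substantive commitment about the structure of clarity.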
We say that a sentence A is a permissive consequence of a set X of premises whenever, if all the premises of X hold up to some standard, then A holds to some weaker standard. In this paper, we focus on a three-valued version of this notion, which we call strict-to-tolerant consequence, and discuss its fruitfulness toward a unified treatment of the paradoxes of vagueness and self-referential truth. For vagueness, st-consequence supports the principle of tolerance; for truth, it supports the requirement of transparency. Permissive consequence is non-transitive; this feature, however, is argued to be an essential component of the understanding of paradoxical reasoning in cases involving vagueness or self-reference.
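The non-transitivity of st-consequence can be illustrated with a toy model (my own sketch; the truth values assigned to a five-step "tall" sorites series are assumed for illustration): relative to the model, each single tolerance step takes a strictly true premise to an at-least-tolerantly-true conclusion, yet chaining the steps fails.

```python
# Toy sorites model: "x_i is tall" takes three-valued truth values
# 1 (strictly true), 0.5 (borderline), 0 (strictly false)
# across five decreasing heights T1..T5.
v = {"T1": 1.0, "T2": 1.0, "T3": 0.5, "T4": 0.5, "T5": 0.0}

def st_entails(premise: str, conclusion: str) -> bool:
    """Strict-to-tolerant consequence, relative to this one model: a strictly
    true (value 1) premise must make the conclusion at least tolerantly
    true (value >= 0.5)."""
    return not (v[premise] == 1.0 and v[conclusion] < 0.5)

steps = [st_entails(f"T{i}", f"T{i + 1}") for i in range(1, 5)]
print(all(steps))              # True: every single tolerance step holds
print(st_entails("T1", "T5"))  # False: the chained inference fails
```

Each local step is fine, but the endpoints come apart, which is exactly the non-transitive behaviour the abstract argues is essential to diagnosing sorites reasoning.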
There is a general form of an argument which I call the ‘argument from vagueness’ which attempts to show that objects persist by perduring, via the claim that vagueness is never ontological in nature and thus that composition is unrestricted. I argue that even if we grant that vagueness is always the result of semantic indeterminacy rather than ontological vagueness, and thus also grant that composition is unrestricted, it does not follow that objects persist by perduring. Unrestricted mereological composition lacks the power to ensure that there exist instantaneous objects that wholly overlap persisting objects at times, and thus lacks the power to ensure that there exists anything that could be called a temporal part. Even if we grant that such instantaneous objects exist, however, I argue that it does not follow that objects perdure. To show this I briefly outline a coherent version of three-dimensionalism that grants just such an assumption. Thus considerations pertaining to the nature of vagueness need not lead us inevitably to accept perdurantism.
John Broome has argued that incomparability and vagueness cannot coexist in a given betterness order. His argument essentially hinges on an assumption he calls the ‘collapsing principle’. In an earlier article I criticized this principle, but Broome has recently expressed doubts about the cogency of my criticism. Moreover, Cristian Constantinescu has defended Broome’s view from my objection. In this paper, I present further arguments against the collapsing principle, and try to show that Constantinescu’s defence of Broome’s position fails.
Paraconsistent approaches have received little attention in the literature on vagueness (at least compared to other proposals). The reason seems to be that many philosophers have found the idea that a contradiction might be true (or that a sentence and its negation might both be true) hard to swallow. Even advocates of paraconsistency about vagueness do not look very convinced when they consider this fact, since they seem to have spent more time arguing that paraconsistent theories are at least as good as their paracomplete counterparts than giving positive reasons to believe in a particular paraconsistent proposal. But it sometimes happens that the weakness of a theory turns out to be its major ally, and this is what (I claim) happens in a particular paraconsistent proposal known as subvaluationism. In order to make room for truth-value gluts, subvaluationism needs to endorse a notion of logical consequence that is, in some sense, weaker than standard notions of consequence. But this weakness allows the subvaluationist theory to accommodate higher-order vagueness in a way that is not available to other theories of vagueness (such as, for example, its paracomplete counterpart, supervaluationism).
Recently a fascinating debate has been rekindled over whether vagueness is metaphysical or linguistic. That is, is vagueness an objective feature of reality or is it merely an artifact of our language? Bertrand Russell's contribution to this debate is considered by many to be decisive. Russell suggested that it is a mistake to conclude that the world is vague simply because the language we use to describe it is vague. He argued that to draw such an inference is to commit "the fallacy of verbalism". I argue that this is only a fallacy if we have no reason to believe that the world is as our language says. Since vagueness is apparently not eliminable from our language—a fact that Russell himself acknowledged—an indispensability argument can be launched for metaphysical vagueness. In this paper I outline such an argument.
In this paper we compare different models of vagueness viewed as a specific form of subjective uncertainty in situations of imperfect discrimination. Our focus is on the logic of the operator “clearly” and on the problem of higher-order vagueness. We first examine the consequences of the notion of intransitivity of indiscriminability for higher-order vagueness, and compare several accounts of vagueness as inexact or imprecise knowledge, namely Williamson’s margin for error semantics, Halpern’s two-dimensional semantics, and the system we call Centered semantics. We then propose a semantics of degrees of clarity, inspired by the signal detection theory model, and outline a view of higher-order vagueness in which the notions of subjective clarity and unclarity are handled asymmetrically at higher orders, namely such that the clarity of clarity is compatible with the unclarity of unclarity.
Philosophers disagree about whether vagueness requires us to admit truth-value gaps, about whether there is a gap between the objects of which a given vague predicate is true and those of which it is false on an appropriately constructed sorites series for the predicate—a series involving small increments of change in a relevant respect between adjacent elements, but a large increment of change in that respect between the endpoints. There appears, however, to be widespread agreement that there is some sense in which vague predicates are gappy, which may be expressed neutrally by saying that on any appropriately constructed sorites series for a given vague predicate there will be a gap between the objects of which the predicate is definitely true and those of which it is definitely false. Taking as primitive the operator ‘it is definitely the case that’, abbreviated as ‘D’, we may stipulate that a predicate F is definitely true (or definitely false) of an object just in case ‘DF(a)’, where a is a name for the object, is true (or false) simpliciter. This yields the following conditional formulation of a ‘gap principle’: (DΦ(x) ∧ D¬Φ(y)) → ¬R(x, y). Here ‘Φ’ is to be replaced with a vague predicate, while ‘R’ is to stand for a sorites relation for that predicate: a relation that can be used to construct a sorites series for the predicate—such as the relation of being just one millimetre shorter than for the predicate ‘is tall’. Disagreements about the sense in which it is correct to say that vague predicates are gappy can then be recast as disagreements about how to understand the definitely operator. One might give it, for example, a pragmatic construal such as ‘it would not be misleading to assert that’; or an epistemic construal such as ‘it is known that’ or ‘it is knowable that’; or a semantic construal such as ‘it is true that’.
The paper presents a new theory of higher-order vagueness. This theory is an improvement on current theories of vagueness in that it (i) describes the kind of borderline cases relevant to the Sorites paradox, (ii) retains the ‘robustness’ of vague predicates, (iii) introduces a notion of higher-order vagueness that is compositional, but (iv) avoids the paradoxes of higher-order vagueness. The theory’s central building-blocks: Borderlinehood is defined as radical unclarity. Unclarity is defined by means of competent, rational, informed speakers (‘CRISPs’) whose competence, etc., is indexed to the scope of the unclarity operator. The unclarity is radical since it eliminates clear cases of unclarity and, that is, clear borderline cases. This radical unclarity leads to a (bivalence-compatible, non-intuitionist) absolute agnosticism about the semantic status of all borderline cases. The corresponding modal system would be a non-normal variation on S4M.
There are three main traditional accounts of vagueness: one takes it as a genuinely metaphysical phenomenon, one takes it as a phenomenon of ignorance, and one takes it as a linguistic or conceptual phenomenon. In this paper I first very briefly present these views, especially the epistemicist and supervaluationist strategies, and briefly point to some well-known problems that the views carry. I then examine a 'statistical epistemicist' account of vagueness that is designed to avoid precisely these problems – it is a view that provides an account of the phenomenon of vagueness as coming from our linguistic practices, while insisting that meaning supervenes on use, and that our use of vague terms does yield sharp and precise meanings, of which we are ignorant, thus allowing bivalence to hold.
According to a popular line of reasoning, diachronic vagueness creates a problem for the endurantist conception of persistence. Some authors have replied that this line of reasoning is inconclusive, since the endurantist can subscribe to a principle of Diachronic Unrestricted Composition (DUC) that is perfectly parallel to the principle required by the perdurantist’s semantic account. I object that the endurantist would do better to avoid DUC. And I argue that even DUC, if accepted, would fail to provide the endurantist with the necessary resources for explaining diachronic vagueness in familiar semantic terms.
The tolerance principle, the idea that vague predicates are insensitive to sufficiently small changes, remains the main bone of contention between theories of vagueness. In this paper I examine three sources behind our ordinary belief in the tolerance principle, to establish whether any of them might give us a good reason to revise classical logic. First, I compare our understanding of tolerance in the case of precise predicates and in the case of vague predicates. While tolerance in the case of precise predicates results from approximation, tolerance in the case of vague predicates appears to originate from two more specific sources: semantic indeterminacy on the one hand, and epistemic indiscriminability on the other. Both give us good and coherent grounds to revise classical logic. Epistemic indiscriminability, it is argued, may be more fundamental than semantic indeterminacy to justify the intuition that vague predicates are tolerant.
It is common among philosophers who take an interest in the phenomenon of vagueness in natural language not merely to acknowledge higher-order vagueness but to take its existence as a basic datum, so that views that lack the resources to account for it, or that put obstacles in its way, are regarded as deficient on that score alone. My main purpose in what follows is to loosen the hold of this deeply misconceived idea. Higher-order vagueness is no basic datum but an illusion, fostered by misunderstandings of the nature of ordinary (if you will, 'first-order') vagueness itself. To see through the illusion is to take a step that is prerequisite for a correct understanding of vagueness, and for any satisfying dissolution of its attendant paradoxes.
ABSTRACT: Stewart Shapiro has recently argued that there is no higher-order vagueness. More specifically, his thesis is: (ST) so-called second-order vagueness in ‘F’ is nothing but first-order vagueness in the phrase ‘competent speaker of English’ or ‘competent user of “F”’. Shapiro bases (ST) on a description of the phenomenon of higher-order vagueness and on two accounts of ‘borderline case’, and provides several arguments in its support. We present the phenomenon (as Shapiro describes it) and the accounts; we then discuss Shapiro’s arguments, arguing that none is compelling. Lastly, we introduce the account of vagueness Shapiro would have obtained had he retained compositionality, and show that it entails true higher-order vagueness.
I argue that for those who follow Evans in finding indeterminacy of de re identity statements problematic, ontic vagueness within a three-dimensionalist metaphysics raises some problems that are not faced by the four-dimensionalist. For the types of strategies used to avoid de re indeterminacy within the context of ontic vagueness at-a-time, that is, spatial vagueness, become problematic within a three-dimensionalist framework when put to use within the context of ontic vagueness across-time, that is, temporal vagueness.
In this paper we propose an approach to vagueness characterised by two features. The first is philosophical: we move along a Kantian path, emphasizing the knowing subject’s conceptual apparatus. The second is formal: to address vagueness, and our philosophical view of it, we propose to use topology and formal topology. We show that the Kantian and the topological features, joined together, allow an atypical but promising way of considering vagueness.
This paper develops a novel problem for representationalism (also known as "intentionalism"), a popular contemporary account of perception. We argue that representationalism is incompatible with supervaluationism, the leading contemporary account of vagueness. The problem generalizes to naive realism and related views, which are also incompatible with supervaluationism.
We derive a probabilistic account of the vagueness and context-sensitivity of scalar adjectives from a Bayesian approach to communication and interpretation. We describe an iterated-reasoning architecture for pragmatic interpretation and illustrate it with a simple scalar implicature example. We then show how to enrich the apparatus to handle pragmatic reasoning about the values of free variables, explore its predictions about the interpretation of scalar adjectives, and show how this model implements Edgington’s (Vagueness: A Reader, 1997) account of the sorites paradox, with variations. The Bayesian approach has a number of explanatory virtues: in particular, it does not require any special-purpose machinery for handling vagueness, and it is integrated with a promising new approach to pragmatics and other areas of cognitive science.
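The iterated-reasoning architecture described in this abstract can be sketched computationally. The following is a minimal, hypothetical reconstruction of an RSA-style model in which a pragmatic listener jointly infers a degree and the value of a free threshold variable for a scalar adjective such as 'tall'; the candidate degrees, flat priors, rationality parameter, and two-utterance set are all illustrative assumptions, not the paper's actual model.

```python
heights = [60, 66, 72, 78]            # candidate degrees (inches); assumed
prior_h = {h: 0.25 for h in heights}  # flat prior over degrees
thetas = list(heights)                # candidate values of the free threshold
prior_t = {t: 0.25 for t in thetas}
ALPHA = 4.0                           # speaker rationality (assumed)

def literal(h, u, theta):
    """Literal listener: conditions the prior on truth, given a threshold."""
    true_at = {x for x in heights if x >= theta} if u == "tall" else set(heights)
    if h not in true_at:
        return 0.0
    return prior_h[h] / sum(prior_h[x] for x in true_at)

def speaker(u, h, theta):
    """Speaker: soft-max choice among utterances by informativity."""
    utts = ["tall", "null"]           # 'null' = say nothing (trivially true)
    scores = {v: literal(h, v, theta) ** ALPHA for v in utts}
    return scores[u] / sum(scores.values())

def pragmatic(u):
    """Pragmatic listener: joint posterior over degree and threshold."""
    joint = {(h, t): prior_h[h] * prior_t[t] * speaker(u, h, t)
             for h in heights for t in thetas}
    z = sum(joint.values())
    return {k: v / z for k, v in joint.items()}
```

Marginalizing the joint posterior over thresholds yields a graded, context-sensitive interpretation of "tall" without any special-purpose machinery for vagueness, which is the explanatory virtue the abstract highlights.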
In this discussion of Crispin Wright's 'paradox of higher-order vagueness', I suggest that the paradox may be resolved by careful attention to the logical principles used in its formulation. In particular, I focus on the rule of inference that allows the inference from A to 'Definitely A', and argue that this rule, though valid, may not be used in subordinate deductions, e.g., in the course of a conditional proof. Wright's paradox uses the rule (or its equivalent) in just this way.
Theories of truth and vagueness are closely connected; in this article, I draw another connection between these areas of research. Gupta and Belnap’s Revision Theory of Truth is converted into an approach to vagueness. I show how revision sequences from a general theory of definitions can be used to understand the nature of vague predicates. The revision sequences show how the meanings of vague predicates are interconnected with each other. The approach is contrasted with the similar supervaluationist approach.
This paper deals with higher-order vagueness in Williamson's 'logic of clarity'. Its aim is to prove that for 'fixed margin models' (W, d, α, [ ]) the notion of higher-order vagueness collapses to second-order vagueness. First, it is shown that fixed margin models can be reformulated in terms of similarity structures (W, ~). The relation ~ is assumed to be reflexive and symmetric, but not necessarily transitive. Then, it is shown that the structures (W, ~) come along with naturally defined maps h and s that define a Galois connection on the power set PW of W. These maps can be used to define two distinct boundary operators bd and BD on W. The main theorem of the paper states that higher-order vagueness with respect to bd collapses to second-order vagueness. This does not hold for BD, the iterations of which behave in quite an erratic way. In contrast, the operator bd defines a variety of tolerance principles that do not fall prey to the sorites paradox and, moreover, do not always satisfy the principles of positive and negative introspection. (shrink)
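To make the abstract's machinery concrete, here is a small, hypothetical reconstruction of the maps h and s induced by a reflexive, symmetric similarity relation ~ on W. The definitions below are my guess at the standard interior- and closure-style maps and may differ from the paper's own; on a symmetric relation they do, however, form a Galois connection on the power set of W, in the sense that s(X) ⊆ Y holds exactly when X ⊆ h(Y).

```python
def s(X, W, sim):
    """Closure-style map: worlds similar to some world in X."""
    return {w for w in W if any(sim(w, v) for v in X)}

def h(X, W, sim):
    """Interior-style map: worlds whose every similar world lies in X."""
    return {w for w in W if all(v in X for v in W if sim(w, v))}

# Example: W = {0,...,4} with w ~ v iff |w - v| <= 1, a relation that is
# reflexive and symmetric but not transitive, as the abstract requires.
W = set(range(5))
sim = lambda a, b: abs(a - b) <= 1
```

On this reading, a boundary operator in the spirit of bd could then be defined from h and s, though the paper's precise definitions of bd and BD are not recoverable from the abstract alone.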
There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher-order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent’s doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.
ABSTRACT: Recently a bold and admirable interpretation of Chrysippus’ position on the Sorites has been presented, suggesting that Chrysippus offered a solution to the Sorites by (i) taking an epistemicist position which (ii) made allowances for higher-order vagueness. In this paper I argue (i) that Chrysippus did not take an epistemicist position but, if any, a non-epistemic one which denies truth-values to some cases in a Sorites series, and (ii) that it is uncertain whether and how he made allowances for higher-order vagueness, but that if he did, this was not grounded in an epistemicist position.
Words, languages, symphonies, fictional characters, games, and recipes are plausibly abstract artifacts—entities that have no spatial location and that are deliberately brought into existence as a result of creative acts. Many accept that composition is unrestricted: for every plurality of material objects, there is a material object that is the sum of those objects. These two views may seem entirely unrelated. I will argue that the most influential argument against restricted composition—the vagueness argument—doubles as an argument that there can be no abstract artifacts. There is no way to resist the vagueness argument against abstract artifacts that does not also undermine the vagueness argument against restricted composition.
From Fine and Kamp in the 70’s—through Osherson and Smith in the 80’s, Williamson, Kamp and Partee in the 90’s and Keefe in the 00’s—up to Sauerland in the present decade, the objection continues to be run that fuzzy logic based theories of vagueness are incompatible with ordinary usage of compound propositions in the presence of borderline cases. These arguments against fuzzy theories have been rebutted several times but evidently not put to rest. I attempt to do so in this paper.
In this paper, I start by showing that sorites paradoxes are inclosure paradoxes. That is, they fit the Inclosure Scheme which characterizes the paradoxes of self-reference. Given that sorites and self-referential paradoxes are of the same kind, they should have the same kind of solution. The rest of the paper investigates what a dialetheic solution to sorites paradoxes is like, its connections with a dialetheic solution to the self-referential paradoxes, and related issues—especially so-called 'higher-order' vagueness.