In J Philos Logic 34:155–192, 2005, Leitgeb provides a theory of truth which is based on a theory of semantic dependence. We argue here that the conceptual thrust of this approach provides us with the best way of dealing with semantic paradoxes in a manner that is acceptable to a classical logician. However, in investigating a problem that was raised at the end of J Philos Logic 34:155–192, 2005, we discover that something is missing from Leitgeb’s original definition. Moreover, we show that once the appropriate repairs have been made, the resultant definition is equivalent to a version of the supervaluation definition suggested in J Philos 72:690–716, 1975 and discussed in detail in J Symb Log 51(3):663–681, 1986. The upshot of this is a philosophical justification for the simple supervaluation approach and fresh insight into its workings.
Among other good things, supervaluation is supposed to allow vague sentences to go without truth values. But Jerry Fodor and Ernest Lepore have recently argued that it cannot allow this - not if it also respects certain conceptual truths. The main point I wish to make here is that they are mistaken. Supervaluation can leave truth-value gaps while respecting the conceptual truths they have in mind.
Michael Kremer defines fixed-point logics of truth based on Saul Kripke’s fixed-point semantics for languages expressing their own truth concepts. Kremer axiomatizes the strong Kleene fixed-point logic of truth and the weak Kleene fixed-point logic of truth, but leaves the axiomatizability question open for the supervaluation fixed-point logic of truth and its variants. We show that the principal supervaluation fixed-point logic of truth, when thought of as a consequence relation, is highly complex: it is not even analytic. We also consider variants, engendered by a stronger notion of ‘fixed point’ and by variant supervaluation schemes. A ‘logic’ is often thought of not as a consequence relation but as a set of sentences – the sentences true on each interpretation. We axiomatize the supervaluation fixed-point logics so conceived.
In this paper, we define some consequence relations based on supervaluation semantics for partial models, and we investigate their properties. For our main consequence relation, we show that natural versions of the following fail: upwards and downwards Löwenheim–Skolem, axiomatizability, and compactness. We also consider an alternative version of supervaluation semantics, and show both axiomatizability and compactness for the resulting consequence relation.
It is widely assumed that the methods and results of science have no place among the data to which our semantics of vague predicates must answer. This despite the fact that it is well known that such prototypical vague predicates as ‘is bald’ play a central role in scientific research (e.g. the research that established Rogaine as a treatment for baldness). I argue here that the assumption is false and costly: in particular, I argue that one cannot accept either supervaluationist semantics, or the criticism of that semantics offered by Fodor and Lepore, without having to abandon accepted, and unexceptionable, scientific methodology.
This article provides an intuitive semantic account of a new logic for comparisons (CL), in which atomic statements are assigned both a classical truth-value and a “how much” value, or extension, in the range [0, 1]. The truth-value of each comparison is determined by the extensions of its component sentences; the truth-value of each atomic statement depends on whether its extension matches a separate standard for its predicate; everything else is computed classically. CL is less radical than Casari’s comparative logics, in that it does not allow for the formation of comparative statements out of truth-functional molecules. It is argued that CL provides a better analysis of predicate vagueness than classical logic, fuzzy logic, or supervaluation theory.
http://dx.doi.org/10.5007/1808-1711.2012v16n2p341 Current supervaluation models of opinion, notably van Fraassen’s (1984; 1989; 1990; 1998; 2005; 2006) use of intervals to characterize vague opinion, capture nuances of ordinary reflection which are overlooked by classic measure theoretic models of subjective probability. However, after briefly explaining van Fraassen’s approach, we present two limitations in his current framework which provide clear empirical reasons for seeking a refinement. Any empirically adequate account of our actual judgments must reckon with the fact that these are typically neither uniform through the range of outcomes we take to be serious possibilities nor abrupt at the edges.
This paper asks which free logic a Fregean should adopt. It examines options within the tradition, including Carnap’s (1956) chosen object theory, Lehmann’s (1994, 2002) strict Fregean free logic, Woodruff’s (1970) strong table for the Boolean operators, and Bencivenga’s (1986, 1991) supervaluational semantics. It argues for a neutral free logic in view of its proximity to natural languages. However, disagreeing with Lehmann, it claims a Fregean should adopt the strong table, based on Frege’s discussion of generality. Supervaluation uses the strong table and aims to give it a semantic justification. However, supervaluation is in turn justified by convention or thought experiments, which Lehmann argues is inadequate. The paper proposes a new justification of supervaluation based on sense and two-dimensional semantics. The resulting model, coined Supervaluational Neutral Free Logic (SNFL), resolves many conflicts between Lehmann and Bencivenga while staying close to Frege’s discussion of non-denotation. It also provides new insights into the relations among truth, logical truth, and supervaluated truth (or supertruth, for short).
Trenton Merricks presents an original argument for the existence of propositions, and defends an account of their nature. He draws a variety of controversial conclusions, for instance about supervaluationism, the nature of possible worlds, truths about non-existent entities, and whether and how logical consequence depends on modal facts.
The logic of singular terms that refer to nothing, such as ‘Santa Claus,’ has been studied extensively under the heading of free logic. The present essay examines expressions whose reference is defective in a different way: they signify more than one entity. The bulk of the effort aims to develop an acceptable formal semantics based upon an intuitive idea introduced informally by Hartry Field and discussed by Joseph Camp; the basic strategy is to use supervaluations. This idea, as it stands, encounters difficulties, but with suitable refinements it can be salvaged. Two other options for a formal semantics of multiply signifying terms are also presented, and I discuss the relative merits of the three semantics briefly. Finally, possible modifications to the standard logical regimentation of the notion of existence are considered.
The Boolean many-valued approach to vagueness is similar to the infinite-valued approach embraced by fuzzy logic in that both seek to solve the problems of vagueness by assigning to the relevant sentences many values between falsity and truth; but while the fuzzy-logic approach postulates linearly ordered values between 0 and 1, the Boolean approach assigns to sentences values in a many-element complete Boolean algebra. On the modal-precisificational approach represented by Kit Fine, if a sentence is indeterminate in truth value in some world, it is taken to be true in one precisified world accessible from that world and false in another. This paper points to a way to unify these two approaches to vagueness by showing that Fine’s version of the modal-precisificational approach can be combined with the Boolean many-valued approach instead of supervaluationism, one of the most popular approaches to vagueness.
My goal is to defend the indeterminist approach to vagueness, according to which a borderline vague utterance is neither true nor false. Indeterminism appears to contradict bivalence and the disquotational schema for truth. I agree that indeterminism compels us to modify each of these principles. Kit Fine has defended indeterminism by claiming that ordinary ambiguous sentences are neither true nor false when one disambiguation is true and the other is false. But even if Fine is right about sentences, his point does not seem to generalize to utterances. What the indeterminist needs -- and what ordinary ambiguity does not provide -- is an ambiguous utterance where what is being said is indeterminate between two different propositions. I will show that such cases exist. These cases imply that the modifications that indeterminism makes to bivalence and the disquotational schema are required independently of indeterminism, in fact independently of vagueness.
One of the few points of agreement to be found in mainstream responses to the logical and semantic problems generated by vagueness is the view that if any modification of classical logic and semantics is required at all, then it will only be such as to admit underdetermined reference and truth-value gaps. Logics of vagueness, including many-valued logics, fuzzy logics, and supervaluation logics, all provide responses in accord with this view. The thought that an adequate response might require the recognition of cases of overdetermination and truth-value gluts has few supporters. This imbalance lacks justification. As it happens, Jaśkowski's paraconsistent discussive logic, a logic which admits truth-value gluts, can be defended by reflecting on similarities between it and the popular supervaluationist analysis of vagueness already in the philosophical literature. A simple dualisation of supervaluation semantics results in a paraconsistent logic of vagueness based on what has been termed subvaluational semantics.
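The gap/glut duality this abstract trades on can be sketched computationally. The following is a minimal illustration under my own naming, not anything from the paper: supertruth quantifies universally over classical precisifications, subtruth existentially, so a single borderline atom comes out gappy on the supervaluation scheme and glutty on its subvaluational dual.

```python
from itertools import product

def valuations(atoms):
    """Yield every classical precisification (total truth assignment)."""
    for vals in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, vals))

def supertrue(formula, atoms):
    """Supervaluational truth: true on every precisification."""
    return all(formula(v) for v in valuations(atoms))

def subtrue(formula, atoms):
    """Subvaluational truth: true on at least one precisification."""
    return any(formula(v) for v in valuations(atoms))

# For a borderline atom p (precisifications disagree on it),
# supervaluation yields a gap: neither p nor not-p is supertrue...
gap = (not supertrue(lambda v: v["p"], ["p"])
       and not supertrue(lambda v: not v["p"], ["p"]))

# ...while the dual subvaluation yields a glut: both are subtrue.
glut = (subtrue(lambda v: v["p"], ["p"])
        and subtrue(lambda v: not v["p"], ["p"]))
```

The dualisation is literal: swapping `all` for `any` turns the gap theory into the glut theory.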
Many philosophical theories make comparisons between objects, events, states of affairs, worlds, or systems, and many such theories deliver plausible verdicts only if some of the elements they compare are ranked as ‘best.’ When the relevant ordering does not have such ‘best’ or ‘tied for best’ elements, the theory wrongly falls silent or gives badly counterintuitive results. This paper develops ordering supervaluationism, a very general technique that allows any such theory to handle these problematic cases. Just as ordinary supervaluation helps us save and generalize ‘uniqueness-assuming’ theories, ordering supervaluationism helps us save and generalize ‘limit-assuming’ theories. With so many otherwise attractive limit-assuming theories, this is a sensible, methodologically conservative approach.
A preliminary goal in what follows is to properly articulate the Observational Sorites Paradox, together with a related paradox which I dub the Observational Paradox. From there, I distinguish the Observational Sorites from a paradox with which it is easily conflated, namely the Phenomenal Sorites Paradox. Next, I outline six treatments of the Observational Sorites, some familiar, others less so. These are: two versions of Subvaluation, two versions of Supervaluation, and two versions of Epistemicism. The aim is not to provide a comprehensive evaluation of the respective virtues and vices of these treatments, but rather to uncover what they have in common. The main goal in what follows is to exploit such commonality to develop a completely different kind of solution to the Observational Sorites, a solution which deploys only minimal theoretical resources. The result is what may be dubbed Minimalism. As we shall see, minimal treatments of paradox can prove to be just as effective as any non-minimal treatment, but without many of the untoward side-effects.
Supervaluational accounts of vagueness have come under assault from Timothy Williamson for failing to provide either a sufficiently classical logic or a disquotational notion of truth, and from Crispin Wright and others for incorporating a notion of higher-order vagueness, via the determinacy operator, which leads to contradiction when combined with intuitively appealing ‘gap principles’. We argue that these criticisms of supervaluation theory depend on giving supertruth an unnecessarily central role in that theory as the sole notion of truth, rather than as one mode of truth. Allowing for the co-existence of supertruth and local truth, we define a notion of local entailment in supervaluation theory, and show that the resulting logic is fully classical and allows for the truth of the gap principles. Finally, we argue that both supertruth and local truth are disquotational, when disquotational principles are properly understood.
We consider various concepts associated with the revision theory of truth of Gupta and Belnap. We categorize the notions definable using their theory of circular definitions as those notions universally definable over the next stable set. We give a simplified account of varied revision sequences, as a generalised algorithmic theory of truth. This enables something of a unification with the Kripkean theory of truth using supervaluation schemes.
We present a theory VF of partial truth over Peano arithmetic and we prove that VF and ID 1 have the same arithmetical content. The semantics of VF is inspired by van Fraassen's notion of supervaluation.
Since its first appearance in 1966, the notion of a supervaluation has been regarded by many as a powerful tool for dealing with semantic gaps. Only recently, however, applications to semantic gluts have also been considered. In previous work I proposed a general framework exploiting the intrinsic gap/glut duality. Here I also examine an alternative account where gaps and gluts are treated on a par: although they reflect opposite situations, the semantic upshot is the same in both cases--the value of some expressions is not uniquely defined. Other strategies for generalizing supervaluations are considered and some comparative facts are discussed.
Donkey sentences have existential and universal readings, but they are not often perceived as ambiguous. We extend the pragmatic theory of nonmaximality in plural definites by Križ (2016) to explain how context disambiguates donkey sentences. We propose that the denotations of such sentences produce truth-value gaps — in certain scenarios the sentences are neither true nor false — and demonstrate that Križ’s pragmatic theory fills these gaps to generate the standard judgments of the literature. Building on Muskens’s (1996) Compositional Discourse Representation Theory and on ideas from supervaluation semantics, the semantic analysis defines a general schema for quantification that delivers the required truth-value gaps. Given the independently motivated pragmatic theory of Križ 2016, we argue that mixed readings of donkey sentences require neither plural information states, contra Brasoveanu 2008, 2010, nor error states, contra Champollion 2016, nor singular donkey pronouns with plural referents, contra Krifka 1996, Yoon 1996. We also show that the pragmatic account improves over alternatives like Kanazawa 1994 that attribute the readings of donkey sentences to the monotonicity properties of the embedding quantifier.
Since Fodor 1970, negation has worn a Homogeneity Condition, to the effect that homogeneous predicates denote homogeneously—all or nothing—to characterize the meaning, when uttered out of the blue, of pairs like 'The mirrors are smooth' / 'The mirrors are not smooth', in contrast to pairs like 'The mirrors circle the telescope's reflector' / 'The mirrors do not circle the telescope's reflector'. It has been a problem for philosophical logic and for the semantics of natural language that the former pair appears to defy the Principle of Excluded Middle while the latter does not. An impoverished logical form has been the occasion to embellish all else—Boolean algebra, lexical presuppositions, the Strongest Meaning Hypothesis, trivalence, supervaluation, double strengthening, etc.—enriching the semantics and pragmatics with what remains a special theory of negation, which may be dismissed when the logical syntax and semantics of negation reflects that negated sentences are also tensed sentences.
We give a survey on truth theories for applicative theories. It comprises Frege structures, universes for Frege structures, and a theory of supervaluation. We present the proof-theoretic results for these theories and show their syntactical expressive power. In particular, we present as a novelty a syntactical interpretation of ID1 in an applicative truth theory based on supervaluation.
In this paper it is argued that Herzberger's general theory of presupposition may be successfully applied to category mistakes. The study offers an alternative to Thomason's supervaluation treatment of sortal presupposition and an indirect measure of the relative merits of the two-dimensional theory relative to supervaluations. Bivalent, three-valued matrix, and supervaluation accounts are compared to the two-dimensional theory according to three criteria: (1) abstraction from linguistic behavior, (2) conformity of technical to preanalytic distinctions, and (3) ability to capture classical logic. A matrix-like characterization of Thomason's theory is reported.
The first section (§1) of this essay defends reliance on truth values against those who, on nominalistic grounds, would uniformly substitute a truth predicate. I rehearse some practical, Carnapian advantages of working with truth values in logic. In the second section (§2), after introducing the key idea of auxiliary parameters (§2.1), I look at several cases in which logics involve, as part of their semantics, an extra auxiliary parameter to which truth is relativized, a parameter that caters to special kinds of sentences. In many cases, this facility is said to produce truth values for sentences that on the face of it seem neither true nor false. Often enough, in this situation appeal is made to the method of supervaluations, which operate by “quantifying out” auxiliary parameters, and thereby produce something like a truth value. Logics of this kind exhibit striking differences. I first consider the role that Tarski gives to supervaluation in first order logic (§2.2), and then, after an interlude that asks whether neither-true-nor-false is itself a truth value (§2.3), I consider sentences with non-denoting terms (§2.4), vague sentences (§2.5), ambiguous sentences (§2.6), paradoxical sentences (§2.7), and future-tensed sentences in indeterministic tense logic (§2.8). I conclude my survey with a look at alethic modal logic considered as a cousin (§2.9), and finish with a few sentences of “advice to supervaluationists” (§2.10), advice that is largely negative. The case for supervaluations as a road to truth is strong only when the auxiliary parameter that is “quantified out” is in fact irrelevant to the sentences of interest—as in Tarski’s definition of truth for classical logic. In all other cases, the best policy when reporting the results of supervaluation is to use only explicit phrases such as “settled true” or “determinately true,” never dropping the qualification.
This paper is a survey of how economists and philosophers approach the issue of comparisons. More precisely, it is about what formal representation is appropriate whenever our ability to compare things breaks down. We restrict our attention to failures that arise with ordinal comparisons. We consider a number of formal approaches to this problem, including one based on the idea of parity. We also consider the claim that the failure to compare things is a consequence of vagueness. We contrast two theories of vagueness: fuzzy set theory and supervaluation theory. Some applications of these theories are described.
The paper consists of two parts. The first part begins with the problem of whether the original three-valued calculus, invented by J. Łukasiewicz, really conforms to his philosophical and semantic intuitions. I claim that one of the basic semantic assumptions underlying Łukasiewicz's three-valued logic should be that if under any possible circumstances a sentence of the form "X will be the case at time t" is true (resp. false) at time t, then this sentence must be already true (resp. false) at present. However, it is easy to see that this principle is violated in Łukasiewicz's original calculus (as the cases of the law of excluded middle and the law of contradiction show). Nevertheless it is possible to construct (either with the help of the notion of "supervaluation", or purely algebraically) a different three-valued, semi-classical sentential calculus, which would properly incorporate Łukasiewicz's initial intuitions. Algebraically, this calculus has the ordinary Boolean structure, and therefore it retains all classically valid formulas. Yet because possible valuations are no longer represented by ultrafilters, but by filters (not necessarily maximal), the new calculus displays certain non-classical metalogical features (like, for example, nonextensionality and the lack of the metalogical rule enabling one to derive "p is true or q is true" from "'p ∨ q' is true"). The second part analyses whether the proposed calculus could be useful in formalizing inferences in situations when, for some reason (epistemological or ontological), our knowledge of certain facts is subject to limitation. Special attention is paid to the possibility of employing this calculus in the case of quantum mechanics. I compare it with standard non-Boolean quantum logic (in the Jauch-Piron approach), and show that certain shortcomings of the latter can be avoided in the former.
For example, I will argue that in order to properly account for quantum features of microphysics, we do not need to drop the law of distributivity. Also the idea of "reading off" the logical structure of propositions from the structure of Hilbert space leads to some conceptual troubles, which I am going to point out. The thesis of the paper is that all we need to speak about quantum reality can be acquired by dropping the principle of bivalence and extensionality, while accepting all classically valid formulas.
In this paper I criticize a version of supervaluation semantics called "Region-Valuation" semantics, developed by Pablo Cobreros. I argue that all supervaluationists, regionalists in particular, and truth-value gap theorists of vagueness more generally, are committed to the validity of D-intro, the principle that every sentence entails its definitization (the truth of "Paul is tall" guarantees the truth of "Paul is definitely tall"). The principle embroils one in a paradox that is distinct from, but related to, the sorites paradox. I call it the "gap-principles paradox".
I consider two related objections to the claim that the law of excluded middle does not imply bivalence. One objection claims that the truth predicate captured by supervaluation semantics is not properly motivated. The second objection says that even if it is, LEM still implies bivalence. I show that LEM does not imply bivalence in a supervaluational language. I also argue that considering supertruth as truth can be reasonably motivated.
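The central claim of this abstract, that the law of excluded middle can hold while bivalence fails, admits a small computational illustration (the setup and names below are my own sketch, not the paper's): a disjunction can be true on every precisification even when neither disjunct is true on all of them.

```python
from itertools import product

def supervaluate(formula, atoms):
    """Classify a propositional formula as 'supertrue', 'superfalse',
    or 'gap' by evaluating it on every classical precisification of
    its atoms. `formula` maps a valuation dict to a bool."""
    verdicts = {formula(dict(zip(atoms, vals)))
                for vals in product([True, False], repeat=len(atoms))}
    if verdicts == {True}:
        return "supertrue"
    if verdicts == {False}:
        return "superfalse"
    return "gap"

# 'p' is a borderline sentence: precisifications disagree about it.
lem = supervaluate(lambda v: v["p"] or not v["p"], ["p"])  # LEM is valid
p_alone = supervaluate(lambda v: v["p"], ["p"])            # yet p itself is gappy
```

So `p ∨ ¬p` is supertrue while `p` is neither supertrue nor superfalse: excluded middle without bivalence.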
This paper offers a unification and systematization of the grounding approaches to truth, denotation, classes and abstraction. Its main innovation is a method for “kleenifying” bivalent semantics so as to ensure that the trivalent semantics used for various linguistic elements are perfectly analogous to the semantics used by Kripke, rather than relying on intuition to achieve similarity. The focus is on generalizing strong Kleene semantics, but one section is devoted to supervaluation, and the unification method also extends to weak Kleene semantics.
Ted Sider’s Proportionality of Justice condition requires that any two moral agents instantiating nearly the same moral state be treated in nearly the same way. I provide a countermodel in supervaluation semantics to the proportionality of justice condition. It is possible that moral agents S and S' are in nearly the same moral state, S' is beyond all redemption, and S is not. It is consistent with perfect justice, then, that moral agents that are not beyond redemption go determinately to heaven and moral agents that are beyond all redemption go determinately to hell. I conclude that moral agents that are in nearly the same moral state may be treated in very unequal ways.
This is a long paper with a long title, but its moral is succinct. There are supposed to be two, closely related, philosophical problems about sentences with truth value gaps: If a sentence can't be semantically evaluated, how can it mean anything at all? And how can classical logic be preserved for a language which contains such sentences? We are neutral on whether either of these supposed problems is real. But we claim that, if either is, supervaluation won't solve it.
Circular definitions have primarily been studied in revision theory in the classical scheme. I present systems of circular definitions in the Strong Kleene and supervaluation schemes and provide complete proof systems for them. One class of definitions, the intrinsic definitions, naturally arises in both schemes. I survey some of the features of this class of definitions.
This paper is an attempt to show that the subvaluation theory is not a good theory of vagueness. It begins with a short review of supervaluation and subvaluation theories and proceeds to evaluate the subvaluation theory. Subvaluationism shares all the main shortcomings of supervaluationism. Moreover, the solution to the sorites paradox proposed by subvaluationists is not satisfactory. There is another solution which subvaluationists could avail themselves of, but it destroys the whole motivation for using a paraconsistent logic and is not different from the one offered by supervaluationism.
As is well known, Jan Łukasiewicz invented his three-valued logic as a result of philosophical considerations concerning the problem of determinism and the status of future contingent sentences. In the article I critically analyse the thesis that the sentential calculus introduced by Łukasiewicz himself actually fulfills his philosophical assumptions. I point out that there are some counterintuitive features of Łukasiewicz's three-valued logic. Firstly, there is no clear explanation for adopting specific truth-tables for logical connectives such as conjunction, disjunction and, first of all, implication. Secondly, it is by no means clear why certain classical logical principles should be invalid for future contingents. And thirdly, I show that within Łukasiewicz's logic it is possible to construct a "paradoxical" sentence, namely a conditional which changes its logical value in time from truth to falsity. This fact obviously contradicts Łukasiewicz's philosophical reading of his three truth values, according to which true sentences are already positively determined, false sentences are negatively determined, and possible sentences are neither positively nor negatively determined. The above-mentioned facts justify, in my opinion, the thesis that Łukasiewicz's three-valued logic does not satisfy his philosophical intuitions. For this purpose a sentential calculus based on so-called supervaluation seems more appropriate. It is a three-valued, non-extensional calculus which nevertheless preserves all tautologies of classical logic. At the end of the article I consider the possibility of introducing modal operators to this calculus.
Peter Unger's puzzle, the problem of the many, is an argument for the conclusion that we are grossly mistaken about what kinds of objects are in our immediate surroundings. But it is not clear what we should make of Unger's argument. There is an epistemic view which says that the argument shows that we don't know which objects are the referents of singular terms in our language. There is a linguistic view which says that Unger's puzzle shows that ordinary singular terms and count nouns are vague. Finally, there is an ontological view which says that the puzzle shows that there are vague objects. ;The epistemic view offers the simplest solution to the problem of the many, but runs foul of a different problem, the problem of vague reference. The problem of vague reference is that, given the presuppositions of the epistemic view, there are too many too-similar objects that might be the referent of a name such as 'Kilimanjaro' for it to be plausible that the name has a determinate reference. The linguistic view, spelled out in terms of semantic indecision and supervaluation, offers the same solution to the problem of the many and to the problem of vague reference. But it leaves no room for de re beliefs about ordinary material objects. The ontological view offers a solution to the problem of the many that avoids the problem of vague reference and the problem of de re beliefs. For these reasons it is preferable to the other two. ;However, ontological vagueness has met strong objections. It has been argued that it is a fallacy of verbalism, that it is inconsistent, and that once formulated in a consistent way it is not distinguishable from the linguistic view. These objections can be met, but not without cost. To avoid the charge of being inconsistent, friends of the ontological view have to give up the law of excluded middle. ;A positive account of vague parthood has two parts. First, parthood is not primitive but dependent on other primitive facts.
The most important of the primitive facts are about what kinds objects belong to and how objects are causally related. Second, sometimes the primitive facts fail to determine of two objects whether one is part of the other. Given a notion of vague parthood, a notion of vague object can be defined roughly in the following way: an object O is vague if there is an object a such that it is indeterminate whether a is part of O.
This dissertation explores several accounts of the intuitions speakers have concerning the truth values of utterances of sentences containing vague nouns and adjectives. While some semanticists have attempted to account for these intuitions with multi-valued logics and supervaluation theories of truth, I focus on how utterances of vague sentences affect hearers' beliefs. ;Following a critique of the major semantical accounts of vagueness, I propose a formal theory of how beliefs are revised following utterances of sentences of the form X is A, X is A and B, and X is A and not A, where A and B are vague scalar adjectives. Formally, a hearer's beliefs are represented as a set of weighted sentences, and the information conveyed by a speaker's utterance is represented as a set of weighted conditionals. When a speaker utters a sentence, a function on these sets yields the hearer's revised beliefs. I derive from this theory a criterion for proper assertability: a sentence is properly assertable in a given context if the maximum information loss that could obtain between competent discourse participants is less than some threshold. I argue that this criterion often predicts the truth-value judgements competent speakers make which violate the basic rules of logic. I extend these theories to utterances of sentences containing vague non-scalar nouns. ;In the second half of the dissertation, I propose two semantic accounts of vagueness. One incorporates the assertability criterion into its definition of truth. The other is independent of it. The former accounts for a large set of intuitions concerning the truth values of utterances of vague sentences. The latter accounts for only a subset of those intuitions, leaving the rest to be explained independently by the theory of proper assertability.
;I conclude with brief discussions of the Sorites Paradox and the experimental data obtained by Tversky and Kahneman which purport to demonstrate people's poor intuitions concerning the probabilities of conjunctions. (shrink)
The dissertation is a defense of realism about propositions. According to the propositionalist, there is a realm of entities that simultaneously serve as inter-subjectively shareable "objects" or "contents" of assertion and belief, as units of information more generally, as fundamental bearers of truth-values, and as entities capable of having certain modal, logical, and epistemological properties. In chapter one, I flesh out a traditional concept of proposition, and I sketch a general argument in favor of propositionalism. In chapter two, I argue that we cannot do away with propositions by employing other entities (e.g., sentences, utterances, facts) that we may have already admitted into our ontology. And I argue that we cannot explain away the evidence that supports propositionalism by appealing to some "unloaded" notion of existence. In chapter three, I go part way towards dispelling the worry that various types of conceptual messiness (indeterminacy, holism, and vagueness) pose a problem for propositionalism. In chapter four, I take up my central concern: the problem of identity conditions. I argue that the correct theory of identity conditions for propositions may be underdetermined by relevant philosophical considerations. In chapter five, I argue that this "underdetermination hypothesis" does not undermine propositionalism, but that it does imply both that propositions are sui generis entities and that they are "entities without identity". They are sui generis entities in the sense that they cannot be reduced to any entities not already known to be propositions. And propositions are entities without identity in the sense that certain identity statements linking terms for propositions have indeterminate truth-value. This is so because, as the underdetermination hypothesis shows, our concept of proposition is "individuatively vague": when employing the concept we variously slide between several distinct ways of individuating propositions. The indeterminacy of the identity statements is nicely explained by applying a supervaluation semantics over these different schemes of individuation. In chapter six, I briefly discuss five lingering questions about the detailed nature of propositions.
This dissertation is concerned with the problem of giving a correct account of the semantics of vague predicates such as '...is tall', '...is bald' and '...is near...'. In Chapter 1 I present a definition of vagueness that aims to capture, in a useful form, all our fundamental intuitions about the vagueness of predicates such as those mentioned above; such a definition is lacking in the literature. I also present an abstract characterisation of the Sorites paradox: one that is independent of the particular forms in which the paradox may be presented, and that brings to light the essence of the paradox. In Chapter 2 I examine existing theories of vagueness. In light of the definition of vagueness defended in Chapter 1, I argue that we need a semantics for vague language that countenances degrees of truth. I distinguish two sorts of degree theory. One sort, which includes the degree form of supervaluation semantics, sees vagueness as an essentially semantic matter. The other sort, which includes accounts based on fuzzy set theory, accounts for vagueness in language in terms of vagueness in the world. I argue that we need a degree theory of the latter, worldly sort. In Chapter 3 I examine the fuzzy theory in detail. I show that many of the objections to the fuzzy view that have been raised in the literature do not carry weight. In particular, I defend the coherence of the idea of degrees of truth. However, I isolate a problem for the fuzzy view, the problem of higher-order vagueness, that is serious enough to render the view unacceptable. In Chapter 4 I present a new theory of vagueness: one that is intended to share the advantages of the fuzzy view while avoiding its disadvantages. In particular, this theory accommodates the phenomenon of higher-order vagueness. The theory involves degrees of truth, but they are not the same as the degrees of truth involved in the fuzzy theory. Although the theory involves a non-classical semantics for vague language, this semantics validates classical logic.
Supervaluation is a method originally invented to deal with reference failure. In his 1975 paper, K. Fine suggested that it might be applied to the analysis of the phenomenon of vagueness as well. This paper tries to assess the pros and cons of the supervaluation theory of vagueness. Supervaluation provides us with the means for analysing vagueness without eliminating it from the language, and it allows us to solve the main paradox connected with vagueness, i.e. the sorites paradox. The preservation of classical logic was thought to be one of its main virtues. However, the solution to the sorites that supervaluationism proposes is a very counterintuitive one. Moreover, it seems that the theory does not preserve classical logic after all. Besides, supervaluation is not able to handle higher-order vagueness. Nevertheless it remains one of the most attractive semantic theories of vagueness available. In connection with the objections raised against supervaluationism, a problem arises concerning the interpretation of the meaning of supervaluationism's key notion, namely the notion of supertruth. The paper offers one such interpretation.