Addressing such questions is a central challenge in explicating the cognitive role of indeterminacy. But there is little consensus in the literature about even such mundane questions as: what attitude to p is appropriate, when one knows that p is indeterminate? This paper explores two answers, both built on a 'supervaluational' treatment of indeterminacy. The first is drawn out from David Lewis's discussion of Parfit on what matters in survival, and is a view where the indeterminacy of the identity relation between Alpha and Omega scales the concern Alpha should feel. The second is developed on the model of imprecise credence treatments of indeterminacy, and generates some interesting and surprisingly successful predictions about the forced march sorites.
Some things are left open by a work of fiction. What colour were the hero’s eyes? How many hairs are on her head? Did the hero get shot in the final scene, or did the jailor complete his journey to redemption and shoot into the air? Are the ghosts that appear real, or a delusion? Where fictions are open or incomplete in this way, we can ask what attitudes it’s appropriate (or permissible) to take to the propositions in question, in engaging with the fiction. In Mimesis as Make-Believe (henceforth, MMB), Walton argues that just as truth norms belief, truth-in-fiction norms imagination. Granting that what is true-in-the-fiction should be imagined, and what is false-in-the-fiction is not to be imagined, there remains the question of what to say within the Waltonian framework about things that are neither true- nor false-in-the-fiction---the loci of incompleteness.
I formulate a counterfactual version of the notorious ‘Ramsey Test’. Even in a weak form, this makes counterfactuals subject to the very argument that Lewis used to persuade the majority of the philosophical community that indicative conditionals were in hot water. I outline two reactions: to indicativize the debate on counterfactuals; or to counterfactualize the debate on indicatives.
Chancy counterfactuals are a headache. Dylan Dodd (2009) presents an interesting argument against a certain general strategy for accounting for them, instances of which are found in the appendices to Lewis (1979) and in Williams (2008). I will argue (i) that Dodd’s argument understates the counterintuitiveness of the conclusions he can reach; (ii) that the counterintuitiveness can be thought of as an instance of more general oddities arising when we treat vagueness and indeterminacy in a classical setting; and (iii) that the underlying source of discontent which animates Dodd’s complaints is to be found in a certain general constraint one might impose on conditionals—what I’ll call the counterfactual Ramsey bound. Unfortunately, the counterfactual Ramsey bound is just as problematic as its famous indicative cousin. The moral is that there’s no comfortable resting place in this area; for violations of the counterfactual Ramsey bound are going to lead to prima facie surprising results.
Byrne & Hájek (1997) argue that Lewis’s (1988; 1996) objections to identifying desire with belief do not go through if our notion of desire is ‘causalized’ (characterized by causal, rather than evidential, decision theory). I argue that versions of the argument go through on certain assumptions about the formulation of decision theory. There is one version of causal decision theory where the original arguments cannot be formulated—the ‘imaging’ formulation that Joyce (1999) advocates. But I argue this formulation is independently objectionable. If we want to maintain the desire as belief thesis, there’s no shortcut through causalization.
Jeff Paris (2001) proves a generalized Dutch Book theorem. If a belief state is not a generalized probability (a kind of probability appropriate for generalized distributions of truth-values) then one faces ‘sure loss’ books of bets. In Williams (manuscript) I showed that Joyce’s (1998) accuracy-domination theorem applies to the same set of generalized probabilities. What is the relationship between these two results? This note shows that (when ‘accuracy’ is treated via the Brier Score) both results are easy corollaries of the core result that Paris appeals to in proving his Dutch Book theorem (Minkowski’s separating hyperplane theorem). We see that every point of accuracy-domination defines a Dutch Book, but we only have a partial converse.
There are advantages to thrift over honest toil. If we can make do without numbers we avoid challenging questions over the metaphysics and epistemology of such entities; and we have a good idea, I think, of what a nominalistic metaphysics should look like. But minimizing ontology brings its own problems; for it seems to lead to error theory—saying that large swathes of common sense and best science are false. Should recherche philosophical arguments really convince us to give all this up? Such Moorean considerations are explicitly part of the motivation for the recent resurgence of structured metaphysics, which allows a minimal (perhaps nominalistic) fundamental ontology, while avoiding error theory by adopting a permissive stance towards ontology that can be argued to be grounded in the fundamental. This paper evaluates the Moorean arguments, identifying key epistemological assumptions. On the assumption that Moorean arguments can be used to rule out error theory, I examine deflationary ‘representationalist’ rivals to the structured metaphysics reaction. Quinean paraphrase and fictionalist claims about syntax and semantics are considered and criticized. In the final section, a ‘direct’ deflationary strategy is outlined and the theoretical obligations that it faces are articulated. The position advocated may have us talking a lot like a friend of structured metaphysics—but with a very different conception of what we’re up to.
Revising semantics and logic has consequences for the theory of mind. Standard formal treatments of rational belief and desire make classical assumptions. If we are to challenge those presuppositions, we should indicate what kind of theory is going to take their place. Consider probability theory interpreted as an account of ideal partial belief. If some propositions are neither true nor false, or are half true, or whatever—then it’s far from clear that our degrees of belief in a proposition and its negation should sum to 1, as classical probability theory requires. There are extant proposals in the literature for generalizing (categorical) probability theory to a non-classical setting, and we will use these below. But subjective probabilities themselves stand in functional relations to other mental states, and we need to trace the knock-on consequences of revisionism for this interrelationship (arguably, degrees of belief only count as kinds of belief in virtue of standing in these functional relationships).
When should we believe an indicative conditional, and how much confidence in it should we have? Here’s one proposal: one supposes the antecedent to be actual, and sees under that supposition what credence attaches to the consequent. Thus we suppose that Oswald did not shoot Kennedy; and note that under this assumption, Kennedy was assassinated by someone other than Oswald. Thus we are highly confident in the indicative: if Oswald did not kill Kennedy, someone else did.
In some sense, survival seems to be an intrinsic matter. Whether or not you survive some event seems to depend on what goes on with you yourself—what happens in the environment shouldn’t make a difference. Likewise, being a person at a time seems intrinsic. The principle that survival seems intrinsic is one factor which makes personal fission puzzles so awkward. Fission scenarios present cases where if survival is an intrinsic matter, it appears that an individual could survive twice over. But it’s well known that standard notions of “intrinsicality” won’t do to articulate the sense in which survival is intrinsic, since ‘personhood’ appears to be a maximal property. We formulate a sense in which survival and personhood (and perhaps other maximal properties) may be almost intrinsic—a sense that would suffice, for example, to ground fission arguments. It turns out that this notion of almost-intrinsicality allows us to formulate a new version of the problem of the many.
I formulate a counterfactual version of the notorious 'Ramsey Test'. Whereas the Ramsey Test for indicative conditionals links credence in indicatives to conditional credences, the counterfactual version links credence in counterfactuals to expected conditional chance. I outline two forms: a Ramsey Identity on which the probability of the conditional should be identical to the corresponding conditional probability/expectation of chance; and a Ramsey Bound on which credence in the conditional should never exceed the latter. Even in the weaker, bound, form, the counterfactual Ramsey Test makes counterfactuals subject to the very argument that Lewis used to argue against the indicative version of the Ramsey Test. I compare the assumptions needed to run each, pointing to assumptions about the time-evolution of chances that can replace the appeal to Bayesian assumptions about credence update in motivating the assumptions of the argument. I finish by outlining two reactions to the discussion: to indicativize the debate on counterfactuals; or to counterfactualize the debate on indicatives.
Supervaluationism is often described as the most popular semantic treatment of indeterminacy. There, all classically valid sequents are degree-logic valid. Strikingly, metarules such as cut and conjunction introduction fail.
Lewis (1973) gave a short argument against conditional excluded middle, based on his treatment of ‘might’ counterfactuals. Bennett (2003), with much of the recent literature, gives an alternative take on ‘might’ counterfactuals. But Bennett claims the might-argument against CEM still goes through. This turns on a specific claim I call Bennett’s Hypothesis. I argue that independently of issues to do with the proper analysis of might-counterfactuals, Bennett’s Hypothesis is inconsistent with CEM. But Bennett’s Hypothesis is independently objectionable, so we should resolve this tension by dropping the Hypothesis, not by dropping CEM.
I outline and motivate a way of implementing a closest world theory of indicatives, appealing to Stalnaker’s framework of open conversational possibilities. Stalnakerian conversational dynamics helps us resolve two outstanding puzzles for such a theory of indicative conditionals. The first puzzle—concerning so-called ‘reverse Sobel sequences’—can be resolved by conversation dynamics in a theory-neutral way: the explanation works as much for Lewisian counterfactuals as for the account of indicatives developed here. Resolving the second puzzle, by contrast, relies on the interplay between the particular theory of indicative conditionals developed here and Stalnakerian dynamics. The upshot is an attractive resolution of the so-called “Gibbard phenomenon” for indicative conditionals.
John Hawthorne in a recent paper takes issue with Lewisian accounts of counterfactuals, when relevant laws of nature are chancy. I respond to his arguments on behalf of the Lewisian, and conclude that while some can be rebutted, the case against the original Lewisian account is strong. I develop a neo-Lewisian account of what makes for closeness of worlds. I argue that my revised version avoids Hawthorne's challenges. I argue that this is closer to the spirit of Lewis's first (non-chancy) proposal than is Lewis's own suggested modification.
Might it be that the world itself, independently of what we know about it or how we represent it, is metaphysically indeterminate? This article tackles in turn a series of questions: In what sorts of cases might we posit metaphysical indeterminacy? What is it for a given case of indefiniteness to be 'metaphysical'? How does the phenomenon relate to 'ontic vagueness', the existence of 'vague objects', 'de re indeterminacy' and the like? How might the logic work? Are there reasons for postulating this distinctive sort of indefiniteness? Conversely, are there reasons for denying that there is indefiniteness of this sort?
How are permutation arguments for the inscrutability of reference to be formulated in the context of a Davidsonian truth-theoretic semantics? Davidson (1979) takes these arguments to establish that there are no grounds for favouring a reference scheme that assigns London to “Londres”, rather than one that assigns Sydney to that name. We shall see, however, that it is far from clear whether permutation arguments work when set out in the context of the kind of truth-theoretic semantics which Davidson favours. The principle required to make the argument work allows us to resurrect Foster problems against the Davidsonian position. The Foster problems and the permutation inscrutability problems stand or fall together: they are one puzzle, not two.
Inscrutability arguments threaten to reduce interpretationist metasemantic theories to absurdity. Can we find some way to block the arguments? A highly influential proposal in this regard is David Lewis’ ‘eligibility’ response: some theories are better than others, not because they fit the data better, but because they are framed in terms of more natural properties. The purposes of this paper are (1) to outline the nature of the eligibility proposal, making the case that it is not ad hoc, but instead flows naturally from three independently motivated elements; and (2) to show that severe limitations afflict the proposal. In conclusion, I pick out the element of the eligibility response that is responsible for the limitations: future work in this area should therefore concentrate on amending this aspect of the overall theory.
Some argue that theories of universals should incorporate structural universals, in order to allow for the metaphysical possibility of worlds of 'infinite descending complexity' ('onion worlds'). I argue that the possibility of such worlds does not establish the need for structural universals. So long as we admit the metaphysical possibility of emergent universals, there is an attractive alternative description of such cases.
If one believes vagueness to be an exclusively representational phenomenon, one faces the problem of the many: in the vicinity of Kilimanjaro, there are many 'mountain-candidates', all, apparently, with more or less equal claim to be mountains. David Lewis has defended a radical claim: that all these billions of mountain-candidates are mountains. I argue that the supervaluationist about vagueness should adopt Lewis's proposal, on pain of losing their best explanation of the seductiveness of the sorites paradox.