People with the kind of preferences that give rise to the St. Petersburg paradox are problematic---but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable Independence*. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
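For orientation, here is the classic gamble behind the paradox, in its standard textbook form (not this paper's formalism): a fair coin is flipped until it lands heads, and landing heads on the $n$th flip pays $2^n$.

```latex
% Expected value of the St. Petersburg gamble: the series diverges,
% so no real-valued, outcome-level utility can equal it.
\mathbb{E}[X]
  \;=\; \sum_{n=1}^{\infty} \underbrace{2^{-n}}_{\Pr(\text{heads on flip } n)} \cdot \underbrace{2^{\,n}}_{\text{payoff}}
  \;=\; \sum_{n=1}^{\infty} 1
  \;=\; \infty
```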
We explore the view that Frege's puzzle is a source of straightforward counterexamples to Leibniz's law. Taking this seriously requires us to revise the classical logic of quantifiers and identity; we work out the options, in the context of higher-order logic. The logics we arrive at provide the resources for a straightforward semantics of attitude reports that is consistent with the Millian thesis that the meaning of a name is just the thing it stands for. We provide models to show that some of these logics are non-degenerate.
Is the fact that our universe contains fine-tuned life evidence that we live in a multiverse? Ian Hacking and Roger White influentially argue that it is not. We approach this question through a systematic framework for self-locating epistemology. As it turns out, leading approaches to self-locating evidence agree that the fact that our own universe contains fine-tuned life indeed confirms the existence of a multiverse. This convergence is no accident: we present two theorems showing that, in this setting, any updating rule that satisfies a few reasonable conditions will have the same feature. The conclusion that fine-tuned life provides evidence for a multiverse is hard to escape.
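A toy numerical sketch of the kind of update at issue, assuming one particular self-locating rule (the Self-Indication Assumption); the parameter values and hypothesis names are illustrative, and this is not one of the paper's theorems, only an indication of the direction of the update.

```python
# Toy model: equal priors on a single-universe world and an N-universe
# multiverse. Each universe is independently "fine-tuned" (and hence
# contains observers) with probability p. Under the Self-Indication
# Assumption (SIA), a hypothesis's posterior weight is proportional to
# its prior times its expected number of observers like me.

p = 1e-6          # chance a given universe is fine-tuned (illustrative)
N = 10**9         # number of universes on the multiverse hypothesis (illustrative)
prior = {"single": 0.5, "multi": 0.5}

expected_observers = {"single": p * 1, "multi": p * N}

weights = {h: prior[h] * expected_observers[h] for h in prior}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

print(posterior)  # posterior on "multi" is ~1: finding myself in a
                  # fine-tuned universe confirms the multiverse
```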
How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather lessons from the established work on credence aggregation, and extend this work with two new impossibility results. We then explore contrasting features of two kinds of rules that satisfy the constraints we articulate: one kind uses fixed prior credences, and the other uses geometric averaging, as opposed to arithmetic averaging. We also prove a new characterization result for geometric averaging. Finally, we consider applications to neighboring philosophical issues, including the epistemology of disagreement.
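One contrast the abstract gestures at can be checked numerically: geometric pooling commutes with conditionalization (so-called external Bayesianity), while arithmetic averaging generally does not. A minimal sketch with made-up credences and equal weights:

```python
# Two agents' credences over three worlds; evidence E = {w0, w1}.
# Geometric pooling then conditionalizing matches conditionalizing
# then pooling; arithmetic averaging does not, in general.

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def geometric(p, q):  # equal-weights geometric pooling
    return normalize([(a * b) ** 0.5 for a, b in zip(p, q)])

def arithmetic(p, q):  # equal-weights linear pooling
    return [(a + b) / 2 for a, b in zip(p, q)]

def condition(p, E):  # E: 0/1 indicator list for the evidence
    return normalize([a * e for a, e in zip(p, E)])

p, q = [0.6, 0.3, 0.1], [0.1, 0.2, 0.7]
E = [1, 1, 0]

print(condition(geometric(p, q), E))                 # pool, then update
print(geometric(condition(p, E), condition(q, E)))   # update, then pool: same
print(condition(arithmetic(p, q), E))                # differs from...
print(arithmetic(condition(p, E), condition(q, E)))  # ...this
```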
The Epistemic Objection says that certain theories of time imply that it is impossible to know which time is absolutely present. Standard presentations of the Epistemic Objection are elliptical—and some of the most natural premises one might fill in to complete the argument end up leading to radical skepticism. But there is a way of filling in the details which avoids this problem, using epistemic safety. The new version has two interesting upshots. First, while Ross Cameron alleges that the Epistemic Objection applies to presentism as much as to theories like the growing block, the safety version does not overgeneralize this way. Second, the Epistemic Objection does generalize in a different, overlooked way. The safety objection is a serious problem for a widely held combination of views: “propositional temporalism” together with “metaphysical eternalism”.
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
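For reference, the standard derivation behind one such result (stated here in its usual textbook form rather than the paper's stripped-down version): suppose $P(A \to C) = P(C \mid A)$ for every $P$ in a class closed under conditionalization, and that $P(A \wedge C)$ and $P(A \wedge \lnot C)$ are positive. Then by the law of total probability:

```latex
% Conditioning on C (or on not-C) stays in the class, so the thesis
% applies to the conditioned measures in the second line.
\begin{aligned}
P(C \mid A) = P(A \to C)
  &= P(A \to C \mid C)\,P(C) + P(A \to C \mid \lnot C)\,P(\lnot C) \\
  &= P(C \mid A \wedge C)\,P(C) + P(C \mid A \wedge \lnot C)\,P(\lnot C) \\
  &= 1 \cdot P(C) + 0 \cdot P(\lnot C) \;=\; P(C).
\end{aligned}
```

So $A$ and $C$ come out probabilistically independent for every such $P$, which is untenable for any nontrivial language.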
Sometimes you are unreliable at fulfilling your doxastic plans: for example, if you plan to be fully confident in all truths, probably you will end up being fully confident in some falsehoods by mistake. In some cases, there is information that plays the classical role of evidence—your beliefs are perfectly discriminating with respect to some possible facts about the world—and there is a standard expected-accuracy-based justification for planning to conditionalize on this evidence. This planning-oriented justification extends to some cases where you do not have transparent evidence, in the sense that your beliefs are not perfectly discriminating with respect to any non-trivial facts. In other cases, accuracy considerations do not tell you to plan to conditionalize on any information at all, but rather to plan to follow a different updating rule. Even in the absence of evidence, accuracy considerations can guide your doxastic plan.
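The "standard expected-accuracy-based justification" referenced here is, in outline, the Greaves–Wallace result: if accuracy is measured by a strictly proper scoring rule $S$ and your possible evidence forms a partition $\{E_i\}$, then among all updating plans $Q$, expected accuracy

```latex
% E(w) is the cell of the evidence partition containing world w;
% Q assigns a posterior credence function to each cell.
\sum_{w} P(w)\, S\!\big(Q_{E(w)},\, w\big)
```

is uniquely maximized by the conditionalization plan $Q_{E_i}(\cdot) = P(\cdot \mid E_i)$. The cases the abstract highlights are precisely those where this partition structure (transparent evidence) fails.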
Many people do not know or believe there is a God, and many experience a sense of divine absence. Are these (and other) “divine hiddenness” facts evidence against the existence of God? Using Bayesian tools, we investigate *evidential arguments from divine hiddenness*, and respond to two objections to such arguments. The first objection says that the problem of hiddenness is just a special case of the problem of evil, and so if one has responded to the problem of evil then hiddenness has no additional bite. The second objection says that, while hiddenness may be evidence against generic theism, it is not evidence against more specific conceptions of God, and thus hiddenness poses no epistemic challenge to a theist who holds one of these more specific conceptions. Our investigation leaves open just how strong the evidence from hiddenness really is, but we aim to clear away some important reasons for thinking hiddenness is of no evidential significance at all.
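For reference, the standard Bayesian criterion such evidential arguments turn on (not specific to this paper): hiddenness facts $H$ are evidence against theism $G$ just in case they are more expected given no God.

```latex
% Relevance criterion (assuming 0 < P(G) < 1 and P(H) > 0),
% with strength measured by the Bayes factor P(H|G) / P(H|~G):
P(H \mid \lnot G) \;>\; P(H \mid G)
\quad\Longleftrightarrow\quad
P(G \mid H) \;<\; P(G).
```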
I examine three ‘anti-object’ metaphysical views: nihilism, generalism, and anti-quantificationalism. After setting aside nihilism, I argue that generalists should be anti-quantificationalists. Along the way, I attempt to articulate what a ‘metaphysically perspicuous’ language might even be.
Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? *Fanaticism* says yes: for every bad outcome, there is a tiny chance of extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better. I consider two related recent arguments for Fanaticism: Beckstead and Thomas's argument from *strange dependence on space and time*, and Wilkinson's *Indology* argument. While both arguments are instructive, neither is persuasive. In fact, the general principles that underwrite the arguments (a *separability* principle in the first case, and a *reflection* principle in the second) are *inconsistent* with Fanaticism. In both cases, though, it is possible to rehabilitate arguments for Fanaticism based on restricted versions of those principles. The situation is unstable: plausible general principles tell *against* Fanaticism, but restrictions of those same principles (with strengthened auxiliary assumptions) *support* Fanaticism. All of the consistent views that emerge are very strange. (shrink)
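One common formalization of the thesis at issue (a standard statement in this literature, with illustrative notation; the clause for bad outcomes is dual): for every probability $\varepsilon > 0$ and every sure outcome $g$, there is an outcome $b$ good enough that a tiny chance of it beats $g$ for certain.

```latex
% Fanaticism, good-outcome clause (one common statement):
\forall \varepsilon > 0 \;\; \forall g \;\; \exists b :\quad
[\, b \text{ with probability } \varepsilon,\ \text{neutral otherwise} \,]
\;\succ\; [\, g \text{ for sure} \,].
```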
David Lewis holds that a single possible world can provide more than one way things could be. But what are possible worlds good for if they come apart from ways things could be? We can make sense of this if we go in for a metaphysical understanding of what the world is. The world does not include everything that is the case—only the genuine facts. Understood this way, Lewis's “cheap haecceitism” amounts to a kind of metaphysical anti-haecceitism: it says there aren't any genuine facts about individuals over and above their qualitative roles.
“There are no gaps in logical space,” David Lewis writes, giving voice to a sentiment shared by many philosophers. But different natural ways of trying to make this sentiment precise turn out to conflict with one another. One is a *pattern* idea: “Any pattern of instantiation is metaphysically possible.” Another is a *cut and paste* idea: “For any objects in any worlds, there exists a world that contains any number of duplicates of all of those objects.” We use resources from model theory to show the inconsistency of certain packages of combinatorial principles and the consistency of others.
We prove a representation theorem for preference relations over countably infinite lotteries that satisfy a generalized form of the Independence axiom, without assuming Continuity. The representing space consists of lexicographically ordered transfinite sequences of bounded real numbers. This result is generalized to preference orders on abstract superconvex spaces.
Could space consist entirely of extended regions, without any regions shaped like points, lines, or surfaces? Peter Forrest and Frank Arntzenius have independently raised a paradox of size for space like this, drawing on a construction of Cantor’s. I present a new version of this argument and explore possible lines of response.
Suppose that all non-qualitative facts are grounded in qualitative facts. I argue that this view naturally comes with a picture in which trans-world identity is indeterminate. But this in turn leads to either pervasive indeterminacy in the non-qualitative, or else contingency in what facts about modality and possible worlds are determinate.
The existence of mereological sums can be derived from an abstraction principle in a way analogous to the derivation of numbers. I draw lessons for the thesis that “composition is innocent” from neo-Fregeanism in the philosophy of mathematics.
The counterpart theorist has a problem: there is no obvious way to understand talk about actuality in terms of counterparts. Fara and Williamson have charged that this obstacle cannot be overcome. Here I defend the counterpart theorist by offering systematic interpretations of a quantified modal language that includes an actuality operator. Centrally, I disentangle the counterpart relation from a related notion, a ‘representation relation’. The relation of possible things to the actual things they represent is variable, and an adequate account of modal language must keep track of the way it is systematically shifted by modal operators. I apply my account to resolve several puzzles about counterparts and actuality. In technical appendices, I prove some important logical results about this ‘representational’ counterpart system and its relationship to other modal systems.
Suppose that, for reasons of animal welfare, it would be better if everyone stopped eating chicken. Does it follow that you should stop eating chicken? Proponents of the “inefficacy objection” argue that, due to the scale and complexity of markets, the expected effects of your chicken purchases are negligible. So the expected effects of eating chicken do not make it wrong. We argue that this objection does not succeed, in two steps. First, empirical data about chicken production tells us that the expected effects of consuming *many* chickens are not negligible. Second, this implies that the expected effect of consuming one chicken is ordinarily not negligible. *Parity* between your purchase and other counterfactual purchases and *uncertainty* about others’ consumption behavior each tend to pull the expected effect of a single purchase toward the average large scale effect. While some purchases do have negligible expected effects, many do not.
Decision theorists widely accept a stochastic dominance principle: roughly, if a risky prospect A is at least as probable as another prospect B to result in something at least as good, then A is at least as good as B. Recently, philosophers have applied this principle even in contexts where the values of possible outcomes do not have the structure of the real numbers: this includes cases of incommensurable values and cases of infinite values. But in these contexts the usual formulation of stochastic dominance is wrong. We show this with several counterexamples. Still, the motivating idea behind stochastic dominance is a good one: it is supposed to provide a way of applying dominance reasoning in the stochastic context of probability distributions. We give two new formulations of stochastic dominance that are more faithful to this guiding idea, and prove that they are equivalent.
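The "usual formulation" at issue, in its standard textbook form for real-valued outcomes (the paper's counterexamples target its naive transfer to outcomes without this structure):

```latex
% First-order stochastic dominance for real-valued outcomes:
A \succeq_{\mathrm{SD}} B
\quad\Longleftrightarrow\quad
\Pr(A \ge x) \;\ge\; \Pr(B \ge x) \quad \text{for all } x \in \mathbb{R}.
```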
Some hold that the lesson of Russell’s paradox and its relatives is that mathematical reality does not form a ‘definite totality’ but rather is ‘indefinitely extensible’. There can always be more sets than there ever are. I argue that certain contact puzzles are analogous to Russell’s paradox this way: they similarly motivate a vision of physical reality as iteratively generated. In this picture, the divisions of the continuum into smaller parts are ‘potential’ rather than ‘actual’. Besides the intrinsic interest of this metaphysical picture, it has important consequences for the debate over absolute generality. It is often thought that ‘indefinite extensibility’ arguments at best make trouble for mathematical platonists; but the contact arguments show that nominalists face the same kind of difficulty, if they recognize even the metaphysical possibility of the picture I sketch.
“Pragmatic encroachers” about knowledge generally advocate two ideas: (1) you can rationally act on what you know; (2) knowledge is harder to achieve when more is at stake. Charity Anderson and John Hawthorne have recently argued that these two ideas may not fit together so well. I extend their argument by working out what “high stakes” would have to mean for the two ideas to line up, using decision theory.
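An illustrative calculation (mine, not the authors') of why stakes matter for acting on what you know: with credence $0.99$ that $p$, a bet that wins 1 util if $p$ and loses 100 utils if $\lnot p$ has negative expected utility,

```latex
% Acting on p is rational at low stakes, not at these high stakes:
0.99 \times 1 \;+\; 0.01 \times (-100) \;=\; -0.01 \;<\; 0.
```

The question the argument presses is what "high stakes" would have to mean, in these decision-theoretic terms, for ideas (1) and (2) to line up.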
Some philosophers respond to Leibniz’s “shift” argument against absolute space by appealing to antihaecceitism about possible worlds, using David Lewis’s counterpart theory. But separated from Lewis’s distinctive system, it is difficult to understand what this doctrine amounts to or how it bears on the Leibnizian argument. In fact, the best way of making sense of the relevant kind of antihaecceitism concedes the main point of the Leibnizian argument, pressing us to consider alternative spatiotemporal metaphysics.
I examine what the mathematical theory of random structures can teach us about the probability of Plenitude, a thesis closely related to David Lewis's modal realism. Given some natural assumptions, Plenitude is reasonably probable a priori, but in principle it can be (and plausibly it has been) empirically disconfirmed—not by any general qualitative evidence, but rather by our de re evidence.
In "Pascal's Mugging" (Bostrom 2009), Pascal gives away his wallet for an extremely tiny chance of an extremely large reward. In this continuation of Bostrom's story, Pascal's friend counsels him to take into account the possibility of making mistakes about his true expected utilities, and they consider to what extent this will help Pascal make plans to avoid future muggings.
This paper explores the idea that it is instrumentally valuable to learn normative truths. We consider an argument for "normative hedging" based on this principle, and examine the structure of decision-making under moral uncertainty that arises from it. But it also turns out that the value of normative information is inconsistent with the principle that learning *empirical* truths is instrumentally valuable. We conclude with a brief comment on "metanormative regress."
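The background principle for empirical truths is I. J. Good's theorem, stated here for reference: for a free observation $X$, choosing after looking is weakly better in expectation than choosing blind.

```latex
% Good's theorem: free evidence has non-negative expected value.
\mathbb{E}_{X}\!\left[\, \max_{a}\ \mathbb{E}\big[U(a) \mid X\big] \,\right]
\;\ge\;
\max_{a}\ \mathbb{E}\big[U(a)\big].
```

The paper's tension is that valuing normative information in an analogous way turns out to conflict with this principle for empirical information.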