Probabilism is committed to two theses: 1) Opinion comes in degrees—call them degrees of belief, or credences. 2) The degrees of belief of a rational agent obey the probability calculus. Correspondingly, a natural way to argue for probabilism is: i) to give an account of what degrees of belief are, and then ii) to show that those things should be probabilities, on pain of irrationality. Most of the action in the literature concerns stage ii). Assuming that stage i) has been adequately discharged, various authors move on to stage ii) with varied and ingenious arguments. But an unsatisfactory response at stage i) clearly undermines any gains that might be accrued at stage ii) as far as probabilism is concerned: if those things are not degrees of belief, then it is irrelevant to probabilism whether they should be probabilities or not. In this paper we scrutinize the state of play regarding stage i). We critically examine several of the leading accounts of degrees of belief: reducing them to corresponding betting behavior (de Finetti); measuring them by that behavior (Jeffrey); and analyzing them in terms of preferences and their role in decision-making more generally (Ramsey, Lewis, Maher). We argue that the accounts fail, and so they are unfit to subserve arguments for probabilism. We conclude more positively: ‘degree of belief’ should be taken as a primitive concept that forms the basis of our best theory of rational belief and decision: probabilism.
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all “no-theory” theories of probability - accounts that leave quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a “metaphysical” and an “epistemological” reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.
This is the sequel to my “Fifteen Arguments Against Finite Frequentism” (Erkenntnis 1997), the second half of a long paper that attacks the two main forms of frequentism about probability. Hypothetical frequentism asserts: The probability of an attribute A in a reference class B is p iff the limit of the relative frequency of A’s among the B’s would be p if there were an infinite sequence of B’s. I offer fifteen arguments against this analysis. I consider various frequentist responses, which I argue ultimately fail. I end with a positive proposal of my own, ‘hyper-hypothetical frequentism’, which I argue avoids several of the problems with hypothetical frequentism. It identifies probability with relative frequency in a hyperfinite sequence of trials. However, I argue that this account also fails, and that the prospects for frequentism are dim.
So-called “traditional epistemology” and “Bayesian epistemology” share a word, but it may often seem that the enterprises hardly share a subject matter. They differ in their central concepts. They differ in their main concerns. They differ in their main theoretical moves. And they often differ in their methodology. However, in the last decade or so, there have been a number of attempts to build bridges between the two epistemologies. Indeed, many would say that there is just one branch of philosophy here—epistemology. There is a common subject matter after all. In this paper, we begin by playing the role of a “bad cop,” emphasizing many apparent points of disconnection, and even conflict, between the approaches to epistemology. We then switch role, playing a “good cop” who insists that the approaches are engaged in common projects after all. We look at various ways in which the gaps between them have been bridged, and we consider the prospects for bridging them further. We conclude that this is an exciting time for epistemology, as the two traditions can learn, and have started learning, from each other.
Four important arguments for probabilism—the Dutch Book, representation theorem, calibration, and gradational accuracy arguments—have a strikingly similar structure. Each begins with a mathematical theorem, a conditional with an existentially quantified consequent, of the general form: if your credences are not probabilities, then there is a way in which your rationality is impugned. Each argument concludes that rationality requires your credences to be probabilities. I contend that each argument is invalid as formulated. In each case there is a mirror-image theorem and a corresponding argument of exactly equal strength that concludes that rationality requires your credences not to be probabilities. Some further consideration is needed to break this symmetry in favour of probabilism. I discuss the extent to which the original arguments can be buttressed. Outline: Introduction; The Dutch Book Argument (2.1 Saving the Dutch Book argument; 2.2 ‘The Dutch Book argument merely dramatizes an inconsistency in the attitudes of an agent whose credences violate probability theory’); Representation Theorem-based Arguments; The Calibration Argument; The Gradational Accuracy Argument; Conclusion.
According to orthodox (Kolmogorovian) probability theory, conditional probabilities are by definition certain ratios of unconditional probabilities. As a result, orthodox conditional probabilities are undefined whenever their antecedents have zero unconditional probability. This has important ramifications for the notion of probabilistic independence. Traditionally, independence is defined in terms of unconditional probabilities (the factorization of the relevant joint unconditional probabilities). Various “equivalent” formulations of independence can be given using conditional probabilities. But these “equivalences” break down if conditional probabilities are permitted to have conditions with zero unconditional probability. We reconsider probabilistic independence in this more general setting. We argue that a less orthodox but more general (Popperian) theory of conditional probability should be used, and that much of the conventional wisdom about probabilistic independence needs to be rethought.
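The ratio definition this abstract describes, and the way the “equivalent” formulations of independence come apart over probability-zero conditions, can be exhibited on a toy finite space (an illustrative sketch, not drawn from the paper; the outcome names and probabilities are invented):

```python
from fractions import Fraction

# A toy finite probability space: outcomes with (possibly zero) probabilities.
space = {"w1": Fraction(1, 2), "w2": Fraction(1, 2), "w3": Fraction(0)}

def pr(event):
    """Unconditional probability of an event (a set of outcomes)."""
    return sum(space[w] for w in event)

def pr_given(a, b):
    """Orthodox (Kolmogorovian) conditional probability: by definition a
    ratio of unconditional probabilities, undefined when P(b) = 0."""
    if pr(b) == 0:
        raise ZeroDivisionError("P(B) = 0: ratio conditional probability undefined")
    return pr(a & b) / pr(b)

A, B = {"w1"}, {"w3"}   # B is possible but has zero unconditional probability

# The factorization formulation of independence is still well defined...
print(pr(A & B) == pr(A) * pr(B))   # True

# ...but the "equivalent" formulation P(A|B) = P(A) has no truth value here:
try:
    pr_given(A, B)
except ZeroDivisionError as exc:
    print(exc)
```

The factorization test and the conditional test agree whenever both are defined; the zero-probability condition is exactly where they come apart.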
Bayesianism is our leading theory of uncertainty. Epistemology is defined as the theory of knowledge. So “Bayesian Epistemology” may sound like an oxymoron. Bayesianism, after all, studies the properties and dynamics of degrees of belief, understood to be probabilities. Traditional epistemology, on the other hand, places the singularly non-probabilistic notion of knowledge at centre stage, and to the extent that it traffics in belief, that notion does not come in degrees. So how can there be a Bayesian epistemology?
We introduce a St. Petersburg-like game, which we call the ‘Pasadena game’, in which we toss a coin until it lands heads for the first time. Your pay-offs grow without bound, and alternate in sign (rewards alternate with penalties). The expectation of the game is a conditionally convergent series. As such, its terms can be rearranged to yield any sum whatsoever, including positive infinity and negative infinity. Thus, we can apparently make the game seem as desirable or undesirable as we want, simply by reordering the pay-off table, yet the game remains unchanged throughout. Formally speaking, the expectation does not exist; but we contend that this presents a serious problem for decision theory, since it goes silent when we want it to speak. We argue that the Pasadena game is more paradoxical than the St. Petersburg game in several respects. We give a brief review of the relevant mathematics of infinite series. We then consider and rebut a number of replies to our paradox: that there is a privileged ordering to the expectation series; that decision theory should be restricted to finite state spaces; and that it should be restricted to bounded utility functions. We conclude that the paradox remains live.
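The conditional convergence at the heart of the paradox is easy to exhibit numerically. On the usual pay-off schedule for the Pasadena game (if the first head occurs on toss n, the pay-off is (-1)^(n-1) · 2^n/n), the n-th expectation term is (-1)^(n-1)/n, i.e., the alternating harmonic series. The following sketch, offered purely as an illustration, shows the natural ordering tending to ln 2 while a rearrangement of the very same terms tends to (3/2) ln 2:

```python
import math

# If the first head occurs on toss n, the Pasadena pay-off is
# (-1)^(n-1) * 2^n / n, so the n-th expectation term is
# (1/2^n) * (-1)^(n-1) * 2^n / n = (-1)^(n-1) / n:
# the alternating harmonic series, which converges only conditionally.

def term(n):
    return (-1) ** (n - 1) / n

N = 200_000
natural_order = sum(term(n) for n in range(1, N + 1))
print(abs(natural_order - math.log(2)) < 1e-5)   # True: natural order sums to ln 2

# Rearranging the same terms changes the sum (Riemann's rearrangement theorem):
# take two positive terms for each negative one and the series tends to
# (3/2) * ln 2 instead.
def rearranged(blocks):
    total, p, q = 0.0, 1, 2   # p walks the odd (positive), q the even (negative) terms
    for _ in range(blocks):
        total += term(p) + term(p + 2) + term(q)   # +, +, -
        p += 4
        q += 2
    return total

print(abs(rearranged(100_000) - 1.5 * math.log(2)) < 1e-5)   # True
```

Since reordering the pay-off table leaves the game itself unchanged, no particular sum can be privileged, which is the abstract's point that the expectation fails to exist.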
We argue that indeterminate probabilities are not only rationally permissible for a Bayesian agent, but they may even be rationally required. Our first argument begins by assuming a version of interpretivism: your mental state is the set of probability and utility functions that rationalize your behavioral dispositions as well as possible. This set may consist of multiple probability functions. Then according to interpretivism, this makes it the case that your credal state is indeterminate. Our second argument begins with our describing a world that plausibly has indeterminate chances. Rationality requires a certain alignment of your credences with corresponding hypotheses about the chances. Thus, if you hypothesize the chances to be indeterminate, you will inherit their indeterminacy in your corresponding credences. Our third argument is motivated by a dilemma. Epistemic rationality requires you to stay open-minded about contingent matters about which your evidence has not definitively legislated. Practical rationality requires you to be able to act decisively at least sometimes. These requirements can conflict with each other, for thanks to your open-mindedness, some of your options may have undefined expected utility, and if you are choosing among them, decision theory has no advice to give you. Such an option is playing Nover and Hájek’s Pasadena Game, and indeed so is any option for which there is a positive probability of playing the Pasadena Game. You can serve both masters, epistemic rationality and practical rationality, with an indeterminate credence in the prospect of playing the Pasadena game. You serve epistemic rationality by making your upper probability positive: it ensures that you are open-minded. You serve practical rationality by making your lower probability 0: it provides guidance to your decision-making. No sharp credence could do both.
‘If I were to toss a coin 1000 times, then it would land heads exactly n times’. Is there a specific value of n that renders this counterfactual true? According to an increasingly influential view, there is. A precursor of the view goes back to the Molinists; more recently it has been inspired by Stalnaker, and versions of it have been advocated by Hawthorne, Bradley, Moss, Schulz, and Stefánsson. More generally, I attribute to these authors what I call Counterfactual Plenitude: For any antecedent A, there is a world w_i such that A □→ w_i is true. Moreover, some of these authors are also committed to Primitive Counterfacts Realism: There exist primitive modal facts that serve as truth-makers for counterfactual claims. Call the conjunction of these italicized theses counterfactism. I clarify it and suggest some of its virtues, while ultimately rejecting it. Stefánsson’s counterfactism is motivated by and targeted at my “counterfactual skepticism”—I argue that most counterfactuals are false—and counterfactism has various other sources of support. I briefly defend that skepticism, and I seek to undercut those sources of support. I then argue more directly against counterfactism, especially on grounds of its ontological profligacy, and its leading to another kind of skepticism about counterfactuals that I believe is more problematic than my kind. In the process, I discuss how Bradley’s multidimensional semantics bears on counterfactism; I offer some new considerations against some central theses regarding conditionals; and I reflect more generally on the epistemology of modality and the choice of primitives in our theorizing.
I have argued for a kind of ‘counterfactual scepticism’: most counterfactuals ever uttered or thought in human history are false. I briefly rehearse my main arguments. Yet common sense recoils. Ordinary speakers judge most counterfactuals that they utter and think to be true. A common defence of such judgments regards counterfactuals as context-dependent: the proposition expressed by a given counterfactual can vary according to the context in which it is uttered. In normal contexts, the counterfactuals that we utter are typically true, the defence insists, while granting that there may be more rarefied contexts in which they are false. I give a taxonomy of such contextualist replies. One could be a contextualist about the counterfactual connective, about its antecedent, or about its consequent. I offer some general concerns about all these varieties of contextualism. I then focus especially on antecedent-contextualism, as I call it. I first raise some high-level objections to it. Then I look at such a contextualist account due to Sandgren and Steele. I think it has many virtues, but also some problems. I conclude with some avenues for future research.
The Dutch Book argument, like Route 66, is about to turn 80. It is arguably the most celebrated argument for subjective Bayesianism. Start by rejecting the Cartesian idea that doxastic attitudes are ‘all-or-nothing’; rather, they are far more nuanced degrees of belief (credences, for short), susceptible to fine-grained numerical measurement. Add a coherentist assumption that the rationality of a doxastic state consists in its internal consistency. The remaining problem is to determine what consistency of credences amounts to. The Dutch Book argument, in a nutshell, says that if your credences do not obey the probability calculus, you are ‘incoherent’—susceptible to sure losses at the hands of a ‘Dutch Bookie’—and thus irrational. Conclusion: rationality requires your credences to obey the probability calculus. And like Route 66, the fortunes of the Dutch Book argument have been mixed. Opinions on the argument are sharply divided. The list of its proponents is quite a ‘who’s who’ of philosophers of probability; they include de Finetti (1937, 1980), Carnap (1950, 1962, and more fully, 1955), Kemeny (1955), Lehman (1955), Shimony (1955), Adams (1962), Mellor (1971), Rosenkrantz (1981), van Fraassen (1989), and Jeffrey (1983, 1992).
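The sure-loss mechanism has a minimal numerical illustration (my sketch, not any author's formal version): if your credence in X is the price you deem fair for a bet that pays one unit if X is true, then credences in A and not-A that sum to more than 1 can be exploited.

```python
# Illustrative sketch: credences as fair betting prices. The bookie sells
# you a bet on A and a bet on not-A, each paying the stake if it wins.
cr_A, cr_notA = 0.6, 0.6   # incoherent: they sum to 1.2 > 1
stake = 1.0

for A_is_true in (True, False):
    payout = stake * A_is_true + stake * (not A_is_true)  # exactly one bet wins
    net = payout - (cr_A + cr_notA) * stake               # winnings minus prices paid
    print(round(net, 2))   # -0.2 either way: a sure loss, come what may
```

However the world turns out, exactly one bet pays off, so the agent is down 0.2 units in every case, which is the ‘incoherence’ the argument dramatizes.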
Start with an ordinary disposition ascription, like ‘the wire is live’ or ‘the glass is fragile’. Lewis gives a canonical template for what he regards as the analysandum of such an ascription: “Something x is disposed at time t to give response r to stimulus s”. For example, the wire is disposed at noon to conduct electrical current when touched by a conductor. What Lewis calls “the simple conditional analysis” gives putatively necessary and sufficient conditions for the analysandum in terms of a counterfactual: “if x were to undergo stimulus s at time t, x would give response r”. Call this the counterfactual analysans. For example: If the wire were to be touched by a conductor at noon, the wire would conduct electricity. So we have three things in play: (1) the ordinary disposition ascription; (2) the canonical template that is supposed to formalize this disposition ascription; and (3) the counterfactual analysans that is supposed to provide an analysis of the canonical template. Finkish dispositions have been widely regarded as counterexamples to the adequacy of (3) as an analysis of (2). I will argue that they are not. They succeed, however, as counterexamples to the adequacy of (2) as an analysis of (1). That said, the classic cases are somewhat contrived. I will introduce the notion of a minkish disposition: a disposition that something has, even though it might not display it in response to the relevant stimulus. Cases of minkish dispositions are entirely familiar. They refute the adequacy both of (2) as an analysis of (1) and of (3) as an analysis of (2). I will argue that they also refute Lewis’s own, more complicated counterfactual analysis of dispositions, and bring out an internal tension in his views.
According to finite frequentism, the probability of an attribute A in a finite reference class B is the relative frequency of actual occurrences of A within B. I present fifteen arguments against this position.
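For concreteness, the finite frequentist identification can be stated in a few lines of code (an illustrative sketch; the toy data are invented):

```python
# Finite frequentism identifies probability with actual relative frequency:
# the 'probability' of attribute A in a finite reference class B is just the
# proportion of B's members that are A.

def finite_frequentist_probability(A, B):
    if not B:
        # One classic worry already surfaces: an empty (or single-case)
        # reference class yields no usable relative frequency.
        raise ValueError("empty reference class: no relative frequency exists")
    return sum(1 for x in B if A(x)) / len(B)

# Toy reference class: ten actual coin tosses, seven of them heads.
tosses = ["H"] * 7 + ["T"] * 3
print(finite_frequentist_probability(lambda x: x == "H", tosses))  # 0.7
```

On this view a fair coin tossed ten times with seven heads simply has probability 0.7 of heads, one of the intuitive costs the fifteen arguments press.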
The so-called ‘Adams’ Thesis’ is often understood as the claim that the assertibility of an indicative conditional equals the corresponding conditional probability—schematically: $$(\mathrm{AT})\qquad As(A\rightarrow B)=P(B\mid A),\quad\text{provided } P(A)\neq 0.$$ The Thesis is taken by many to be a touchstone of any theorizing about indicative conditionals. Yet it is unclear exactly what the Thesis is. I suggest some precise statements of it. I then rebut a number of arguments that have been given in its favor. Finally, I offer a new argument against it. I appeal to an old triviality result of mine against ‘Stalnaker’s Thesis’ that the probability of a conditional equals the corresponding conditional probability. I showed that for all finite-ranged probability functions, there are strictly more distinct values of conditional probabilities than there are distinct values of probabilities of conditionals, so they cannot all be paired up as Stalnaker’s Thesis promises. Conditional probabilities are too fine-grained to coincide with probabilities of conditionals across the board. If the assertibilities of conditionals are to coincide with conditional probabilities across the board, then assertibilities must be finer-grained than probabilities. I contend that this is implausible—it is surely the other way round. I generalize this argument to other interpretations of ‘As’, including ‘acceptability’ and ‘assentability’. I find it hard to see how any such figure of merit for conditionals can systematically align with the corresponding conditional probabilities.
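The counting phenomenon behind the triviality result can be checked by brute force on a toy space (an illustrative sketch, not the paper's proof): on a uniform three-point space, unconditional probabilities take four distinct values, while ratio conditional probabilities take five, so no across-the-board pairing is possible.

```python
from fractions import Fraction
from itertools import chain, combinations

outcomes = ("w1", "w2", "w3")   # a uniform three-point space

def events():
    """All 8 events (subsets of the outcome space)."""
    return chain.from_iterable(combinations(outcomes, r) for r in range(4))

# Distinct unconditional probabilities: {0, 1/3, 2/3, 1}.
unconditional = {Fraction(len(e), len(outcomes)) for e in events()}

# Distinct ratio conditional probabilities P(B|A) = |B ∩ A| / |A|, A nonempty:
# the value 1/2 appears here but not among the unconditional probabilities.
conditional = {Fraction(len(set(b) & set(a)), len(a))
               for a in events() if a for b in events()}

print(len(unconditional), len(conditional))   # 4 5
print(conditional - unconditional)            # {Fraction(1, 2)}
```

The extra value 1/2 (e.g., conditioning a singleton on a doubleton) is a tiny instance of conditional probabilities being more fine-grained than unconditional ones.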
This paper is a response to Paul Bartha’s ‘Making Do Without Expectations’. We provide an assessment of the strengths and limitations of two notable extensions of standard decision theory: relative expectation theory (RET) and Paul Bartha’s relative utility theory (RUT). These extensions are designed to provide intuitive answers to some well-known problems in decision theory involving gaps in expectations. We argue that both RET and RUT go some way towards providing solutions to the problems in question, but neither extension solves all the relevant problems.
According to the so-called ‘deliberation crowds out prediction’ thesis, while deliberating about what you’ll do, you cannot rationally have credences for what you’ll do – you cannot rationally have option-credences. Versions of the thesis have been defended by authors such as Spohn, Levi, Gilboa, Price, Louise, and others. After registering a number of concerns about the thesis, I rehearse and rebut many of the main arguments for it, grouped according to their main themes: agency, vacuity, betting, and decision-theoretical considerations. I go on to suggest many possible theoretical roles for option-credences. I locate the debate about the thesis in a broader discussion: Are there rational credence gaps – propositions to which one cannot rationally assign credences? If there are, they spell trouble for various foundations of Bayesian epistemology, including the usual ratio formula for conditional probability, conditionalization, decision theory, and independence. According to the thesis, credence gaps are completely mundane; they arise every time someone rationally deliberates. But these foundations are safe from any threat here, I contend, since the thesis is false. Deliberation welcomes prediction.
Probability theory is a key tool of the physical, mathematical, and social sciences. It has also been playing an increasingly significant role in philosophy: in epistemology, philosophy of science, ethics, social philosophy, philosophy of religion, and elsewhere. This Handbook encapsulates and furthers the influence of philosophy on probability, and of probability on philosophy. Nearly forty articles summarise the state of play and present new insights in various areas of research at the intersection of these two fields. The articles will be of special interest to practitioners of probability who seek a greater understanding of its mathematical and conceptual foundations, and to philosophers who want to get up to speed on the cutting edge of research in this area. The volume begins with a primer on those parts of probability theory that we believe are most important for philosophers to know, and the rest is divided into seven main sections: History; Formalism; Alternatives to Standard Probability Theory; Interpretations and Interpretive Issues; Probabilistic Judgment and Its Applications; Applications of Probability: Science; and Applications of Probability: Philosophy.
Confirmation theory is intended to codify the evidential bearing of observations on hypotheses, characterizing relations of inductive “support” and “countersupport” in full generality. The central task is to understand what it means to say that datum E confirms or supports a hypothesis H when E does not logically entail H.
Frank Ramsey (1931) wrote: If two people are arguing 'if p will q?' and both are in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q. We can say that they are fixing their degrees of belief in q given p. Let us take the first sentence the way it is often taken, as proposing the following test for the acceptability of an indicative conditional: ‘If p then q’ is acceptable to a subject S iff, were S to accept p and consider q, S would accept q. Now consider an indicative conditional of the form (1) If p, then I believe p. Suppose that you accept p and consider ‘I believe p’. To accept p while rejecting ‘I believe p’ is tantamount to accepting the Moore-paradoxical sentence ‘p and I do not believe p’, and so is irrational. To accept p while suspending judgment about ‘I believe p’ is irrational for similar reasons. So rationality requires that if you accept p and consider ‘I believe p’, you accept ‘I believe p’.
“Pascal's Wager” is the name given to an argument due to Blaise Pascal for believing, or for at least taking steps to believe, in God. The name is somewhat misleading, for in a single paragraph of his Pensées, Pascal apparently presents at least three such arguments, each of which might be called a ‘wager’ — it is only the final of these that is traditionally referred to as “Pascal's Wager”. We find in it the extraordinary confluence of several important strands of thought: the justification of theism; probability theory and decision theory, used here for almost the first time in history; pragmatism; voluntarism (the thesis that belief is a matter of the will); and the use of the concept of infinity.
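The decision-theoretic core of the canonical version, that an infinite reward swamps all finite considerations given any positive credence in God's existence, can be sketched as follows (the finite utilities are illustrative assumptions, not Pascal's own figures):

```python
import math

# A sketch of the decision matrix usually extracted from Pascal's text.
# Wagering for God yields infinite utility if God exists; the finite
# entries below are invented placeholders for worldly costs and benefits.
INF = math.inf
matrix = {
    "wager for":     {"God exists": INF,   "God does not exist": -5.0},
    "wager against": {"God exists": -20.0, "God does not exist": 10.0},
}

def expected_utility(act, p):
    """Expected utility of an act given credence p that God exists."""
    u = matrix[act]
    return p * u["God exists"] + (1 - p) * u["God does not exist"]

p = 0.001   # any positive credence, however small
print(expected_utility("wager for", p) > expected_utility("wager against", p))  # True
```

Because p · ∞ = ∞ for any p > 0, wagering for God maximizes expected utility no matter how the finite entries are filled in, which is exactly what gives the argument its force (and its critics their targets).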
David Lewis [1988; 1996] canvases an anti-Humean thesis about mental states: that the rational agent desires something to the extent that he or she believes it to be good. Lewis offers and refutes a decision-theoretic formulation of it, the 'Desire-as-Belief Thesis'. Other authors have since added further negative results in the spirit of Lewis's. We explore ways of being anti-Humean that evade all these negative results. We begin by providing background on evidential decision theory and on Lewis's negative results. We then introduce what we call the indexicality loophole: if the goodness of a proposition is indexical, partly a function of an agent's mental state, then the negative results have no purchase. Thus we propose a variant of Desire-as-Belief that exploits this loophole. We argue that a number of meta-ethical positions are committed to just such indexicality. Indeed, we show that with one central sort of evaluative belief--the belief that an option is right--the indexicality loophole can be exploited in various interesting ways. Moreover, on some accounts, 'good' is indexical in the same way. Thus, it seems that the anti-Humean can dodge the negative results.
The thesis that probabilities of conditionals are conditional probabilities has putatively been refuted many times by so-called ‘triviality results’, although it has also enjoyed a number of resurrections. In this paper I assault it yet again with a new such result. I begin by motivating the thesis and discussing some of the philosophical ramifications of its fluctuating fortunes. I will canvas various reasons, old and new, why the thesis seems plausible, and why we should care about its fate. I will look at some objections to Lewis’s famous triviality results, and thus some reasons for the pursuit of further triviality results. I will generalize Lewis’s results in ways that meet the objections. I will conclude with some reflections on the demise of the thesis—or otherwise.
David Lewis claims that a simple sort of anti-Humeanism (that the rational agent desires something to the extent he believes it to be good) can be given a decision-theoretic formulation, which Lewis calls 'Desire as Belief' (DAB). Given the (widely held) assumption that Jeffrey conditionalising is a rationally permissible way to change one's mind in the face of new evidence, Lewis proves that DAB leads to absurdity. Thus, according to Lewis, the simple form of anti-Humeanism stands refuted. In this paper we investigate whether Lewis's case against DAB can be strengthened by examining how it fares under rival versions of decision theory, including other conceptions of rational ways to change one's mind. We argue that the anti-Humean may escape Lewis's argument either by adopting a version of causal decision theory, or by claiming that the refutation applies only to hyper-idealised rational agents, or by denying that the decision-theoretic framework has the expressive capacity to formulate anti-Humeanism.
Bayesians have a seemingly attractive account of rational credal states in terms of coherence. An agent's set of credences is synchronically coherent just in case they conform to the probability calculus. Some Bayesians impose a further putative coherence constraint called regularity: roughly, if X is possible, then it is assigned positive probability. I look at two versions of regularity – logical and metaphysical – and I canvass various defences of it as a rationality norm. Combining regularity with synchronic coherence, we have a set of constraints known as strict coherence. I argue that strict coherence is untenable. In particular, I attack regularity as a rationality norm. First, I rebut each of the various defences of regularity. Then I argue directly against regularity: it conflicts with the Bayesian decision‐theoretic treatment of rational action. Thus, seemingly plausible theoretical and pragmatic norms turn out to be inconsistent.

[Barbossa is about to kill Will, but Jack Sparrow shows up:]
Barbossa: It's not possible!
Jack: Not probable.

To be uncertain is to be uncomfortable, but to be certain is to be ridiculous.
in Probability is the Very Guide of Life: The Philosophical Uses of Chance, eds. Henry Kyburg, Jr. and Mariam Thalos, Open Court. Abridged version in Proceedings of the International Society for Bayesian Analysis 2002.
A decade ago, Harris Nover and I introduced the Pasadena game, which we argued gives rise to a new paradox in decision theory even more troubling than the St Petersburg paradox. Gwiazda's and Smith's articles in this volume both offer revisionist solutions. I critically engage with both articles. They invite reflections on a number of deep issues in the foundations of decision theory, which I hope to bring out. These issues include: some ways in which orthodox decision theory might be supplemented; the role of simulations of such infinite games; the role of small probabilities, and of idealization, in decision theory; tolerance about practical norms; and alternative ways of understanding decision theory's job description.
We examine a distinctive kind of problem for decision theory, involving what we call discontinuity at infinity. Roughly, it arises when an infinite sequence of choices, each apparently sanctioned by plausible principles, converges to a ‘limit choice’ whose utility is much lower than the limit approached by the utilities of the choices in the sequence. We give examples of this phenomenon, focusing on Arntzenius et al.’s Satan’s apple, and give a general characterization of it. In these examples, repeated dominance reasoning (a paradigm of rationality) apparently gives rise to a situation closely analogous to having intransitive preferences (a paradigm of irrationality). Indeed, the agents in these examples are vulnerable to a money pump set-up despite having preferences that exhibit no obvious defect of rationality. We explore several putative solutions to such problems, particularly those that appeal to binding and to deliberative dynamics. We consider the prospects for these solutions, concluding that if they fail, the examples show that money pump arguments are invalid.
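The analogy the abstract draws can be made vivid with the textbook money-pump recipe against intransitive preferences (an illustrative sketch; the goods, fee, and preference cycle are invented and are not the Satan's apple case itself):

```python
# Sketch of the money-pump recipe: an agent with cyclic strict preferences
# A < B < C < A pays a small fee for each apparent 'upgrade' and ends up
# holding the very good she started with, strictly poorer.

prefers = {("B", "A"), ("C", "B"), ("A", "C")}   # cyclic strict preferences
fee = 0.01

holding, wealth = "A", 0.0
for offer in ["B", "C", "A"]:          # each trade looks like an improvement
    if (offer, holding) in prefers:
        holding, wealth = offer, wealth - fee
print(holding, round(wealth, 2))       # A -0.03: same good, sure loss
```

The paper's agents are exploitable in a structurally similar way while following dominance reasoning at each step, which is what puts pressure on the validity of money pump arguments.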
In our 2004, we introduced two games in the spirit of the St Petersburg game, the Pasadena and Altadena games. As these latter games lack an expectation, we argued that they pose a paradox for decision theory. Terrence Fine has shown that any finite valuations for the Pasadena, Altadena, and St Petersburg games are consistent with the standard decision-theoretic axioms. In particular, one can value the Pasadena game above the other two, a result that conflicts with both our intuitions and dominance reasoning. We argue that this result, far from resolving the Pasadena paradox, should serve as a reductio of the standard theory, and we consequently make a plea for new axioms for a revised theory. We also discuss a proposal by Kenny Easwaran that a gamble should be valued according to its 'weak expectation', a generalization of the usual notion of expectation.
Probabilities figure centrally in much of the literature on the semantics of conditionals. I find this surprising: it accords a special status to conditionals that other parts of language apparently do not share. I critically discuss two notable ‘probabilities first’ accounts of counterfactuals, due to Edgington and Leitgeb. According to Edgington, counterfactuals lack truth values but have probabilities. I argue that this combination gives rise to a number of problems. According to Leitgeb, counterfactuals have truth conditions: roughly, a counterfactual is true when the corresponding conditional chance is sufficiently high. I argue that problems arise from the disparity between truth and high chance, between approximate truth and high chance, and from counterfactuals for which the corresponding conditional chances are undefined. However, Edgington, Leitgeb and I can unite in opposition to Stalnaker- and Lewis-style ‘similarity’ accounts of counterfactuals.
I analyze David Hume’s "Of Miracles". I vindicate Hume’s argument against two charges: that it (1) defines miracles out of existence; (2) appeals to a suspect principle of balancing probabilities. He argues that miracles are, in a certain sense, maximally improbable. To understand this sense, we must turn to his notion of probability as ‘strength of analogy’: miracles are incredible, according to him, because they bear no analogy to anything in our past experience. This reveals as anachronistic various recent Bayesian reconstructions of his argument. But it exposes him to other charges, with which I conclude.
There are two central questions concerning probability. First, what are its formal features? That is a mathematical question, to which there is a standard, widely (though not universally) agreed upon answer. This answer is reviewed in the next section. Second, what sorts of things are probabilities---what, that is, is the subject matter of probability theory? This is a philosophical question, and while the mathematical theory of probability certainly bears on it, the answer must come from elsewhere. To see why, observe that there are many things in the world that have the mathematical structure of probabilities---the set of measurable regions on the surface of a table, for example---but that one would never mistake for being probabilities. So probability is distinguished by more than just its formal characteristics. The bulk of this essay will be taken up with the central question of what this “more” might be.
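The standard answer alluded to here is the Kolmogorov axiomatization. For reference (my paraphrase, not the essay's own formulation): a probability function $P$ on a $\sigma$-field of subsets of a set $\Omega$ satisfies

```latex
P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
P\Bigl(\,\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i)
\quad \text{for pairwise disjoint } A_1, A_2, \ldots
```

Anything satisfying these axioms has the mathematical structure of probability, which is exactly why, as the essay notes, formal structure alone cannot settle what probabilities are.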
This paper revisits the Pasadena game (Nover and Hájek 2004), a St Petersburg-like game whose expectation is undefined. We discuss several respects in which the Pasadena game is even more troublesome for decision theory than the St Petersburg game. Colyvan (2006) argues that the decision problem of whether or not to play the Pasadena game is ‘ill-posed’. He goes on to advocate a ‘pluralism’ regarding decision rules, which embraces dominance reasoning as well as maximizing expected utility. We rebut Colyvan’s argument, offering several considerations in favour of the Pasadena decision problem being well-posed. To be sure, current decision theory, which is underpinned by various preference axioms, leaves indeterminate how one should value the Pasadena game. But we suggest that determinacy might be achieved by adding further preference axioms. We conclude by opening the door to a far greater plurality of decision rules. We suggest how the goal of unifying these rules might guide future research.
Gonzales tells Mark Crimmins (1992) that Crimmins knows him under two guises, and that under his other guise Crimmins thinks him an idiot. Knowing his cleverness, but not knowing which guise he has in mind, Crimmins trusts Gonzales but does not know which of his beliefs to revise. He therefore asserts to Gonzales: (FBI) I falsely believe that you are an idiot.
Arguably, Hume's greatest single contribution to contemporary philosophy of science has been the problem of induction (1739). Before attempting its statement, we need to spend a few words identifying the subject matter of this corner of epistemology. At a first pass, induction concerns ampliative inferences drawn on the basis of evidence (presumably, evidence acquired more or less directly from experience)—that is, inferences whose conclusions are not (validly) entailed by the premises. Philosophers have historically drawn further distinctions, often appropriating the term “induction” to mark them; since we will not be concerned with the philosophical issues for which these distinctions are relevant, we will use the word “inductive” in a catch-all sense synonymous with “ampliative”. But we will follow the usual practice of choosing, as our paradigm example of inductive inferences, inferences about the future based on evidence drawn from the past and present. A further refinement is more important. Opinion typically comes in degrees, and this fact makes a great deal of difference to how we understand inductive inferences. For while it is often harmless to talk about the conclusions that can be rationally believed on the basis of some…
We offer a new motivation for imprecise probabilities. We argue that there are propositions to which precise probability cannot be assigned, but to which imprecise probability can be assigned. In such cases the alternative to imprecise probability is not precise probability, but no probability at all. And an imprecise probability is substantially better than no probability at all. Our argument is based on the mathematical phenomenon of non-measurable sets. Non-measurable propositions cannot receive precise probabilities, but there is a natural way for them to receive imprecise probabilities. The mathematics of non-measurable sets is arcane, but its epistemological import is far-reaching; even apparently mundane propositions are liable to be affected by non-measurability. The phenomenon of non-measurability dramatically reshapes the dialectic between critics and proponents of imprecise credence. Non-measurability offers natural rejoinders to prominent critics of imprecise credence. Non-measurability even reverses some of the critics’ arguments—by the very lights that have been used to argue against imprecise credences, imprecise credences are better than precise credences.
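How non-measurability blocks precise probability can be recalled via the classic Vitali construction (a textbook sketch, not the authors' own example): partition $[0,1)$ into equivalence classes under $x \sim y$ iff $x - y \in \mathbb{Q}$, and let $V$ select one member of each class. The countably many rational translates $V_q = \{\,v + q \bmod 1 : v \in V\,\}$, for $q \in \mathbb{Q} \cap [0,1)$, partition $[0,1)$, so a translation-invariant, countably additive probability would have to satisfy

```latex
1 = P\bigl([0,1)\bigr) = \sum_{q \in \mathbb{Q} \cap [0,1)} P(V_q)
  = \sum_{q \in \mathbb{Q} \cap [0,1)} P(V),
```

which no value of $P(V)$ can satisfy: the sum is $0$ if $P(V) = 0$ and infinite otherwise. Hence $V$ can receive no precise probability, which is the opening the authors exploit for imprecise probability.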
This chapter is a philosophical survey of some leading approaches in formal epistemology in the so-called ‘Bayesian’ tradition. According to them, a rational agent’s degrees of belief—credences—at a time are representable with probability functions. We also canvass various further putative ‘synchronic’ rationality norms on credences. We then consider ‘diachronic’ norms that are thought to constrain how credences should respond to evidence. We discuss some of the main lines of recent debate, and conclude with some prospects for future research.