Taking Joyce’s (1998; 2009) recent argument(s) for probabilism as our point of departure, we propose a new way of grounding formal, synchronic, epistemic coherence requirements for (opinionated) full belief. Our approach yields principled alternatives to deductive consistency, sheds new light on the preface and lottery paradoxes, and reveals novel conceptual connections between alethic and evidential epistemic norms.
To the extent that we have reasons to avoid these “bad B-properties”, these arguments provide reasons not to have an incoherent credence function b — and perhaps even reasons to have a coherent one. But note that these two traditional arguments for probabilism involve what might be called “pragmatic” reasons (not) to be (in)coherent. In the case of the Dutch Book argument, the “bad” property is pragmatically bad (to the extent that one values money). But it is not clear whether the DBA pinpoints any epistemic defect of incoherent agents. The same can be said for Representation Theorem arguments, since they involve the structure of an agent’s preferences.
Arguments for probabilism aim to undergird/motivate a synchronic probabilistic coherence norm for partial beliefs. Standard arguments for probabilism are all of the form: An agent S has a non-probabilistic partial belief function b iff (⇐⇒) S has some “bad” property B (in virtue of the fact that their p.b.f. b has a certain kind of formal property F). These arguments rest on Theorems (⇒) and Converse Theorems (⇐): b is non-Pr ⇐⇒ b has formal property F.
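In schematic form (our reconstruction of the argument template, with the Dutch Book case supplied as the familiar instance):

\[
\underbrace{b \text{ is non-probabilistic} \;\Rightarrow\; b \text{ has } F}_{\text{Theorem}}
\qquad\qquad
\underbrace{b \text{ has } F \;\Rightarrow\; b \text{ is non-probabilistic}}_{\text{Converse Theorem}}
\]

In the Dutch Book instance, F is susceptibility to a finite set of bets, each individually sanctioned as fair by b, that jointly guarantee a net loss; the associated “bad” property B is the pragmatic one of being vulnerable to that sure loss.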
Hempel first introduced the paradox of confirmation in (Hempel 1937). Since then, a very extensive literature on the paradox has evolved (Vranas 2004). Much of this literature can be seen as responding to Hempel’s subsequent discussions and analyses of the paradox in (Hempel 1945). Recently, it was noted that Hempel’s intuitive (and plausible) resolution of the paradox was inconsistent with his official theory of confirmation (Fitelson & Hawthorne 2006). In this article, we will try to explain how this inconsistency affects the historical dialectic about the paradox and how it illuminates the nature of confirmation. In the end, we will argue that Hempel’s intuitions about the paradox of confirmation were (basically) correct, and that it is his theory that should be rejected, in favor of a (broadly) Bayesian account of confirmation.
Naive deductivist accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H·X, for any X—even if X is completely irrelevant to E and H. Bayesian accounts of confirmation may appear to have the same problem. In a recent article in this journal Fitelson (2002) argued that existing Bayesian attempts to resolve this problem are inadequate in several important respects. Fitelson then proposes a new-and-improved Bayesian account that overcomes the problem of irrelevant conjunction, and does so in a more general setting than past attempts. We will show how to simplify and improve upon Fitelson's solution.
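The monotonicity driving the problem is worth displaying (a standard observation, not specific to any one paper):

\[
H \vDash E \;\Longrightarrow\; (H \wedge X) \vDash E \quad \text{for any } X,
\]

so on the naive account, on which E confirms H just in case H entails E, any E that confirms H automatically confirms H·X as well.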
In response to a paper by Harris & Fitelson, Slaney states several open questions concerning possible strategies for proving distributivity in a wide class of positive sentential logics. In this note, I provide answers to all of Slaney's open questions. The result is a better understanding of the class of positive logics in which distributivity holds.
There is general agreement in mathematics about what continuity is. In this paper we examine how well the mathematical definition lines up with common sense notions. We use a recent paper by Hud Hudson as a point of departure. Hudson argues that two objects moving continuously can coincide for all but the last moment of their histories and yet be separated in space at the end of this last moment. It turns out that Hudson’s construction does not deliver mathematically continuous motion, but the natural question then is whether there is any merit in the alternative definition of continuity that he implicitly invokes.
Many philosophers have become worried about the use of standard real numbers for the probability function that represents an agent's credences. They point out that real numbers can't capture the distinction between certain extremely unlikely events and genuinely impossible ones—they are both represented by credence 0, which violates a principle known as “regularity.” Following Skyrms 1980 and Lewis 1980, they recommend that we should instead use a much richer set of numbers, called the “hyperreals.” This essay argues that this popular view is the result of two mistakes. The first mistake, which this essay calls the “numerical fallacy,” is to assume that a distinction that isn't represented by different numbers isn't represented at all in a mathematical representation. In this case, the essay claims that although the real numbers do not make all relevant distinctions, the full mathematical structure of a probability function does. The second mistake is that the hyperreals make too many distinctions. They have a much more complex structure than credences in ordinary propositions can have, so they make distinctions that don't exist among credences. While they might be useful for generating certain mathematical models, they will not appear in a faithful mathematical representation of credences of ordinary propositions.
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.
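Three of the most widely discussed relevance measures illustrate the non-equivalence (definitions as standardly given in this literature; the logarithms are inessential scalings):

\[
d(H,E) = \Pr(H \mid E) - \Pr(H), \qquad
r(H,E) = \log \frac{\Pr(H \mid E)}{\Pr(H)}, \qquad
l(H,E) = \log \frac{\Pr(E \mid H)}{\Pr(E \mid \neg H)}.
\]

All three are positive exactly when E raises the probability of H, so they agree on qualitative confirmation; but they are not ordinally equivalent, which is what makes quantitative arguments measure-sensitive.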
Many philosophers have argued that "degree of belief" or "credence" is a more fundamental state grounding belief. Many other philosophers have been skeptical about the notion of "degree of belief", and take belief to be the only meaningful notion in the vicinity. This paper shows that one can take belief to be fundamental, and ground a notion of "degree of belief" in the patterns of belief, assuming that an agent has a collection of beliefs that isn't dominated by some other collection in terms of the overall balance of truth and falsity that it could contain.
Let E be a set of n propositions E1, ..., En. We seek a probabilistic measure C(E) of the ‘degree of coherence’ of E. Intuitively, we want C to be a quantitative, probabilistic generalization of the (deductive) logical coherence of E. So, in particular, we require C to satisfy the following…
Expected accuracy arguments have been used by several authors (Leitgeb and Pettigrew, and Greaves and Wallace) to support the diachronic principle of conditionalization, in updates where there are only finitely many possible propositions to learn. I show that these arguments can be extended to infinite cases, giving an argument not just for conditionalization but also for principles known as ‘conglomerability’ and ‘reflection’. This shows that the expected accuracy approach is stronger than has been realized. I also argue that we should be careful to distinguish diachronic update principles from related synchronic principles for conditional probability.
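For reference, the three principles at issue can be stated as follows (standard textbook formulations, offered as a hedged gloss rather than the paper's official renderings), where \(\Pr\) is the prior, E the evidence, \(\{B_i\}\) any partition, and \(\Pr_{\mathrm{later}}\) the agent's future credence function:

\[
\Pr\nolimits_{\mathrm{new}}(A) = \Pr(A \mid E), \qquad
\inf_i \Pr(A \mid B_i) \;\le\; \Pr(A) \;\le\; \sup_i \Pr(A \mid B_i), \qquad
\Pr\big(A \mid \Pr\nolimits_{\mathrm{later}}(A) = x\big) = x.
\]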
According to Bayesian confirmation theory, evidence E (incrementally) confirms (or supports) a hypothesis H (roughly) just in case E and H are positively probabilistically correlated (under an appropriate probability function Pr). There are many logically equivalent ways of saying that E and H are correlated under Pr. Surprisingly, this leads to a plethora of non-equivalent quantitative measures of the degree to which E confirms H (under Pr). In fact, many non-equivalent Bayesian measures of the degree to which E confirms (or supports) H have been proposed and defended in the literature on inductive logic. I provide a thorough historical survey of the various proposals, and a detailed discussion of the philosophical ramifications of the differences between them. I argue that the set of candidate measures can be narrowed drastically by just a few intuitive and simple desiderata. In the end, I provide some novel and compelling reasons to think that the correct measure of degree of evidential support (within a Bayesian framework) is the (log) likelihood ratio. The central analyses of this research have had some useful and interesting byproducts, including: (i) a new Bayesian account of (confirmationally) independent evidence, which has applications to several important problems in confirmation theory, including the problem of the (confirmational) value of evidential diversity, and (ii) novel resolutions of several problems in Bayesian confirmation theory, motivated by the use of the (log) likelihood ratio measure, including a reply to the Popper-Miller critique of probabilistic induction, and a new analysis and resolution of the problem of irrelevant conjunction (a.k.a., the tacking problem).
Several forms of symmetry in degrees of evidential support are considered. Some of these symmetries are shown not to hold in general. This has implications for the adequacy of many measures of degree of evidential support that have been proposed and defended in the philosophical literature.
Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a "middle way" between Likelihoodism and Bayesianism. In the process, I will propose some new work for an old probability puzzle: the "Monty Hall" problem.
In this paper, we investigate various possible (Bayesian) precisifications of the (somewhat vague) statements of “the equal weight view” (EWV) that have appeared in the recent literature on disagreement. We will show that the renditions of (EWV) that immediately suggest themselves are untenable from a Bayesian point of view. In the end, we will propose some tenable (but not necessarily desirable) interpretations of (EWV). Our aim here will not be to defend any particular Bayesian precisification of (EWV), but rather to raise awareness about some of the difficulties inherent in formulating such precisifications.
The conjunction fallacy has been a key topic in debates on the rationality of human reasoning and its limitations. Despite extensive inquiry, however, the attempt to provide a satisfactory account of the phenomenon has proved challenging. Here we elaborate the suggestion (first discussed by Sides, Osherson, Bonini, & Viale, 2002) that in standard conjunction problems the fallacious probability judgements observed experimentally are typically guided by sound assessments of confirmation relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy, which is proven robust (i.e., not depending on various alternative ways of measuring degrees of confirmation). The proposed analysis is shown distinct from contentions that the conjunction effect is in fact not a fallacy, and is compared with major competing explanations of the phenomenon, including earlier references to a confirmation-theoretic account.
First, a brief historical trace of the developments in confirmation theory leading up to Goodman's infamous "grue" paradox is presented. Then, Goodman's argument is analyzed from both Hempelian and Bayesian perspectives. A guiding analogy is drawn between certain arguments against classical deductive logic, and Goodman's "grue" argument against classical inductive logic. The upshot of this analogy is that the "New Riddle" is not as vexing as many commentators have claimed. Specifically, the analogy reveals an intimate connection between Goodman's problem, and the "problem of old evidence". Several other novel aspects of Goodman's argument are also discussed.
To answer the question of whether mathematics needs new axioms, it seems necessary to say what role axioms actually play in mathematics. A first guess is that they are inherently obvious statements that are used to guarantee the truth of theorems proved from them. However, this may neither be possible nor necessary, and it doesn’t seem to fit the historical facts. Instead, I argue that the role of axioms is to systematize uncontroversial facts that mathematicians can accept from a wide variety of philosophical positions. Once the axioms are generally accepted, mathematicians can expend their energies on proving theorems instead of arguing philosophy. Given this account of the role of axioms, I give four criteria that axioms must meet in order to be accepted. Penelope Maddy has proposed a similar view in Naturalism in Mathematics, but she suggests that the philosophical questions bracketed by adopting the axioms can in fact be ignored forever. I contend that these philosophical arguments are in fact important, and should ideally be resolved at some point, but I concede that their resolution is unlikely to affect the ordinary practice of mathematics. However, they may have effects in the margins of mathematics, including with regard to the controversial “large cardinal axioms” Maddy would like to support.
Bayesianism is a collection of positions in several related fields, centered on the interpretation of probability as something like degree of belief, as contrasted with relative frequency or objective chance. However, Bayesianism is far from a unified movement. Bayesians are divided about the nature of the probability functions they discuss; about the normative force of this probability function for ordinary and scientific reasoning and decision making; and about what relation (if any) holds between Bayesian and non-Bayesian concepts.
In applying Bayes’s theorem to the history of science, Bayesians sometimes assume – often without argument – that they can safely ignore very implausible theories. This assumption is false: it can seriously distort both the history of science and the mathematics and applicability of Bayes’s theorem. There are intuitively very plausible counter-examples. In fact, one can ignore very implausible or unknown theories only if at least one of two conditions is satisfied: one is certain that there are no unknown theories which explain the phenomenon in question, or the likelihood of at least one of the known theories used in the calculation of the posterior is reasonably large. Often in the history of science, a very surprising phenomenon is observed, and neither of these criteria is satisfied.
In this note, I consider various precisifications of the slogan ‘evidence of evidence is evidence’. I provide counter-examples to each of these precisifications (assuming an epistemic probabilistic relevance notion of ‘evidential support’).
Note: This is not an ad hoc change at all. It’s simply the natural thing to say here – if one thinks of F as a generalization of classical logical entailment. The extra complexity I had in my original (incorrect) definition of F was there because I was foolishly trying to encode some non-classical, or “relevant”, logical structure in F. I now think this is a mistake, and that I should go with the above, classical account of F. Arguments about relevance logic need to be handled in a different way (and a different context!). And, besides, as Luca Moretti has shown (see below), the original definition of F cannot be the right basis for C! OK, now on to C.
We give an analysis of the Monty Hall problem purely in terms of confirmation, without making any lottery assumptions about priors. Along the way, we show the Monty Hall problem is structurally identical to the Doomsday Argument.
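To make the confirmation-theoretic reading concrete, here is a minimal sketch (our own illustration, not the paper's analysis) that tabulates the standard Monty Hall probability model and checks which hypotheses the host's door-opening confirms:

```python
from fractions import Fraction

# Hypothetical illustration: the contestant picks door 1; Monty opens a door
# that is neither the pick nor the car (choosing at random between the other
# two doors when the car is behind door 1). Tabulate Pr(car, opened).
joint = {}
for car in (1, 2, 3):
    prior = Fraction(1, 3)
    options = [d for d in (2, 3) if d != car]  # doors Monty may open
    for opened in options:
        joint[(car, opened)] = joint.get((car, opened), Fraction(0)) + prior / len(options)

# Evidence E: Monty opens door 3.
pr_E = sum(p for (car, opened), p in joint.items() if opened == 3)
for h in (1, 2):  # hypotheses: car behind door 1 / door 2
    posterior = joint.get((h, 3), Fraction(0)) / pr_E
    print(f"Pr(car={h}) = 1/3, Pr(car={h} | opens 3) = {posterior}")
# Output: the pick (door 1) stays at 1/3, so E is confirmationally neutral
# toward it, while door 2 rises to 2/3, so E confirms switching.
```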
In the first paper, I discussed the basic claims of Bayesianism (that degrees of belief are important, that they obey the axioms of probability theory, and that they are rationally updated by either standard or Jeffrey conditionalization) and the arguments that are often used to support them. In this paper, I will discuss some applications these ideas have had in confirmation theory, epistemology, and statistics, and criticisms of these applications.
We introduce a family of rules for adjusting one's credences in response to learning the credences of others. These rules have a number of desirable features. 1. They yield the posterior credences that would result from updating by standard Bayesian conditionalization on one's peers' reported credences if one's likelihood function takes a particular simple form. 2. In the simplest form, they are symmetric among the agents in the group. 3. They map neatly onto the familiar Condorcet voting results. 4. They preserve shared agreement about independence in a wide range of cases. 5. They commute with conditionalization and with multiple peer updates. Importantly, these rules have a surprising property that we call synergy - peer testimony of credences can provide mutually supporting evidence raising an individual's credence higher than any peer's initial prior report. At first, this may seem to be a strike against them. We argue, however, that synergy is actually a desirable feature and the failure of other updating rules to yield synergy is a strike against them.
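As a hedged illustration of how synergy can arise, the following sketch implements one simple multiplicative ("odds") pooling rule of the sort such a family can contain; the function name `pool` and this specific rule are our assumptions for illustration, not necessarily the paper's official formulation:

```python
def pool(credences):
    """Multiplicative ('odds') pooling: multiply each peer's odds together,
    as if each report were independent evidence, then convert back to a
    probability. An illustrative sketch, not the paper's full framework."""
    odds = 1.0
    for c in credences:
        odds *= c / (1.0 - c)
    return odds / (1.0 + odds)

# Synergy: two peers who each report credence 0.7 end up, after pooling,
# more confident than either was individually.
print(pool([0.7, 0.7]))  # ~0.845 > 0.7
```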
Fine has shown that assigning any value to the Pasadena game is consistent with a certain standard set of axioms for decision theory. However, I suggest that it might be reasonable to believe that the value of an individual game is constrained by the long-run payout of repeated plays of the game. Although there is no value to which repeated plays of the Pasadena game converge in the standard strong sense, I show that there is a weaker sort of convergence it exhibits, and use this to define a notion of ‘weak expectation’ that can give values to the Pasadena game and many others, though not to all games that fail to have a strong expectation in the standard sense.
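For a feel for the weaker convergence, the following sketch simulates the standard Pasadena payoff scheme: a fair coin is flipped until the first head, and if that takes n flips the payoff is (−1)^(n+1)·2^n/n. Under the natural ordering the conditionally convergent expectation sums to ln 2, and sample averages cluster near that value in probability. This is our illustration, not the paper's argument:

```python
import math
import random

def pasadena_play(rng):
    """One play: flip a fair coin until the first head; with n flips
    (probability 2**-n), the payoff is (-1)**(n+1) * 2**n / n."""
    n = 1
    while rng.random() < 0.5:
        n += 1
    return (-1) ** (n + 1) * 2 ** n / n

rng = random.Random(0)
N = 10 ** 6
avg = sum(pasadena_play(rng) for _ in range(N)) / N
print(avg, "vs ln 2 =", math.log(2))
# No strong law applies here, so the sample average is not guaranteed to
# settle down; but averages over large blocks cluster near ln 2 in
# probability, the game's 'weak expectation'.
```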
The Paradox of the Ravens (a.k.a., The Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper will provide a brief survey of the early history of the paradox. This will include the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper will describe attempts to resolve the paradox within a Bayesian framework, and show how to improve upon them. This part begins with a discussion of how probabilistic methods can help to clarify the statement of the paradox itself. And it describes some of the early responses to probabilistic explications. We then inspect the assumptions employed by traditional (canonical) Bayesian approaches to the paradox. These assumptions may appear to be overly strong. So, drawing on weaker assumptions, we formulate a new-and-improved Bayesian confirmation-theoretic resolution of the Paradox of the Ravens.
Carnap's inductive logic (or confirmation) project is revisited from an "increase in firmness" (or probabilistic relevance) point of view. It is argued that Carnap's main desiderata can be satisfied in this setting, without the need for a theory of "logical probability." The emphasis here will be on explaining how Carnap's epistemological desiderata for inductive logic will need to be modified in this new setting. The key move is to abandon Carnap's goal of bridging confirmation and credence, in favor of bridging confirmation and evidential support.
In a series of papers, Don Fallis points out that although mathematicians are generally unwilling to accept merely probabilistic proofs, they do accept proofs that are incomplete, long and complicated, or partly carried out by computers. He argues that there are no epistemic grounds on which probabilistic proofs can be rejected while these other proofs are accepted. I defend the practice by presenting a property I call ‘transferability’, which probabilistic proofs lack and acceptable proofs have. I also consider what this says about the similarities between mathematics and, on the one hand, the natural sciences, and, on the other, philosophy.
It is sometimes alleged that arguments that probability functions should be countably additive show too much, and that they motivate uncountable additivity as well. I show this is false by giving two naturally motivated arguments for countable additivity that do not motivate uncountable additivity.
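A standard example illustrates why the uncountable analogue must fail (our illustration, not the paper's argument): the uniform measure on [0, 1] is countably additive, yet

\[
\Pr(\{x\}) = 0 \ \text{for each } x \in [0,1]
\quad\text{while}\quad
\Pr\Big(\textstyle\bigcup_{x \in [0,1]} \{x\}\Big) = \Pr([0,1]) = 1,
\]

so the probabilities of the singletons cannot be summed to recover the probability of their uncountable union.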
In this paper, we compare and contrast two methods for the revision of qualitative beliefs. The first method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent’s posterior credences after conditionalization. The second method is the orthodox AGM approach to belief revision. Our primary aim is to determine when the two methods may disagree in their recommendations and when they must agree. We establish a number of novel results about their relative behavior. Our most notable finding is that the inverse of the golden ratio emerges as a non-arbitrary bound on the Bayesian method’s free parameter—the Lockean threshold. This “golden threshold” surfaces in two of our results and turns out to be crucial for understanding the relation between the two methods.
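To display the free parameter (a hedged gloss of the Lockean setup, not the paper's official statement): the agent believes p just in case her credence in p clears a threshold t, and the "golden threshold" result bounds t by the inverse of the golden ratio:

\[
\mathbf{B}(p) \iff \Pr(p) \ge t, \qquad t \ge \frac{1}{\varphi} = \frac{\sqrt{5}-1}{2} \approx 0.618,
\]

where \(1/\varphi\) is the unique positive root of \(t^2 + t = 1\).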
In ‘Corroborating Testimony, Probability and Surprise’, Erik J. Olsson ascribes to L. Jonathan Cohen the claims that if two witnesses provide us with the same information, then the less probable the information is, the more confident we may be that the information is true (C), and the more strongly the information is corroborated (C*). We question whether Cohen intends anything like claims (C) and (C*). Furthermore, he discusses the concurrence of witness reports within a context of independent witnesses, whereas the witnesses in Olsson's model are not independent in the standard sense. We argue that there is much more than, in Olsson's words, ‘a grain of truth’ to claim (C), both on his own characterization as well as on Cohen's characterization of the witnesses. We present an analysis for independent witnesses in the contexts of decision-making under risk and decision-making under uncertainty and generalize the model for n witnesses. As to claim (C*), Olsson's argument is contingent on the choice of a particular measure of corroboration and is not robust in the face of alternative measures. Finally, we delimit the set of cases to which Olsson's model is applicable. Contents: 1. Claim (C) examined for Olsson's characterization of the relationship between the witnesses; 2. Claim (C) examined for two or more independent witnesses; 3. Robustness and multiple measures of corroboration; 4. Discussion.
Naive deductive accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H & X, for any X—even if X is utterly irrelevant to H (and E). Bayesian accounts of confirmation also have this property (in the case of deductive evidence). Several Bayesians have attempted to soften the impact of this fact by arguing that—according to Bayesian accounts of confirmation—E will confirm the conjunction H & X less strongly than E confirms H (again, in the case of deductive evidence). I argue that existing Bayesian “resolutions” of this problem are inadequate in several important respects. In the end, I suggest a new‐and‐improved Bayesian account (and understanding) of the problem of irrelevant conjunction.
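A toy numerical model (our own construction, with made-up probabilities) illustrates the "confirms less strongly" claim for the likelihood-ratio measure in the deductive case: E confirms H & X, but less strongly than it confirms H:

```python
from fractions import Fraction as F

# A hypothetical toy model: worlds are (H, X, E) triples with H entailing E
# (every H-world is an E-world), and X probabilistically irrelevant.
pr = {
    (1, 1, 1): F(3, 20), (1, 0, 1): F(3, 20),   # H-worlds (all have E)
    (0, 1, 1): F(7, 40), (0, 1, 0): F(7, 40),
    (0, 0, 1): F(7, 40), (0, 0, 0): F(7, 40),
}

def prob(pred):
    return sum(p for w, p in pr.items() if pred(w))

def lr(hyp):
    """Likelihood-ratio confirmation of `hyp` by E: Pr(E|hyp) / Pr(E|~hyp)."""
    e = lambda w: w[2] == 1
    num = prob(lambda w: e(w) and hyp(w)) / prob(hyp)
    den = prob(lambda w: e(w) and not hyp(w)) / prob(lambda w: not hyp(w))
    return num / den

print(lr(lambda w: w[0] == 1))                # H alone:  2
print(lr(lambda w: w[0] == 1 and w[1] == 1))  # H & X:    17/10, still > 1 but smaller
```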
In Thinking about Acting, John Pollock offers some criticisms of Bayesian epistemology, and he defends an alternative understanding of the role of probability in epistemology. Here, I defend the Bayesian against some of Pollock's criticisms, and I discuss a potential problem for Pollock's alternative account.
I defend a causal reductionist account of the nature of rates of change like velocity and acceleration. This account identifies velocity with the past derivative of position and acceleration with the future derivative of velocity. Unlike most reductionist accounts, it can preserve the role of velocity as a cause of future positions and acceleration as the effect of current forces. I show that this is possible only if all the fundamental laws are expressed by differential equations of the same order. Consideration of the continuity of time explains why the differential equations are all second order. This explanation is not available on non-causal or non-reductionist accounts of rates of change. Finally, I argue that alleged counterexamples to the reductionist account involving physically impossible worlds are irrelevant to an analysis of the properties that play a causal role in the actual world. Contents: 1. Background; 2. Grounding; 3. Causation; 4. The Proposal; 5. Why No Third Derivatives?; 6. Why Any Derivatives?; 7. Counterexamples?
In Chapter 12 of Warrant and Proper Function, Alvin Plantinga constructs two arguments against evolutionary naturalism, which he construes as a conjunction E&N. The hypothesis E says that “human cognitive faculties arose by way of the mechanisms to which contemporary evolutionary thought directs our attention” (p. 220). With respect to proposition N, Plantinga (p. 270) says “it isn’t easy to say precisely what naturalism is,” but then adds that “crucial to metaphysical naturalism, of course, is the view that there is no such person as the God of traditional theism.” Plantinga tries to cast doubt on the conjunction E&N in two ways. His “preliminary argument” aims to show that the conjunction is probably false, given the fact (R) that our psychological mechanisms for forming beliefs about the world are generally reliable. His “main argument” aims to show that the conjunction E&N is self-defeating — if you believe E&N, then you should stop believing that conjunction. Plantinga further develops the main argument in his unpublished paper “Naturalism Defeated” (Plantinga 1994). We will try to show that both arguments contain serious errors.
Strevens has proposed an interesting and novel Bayesian analysis of the Quine-Duhem (Q–D) problem (i.e., the problem of auxiliary hypotheses). Strevens's analysis involves the use of a simplifying idealization concerning the original Q–D problem. We will show that this idealization is far stronger than it might appear. Indeed, we argue that Strevens's idealization oversimplifies the Q–D problem, and we propose a diagnosis of the source(s) of the oversimplification. Contents: Some background on Quine–Duhem; Strevens's simplifying idealization; Indications that (I) oversimplifies Q–D; Strevens's argument for the legitimacy of (I).
A Bayesian account of independent evidential support is outlined. This account is partly inspired by the work of C. S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation satisfy some basic desiderata suggested by Peirce for adequate accounts of independent evidence. I argue that, by considering further natural constraints on a probabilistic account of independent evidence, all but a very small class of Bayesian measures of confirmation can be ruled out. In closing, another application of my account to the problem of evidential diversity is also discussed.
There are two central questions concerning probability. First, what are its formal features? That is a mathematical question, to which there is a standard, widely (though not universally) agreed upon answer. This answer is reviewed in the next section. Second, what sorts of things are probabilities---what, that is, is the subject matter of probability theory? This is a philosophical question, and while the mathematical theory of probability certainly bears on it, the answer must come from elsewhere. To see why, observe that there are many things in the world that have the mathematical structure of probabilities---the set of measurable regions on the surface of a table, for example---but that one would never mistake for being probabilities. So probability is distinguished by more than just its formal characteristics. The bulk of this essay will be taken up with the central question of what this “more” might be.
Charles Stein discovered a paradox in 1955 that many statisticians think is of fundamental importance. Here we explore its philosophical implications. We outline the nature of Stein’s result and of subsequent work on shrinkage estimators; then we describe how these results are related to Bayesianism and to model selection criteria like AIC. We also discuss their bearing on scientific realism and instrumentalism. We argue that results concerning shrinkage estimators underwrite a surprising form of holistic pragmatism.
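The flavor of Stein's result is conveyed by the best-known shrinkage estimator (the James–Stein estimator for the normal-means problem with known variance; stated here as standard background, not as the paper's own contribution): for \(X \sim N_p(\theta, \sigma^2 I)\) with \(p \ge 3\), the estimator

\[
\hat{\theta}_{\mathrm{JS}}(X) \;=\; \left(1 - \frac{(p-2)\,\sigma^{2}}{\lVert X \rVert^{2}}\right) X
\]

has uniformly smaller risk (expected squared error) than the maximum-likelihood estimator \(\hat{\theta}(X) = X\), even when the p coordinates concern entirely unrelated quantities.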
Naive versions of decision theory take probabilities and utilities as primitive and use expected value to give norms on rational decision. However, standard decision theory takes rational preference as primitive and uses it to construct probability and utility. This paper shows how to justify a version of the naive theory, by taking dominance as the most basic normatively required preference relation, and then extending it by various conditions under which agents should be indifferent between acts. The resulting theory can make all the decisions of classical expected utility theory, plus more in cases where expected utilities are infinite or undefined. Although the theory requires similarly strong assumptions to classical expected utility theory, versions of the theory can be developed with slightly weaker assumptions, without having to prove a new representation theorem for the weaker theory. This alternate foundation is particularly useful if probability is prior to preference, as suggested by the recent program to base probabilism on accuracy and alethic considerations rather than pragmatic ones.
The (recent, Bayesian) cognitive science literature on the Wason Task (WT) has been modeled largely after the (not-so-recent, Bayesian) philosophy of science literature on the Paradox of Confirmation (POC). In this paper, we apply some insights from more recent Bayesian approaches to the (POC) to analogous models of (WT). This involves, first, retracing the history of the (POC), and, then, re-examining the (WT) with these historico-philosophical insights in mind.