Arguments for probabilism aim to undergird/motivate a synchronic probabilistic coherence norm for partial beliefs. Standard arguments for probabilism are all of the form: An agent S has a non-probabilistic partial belief function b iff (⇐⇒) S has some “bad” property B (in virtue of the fact that their p.b.f. b has a certain kind of formal property F). These arguments rest on Theorems (⇒) and Converse Theorems (⇐): b is non-Pr ⇐⇒ b has formal property F. To the extent that we have reasons to avoid these “bad B-properties”, these arguments provide reasons not to have an incoherent credence function b — and perhaps even reasons to have a coherent one. But note that these two traditional arguments for probabilism involve what might be called “pragmatic” reasons (not) to be (in)coherent. In the case of the Dutch Book argument, the “bad” property is pragmatically bad (to the extent that one values money), but it is not clear whether the DBA pinpoints any epistemic defect of incoherent agents. The same can be said for Representation Theorem arguments, since they involve the structure of an agent’s preferences.
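The Dutch Book direction of this schema can be previewed with a toy calculation. The sketch below uses hypothetical credence values: an agent whose credences in A and not-A sum to more than 1 can be sold a pair of bets, each individually "fair" by the agent's own lights, that together guarantee a net loss in every possible world.

```python
# Hypothetical incoherent credences: b(A) + b(not-A) = 1.2 > 1.
b_A, b_notA = 0.6, 0.6

# An agent with credence b(X) regards $b(X) as a fair price for a ticket
# paying $1 if X is true. The bookie sells the agent tickets on both A
# and not-A at those prices.
nets = []
for A in (True, False):
    payoff = (1 if A else 0) + (1 if not A else 0)  # exactly one ticket pays
    nets.append(payoff - (b_A + b_notA))            # winnings minus total price

print(nets)  # the agent is down 0.2 in every possible world
```

The same construction works whenever additivity fails in either direction (for sub-additive credences, the bookie buys rather than sells), which is the content of the Theorem direction for this formal property.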
Hempel first introduced the paradox of confirmation in (Hempel 1937). Since then, a very extensive literature on the paradox has evolved (Vranas 2004). Much of this literature can be seen as responding to Hempel’s subsequent discussions and analyses of the paradox in (Hempel 1945). Recently, it was noted that Hempel’s intuitive (and plausible) resolution of the paradox was inconsistent with his official theory of confirmation (Fitelson & Hawthorne 2006). In this article, we will try to explain how this inconsistency affects the historical dialectic about the paradox and how it illuminates the nature of confirmation. In the end, we will argue that Hempel’s intuitions about the paradox of confirmation were (basically) correct, and that it is his theory that should be rejected, in favor of a (broadly) Bayesian account of confirmation.
Naive deductivist accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H·X, for any X—even if X is completely irrelevant to E and H. Bayesian accounts of confirmation may appear to have the same problem. In a recent article in this journal, Fitelson (2002) argued that existing Bayesian attempts to resolve this problem are inadequate in several important respects. Fitelson then proposed a new-and-improved Bayesian account that overcomes the problem of irrelevant conjunction, and does so in a more general setting than past attempts. We will show how to simplify and improve upon Fitelson's solution.
By and large, we think Strevens's reply is a useful response to our original critique (Fitelson and Waterman) of his article on the Quine-Duhem (QD) problem. But we remain unsatisfied with several aspects of his reply (and of his original article). Ultimately, we do not think he properly addresses our most important worries. In this brief rejoinder, we explain our remaining worries, and we issue a revised challenge for Strevens's approach to QD.
In response to a paper by Harris & Fitelson, Slaney states several open questions concerning possible strategies for proving distributivity in a wide class of positive sentential logics. In this note, I provide answers to all of Slaney's open questions. The result is a better understanding of the class of positive logics in which distributivity holds.
Many philosophers have become worried about the use of standard real numbers for the probability function that represents an agent's credences. They point out that real numbers can't capture the distinction between certain extremely unlikely events and genuinely impossible ones—they are both represented by credence 0, which violates a principle known as “regularity.” Following Skyrms (1980) and Lewis (1980), they recommend that we should instead use a much richer set of numbers, called the “hyperreals.” This essay argues that this popular view is the result of two mistakes. The first mistake, which this essay calls the “numerical fallacy,” is to assume that a distinction that isn't represented by different numbers isn't represented at all in a mathematical representation. In this case, the essay claims that although the real numbers do not make all relevant distinctions, the full mathematical structure of a probability function does. The second mistake is to assume that the hyperreals are an adequate remedy: they make too many distinctions. They have a much more complex structure than credences in ordinary propositions can have, so they make distinctions that don't exist among credences. While they might be useful for generating certain mathematical models, they will not appear in a faithful mathematical representation of credences of ordinary propositions.
Let E be a set of n propositions E1, ..., En. We seek a probabilistic measure C(E) of the ‘degree of coherence’ of E. Intuitively, we want C to be a quantitative, probabilistic generalization of the (deductive) logical coherence of E. So, in particular, we require C to satisfy the following…
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.
According to Bayesian confirmation theory, evidence E (incrementally) confirms (or supports) a hypothesis H (roughly) just in case E and H are positively probabilistically correlated (under an appropriate probability function Pr). There are many logically equivalent ways of saying that E and H are correlated under Pr. Surprisingly, this leads to a plethora of non-equivalent quantitative measures of the degree to which E confirms H (under Pr). In fact, many non-equivalent Bayesian measures of the degree to which E confirms (or supports) H have been proposed and defended in the literature on inductive logic. I provide a thorough historical survey of the various proposals, and a detailed discussion of the philosophical ramifications of the differences between them. I argue that the set of candidate measures can be narrowed drastically by just a few intuitive and simple desiderata. In the end, I provide some novel and compelling reasons to think that the correct measure of degree of evidential support (within a Bayesian framework) is the (log) likelihood ratio. The central analyses of this research have had some useful and interesting byproducts, including: (i) a new Bayesian account of (confirmationally) independent evidence, which has applications to several important problems in confirmation theory, including the problem of the (confirmational) value of evidential diversity, and (ii) novel resolutions of several problems in Bayesian confirmation theory, motivated by the use of the (log) likelihood ratio measure, including a reply to the Popper-Miller critique of probabilistic induction, and a new analysis and resolution of the problem of irrelevant conjunction (a.k.a., the tacking problem).
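The non-equivalence at issue here is ordinal, not merely numerical. The sketch below (all probabilities hypothetical) compares the difference measure d(H, E) = Pr(H | E) − Pr(H) with the log-likelihood-ratio measure l(H, E) = log[Pr(E | H) / Pr(E | ¬H)], and exhibits two evidential situations that the two measures rank in opposite orders.

```python
import math

def posterior(pH, likH, likNotH):
    """Pr(H | E), computed via Bayes' theorem from prior and likelihoods."""
    return pH * likH / (pH * likH + (1 - pH) * likNotH)

def d(pH, likH, likNotH):
    # difference measure: Pr(H | E) - Pr(H)
    return posterior(pH, likH, likNotH) - pH

def l(pH, likH, likNotH):
    # log-likelihood-ratio measure: log[Pr(E | H) / Pr(E | not-H)]
    return math.log(likH / likNotH)

# Two hypothetical evidential situations (pH, Pr(E|H), Pr(E|not-H)):
A = (0.5, 0.9, 0.1)     # moderate prior, fairly diagnostic evidence
B = (0.001, 1.0, 0.01)  # tiny prior, extremely diagnostic evidence

print(d(*A), d(*B))  # d ranks A above B
print(l(*A), l(*B))  # l ranks B above A
```

Any argument that turns on comparing degrees of confirmation across cases can therefore come out differently depending on which measure is tacitly presupposed.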
Several forms of symmetry in degrees of evidential support are considered. Some of these symmetries are shown not to hold in general. This has implications for the adequacy of many measures of degree of evidential support that have been proposed and defended in the philosophical literature.
In this paper, we investigate various possible (Bayesian) precisifications of the (somewhat vague) statements of “the equal weight view” (EWV) that have appeared in the recent literature on disagreement. We will show that the renditions of (EWV) that immediately suggest themselves are untenable from a Bayesian point of view. In the end, we will propose some tenable (but not necessarily desirable) interpretations of (EWV). Our aim here will not be to defend any particular Bayesian precisification of (EWV), but rather to raise awareness about some of the difficulties inherent in formulating such precisifications.
To answer the question of whether mathematics needs new axioms, it seems necessary to say what role axioms actually play in mathematics. A first guess is that they are inherently obvious statements that are used to guarantee the truth of theorems proved from them. However, this may neither be possible nor necessary, and it doesn’t seem to fit the historical facts. Instead, I argue that the role of axioms is to systematize uncontroversial facts that mathematicians can accept from a wide variety of philosophical positions. Once the axioms are generally accepted, mathematicians can expend their energies on proving theorems instead of arguing philosophy. Given this account of the role of axioms, I give four criteria that axioms must meet in order to be accepted. Penelope Maddy has proposed a similar view in Naturalism in Mathematics, but she suggests that the philosophical questions bracketed by adopting the axioms can in fact be ignored forever. I contend that these philosophical arguments are in fact important, and should ideally be resolved at some point, but I concede that their resolution is unlikely to affect the ordinary practice of mathematics. However, they may have effects in the margins of mathematics, including with regard to the controversial “large cardinal axioms” Maddy would like to support.
The conjunction fallacy has been a key topic in debates on the rationality of human reasoning and its limitations. Despite extensive inquiry, however, the attempt to provide a satisfactory account of the phenomenon has proved challenging. Here we elaborate the suggestion (first discussed by Sides, Osherson, Bonini, & Viale, 2002) that in standard conjunction problems the fallacious probability judgements observed experimentally are typically guided by sound assessments of _confirmation_ relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy, which is proven _robust_ (i.e., not depending on various alternative ways of measuring degrees of confirmation). The proposed analysis is shown distinct from contentions that the conjunction effect is in fact not a fallacy, and is compared with major competing explanations of the phenomenon, including earlier references to a confirmation-theoretic account.
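The key structural point behind this account can be illustrated with a small worked model. In the sketch below, a hypothetical joint distribution (loosely in the style of the Linda case, with all numbers invented for illustration) makes the evidence E confirm a conjunction more strongly than one of its conjuncts on the likelihood-ratio measure, even though the conjunction is of course less probable than the conjunct given E.

```python
# Hypothetical joint distribution over B ("bank teller"), F ("feminist"),
# and evidence E (the description). B and F are independent; E depends
# only on F, so E is neutral with respect to B alone.
pB, pF = 0.1, 0.5
pE_given_F, pE_given_notF = 0.9, 0.1

def p(B, F, E):
    pr = (pB if B else 1 - pB) * (pF if F else 1 - pF)
    pe = pE_given_F if F else pE_given_notF
    return pr * (pe if E else 1 - pe)

def prob(event):
    # sum the joint over the worlds satisfying `event`
    return sum(p(B, F, E) for B in (0, 1) for F in (0, 1) for E in (0, 1)
               if event(B, F, E))

def lr(hyp):
    # likelihood-ratio measure: Pr(E | hyp) / Pr(E | not-hyp)
    pE_h = prob(lambda B, F, E: E and hyp(B, F)) / prob(lambda B, F, E: hyp(B, F))
    pE_nh = prob(lambda B, F, E: E and not hyp(B, F)) / prob(lambda B, F, E: not hyp(B, F))
    return pE_h / pE_nh

conj = lambda B, F: B and F   # "bank teller and feminist"
bank = lambda B, F: B         # "bank teller"

print(lr(conj), lr(bank))     # E confirms the conjunction more strongly
pc = prob(lambda B, F, E: E and conj(B, F)) / prob(lambda B, F, E: E)
pb = prob(lambda B, F, E: E and bank(B, F)) / prob(lambda B, F, E: E)
print(pc, pb)                 # yet the conjunction is less probable given E
```

Ranking hypotheses by confirmation rather than posterior probability thus reproduces exactly the ordering experimental subjects give in conjunction problems.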
Bayesianism is a collection of positions in several related fields, centered on the interpretation of probability as something like degree of belief, as contrasted with relative frequency, or objective chance. However, Bayesianism is far from a unified movement. Bayesians are divided about the nature of the probability functions they discuss; about the normative force of this probability function for ordinary and scientific reasoning and decision making; and about what relation (if any) holds between Bayesian and non-Bayesian concepts.
Note: This is not an ad hoc change at all. It’s simply the natural thing to say here – if one thinks of F as a generalization of classical logical entailment. The extra complexity I had in my original (incorrect) definition of F was there because I was foolishly trying to encode some non-classical, or “relevant”, logical structure in F. I now think this is a mistake, and that I should go with the above, classical account of F. Arguments about relevance logic need to be handled in a different way (and a different context!). And, besides, as Luca Moretti has shown (see below), the original definition of F cannot be the right basis for C! OK, now on to C.
In the first paper, I discussed the basic claims of Bayesianism (that degrees of belief are important, that they obey the axioms of probability theory, and that they are rationally updated by either standard or Jeffrey conditionalization) and the arguments that are often used to support them. In this paper, I will discuss some applications these ideas have had in confirmation theory, epistemology, and statistics, and criticisms of these applications.
Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a "middle way" between Likelihoodism and Bayesianism. In the process, I will propose some new work for an old probability puzzle: the "Monty Hall" problem.
Fine has shown that assigning any value to the Pasadena game is consistent with a certain standard set of axioms for decision theory. However, I suggest that it might be reasonable to believe that the value of an individual game is constrained by the long-run payout of repeated plays of the game. Although there is no value that repeated plays of the Pasadena game converge to in the standard strong sense, I show that there is a weaker sort of convergence that they exhibit, and use this to define a notion of ‘weak expectation’ that can give values to the Pasadena game and many others, though not to all games that fail to have a strong expectation in the standard sense.
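The arithmetic behind the Pasadena game can be previewed numerically. In the standard formulation (Nover and Hájek's), the game pays (−1)^(n−1)·2^n/n with probability 2^(−n), so each term of the expectation series simplifies to (−1)^(n−1)/n: the alternating harmonic series, which has no absolutely convergent sum, but whose partial sums under this natural ordering converge conditionally to ln 2. The sketch below checks both facts.

```python
import math

# Pasadena game: with probability 2**-n, the payoff is (-1)**(n-1) * 2**n / n.
N = 100_000
probs = [2.0 ** -n for n in range(1, N + 1)]

# Each expectation term 2**-n * (-1)**(n-1) * 2**n / n simplifies to
# (-1)**(n-1) / n -- the alternating harmonic series, summed here in its
# natural order (rearranging it would change the sum, which is why the
# game has no strong expectation).
terms = [(-1) ** (n - 1) / n for n in range(1, N + 1)]

print(sum(probs))               # ~1: the payoff probabilities are exhaustive
print(sum(terms), math.log(2))  # partial sum is close to ln 2 ≈ 0.6931
```

That the order-dependent sum is nonetheless stable under repeated plays (in a weak, in-probability sense) is what the paper's notion of 'weak expectation' is designed to capture.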
The Paradox of the Ravens (a.k.a., The Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper will provide a brief survey of the early history of the paradox. This will include the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper will describe attempts to resolve the paradox within a Bayesian framework, and show how to improve upon them. This part begins with a discussion of how probabilistic methods can help to clarify the statement of the paradox itself. And it describes some of the early responses to probabilistic explications. We then inspect the assumptions employed by traditional (canonical) Bayesian approaches to the paradox. These assumptions may appear to be overly strong. So, drawing on weaker assumptions, we formulate a new-and-improved Bayesian confirmation-theoretic resolution of the Paradox of the Ravens.
In this note, I consider various precisifications of the slogan ‘evidence of evidence is evidence’. I provide counter-examples to each of these precisifications (assuming an epistemic probabilistic relevance notion of ‘evidential support’).
First, a brief historical trace of the developments in confirmation theory leading up to Goodman's infamous "grue" paradox is presented. Then, Goodman's argument is analyzed from both Hempelian and Bayesian perspectives. A guiding analogy is drawn between certain arguments against classical deductive logic, and Goodman's "grue" argument against classical inductive logic. The upshot of this analogy is that the "New Riddle" is not as vexing as many commentators have claimed (especially, from a Bayesian inductive-logical point of view). Specifically, the analogy reveals an intimate connection between Goodman's problem, and the "problem of old evidence". Several other novel aspects of Goodman's argument are also discussed (mainly, from a Bayesian perspective).
We give an analysis of the Monty Hall problem purely in terms of confirmation, without making any lottery assumptions about priors. Along the way, we show the Monty Hall problem is structurally identical to the Doomsday Argument.
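For orientation, the classical (uniform-prior) probability facts of the Monty Hall problem can be checked by exact enumeration; the paper's own point is that its confirmation-theoretic analysis does not need such lottery assumptions. The sketch below assumes the standard protocol: the player picks door 0, and Monty opens a goat door other than the player's pick, randomizing when he has a choice.

```python
from fractions import Fraction

half, third = Fraction(1, 2), Fraction(1, 3)

# Joint probability of (car location, door Monty opens), player picks door 0.
joint = {}
for car in range(3):
    if car == 0:
        for opened in (1, 2):             # Monty may open either goat door
            joint[(car, opened)] = third * half
    else:
        opened = 3 - car                  # Monty must open the other goat door
        joint[(car, opened)] = third

# Condition on the evidence that Monty opens door 2:
pE = sum(p for (car, opened), p in joint.items() if opened == 2)
p_stay = joint[(0, 2)] / pE               # posterior that the car is behind door 0
p_switch = joint[(1, 2)] / pE             # posterior that the car is behind door 1
print(p_stay, p_switch)                   # 1/3 and 2/3: switching wins twice as often
```

The asymmetry between p_stay and p_switch traces entirely to Monty's constrained protocol, which is the same structural feature the paper exploits in its confirmation-only analysis.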
In a series of papers, Don Fallis points out that although mathematicians are generally unwilling to accept merely probabilistic proofs, they do accept proofs that are incomplete, long and complicated, or partly carried out by computers. He argues that there are no epistemic grounds on which probabilistic proofs can be rejected while these other proofs are accepted. I defend the practice by presenting a property I call ‘transferability’, which probabilistic proofs lack and acceptable proofs have. I also consider what this says about the similarities between mathematics and, on the one hand, the natural sciences, and, on the other, philosophy.
It is sometimes alleged that arguments that probability functions should be countably additive show too much, and that they motivate uncountable additivity as well. I show this is false by giving two naturally motivated arguments for countable additivity that do not motivate uncountable additivity.
Naive deductive accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H & X, for any X—even if X is utterly irrelevant to H (and E). Bayesian accounts of confirmation also have this property (in the case of deductive evidence). Several Bayesians have attempted to soften the impact of this fact by arguing that—according to Bayesian accounts of confirmation— E will confirm the conjunction H & X less strongly than E confirms H (again, in the case of deductive evidence). I argue that existing Bayesian “resolutions” of this problem are inadequate in several important respects. In the end, I suggest a new‐and‐improved Bayesian account (and understanding) of the problem of irrelevant conjunction.
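The quantitative claim at issue can be checked in a toy model. In the sketch below (all probabilities hypothetical), H entails E and X is probabilistically irrelevant to both; on the likelihood-ratio measure of confirmation, E still confirms the conjunction H & X, but strictly less strongly than it confirms H alone.

```python
# Hypothetical model: H entails E (Pr(E | H) = 1), X is independent of
# H and of E -- the deductive-evidence case discussed in the text.
pH, pX = 0.2, 0.5
pE_given_H, pE_given_notH = 1.0, 0.3

def p(H, X, E):
    pr = (pH if H else 1 - pH) * (pX if X else 1 - pX)
    pe = pE_given_H if H else pE_given_notH
    return pr * (pe if E else 1 - pe)

def prob(event):
    return sum(p(H, X, E) for H in (0, 1) for X in (0, 1) for E in (0, 1)
               if event(H, X, E))

def lr(hyp):
    # likelihood-ratio measure: Pr(E | hyp) / Pr(E | not-hyp)
    pE_h = prob(lambda H, X, E: E and hyp(H, X)) / prob(lambda H, X, E: hyp(H, X))
    pE_nh = prob(lambda H, X, E: E and not hyp(H, X)) / prob(lambda H, X, E: not hyp(H, X))
    return pE_h / pE_nh

c_H = lr(lambda H, X: H)
c_HX = lr(lambda H, X: H and X)
print(c_H, c_HX)  # both exceed 1, but the conjunction is confirmed less
```

Intuitively, not-(H & X) still contains H-worlds in which E is guaranteed, so E is less diagnostic against the conjunction's negation than against H's negation.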
In Thinking and Acting John Pollock offers some criticisms of Bayesian epistemology, and he defends an alternative understanding of the role of probability in epistemology. Here, I defend the Bayesian against some of Pollock's criticisms, and I discuss a potential problem for Pollock's alternative account.
Carnap's inductive logic (or confirmation) project is revisited from an "increase in firmness" (or probabilistic relevance) point of view. It is argued that Carnap's main desiderata can be satisfied in this setting, without the need for a theory of "logical probability." The emphasis here will be on explaining how Carnap's epistemological desiderata for inductive logic will need to be modified in this new setting. The key move is to abandon Carnap's goal of bridging confirmation and credence, in favor of bridging confirmation and evidential support.
In Chapter 12 of Warrant and Proper Function, Alvin Plantinga constructs two arguments against evolutionary naturalism, which he construes as a conjunction E&N. The hypothesis E says that “human cognitive faculties arose by way of the mechanisms to which contemporary evolutionary thought directs our attention” (p. 220). With respect to proposition N, Plantinga (p. 270) says “it isn’t easy to say precisely what naturalism is,” but then adds that “crucial to metaphysical naturalism, of course, is the view that there is no such person as the God of traditional theism.” Plantinga tries to cast doubt on the conjunction E&N in two ways. His “preliminary argument” aims to show that the conjunction is probably false, given the fact (R) that our psychological mechanisms for forming beliefs about the world are generally reliable. His “main argument” aims to show that the conjunction E&N is self-defeating — if you believe E&N, then you should stop believing that conjunction. Plantinga further develops the main argument in his unpublished paper “Naturalism Defeated” (Plantinga 1994). We will try to show that both arguments contain serious errors.
Strevens has proposed an interesting and novel Bayesian analysis of the Quine-Duhem (Q–D) problem (i.e., the problem of auxiliary hypotheses). Strevens's analysis involves the use of a simplifying idealization concerning the original Q–D problem. We will show that this idealization is far stronger than it might appear. Indeed, we argue that Strevens's idealization oversimplifies the Q–D problem, and we propose a diagnosis of the source(s) of the oversimplification. 1 Some background on Quine–Duhem 2 Strevens's simplifying idealization 3 Indications that (I) oversimplifies Q–D 4 Strevens's argument for the legitimacy of (I).
I defend a causal reductionist account of the nature of rates of change like velocity and acceleration. This account identifies velocity with the past derivative of position and acceleration with the future derivative of velocity. Unlike most reductionist accounts, it can preserve the role of velocity as a cause of future positions and acceleration as the effect of current forces. I show that this is possible only if all the fundamental laws are expressed by differential equations of the same order. Consideration of the continuity of time explains why the differential equations are all second order. This explanation is not available on non-causal or non-reductionist accounts of rates of change. Finally, I argue that alleged counterexamples to the reductionist account involving physically impossible worlds are irrelevant to an analysis of the properties that play a causal role in the actual world. 1 Background 2 Grounding 3 Causation 4 The Proposal 5 Why No Third Derivatives? 6 Why Any Derivatives? 7 Counterexamples?
A Bayesian account of independent evidential support is outlined. This account is partly inspired by the work of C. S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation satisfy some basic desiderata suggested by Peirce for adequate accounts of independent evidence. I argue that, by considering further natural constraints on a probabilistic account of independent evidence, all but a very small class of Bayesian measures of confirmation can be ruled out. In closing, another application of my account to the problem of evidential diversity is also discussed.
The (recent, Bayesian) cognitive science literature on the Wason Task (WT) has been modeled largely after the (not-so-recent, Bayesian) philosophy of science literature on the Paradox of Confirmation (POC). In this paper, we apply some insights from more recent Bayesian approaches to the POC to analogous models of the WT. This involves, first, retracing the history of the POC, and, then, re-examining the WT with these historico-philosophical insights in mind.
In ‘Corroborating Testimony, Probability and Surprise’, Erik J. Olsson ascribes to L. Jonathan Cohen the claims that if two witnesses provide us with the same information, then the less probable the information is, the more confident we may be that the information is true (C), and the more strongly the information is corroborated (C*). We question whether Cohen intends anything like claims (C) and (C*). Furthermore, he discusses the concurrence of witness reports within a context of independent witnesses, whereas the witnesses in Olsson's model are not independent in the standard sense. We argue that there is much more than, in Olsson's words, ‘a grain of truth’ to claim (C), both on his own characterization as well as on Cohen's characterization of the witnesses. We present an analysis for independent witnesses in the contexts of decision-making under risk and decision-making under uncertainty and generalize the model for n witnesses. As to claim (C*), Olsson's argument is contingent on the choice of a particular measure of corroboration and is not robust in the face of alternative measures. Finally, we delimit the set of cases to which Olsson's model is applicable. 1 Claim (C) examined for Olsson's characterization of the relationship between the witnesses 2 Claim (C) examined for two or more independent witnesses 3 Robustness and multiple measures of corroboration 4 Discussion.
In this paper, we describe our initial investigations in computational metaphysics. Our method is to implement axiomatic metaphysics in an automated reasoning system. Specifically, we describe what we have discovered when the theory of abstract objects is implemented in PROVER9 (a first-order automated reasoning system which is the successor to OTTER). After reviewing the second-order, axiomatic theory of abstract objects, we show (1) how to represent a fragment of that theory in PROVER9's first-order syntax, and (2) how PROVER9 then finds proofs of interesting theorems of metaphysics, such as that every possible world is maximal. We conclude the paper by discussing some issues for further research.
There are two central questions concerning probability. First, what are its formal features? That is a mathematical question, to which there is a standard, widely (though not universally) agreed upon answer. This answer is reviewed in the next section. Second, what sorts of things are probabilities---what, that is, is the subject matter of probability theory? This is a philosophical question, and while the mathematical theory of probability certainly bears on it, the answer must come from elsewhere. To see why, observe that there are many things in the world that have the mathematical structure of probabilities---the set of measurable regions on the surface of a table, for example---but that one would never mistake for being probabilities. So probability is distinguished by more than just its formal characteristics. The bulk of this essay will be taken up with the central question of what this “more” might be.
– Foundation: Probabilistic Confirmation (c) from a Logical POV ∗ c(h, e) as a “relevant” quantitative generalization of e ⊨ h ∗ c(h, e), so understood, is not Pr(e | h) or Pr(h | e), etc. ∗ c(h, e) is something akin (ordinally) to the likelihood ratio…