It is well known that de se (or ‘self-locating’) propositions complicate the standard picture of how we should respond to evidence. This has given rise to a substantial literature centered around puzzles like Sleeping Beauty, Dr. Evil, and Doomsday—and it has also sparked controversy over a style of argument that has recently been adopted by theoretical cosmologists. These discussions often dwell on intuitions about a single kind of case, but it’s worth seeking a rule that can unify our treatment of all evidence, whether de dicto or de se. This paper is about three candidates for such a rule, presented as replacements for the standard updating rule. Each rule stems from the idea that we should treat ourselves as a random sample, a heuristic that underlies many of the intuitions that have been pumped in treatments of the standard puzzles. But each rule also yields some strange results when applied across the board. This leaves us with some difficult options. We can seek another way to refine the random-sample heuristic, e.g. by restricting one of our rules. We can try to live with the strange results, perhaps granting that useful principles can fail at the margins. Or we can reject the random-sample heuristic as fatally flawed—which means rethinking its influence in even the simplest cases.
Contemporary arguments for and against the existence of God are often formulated within a broadly Bayesian framework. Arguments of this sort focus on a specific feature of the world that is taken to provide probabilistic evidence for or against the existence of God: the existence of life in a ‘fine-tuned’ universe, the magnitude of suffering, divine hiddenness, etc. In each case, the idea is that things were more likely to be this way if God existed than if God did not exist—or the other way around. Less attention, however, has been paid to the deeper question of what it takes for something to count as evidence for or against the existence of God. What exactly is being claimed when it is said that some feature of the world is more or less likely given the existence of God, and how should we go about assessing such a claim? This paper is about epistemological issues—and in particular, certain potential cognitive errors—that arise when we reason probabilistically about the existence of God. The moral is not that we should refrain from reasoning in this way, but that we should be mindful of potential errors when we do.
I argue that when we use ‘probability’ language in epistemic contexts—e.g., when we ask how probable some hypothesis is, given the evidence available to us—we are talking about degrees of support, rather than degrees of belief. The epistemic probability of A given B is the mind-independent degree to which B supports A, not the degree to which someone with B as their evidence believes A, or the degree to which someone would or should believe A if they had B as their evidence. My central argument is that the degree-of-support interpretation lets us better model good reasoning in certain cases involving old evidence. Degree-of-belief interpretations make the wrong predictions not only about whether old evidence confirms new hypotheses, but about the values of the probabilities that enter into Bayes’ Theorem when we calculate the probability of hypotheses conditional on old evidence and new background information.
Bayesian confirmation theory is our best formal framework for describing inductive reasoning. The problem of old evidence is a particularly difficult one for confirmation theory, because it suggests that this framework fails to account for central and important cases of inductive reasoning and scientific inference. I show that we can appeal to the fragmentation of doxastic states to solve this problem for confirmation theory. This fragmentation solution is independently well-motivated because of the success of fragmentation in solving other problems. I also argue that the fragmentation solution is preferable to other solutions to the problem of old evidence. These other solutions are already committed to something like fragmentation, but suffer from difficulties due to their additional commitments. If these arguments are successful, Bayesian confirmation theory is saved from the problem of old evidence, and the argument for fragmentation is bolstered by its ability to solve yet another problem.
Wenmackers and Romeijn (2016) formalize ideas going back to Shimony (1970) and Putnam (1963) into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn’s proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem, and offer a forward-looking open-minded Bayesianism that does preserve a version of this guarantee.
According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight by Hacking (Philosophy of Science 34:311–325, 1967), I develop a version of Bayesianism that permits logical ignorance. This includes: an account of the synchronic norms that govern a logically ignorant individual at any given time; an account of how we reduce our logical ignorance by learning logical facts and how we should update our credences in response to such evidence; and an account of when logical ignorance is irrational and when it isn’t. At the end, I explain why the requirement of logical omniscience remains true of ideal agents with no computational, processing, or storage limitations.
The Bayesian maxim for rational learning could be described as conservative change from one probabilistic belief or credence function to another in response to new information. Roughly: ‘Hold fixed any credences that are not directly affected by the learning experience.’ This is precisely articulated for the case when we learn that some proposition that we had previously entertained is indeed true (the rule of conditionalisation). But can this conservative-change maxim be extended to revising one’s credences in response to entertaining propositions or concepts of which one was previously unaware? The economists Karni and Vierø (2013, 2015) make a proposal in this spirit. Philosophers have adopted effectively the same rule: revision in response to growing awareness should not affect the relative probabilities of propositions in one’s ‘old’ epistemic state. The rule is compelling, but only under the assumptions that its advocates introduce. It is not a general requirement of rationality, or so we argue. We provide informal counterexamples. And we show that, when awareness grows, the boundary between one’s ‘old’ and ‘new’ epistemic commitments is blurred. Accordingly, there is no general notion of conservative change in this setting.
The problem of old evidence, first described by Glymour (1980), is still widely regarded as one of the most pressing foundational challenges to the Bayesian account of scientific reasoning. Many solutions have been proposed, but all of them have drawbacks and none is considered to be definitive. Here, we introduce and defend a new kind of solution, according to which hypotheses are confirmed when we become more confident that they provide the only way of accounting for the known evidence.
This paper discusses an aspect of the problem of old evidence which I call here the general problem of old evidence. The probability of old evidence is one or close to one, because background information K entails the evidence E or K consists of propositions which make E probable. In the literature, K is considered as a proposition relevant to E. Based on examples, I argue that K does not support the truth of E; instead, K supports the evidential status of E. I define background information as a set of propositions necessary and sufficient to consider E as evidence for hypothesis H. Background information is relevant to the bearing of E on H, but not to the truth of E itself. My definition of background information implies that background information of E is probabilistically independent of E; that is, in the case of old evidence, neither P(E) = 1 nor P(E) ≈ 1.
The Problem of Old Evidence is a perennial issue for Bayesian confirmation theory. Garber (Test Sci Theor 10:99–131, 1983) famously argues that the problem can be solved by conditionalizing on the proposition that a hypothesis deductively implies the existence of the old evidence. In recent work, Hartmann and Fitelson (Philos Sci 82(4):712–717, 2015) and Sprenger (Philos Sci 82(3):383–401, 2015) aim for similar, but more general, solutions to the Problem of Old Evidence. These solutions are more general because they allow the explanatory relationship between a new hypothesis and old evidence to be inductive, rather than deductive. In this paper, I argue that these solutions are either unsound or under-motivated, depending on the case of inductive explanation that we have in mind. This lends support to the broader claim that Garber–Style Bayesian confirmation cannot capture the sense in which new hypotheses that do not deductively imply old evidence nevertheless seem to be confirmed via old evidence.
We present a conservative extension of a Bayesian account of confirmation that can deal with the problem of old evidence and new theories. So-called open-minded Bayesianism challenges the assumption—implicit in standard Bayesianism—that the correct empirical hypothesis is among the ones currently under consideration. It requires the inclusion of a catch-all hypothesis, which is characterized by means of sets of probability assignments. Upon the introduction of a new theory, the former catch-all is decomposed into a new empirical hypothesis and a new catch-all. As will be seen, this motivates a second update rule, besides Bayes’ rule, for updating probabilities in light of a new theory. This rule conserves probability ratios among the old hypotheses. This framework allows for old evidence to confirm a new hypothesis due to a shift in the theoretical context. The result is a version of Bayesianism that, in the words of Earman, “keep[s] an open mind, but not so open that your brain falls out”.
Formal methods are changing how epistemology is being studied and understood. A Critical Introduction to Formal Epistemology introduces the types of formal theories being used and explains how they are shaping the subject. Beginning with the basics of probability and Bayesianism, it shows how representing degrees of belief using probabilities informs central debates in epistemology. As well as discussing induction, the paradox of confirmation and the main challenges to Bayesianism, this comprehensive overview covers objective chance, peer disagreement, the concept of full belief, and the traditional problems of justification and knowledge. Subjecting each position to a critical analysis, it explains the main issues in formal epistemology, and the motivations and drawbacks of each position. Written in an accessible language and supported by study questions, guides to further reading, and a glossary, positions are placed in a historical context to give a sense of the development of the field. As the first introductory textbook on formal epistemology, A Critical Introduction to Formal Epistemology is an invaluable resource for students and scholars of contemporary epistemology.
The Problem of Old Evidence has troubled Bayesians ever since Clark Glymour first presented it in 1980. Several solutions have been proposed, but all of them have drawbacks and none of them is considered to be the definitive solution. In this article, I propose a new solution which combines several old ideas with a new one. It circumvents the crucial omniscience problem in an elegant way and leads to a considerable confirmation of the hypothesis in question.
In the first paper, I discussed the basic claims of Bayesianism (that degrees of belief are important, that they obey the axioms of probability theory, and that they are rationally updated by either standard or Jeffrey conditionalization) and the arguments that are often used to support them. In this paper, I will discuss some applications these ideas have had in confirmation theory, epistemology, and statistics, and criticisms of these applications.
How should we regard the weight we give to a proposition on the grounds of its being endorsed by an authority? I examine this question as it is raised within the epistemology of science, and I argue that “authority-based weight” should receive special handling, for the following reason. Our assessments of other scientists’ competence or authority are nearly always provisional, in the sense that to save time and money, they are not made nearly as carefully as they could be---indeed, they are typically made on the basis of only a small portion of the available evidence. Consequently, we need to represent the authority-based elements of our epistemic attitudes in such a way as to allow the later revision of those elements, in case we decide in the light of new priorities that a more conscientious assessment is warranted. I look to the literature in confirmation theory, statistics, and economics for a semiformal model of this revision process, and make a particular proposal of my own. The discussion also casts some light on the question of why certain aspects of science’s epistemic state are not made public.
First, a brief historical trace of the developments in confirmation theory leading up to Goodman's infamous "grue" paradox is presented. Then, Goodman's argument is analyzed from both Hempelian and Bayesian perspectives. A guiding analogy is drawn between certain arguments against classical deductive logic, and Goodman's "grue" argument against classical inductive logic. The upshot of this analogy is that the "New Riddle" is not as vexing as many commentators have claimed. Specifically, the analogy reveals an intimate connection between Goodman's problem, and the "problem of old evidence". Several other novel aspects of Goodman's argument are also discussed.
With the notable exception of David Lewis, most of those writing on the Sleeping Beauty problem have argued that 1/3 is the correct answer. Terence Horgan has provided the clearest account of why, contrary to Lewis, Beauty has evidence against the proposition that the coin comes up heads when she awakens on Monday. In this paper, I argue that Horgan’s proposal fails because it neglects important facts about epistemic probability.
In this paper, we discuss three probabilistic arguments for the existence of multiple universes. First, we provide an analysis of total evidence and use that analysis to defend Roger White's "this universe" objection to a standard fine-tuning argument for multiple universes. Second, we explain why Rodney Holder's recent cosmological argument for multiple universes is unconvincing. Third, we develop a "Cartesian argument" for multiple universes. While this argument is not open to the objections previously noted, we show that, given certain highly plausible assumptions about evidence and epistemic probability, the proposition which it treats as evidence cannot coherently be regarded as evidence for anything. This raises the question of whether to reject the assumptions or accept that such a proposition cannot be evidence.
Bayesian conceptions of evidence have been invoked in recent arguments regarding the existence of God, the hypothesis of multiple physical universes, and the Doomsday Argument. Philosophers writing on these topics often claim that, given a Bayesian account of evidence, our existence or something entailed by our existence (perhaps in conjunction with some background knowledge or assumption) may serve as evidence for each of us. In this paper, I argue that this widespread view is mistaken. The mere fact of one's existence qua conscious creature cannot serve as evidence on the standard Bayesian conception of evidence because knowledge of one's existence is a necessary part of the background knowledge relative to which all epistemic probabilities are defined. It follows that some formulations of the fine-tuning argument (for theism or a multiverse), the argument from consciousness (for theism) and a rejoinder to the Doomsday argument are mistaken.
This paper is a response to Stephen Leeds’s "Juhl on Many Worlds". Contrary to what Leeds claims, we can legitimately argue for nontrivial conclusions by appeal to our existence. The ‘problem of old evidence’, applied to the ‘old evidence’ that we exist, seems to be a red herring in the context of determining whether there is a rationally convincing argument for the existence of many universes. A genuinely salient worry is whether multiversers can avoid illicit reuse of empirical evidence in their arguments.
The fundamental constants that are involved in the laws of physics which describe our universe are finely-tuned for life, in the sense that if some of the constants had slightly different values life could not exist. Some people hold that this provides evidence for the existence of God. I will present a probabilistic version of this fine-tuning argument which is stronger than all other versions in the literature. Nevertheless, I will show that one can have reasonable opinions such that the fine-tuning argument doesn’t lead to an increase in one’s probability for the existence of God.
I argue that Bayesians need two distinct notions of probability. We need the usual degree-of-belief notion that is central to the Bayesian account of rational decision. But Bayesians also need a separate notion of probability that represents the degree to which evidence supports hypotheses. Although degree-of-belief is well suited to the theory of rational decision, Bayesians have tried to apply it to the realm of hypothesis confirmation as well. This double duty leads to the problem of old evidence, a problem that, we will see, is much more extensive than usually recognized. I will argue that degree-of-support is distinct from degree-of-belief, that it is not just a kind of counterfactual degree-of-belief, and that it supplements degree-of-belief in a way that resolves the problems of old evidence and provides a richer account of the logic of scientific inference and belief.
Bayesianism is the position that scientific reasoning is probabilistic and that probabilities are adequately interpreted as an agent's actual subjective degrees of belief, measured by her betting behaviour. Confirmation is one important aspect of scientific reasoning. The thesis of this paper is the following: if scientific reasoning is at all probabilistic, the subjective interpretation has to be given up in order to get confirmation right—and thus scientific reasoning in general. Contents: The Bayesian approach to scientific reasoning; Bayesian confirmation theory; The example; The less reliable the source of information, the higher the degree of Bayesian confirmation; Measure sensitivity; A more general version of the problem of old evidence; Conditioning on the entailment relation; The counterfactual strategy; Generalizing the counterfactual strategy; The desired result, and a necessary and sufficient condition for it; Actual degrees of belief; The common knock-down feature, or ‘anything goes’; The problem of prior probabilities.
Jeffrey updating is a natural extension of Bayesian updating to cases where the evidence is uncertain. But, the resulting degrees of belief appear to be sensitive to the order in which the uncertain evidence is acquired, a rather un-Bayesian looking effect. This order dependence results from the way in which basic Jeffrey updating is usually extended to sequences of updates. The usual extension seems very natural, but there are other plausible ways to extend Bayesian updating that maintain order-independence. I will explore three models of sequential updating, the usual extension and two alternatives. I will show that the alternative updating schemes derive from extensions of the usual rigidity requirement, which is at the heart of Jeffrey updating. Finally, I will establish necessary and sufficient conditions for order-independent updating, and show that extended rigidity is closely related to these conditions.
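The order effect described in this abstract is easy to exhibit numerically. The following sketch (my own toy construction; the probability assignments, the partition, and the function names are illustrative, not from the paper) applies the usual sequential extension of Jeffrey updating on the partition {E, ¬E} in two different orders and shows that the resulting credence in an unrelated proposition H differs:

```python
# Toy illustration of order dependence in sequential Jeffrey updating.
# World: four atoms keyed by (E-true?, H-true?). Numbers are illustrative.

def jeffrey_update(p, q_E):
    """Jeffrey update on {E, not-E}: set pr(E) = q_E while keeping the
    conditional probabilities given E and given not-E rigid."""
    p_E = sum(v for (e, _), v in p.items() if e)
    return {(e, h): v * (q_E / p_E if e else (1 - q_E) / (1 - p_E))
            for (e, h), v in p.items()}

prior = {(True, True): 0.2, (True, False): 0.3,
         (False, True): 0.3, (False, False): 0.2}

# Two uncertain learning episodes about E, taken in opposite orders:
a = jeffrey_update(jeffrey_update(prior, 0.7), 0.9)
b = jeffrey_update(jeffrey_update(prior, 0.9), 0.7)

prob_H = lambda p: sum(v for (_, h), v in p.items() if h)
print(prob_H(a), prob_H(b))  # the two orders disagree about pr(H)
```

On a single partition the later update simply overrides the earlier one, so whichever constraint comes last determines the final credences; this is the non-commutativity the abstract calls "a rather un-Bayesian looking effect".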
Bayesian epistemology suggests various ways of measuring the support that a piece of evidence provides a hypothesis. Such measures are defined in terms of a subjective probability assignment, pr, over propositions entertained by an agent. The most standard measure (where “H” stands for “hypothesis” and “E” stands for “evidence”) is the difference measure: d(H,E) = pr(H/E) − pr(H). This may be called a “positive (probabilistic) relevance measure” of confirmation, since, according to it, a piece of evidence E qualitatively confirms a hypothesis H if and only if pr(H/E) > pr(H), where qualitative disconfirmation is characterized by replacing “>” with “<”, and qualitative irrelevance by replacing “>” with “=”. Other more or less standard positive relevance measures that have been proposed are the log-ratio measure: r(H,E) = log[pr(H/E)/pr(H)], and the log-likelihood-ratio measure: l(H,E) = log[pr(E/H)/pr(E/~H)].
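The three measures quoted in this abstract can be computed from a single joint probability table. Here is a minimal sketch (the table's numbers and the helper names `marginal` and `cond` are my own illustrative choices) showing that, on one and the same assignment, all three agree qualitatively: each is positive exactly when pr(H/E) > pr(H).

```python
import math

# Joint assignment over the four atoms H&E, H&~E, ~H&E, ~H&~E (toy numbers).
pr = {('H', 'E'): 0.3, ('H', '~E'): 0.1,
      ('~H', 'E'): 0.2, ('~H', '~E'): 0.4}

def marginal(x):
    # pr(x) for x in {'H', '~H', 'E', '~E'}: sum the atoms containing x.
    return sum(v for atoms, v in pr.items() if x in atoms)

def cond(a, b):
    # pr(a | b), read off the joint table.
    joint = sum(v for atoms, v in pr.items() if a in atoms and b in atoms)
    return joint / marginal(b)

d = cond('H', 'E') - marginal('H')                # difference measure d(H,E)
r = math.log(cond('H', 'E') / marginal('H'))      # log-ratio measure r(H,E)
l = math.log(cond('E', 'H') / cond('E', '~H'))    # log-likelihood-ratio l(H,E)

print(d > 0, r > 0, l > 0)  # → True True True: E confirms H on all three
```

Although the three measures agree on the qualitative verdict, they order bodies of evidence differently, which is why the choice among them matters for quantitative confirmation theory.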
Knowledge and its Limits presents a systematic new conception of knowledge as a kind of mental state sensitive to the knower's environment. It makes a major contribution to the debate between externalist and internalist philosophies of mind, and breaks radically with the epistemological tradition of analyzing knowledge in terms of true belief. The theory casts new light on such philosophical problems as scepticism, evidence, probability and assertion, realism and anti-realism, and the limits of what can be known. The arguments are illustrated by rigorous models based on epistemic logic and probability theory. The result is a new way of doing epistemology and a notable contribution to the philosophy of mind.
The quantitative problem of old evidence is the problem of how to measure the degree to which e confirms h for agent A at time t when A regards e as justified at t. Existing attempts to solve this problem have applied the e-difference approach, which compares A's probability for h at t with what probability A would assign h if A did not regard e as justified at t. The quantitative problem has been widely regarded as unsolvable primarily on the grounds that the e-difference approach suffers from intractable problems. Various philosophers have proposed that 'Bayesianism' should be rejected as a research strategy in confirmation theory in part because of the unsolvability of this problem. I develop a version of the e-difference approach which overcomes these problems and possesses various advantages (but also certain limitations). I develop an alternative 'theistic' approach which handles many cases that my development of the e-difference approach does not handle. I conclude with an assessment of the significance of the quantitative problem for Bayesianism and argue that this problem is misunderstood in so far as it is regarded as unsolvable, and in so far as it is regarded as a problem only for Bayesians.
The old evidence problem affects any probabilistic confirmation measure based on comparing pr(H/E) and pr(H). The article argues for the following points: (1) measures based on likelihood ratios also suffer old evidence difficulties; (2) the less-discussed synchronic old evidence problem is, in an important sense, the most acute; (3) prominent attempts to solve or dissolve the synchronic problem fail; (4) a little-discussed variant of the standard measure avoids the problem, in an appealing way; and (5) this measure nevertheless reveals a different problem for probabilistic confirmation measures, a problem that is unlikely to lend itself to formal solution.
This book defends the view that any adequate account of rational decision making must take a decision maker's beliefs about causal relations into account. The early chapters of the book introduce the non-specialist to the rudiments of expected utility theory. The major technical advance offered by the book is a 'representation theorem' that shows that both causal decision theory and its main rival, Richard Jeffrey's logic of decision, are instances of a more general conditional decision theory. The book solves a long-standing problem for Jeffrey's theory by showing for the first time how to obtain a unique utility and probability representation for preferences and judgements of comparative likelihood. The book also contains a major new discussion of what it means to suppose that some event occurs or that some proposition is true. The most complete and robust defence of causal decision theory available.
Jeffrey has devised a probability revision method that increases the probability of hypothesis H when it is discovered that H implies previously known evidence E. A natural extension of Jeffrey's method likewise increases the probability of H when E has been established with sufficiently high probability and it is then discovered, quite apart from this, that H confers sufficiently higher probability on E than does its logical negation H̄.
Confirmation is commonly identified with positive relevance, E being said to confirm H if and only if E increases the probability of H. Today, analyses of this general kind are usually Bayesian ones that take the relevant probabilities to be subjective. I argue that these subjective Bayesian analyses are irremediably flawed. In their place I propose a relevance analysis that makes confirmation objective and which, I show, avoids the flaws of the subjective analyses. What I am proposing is in some ways a return to Carnap's conception of confirmation, though there are also important differences between my analysis and his. My analysis includes new accounts of what evidence is and of the indexicality of confirmation claims. Finally, I defend my analysis against Achinstein's criticisms of the relevance concept of confirmation.
Contrary to what has been widely supposed, Bayesian theory deals successfully with the introduction of new theories that have never previously been entertained. The theory enables us to say what sorts of method should be used to assign probabilities to these new theories, and it allows that the probabilities of existing theories may be modified as a result.
Scientific reasoning is—and ought to be—conducted in accordance with the axioms of probability. This Bayesian view—so called because of the central role it accords to a theorem first proved by Thomas Bayes in the late eighteenth ...
There is currently no viable alternative to the Bayesian analysis of scientific inference, yet the available versions of Bayesianism fail to do justice to several aspects of the testing and confirmation of scientific hypotheses. Bayes or Bust? provides the first balanced treatment of the complex set of issues involved in this nagging conundrum in the philosophy of science. Both Bayesians and anti-Bayesians will find a wealth of new insights on topics ranging from Bayes's original paper to contemporary formal learning theory. In a paper published posthumously in 1763, the Reverend Thomas Bayes made a seminal contribution to the understanding of "analogical or inductive reasoning." Building on his insights, modern Bayesians have developed an account of scientific inference that has attracted numerous champions as well as numerous detractors. Earman argues that Bayesianism provides the best hope for a comprehensive and unified account of scientific inference, yet the presently available versions of Bayesianism fail to do justice to several aspects of the testing and confirming of scientific theories and hypotheses. By focusing on the need for a resolution to this impasse, Earman sharpens the issues on which a resolution turns. John Earman is Professor of History and Philosophy of Science at the University of Pittsburgh.
Jeffrey conditioning allows updating in Bayesian style when the evidence is uncertain: the new credence function is, essentially, a weighted average of the results of classically conditioning on each alternative in the partition. Unlike classical Bayesian conditioning, this allows learning to be unlearned.
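Both points in this entry can be sketched in a few lines (a minimal toy construction of my own; the prior, the partition, and the function name are illustrative assumptions): the update is a weighted average of the two classical conditionings, and because the weights need not be 0 or 1, a later update can move pr(E) back to its prior value and exactly restore the original credence function, something strict conditionalization on E (which fixes pr(E) = 1 forever) cannot do.

```python
# Sketch: Jeffrey conditioning as a weighted average, and "unlearning".
# Atoms keyed by (E-true?, H-true?); numbers are illustrative only.

prior = {(True, True): 0.2, (True, False): 0.3,
         (False, True): 0.3, (False, False): 0.2}

def jeffrey(p, q_E):
    """Weighted average of conditioning on E (weight q_E) and on not-E
    (weight 1 - q_E), i.e. the Jeffrey update on the partition {E, not-E}."""
    p_E = sum(v for (e, _), v in p.items() if e)
    return {(e, h): v * (q_E / p_E if e else (1 - q_E) / (1 - p_E))
            for (e, h), v in p.items()}

shifted = jeffrey(prior, 0.8)     # uncertain evidence raises pr(E) to 0.8
restored = jeffrey(shifted, 0.5)  # later experience pushes pr(E) back to 0.5

# The prior assigned pr(E) = 0.5, so the second update undoes the first:
print(all(abs(restored[k] - prior[k]) < 1e-9 for k in prior))  # → True
```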
There is considerable confusion about the role of p-values in statistical model checking. To clarify that point, I introduce the distinction between measures of surprise and measures of evidence which come with different epistemological functions. I argue that p-values, often understood as measures of evidence against a null model, do not count as proper measures of evidence and are closer to measures of surprise. Finally, I sketch how the problem of old evidence may be tackled by acknowledging the epistemic role of surprise indices.