This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
Traditional Bayesianism requires that an agent’s degrees of belief be represented by a real-valued, probabilistic credence function. However, in many cases it seems that our evidence is not rich enough to warrant such precision. In light of this, some have proposed that we instead represent an agent’s degrees of belief as a set of credence functions. This way, we can respect the evidence by requiring that the set, often called the agent’s credal state, includes all credence functions that are in some sense compatible with the evidence. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In this article I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid it without compromising the initial evidentialist motivation. 1 Introduction; 2 Precision and Its Problems; 3 Imprecise Bayesianism and Respecting Ambiguous Evidence; 4 Local Belief Inertia; 5 From Local to Global Belief Inertia; 6 Responding to Global Belief Inertia; 7 Conclusion.
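A minimal numerical sketch of the inertia phenomenon described above, under illustrative assumptions not in the original: model the credal state as a family of Beta(a, b) priors over a coin’s bias. The posterior mean for each prior after h heads in n tosses is (a + h)/(a + b + n); the wider the family of priors admitted as compatible with the evidence, the slower the envelope of posterior means shrinks, and with all Beta priors admitted it never shrinks at all.

```python
# Toy illustration of imprecise-Bayesian belief inertia (assumed setup, not the paper's).
# Credal state: Beta(a, b) priors over a coin's bias; the posterior mean after
# h heads in n tosses is (a + h) / (a + b + n).

def posterior_mean_envelope(heads, n, grid):
    """Return (min, max) of the posterior mean over all priors in the credal set."""
    means = [(a + heads) / (a + b + n) for a, b in grid]
    return min(means), max(means)

# The grid deliberately includes very opinionated priors (large a or b).
# With a and b unbounded, the envelope is (0, 1) for any data whatsoever.
grid = [(a, b) for a in (0.1, 1, 10, 1000) for b in (0.1, 1, 10, 1000)]

for n in (10, 100, 10_000):
    heads = n // 2  # suppose half the tosses land heads
    lo, hi = posterior_mean_envelope(heads, n, grid)
    print(f"n={n:>6}: credal interval for the bias = [{lo:.3f}, {hi:.3f}]")
```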
Orthodox Bayesianism is a highly idealized theory of how we ought to live our epistemic lives. One of the most widely discussed idealizations is that of logical omniscience: the assumption that an agent’s degrees of belief must be probabilistically coherent to be rational. It is widely agreed that this assumption is problematic if we want to reason about bounded rationality, logical learning, or other aspects of non-ideal epistemic agency. Yet, we still lack a satisfying way to avoid logical omniscience within a Bayesian framework. Some proposals merely replace logical omniscience with a different logical idealization; others sacrifice all traits of logical competence on the altar of logical non-omniscience. We think a better strategy is available: by enriching the Bayesian framework with tools that allow us to capture what agents can and cannot infer given their limited cognitive resources, we can avoid logical omniscience while retaining the idea that rational degrees of belief are in an important way constrained by the laws of probability. In this paper, we offer a formal implementation of this strategy, show how the resulting framework solves the problem of logical omniscience, and compare it to orthodox Bayesianism as we know it.
This book explores the Bayesian approach to the logic and epistemology of scientific reasoning. Section 1 introduces the probability calculus as an appealing generalization of classical logic for uncertain reasoning. Section 2 explores some of the vast terrain of Bayesian epistemology. Three epistemological postulates suggested by Thomas Bayes in his seminal work guide the exploration. This section discusses modern developments and defenses of these postulates as well as some important criticisms and complications that lie in wait for the Bayesian epistemologist. Section 3 applies the formal tools and principles of the first two sections to a handful of topics in the epistemology of scientific reasoning: confirmation, explanatory reasoning, evidential diversity and robustness analysis, hypothesis competition, and Ockham's Razor.
Two of the most influential theories about scientific inference are inference to the best explanation (IBE) and Bayesianism. How are they related? Bas van Fraassen has claimed that IBE and Bayesianism are incompatible rival theories, as any probabilistic version of IBE would violate Bayesian conditionalization. In response, several authors have defended the view that IBE is compatible with Bayesian updating. They claim that the explanatory considerations in IBE are taken into account by the Bayesian because the Bayesian either does or should make use of them in assigning probabilities to hypotheses. I argue that van Fraassen has not succeeded in establishing that IBE and Bayesianism are incompatible, but that the existing compatibilist response is also not satisfactory. I suggest that a more promising approach to the problem is to investigate whether explanatory considerations are taken into account by a Bayesian who assigns priors and likelihoods on his or her own terms. In this case, IBE would emerge from the Bayesian account, rather than being used to constrain priors and likelihoods. I provide a detailed discussion of the case of how the Copernican and Ptolemaic theories explain retrograde motion, and suggest that one of the key explanatory considerations is the extent to which the explanation a theory provides depends on its core elements rather than on auxiliary hypotheses. I then suggest that this type of consideration is reflected in the Bayesian likelihood, given priors that a Bayesian might be inclined to adopt even without explicit guidance by IBE. The aim is to show that IBE and Bayesianism may be compatible, not because they can be amalgamated, but rather because they capture substantially similar epistemic considerations. 1 Introduction; 2 Preliminaries; 3 Inference to the Best Explanation; 4 Bayesianism; 5 The Incompatibilist View: Inference to the Best Explanation Contradicts Bayesianism; 5.1 Criticism of the incompatibilist view; 6 Constraint-Based Compatibilism; 6.1 Criticism of constraint-based compatibilism; 7 Emergent Compatibilism; 7.1 Analysis of inference to the best explanation; 7.1.1 Inference to the best explanation on specific hypotheses; 7.1.2 Inference to the best explanation on general theories; 7.1.3 Copernicus versus Ptolemy; 7.1.4 Explanatory virtues; 7.1.5 Summary; 7.2 Bayesian account; 8 Conclusion.
We pose and resolve several vexing decision theoretic puzzles. Some are variants of existing puzzles, such as 'Trumped' (Arntzenius and McCarthy 1997), 'Rouble trouble' (Arntzenius and Barrett 1999), 'The airtight Dutch book' (McGee 1999), and 'The two envelopes puzzle' (Broome 1995). Others are new. A unified resolution of the puzzles shows that Dutch book arguments have no force in infinite cases. It thereby provides evidence that reasonable utility functions may be unbounded and that reasonable credence functions need not be countably additive. The resolution also shows that when infinitely many decisions are involved, the difference between making the decisions simultaneously and making them sequentially can be the difference between riches and ruin. Finally, the resolution reveals a new way in which the ability to make binding commitments can save perfectly rational agents from sure losses.
Bayesianism is a collection of positions in several related fields, centered on the interpretation of probability as something like degree of belief, as contrasted with relative frequency, or objective chance. However, Bayesianism is far from a unified movement. Bayesians are divided about the nature of the probability functions they discuss; about the normative force of this probability function for ordinary and scientific reasoning and decision making; and about what relation (if any) holds between Bayesian and non-Bayesian concepts.
A Bayesian mind is, at its core, a rational mind. Bayesianism is thus well-suited to predict and explain mental processes that best exemplify our ability to be rational. However, evidence from belief acquisition and change appears to show that we do not acquire and update information in a Bayesian way. Instead, the principles of belief acquisition and updating seem grounded in maintaining a psychological immune system rather than in approximating a Bayesian processor.
Objective Bayesianism is a methodological theory that is currently applied in statistics, philosophy, artificial intelligence, physics and other sciences. This book develops the formal and philosophical foundations of the theory, at a level accessible to a graduate student with some familiarity with mathematical notation.
Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities, they should be calibrated to our evidence of physical probabilities, and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
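For reference, a standard statement of the maximum entropy principle appealed to above (notation ours, for a finite outcome space Ω, with 𝔼 the set of probability functions calibrated to the evidence):

```latex
P^{*} \;=\; \operatorname*{arg\,max}_{P \in \mathbb{E}} H(P),
\qquad
H(P) \;=\; -\sum_{\omega \in \Omega} P(\omega) \log P(\omega).
```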
Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a "middle way" between Likelihoodism and Bayesianism. In the process, I will propose some new work for an old probability puzzle: the "Monty Hall" problem.
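Since the abstract proposes new work on the "Monty Hall" problem, a quick simulation of the classical puzzle itself may help readers who have not met it (our illustration, not the paper's analysis): switching wins roughly 2/3 of the time, sticking roughly 1/3.

```python
# Monte Carlo check of the classical Monty Hall puzzle.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stick :", play(switch=False))  # approx. 1/3
print("switch:", play(switch=True))   # approx. 2/3
```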
The Bayesian approach to quantum mechanics of Caves, Fuchs and Schack is presented. Its conjunction of realism about physics along with anti-realism about much of the structure of quantum theory is elaborated; and the position defended from common objections: that it is solipsist; that it is too instrumentalist; that it cannot deal with Wigner's friend scenarios. Three more substantive problems are raised: Can a reasonable ontology be found for the approach? Can it account for explanation in quantum theory? Are subjective probabilities on their own adequate in the quantum domain? The first question is answered in the affirmative, drawing on elements from Nancy Cartwright's philosophy of science. The second two are not: it is argued that these present outstanding difficulties for the project. A quantum Bayesian version of Moore's paradox is developed to illustrate difficulties with the subjectivist account of pure state assignments.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
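For readers unfamiliar with the result mentioned above, the quadratic (Brier-type) inaccuracy measures have the following standard form (notation ours): the inaccuracy of a credence function b at a world w is the sum of squared distances between each credence and the truth value at w.

```latex
I(b, w) \;=\; \sum_{X \in \mathcal{F}} \bigl(b(X) - v_w(X)\bigr)^{2},
\qquad
v_w(X) \;=\;
\begin{cases}
1 & \text{if } X \text{ is true at } w,\\
0 & \text{otherwise.}
\end{cases}
```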
In the first paper, I discussed the basic claims of Bayesianism (that degrees of belief are important, that they obey the axioms of probability theory, and that they are rationally updated by either standard or Jeffrey conditionalization) and the arguments that are often used to support them. In this paper, I will discuss some applications these ideas have had in confirmation theory, epistemology, and statistics, and criticisms of these applications.
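The two update rules mentioned, in their standard formulations (our notation): strict conditionalization applies when evidence E is learned with certainty; Jeffrey conditionalization applies when experience merely shifts the probabilities over a partition {E_i}.

```latex
\text{Strict: } P_{\mathrm{new}}(H) = P(H \mid E);
\qquad
\text{Jeffrey: } P_{\mathrm{new}}(H) = \sum_i P(H \mid E_i)\, P_{\mathrm{new}}(E_i).
```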
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
Bayesianism claims to provide a unified theory of epistemic and practical rationality based on the principle of mathematical expectation. In its epistemic guise it requires believers to obey the laws of probability. In its practical guise it asks agents to maximize their subjective expected utility. Joyce’s primary concern is Bayesian epistemology, and its five pillars: people have beliefs and conditional beliefs that come in varying gradations of strength; a person believes a proposition strongly to the extent that she presupposes its truth in her practical and theoretical reasoning; rational graded beliefs must conform to the laws of probability; evidential relationships should be analyzed subjectively in terms of relations among a person’s graded beliefs and conditional beliefs; empirical learning is best modeled as probabilistic conditioning. Joyce explains each of these claims and evaluates some of the justifications that have been offered for them, including “Dutch book,” “decision-theoretic,” and “non-pragmatic” arguments. He also addresses some common objections to Bayesianism, in particular the “problem of old evidence” and the complaint that the view degenerates into an untenable subjectivism. The essay closes by painting a picture of Bayesianism as an “internalist” theory of reasons for action and belief that can be fruitfully augmented with “externalist” principles of practical and epistemic rationality.
Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective Bayesian norms are tied up in the very notion of degrees of belief.
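A toy case of the divergence at issue (our example, not the paper's): let Ω = {ω1, ω2, ω3}, let the evidence fix P(ω1) = 0.8, and take the maximum entropy prior (0.8, 0.1, 0.1). Now the agent learns ¬ω3 while the calibration constraint on ω1 remains in force.

```latex
\text{Conditionalization:}\quad P(\cdot \mid \neg\omega_3) = \left(\tfrac{0.8}{0.9},\ \tfrac{0.1}{0.9},\ 0\right) \approx (0.889,\ 0.111,\ 0);
\qquad
\text{Maximum entropy given } P(\omega_1)=0.8,\ P(\omega_3)=0:\quad P^{*} = (0.8,\ 0.2,\ 0).
```

Conditionalization breaks the calibration constraint; re-maximizing entropy respects it. Which verdict is correct is exactly what the paper disputes.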
The idea that the quantum probabilities are best construed as the personal/subjective degrees of belief of Bayesian agents is an old one. In recent years the idea has been vigorously pursued by a group of physicists who fly the banner of quantum Bayesianism. The present paper aims to identify the prospects and problems of implementing QBism, and it critically assesses the claim that QBism provides a resolution of some of the long-standing foundations issues in quantum mechanics, including the measurement problem and puzzles of nonlocality.
There has been a probabilistic turn in contemporary cognitive science. Far and away, most of the work in this vein is Bayesian, at least in name. Coinciding with this development, philosophers have increasingly promoted Bayesianism as the best normative account of how humans ought to reason. In this paper, we make a push for exploring the probabilistic terrain outside of Bayesianism. Non-Bayesian, but still probabilistic, theories provide plausible competitors both to descriptive and normative Bayesian accounts. We argue for this general idea via recent work on explanationist models of updating, which are fundamentally probabilistic but assign a substantial, non-Bayesian role to explanatory considerations.
Andrew Wayne discusses some recent attempts to account, within a Bayesian framework, for the "common methodological adage" that "diverse evidence better confirms a hypothesis than does the same amount of similar evidence". One of the approaches considered by Wayne is that suggested by Howson and Urbach and dubbed the "correlation approach" by Wayne. This approach is, indeed, incomplete, in that it neglects the role of the hypothesis under consideration in determining what diversity in a body of evidence is relevant diversity. In this paper, it is shown how this gap can be filled, resulting in a more satisfactory account of the evidential role of diversity of evidence. In addition, it is argued that Wayne's criticism of the correlation approach does not indicate a serious flaw in the approach.
The inductive reliability of Bayesian methods is explored. The first result presented shows that for any solvable inductive problem of a general type, there exists a subjective prior which yields a Bayesian inductive method that solves the problem, although not all subjective priors give rise to a successful inductive method for the problem. The second result shows that the same does not hold for computationally bounded agents, so that Bayesianism is "inductively incomplete" for such agents. Finally, a consistency proof shows that inductive agents do not need to disregard inductive failure on sets of subjective probability 0 in order to be ideally rational. Together the results reveal the inadequacy of the subjective Bayesian norms for scientific methodology.
Bayesianism and inference to the best explanation (IBE) are two different models of inference. Recently there has been some debate about the possibility of “bayesianizing” IBE. Firstly, I explore several alternatives to include explanatory considerations in Bayes’s Theorem. Then I distinguish two different interpretations of prior probabilities: “IBE-Bayesianism” and “frequentist-Bayesianism”. After detailing the content of the latter, I propose a rule for assessing the priors. I also argue that Freq-Bay endorses a role for explanatory value in the assessment of scientific hypotheses; avoids a purely subjectivist reading of prior probabilities; and fits better than IBE-Bayesianism with two basic facts about science, i.e., the prominent role played by empirical testing and the existence of many scientific theories in the past that failed to fulfil their promises and were subsequently abandoned.
Myrvold (2003) has proposed an attractive Bayesian account of why theories that unify phenomena tend to derive greater epistemic support from those phenomena than do theories that fail to unify them. It is argued, however, that "unification" in Myrvold's sense is both too easy and too difficult for theories to achieve. Myrvold's account fails to capture what it is that makes unification sometimes count in a theory's favor.
Many philosophers argue that Bayesian epistemology cannot help us with the traditional Humean problem of induction. I argue that this view is partially but not wholly correct. It is true that Bayesianism does not solve Hume’s problem, in the way that the classical and logical theories of probability aimed to do. However I argue that in one important respect, Hume’s sceptical challenge cannot simply be transposed to a probabilistic context, where beliefs come in degrees, rather than being a yes/no matter.
Following the standard practice in sociology, cultural anthropology and history, sociologists, historians of science and some philosophers of science define scientific communities as groups with shared beliefs, values and practices. In this paper it is argued that in real cases the beliefs of the members of such communities often vary significantly in important ways. This has rather dire implications for the convergence defense against the charge of the excessive subjectivity of subjective Bayesianism because that defense requires that communities of Bayesian inquirers share a significant set of modal beliefs. The important implication is then that given the actual variation in modal beliefs across individuals, either Bayesians cannot claim that actual theories have been objectively confirmed or they must accept that such theories have been confirmed relative only to epistemically insignificant communities.
Bayesianism provides a rich theoretical framework, which lends itself rather naturally to the explication of various “contrastive” and “non-contrastive” concepts. In this (brief) discussion, I will focus on issues involving “contrastivism”, as they arise in some of the recent philosophy of science, epistemology, and cognitive science literature surrounding Bayesian confirmation theory.
According to van Fraassen, inference to the best explanation is incompatible with Bayesianism. To argue to the contrary, many philosophers have suggested hybrid models of scientific reasoning with both explanationist and probabilistic elements. This paper offers another such model with two novel features. First, its Bayesian component is imprecise. Second, the domain of credence functions can be extended.
Bayesian confirmation theory offers an explicatum for a pretheoretic concept of confirmation. The “problem of irrelevant conjunction” for this theory is that, according to some people's intuitions, the pretheoretic concept differs from the explicatum with regard to conjunctions involving irrelevant propositions. Previous Bayesian solutions to this problem consist in showing that irrelevant conjuncts reduce the degree of confirmation; they have the drawbacks that (i) they don't hold for all ways of measuring degree of confirmation and (ii) they don't remove the conflict with intuition but merely “soften the impact” (as Fitelson has written). A better solution, which avoids both these drawbacks, is to show that the intuition is wrong.
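To see why the reduction claim fails on some measures (drawback (i)), here is the standard observation, in our notation: if X is irrelevant in the sense that P(E | H ∧ X) = P(E | H), then the ratio measure r awards the conjunction exactly as much confirmation as H alone.

```latex
r(H, E) \;=\; \frac{P(H \mid E)}{P(H)} \;=\; \frac{P(E \mid H)}{P(E)},
\qquad\text{so}\qquad
r(H \wedge X, E) \;=\; \frac{P(E \mid H \wedge X)}{P(E)} \;=\; r(H, E).
```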
Subjective Bayesianism is a major school of uncertain reasoning and statistical inference. It is often criticized for a lack of objectivity: it opens the door to the influence of values and biases; evidence judgments can vary substantially between scientists; and it is not suited for informing policy decisions. My paper rebuts these concerns by connecting the debates on scientific objectivity and statistical method. First, I show that the above concerns arise equally for standard frequentist inference with null hypothesis significance tests. Second, the criticisms are based on specific senses of objectivity with unclear epistemic value. Third, I show that Subjective Bayesianism promotes other, epistemically relevant senses of scientific objectivity—most notably by increasing the transparency of scientific reasoning.
Proponents of IBE claim that the ability of a hypothesis to explain a range of phenomena in a unifying way contributes to the hypothesis’s credibility in light of these phenomena. I propose a Bayesian justification of this claim that reveals a hitherto unnoticed role for explanatory unification in evaluating the plausibility of a hypothesis: considerations of explanatory unification enter into the determination of a hypothesis’s prior by affecting its ‘explanatory coherence’, that is, the extent to which the hypothesis offers mutually cohesive explanations of various phenomena.
Chalmers, responding to Braun, continues his earlier arguments for the conclusion that Bayesian considerations favor the Fregean in the debate over the objects of belief in Frege’s puzzle. This short paper gets to the heart of the disagreement over whether Bayesian considerations can tell us anything about Frege’s puzzle and answers: no, they cannot.
Crucial to bayesian contributions to the philosophy of science has been a characteristic psychology, according to which investigators harbor degree of confidence assignments that (insofar as the agents are rational) obey the axioms of the probability calculus. The rub is that, if the evidence of introspection is to be trusted, this fruitful psychology is false: actual investigators harbor no such assignments. The orthodox bayesian response has been to argue that the evidence of introspection is not to be trusted here; it is to investigators' dispositions--not to their felt convictions--that the psychology is meant to be (and succeeds in being) faithful. I argue that this response, in both its orthodox and convex-set bayesian forms, should be rejected--as should the regulative ideals that make the response seem so attractive. I offer a different variant of bayesianism, designed to give the evidence of introspection its due and thus realize (as I claim the other forms of bayesianism cannot) the prescriptive mission of the bayesian project.
Orthodox Bayesianism endorses revising by conditionalization. This paper investigates the zero-raising problem, or equivalently the certainty-dropping problem of orthodox Bayesianism: previously neglected possibilities remain neglected, although the new evidence might suggest otherwise. Yet, one may want to model open-minded agents, that is, agents capable of raising previously neglected possibilities. Different reasons can be given for open-mindedness, one of which is fallibilism. The paper proposes a family of open-minded propositional revisions depending on a parameter ϵ. The basic idea is this: first extend the prior to the newly suggested possibilities by mixing the prior with the uniform probability on these possibilities, then conditionalize. This may put the agent back on the right track when her beliefs or evidence happen to be false. The paper justifies this family of equivocal epsilon-conditionalizations as minimal non-biased open-minded modifications of conditionalization. Several variations are discussed, such as mixing with an ad hoc or silent prior instead of the uniform prior, and a generalization to probabilistic information is given. The approach is compared to other accounts, such as Jeffrey’s Bayesianism, Gärdenfors’s probabilistic revision, maximizing entropy, and minimal revision. 1 Introduction; 2 Downsides of Certainty Conservation; 3 Why Raise Zeros?; 3.1 Accuracy defects; 3.2 Fallibilism and dissolutions; 3.3 Weak fallibilism and partial solutions; 4 Epsilon-Conditionalization; 4.1 Definitions; 4.2 Revision types and information assumptions; 4.3 Epsilon interpretation and flavours; 5 Justifications; 5.1 Open-minded orthodox Bayesianism; 5.2 Minimal revision; 5.3 Transitivity; 6 Conclusion; Appendix.
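A minimal sketch of the basic idea as the abstract states it (our toy implementation over a finite outcome space; the paper's own definitions are more general): mix the prior with the uniform distribution over the newly raised possibilities, with mixing weight ε, then conditionalize on the evidence.

```python
# Toy epsilon-conditionalization over a finite outcome space (illustrative only).
# prior: dict mapping outcomes to probabilities (zeros allowed).
# new_possibilities: outcomes the new evidence suggests should no longer be neglected.
# evidence: the set of outcomes consistent with what was learned.

def epsilon_conditionalize(prior, new_possibilities, evidence, eps=0.01):
    outcomes = set(prior) | set(new_possibilities)
    uniform_on_new = {w: (1 / len(new_possibilities) if w in new_possibilities else 0.0)
                      for w in outcomes}
    # Step 1: extend the prior by mixing it with the uniform on the new possibilities.
    mixed = {w: (1 - eps) * prior.get(w, 0.0) + eps * uniform_on_new[w]
             for w in outcomes}
    # Step 2: ordinary conditionalization on the evidence.
    z = sum(p for w, p in mixed.items() if w in evidence)
    return {w: (p / z if w in evidence else 0.0) for w, p in mixed.items()}

# The prior dogmatically rules out w3; the evidence then eliminates w1.
prior = {"w1": 0.6, "w2": 0.4, "w3": 0.0}
posterior = epsilon_conditionalize(prior, new_possibilities={"w3"},
                                   evidence={"w2", "w3"})
print(posterior)  # w3 now gets positive posterior probability, which strict
                  # conditionalization on the original prior would forbid
```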
Objective Bayesianism says that the strengths of one’s beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting formalism to inductive logic. We show that the maximum entropy principle can be motivated largely in terms of minimising worst-case expected loss.
Objective Bayesian probability is often defined over rather simple domains, e.g., finite event spaces or propositional languages. This paper investigates the extension of objective Bayesianism to first-order logical languages. It is argued that the objective Bayesian should choose a probability function, from all those that satisfy constraints imposed by background knowledge, that is closest to a particular frequency-induced probability function which generalises the λ = 0 function of Carnap’s continuum of inductive methods.
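For reference, Carnap's λ-continuum in its standard form (our notation): given n observed individuals, n_i of them of type i out of k possible types, the predictive probability is

```latex
c_{\lambda}(\text{next individual is of type } i \mid e) \;=\; \frac{n_i + \lambda / k}{n + \lambda},
```

so the λ = 0 member mentioned above is the "straight rule" n_i / n, which tracks observed relative frequencies exactly.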
A common methodological adage holds that diverse evidence better confirms a hypothesis than does the same amount of similar evidence. Proponents of Bayesian approaches to scientific reasoning such as Horwich, Howson and Urbach, and Earman claim to offer both a precise rendering of this maxim in probabilistic terms and an explanation of why the maxim should be part of the methodological canon of good science. This paper contends that these claims are mistaken and that, at best, Bayesian accounts of diverse evidence are crucially incomplete. This failure should lend renewed force to a long-neglected global worry about Bayesian approaches.
This is a review essay about David Corfield and Jon Williamson's anthology Foundations of Bayesianism. Taken together, the fifteen essays assembled in the book assess the state of the art in Bayesianism. Such an assessment is timely, because decision theory and formal epistemology have become disciplines that are no longer taught on a routine basis in good philosophy departments. Thus we need to ask: Quo vadis, Bayesianism? The subjects of the articles include Bayesian group decision theory, approaches to the concept of probability, Bayesian approaches in the philosophy of mathematics, reflections on the relationship between causation and probability, the Independence axiom, and a range of criticisms of Bayesianism, among other subjects. While critical of some of the arguments presented in the articles, this review recommends Corfield and Williamson's volume to anyone who is trying to stay abreast of Bayesian research.
Against Hellman's (1997) recent claims, I argue that Bayesianism is unable to explain the value of generally successful aspects of scientific methodology, viz., deflecting blame from well-confirmed theories onto auxiliaries and preferring more-varied data. Such an explanation would require not just objectification of priors, but a reason to believe priors will generally fall on values that justify the practice. Given the track record on the objectification problem, adding further conditions on priors merely makes the Bayesian's problems even worse.
Bayesian probability is normally defined over a fixed language or event space. But in practice language is susceptible to change, and the question naturally arises as to how Bayesian degrees of belief should change as language changes. I argue here that this question poses a serious challenge to Bayesianism. The Bayesian may be able to meet this challenge, however, and I outline a practical method for changing degrees of belief over changes in finite propositional languages.
Maher (1988, 1990) has recently argued that the way a hypothesis is generated can affect its confirmation by the available evidence, and that Bayesian confirmation theory can explain this. In particular, he argues that evidence known at the time a theory was proposed does not confirm the theory as much as it would had that evidence been discovered after the theory was proposed. We examine Maher's arguments for this "predictivist" position and conclude that they do not, in fact, support his view. We also cast doubt on the assumptions of Maher's alleged Bayesian proofs.