Objective Bayesianism is a methodological theory that is currently applied in statistics, philosophy, artificial intelligence, physics and other sciences. This book develops the formal and philosophical foundations of the theory, at a level accessible to a graduate student with some familiarity with mathematical notation.
There has been a probabilistic turn in contemporary cognitive science. Far and away, most of the work in this vein is Bayesian, at least in name. Coinciding with this development, philosophers have increasingly promoted Bayesianism as the best normative account of how humans ought to reason. In this paper, we make a push for exploring the probabilistic terrain outside of Bayesianism. Non-Bayesian, but still probabilistic, theories provide plausible competitors both to descriptive and normative Bayesian accounts. We argue for this general idea via recent work on explanationist models of updating, which are fundamentally probabilistic but assign a substantial, non-Bayesian role to explanatory considerations.
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s (Oxford studies in epistemology, vol 3. Oxford University Press, Oxford, pp 161–186, 2010) defense of Indifference Principles is unsuccessful. Second, it contends that White’s (Philos Perspect 19:445–459, 2005) arguments against permissive views do not succeed.
This paper consists of three main parts. First, we give an introduction to Hill’s assumption A(n) and to the theory of interval probability, and an overview of recently developed theory and methods for nonparametric predictive inference (NPI), which is based on A(n) and uses interval probability to quantify uncertainty. Thereafter, we illustrate NPI by introducing a variation of the assumption A(n), suitable for inference based on circular data, with applications to several data sets from the literature. This includes attention to comparison of two groups of circular data, and to grouped data. We briefly discuss such inference for multiple future observations. We end the paper with a discussion of NPI and objective Bayesianism.
Bayesians regard their solution to the paradox of confirmation as grounds for preferring their theory of confirmation to Hempel’s. They point out that, unlike Hempel, they can at least say that a black raven confirms “All ravens are black” more than a white shoe. However, I argue that this alleged advantage is cancelled out by the fact that Bayesians are equally committed to the view that a white shoe confirms “All non-black things are non-ravens” less than a black raven. In light of this, I reexamine the dialectic between Hempel and the Bayesians.
Bayesian theory now incorporates a vast body of mathematical, statistical and computational techniques that are widely applied in a panoply of disciplines, from artificial intelligence to zoology. Yet Bayesians rarely agree on the basics, even on the question of what Bayesianism actually is. This book is about the basics – about the opportunities, questions and problems that face Bayesianism today.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
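The quadratic inaccuracy measures singled out in this abstract can be illustrated with a toy computation. This is a minimal sketch, not the paper's own formalism: the two-proposition agenda, the function name, and the numbers are all illustrative assumptions. It also exhibits the dominance fact that motivates probabilism: credences violating the probability axioms are strictly more inaccurate at every world than some probabilistic alternative.

```python
def brier_inaccuracy(credences, world):
    """Quadratic (Brier) inaccuracy of a credence function at a world.

    `credences` maps propositions to degrees of belief in [0, 1];
    `world` maps the same propositions to truth values (1 = true, 0 = false).
    """
    return sum((credences[p] - world[p]) ** 2 for p in credences)

# A non-probabilistic credence function over {rain, not-rain}: 0.6 + 0.6 > 1.
incoherent = {"rain": 0.6, "not_rain": 0.6}
# Its nearest probabilistic neighbour on the simplex.
coherent = {"rain": 0.5, "not_rain": 0.5}

rainy = {"rain": 1, "not_rain": 0}
dry = {"rain": 0, "not_rain": 1}
```

At either world the incoherent credences score 0.52 while the coherent ones score 0.5, so the coherent function accuracy-dominates the incoherent one; this is the standard route from an accuracy norm to the probability axioms.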
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
Traditional Bayesianism requires that an agent’s degrees of belief be represented by a real-valued, probabilistic credence function. However, in many cases it seems that our evidence is not rich enough to warrant such precision. In light of this, some have proposed that we instead represent an agent’s degrees of belief as a set of credence functions. This way, we can respect the evidence by requiring that the set, often called the agent’s credal state, includes all credence functions that are in some sense compatible with the evidence. One known problem for this evidentially-motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In this paper I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid it without compromising the initial evidentialist motivation.
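The inertia problem described here can be made vivid with a toy computation (the function name, likelihood ratio, and credal-set members below are illustrative assumptions, not the paper's examples). If the credal state contains priors arbitrarily close to 0 and 1, conditionalizing every member on even very strong evidence leaves the posterior interval nearly as wide as before:

```python
def conditionalize(prior, likelihood_ratio):
    """Posterior credence in H after evidence E, given the prior and
    the likelihood ratio P(E|H) / P(E|~H) (Bayes' theorem, odds form)."""
    return prior * likelihood_ratio / (prior * likelihood_ratio + (1 - prior))

# A near-vacuous credal state: members spanning almost all of (0, 1).
eps = 1e-9
credal_set = [eps, 0.25, 0.5, 0.75, 1 - eps]

# Conditionalize every member on strong evidence (likelihood ratio 1000).
posteriors = [conditionalize(p, 1000.0) for p in credal_set]

# The posterior set still spans nearly all of (0, 1): the imprecise
# credence has barely narrowed, however strong the evidence.
```

Because conditionalization is applied pointwise to each member of the set, extreme priors yield extreme posteriors, and the represented interval never contracts; this is the belief-inertia phenomenon the paper argues is more general than usually appreciated.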
Bayesianism is a collection of positions in several related fields, centered on the interpretation of probability as something like degree of belief, as contrasted with relative frequency, or objective chance. However, Bayesianism is far from a unified movement. Bayesians are divided about the nature of the probability functions they discuss; about the normative force of this probability function for ordinary and scientific reasoning and decision making; and about what relation (if any) holds between Bayesian and non-Bayesian concepts.
In the first paper, I discussed the basic claims of Bayesianism (that degrees of belief are important, that they obey the axioms of probability theory, and that they are rationally updated by either standard or Jeffrey conditionalization) and the arguments that are often used to support them. In this paper, I will discuss some applications these ideas have had in confirmation theory, epistemology, and statistics, and criticisms of these applications.
Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective Bayesian norms are tied up in the very notion of degrees of belief.
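On a finite outcome space, the maximum entropy principle mentioned here selects, from all probability functions satisfying the evidential constraints, the most equivocal one. A minimal sketch for a single linear constraint (the function name and the three-outcome example are my own assumptions, not the paper's): for a constraint E[X] = m the maxent solution has exponential form p_i ∝ exp(λx_i), and λ can be found by bisection because the resulting mean is monotone in λ.

```python
import math

def maxent_given_mean(values, target_mean):
    """Maximum-entropy distribution over `values` subject to E[X] = target_mean.

    The solution is p_i proportional to exp(lam * x_i); since the constrained
    mean increases monotonically in lam, bisection finds the right lam.
    """
    def mean(lam):
        weights = [math.exp(lam * x) for x in values]
        total = sum(weights)
        return sum(x * w for x, w in zip(values, weights)) / total

    lo, hi = -50.0, 50.0
    for _ in range(200):  # bisect down to machine precision
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    weights = [math.exp(lam * x) for x in values]
    total = sum(weights)
    return [w / total for w in weights]

# A constraint that carries no information (the mean of an equivocal pick)
# leaves maxent fully equivocal: the uniform distribution.
equivocal = maxent_given_mean([1, 2, 3], 2.0)

# A genuinely informative constraint tilts the distribution only as far
# as the evidence demands.
tilted = maxent_given_mean([1, 2, 3], 2.5)
```

The contrast with conditionalisation that the paper examines arises because maxent re-solves this optimisation whenever the constraint set changes, rather than restricting the previous distribution to the evidence.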
Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a "middle way" between Likelihoodism and Bayesianism. In the process, I will propose some new work for an old probability puzzle: the "Monty Hall" problem.
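The "Monty Hall" problem mentioned at the end has a standard Bayesian resolution that is easy to check by simulation (a sketch under my own assumptions; the function names and trial count are illustrative, and the paper's proposed new use of the puzzle is not reproduced here): conditional on the host knowingly opening a non-winning, non-chosen door, switching wins with probability 2/3.

```python
import random

def monty_hall_trial(switch):
    """One play: the host knowingly opens a goat door; return True on a win."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=100_000):
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

# Switching wins about 2/3 of the time; sticking wins about 1/3.
```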
Two of the most influential theories about scientific inference are inference to the best explanation (IBE) and Bayesianism. How are they related? Bas van Fraassen has claimed that IBE and Bayesianism are incompatible rival theories, as any probabilistic version of IBE would violate Bayesian conditionalization. In response, several authors have defended the view that IBE is compatible with Bayesian updating. They claim that the explanatory considerations in IBE are taken into account by the Bayesian because the Bayesian either does or should make use of them in assigning probabilities to hypotheses. I argue that van Fraassen has not succeeded in establishing that IBE and Bayesianism are incompatible, but that the existing compatibilist response is also not satisfactory. I suggest that a more promising approach to the problem is to investigate whether explanatory considerations are taken into account by a Bayesian who assigns priors and likelihoods on his or her own terms. In this case, IBE would emerge from the Bayesian account, rather than being used to constrain priors and likelihoods. I provide a detailed discussion of the case of how the Copernican and Ptolemaic theories explain retrograde motion, and suggest that one of the key explanatory considerations is the extent to which the explanation a theory provides depends on its core elements rather than on auxiliary hypotheses. I then suggest that this type of consideration is reflected in the Bayesian likelihood, given priors that a Bayesian might be inclined to adopt even without explicit guidance by IBE. The aim is to show that IBE and Bayesianism may be compatible, not because they can be amalgamated, but rather because they capture substantially similar epistemic considerations.
1 Introduction
2 Preliminaries
3 Inference to the Best Explanation
4 Bayesianism
5 The Incompatibilist View: Inference to the Best Explanation Contradicts Bayesianism
5.1 Criticism of the incompatibilist view
6 Constraint-Based Compatibilism
6.1 Criticism of constraint-based compatibilism
7 Emergent Compatibilism
7.1 Analysis of inference to the best explanation
7.1.1 Inference to the best explanation on specific hypotheses
7.1.2 Inference to the best explanation on general theories
7.1.3 Copernicus versus Ptolemy
7.1.4 Explanatory virtues
7.1.5 Summary
7.2 Bayesian account
8 Conclusion
This chapter presents an overview of the major interpretations of probability followed by an outline of the objective Bayesian interpretation and a discussion of the key challenges it faces. I discuss the ramifications of interpretations of probability and objective Bayesianism for the philosophy of mathematics in general.
Bayesianism claims to provide a unified theory of epistemic and practical rationality based on the principle of mathematical expectation. In its epistemic guise it requires believers to obey the laws of probability. In its practical guise it asks agents to maximize their subjective expected utility. Joyce’s primary concern is Bayesian epistemology, and its five pillars: people have beliefs and conditional beliefs that come in varying gradations of strength; a person believes a proposition strongly to the extent that she presupposes its truth in her practical and theoretical reasoning; rational graded beliefs must conform to the laws of probability; evidential relationships should be analyzed subjectively in terms of relations among a person’s graded beliefs and conditional beliefs; empirical learning is best modeled as probabilistic conditioning. Joyce explains each of these claims and evaluates some of the justifications that have been offered for them, including “Dutch book,” “decision-theoretic,” and “non-pragmatic” arguments. He also addresses some common objections to Bayesianism, in particular the “problem of old evidence” and the complaint that the view degenerates into an untenable subjectivism. The essay closes by painting a picture of Bayesianism as an “internalist” theory of reasons for action and belief that can be fruitfully augmented with “externalist” principles of practical and epistemic rationality.
Objective Bayesian probability is often defined over rather simple domains, e.g., finite event spaces or propositional languages. This paper investigates the extension of objective Bayesianism to first-order logical languages. It is argued that the objective Bayesian should choose a probability function, from all those that satisfy constraints imposed by background knowledge, that is closest to a particular frequency-induced probability function which generalises the λ = 0 function of Carnap’s continuum of inductive methods.
Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities, they should be calibrated to our evidence of physical probabilities, and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
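The standard entropy being maximised, and the equivocation norm itself, can be stated compactly (the notation is mine, not the paper's):

```latex
% Shannon entropy of a probability function P over a finite domain \Omega:
H(P) = -\sum_{\omega \in \Omega} P(\omega)\,\log P(\omega)

% Maximum entropy principle: where \mathbb{E} is the set of probability
% functions calibrated to the evidence, adopt the belief function
P^{\dagger} = \operatorname*{arg\,max}_{P \in \mathbb{E}} H(P)
```

The paper's generalised entropies replace H with a family of score-derived functionals; minimising worst-case expected loss picks out a member of that family, and language invariance is what narrows the choice back down to the Shannon form above.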
We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call 'strong objective Bayesianism' is characterized by two claims: that all scientific inference is 'logical', and that, given the same background information, two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are 'dogmatic'. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical; the second fails to provide a non-question-begging account of 'same background information'. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs.
The inductive reliability of Bayesian methods is explored. The first result presented shows that for any solvable inductive problem of a general type, there exists a subjective prior which yields a Bayesian inductive method that solves the problem, although not all subjective priors give rise to a successful inductive method for the problem. The second result shows that the same does not hold for computationally bounded agents, so that Bayesianism is "inductively incomplete" for such agents. Finally a consistency proof shows that inductive agents do not need to disregard inductive failure on sets of subjective probability 0 in order to be ideally rational. Together the results reveal the inadequacy of the subjective Bayesian norms for scientific methodology.
Bayesianism provides a rich theoretical framework, which lends itself rather naturally to the explication of various “contrastive” and “non-contrastive” concepts. In this (brief) discussion, I will focus on issues involving “contrastivism”, as they arise in some of the recent philosophy of science, epistemology, and cognitive science literature surrounding Bayesian confirmation theory.
Following the standard practice in sociology, cultural anthropology and history, sociologists, historians of science and some philosophers of science define scientific communities as groups with shared beliefs, values and practices. In this paper it is argued that in real cases the beliefs of the members of such communities often vary significantly in important ways. This has rather dire implications for the convergence defense against the charge of the excessive subjectivity of subjective Bayesianism because that defense requires that communities of Bayesian inquirers share a significant set of modal beliefs. The important implication is then that given the actual variation in modal beliefs across individuals, either Bayesians cannot claim that actual theories have been objectively confirmed or they must accept that such theories have been confirmed relative only to epistemically insignificant communities.
The Raven and the Bayesian. As an essential benefit of their probabilistic account of confirmation, Bayesians state that it provides a twofold solution to the ravens paradox. It is supposed to show that (i) the paradox’s conclusion is tenable because a white shoe only negligibly confirms the hypothesis that all ravens are black, and (ii) the paradox’s first premise is false anyway because a black raven can speak against the hypothesis. I argue that both proposals are not only unable to solve the paradox, but also point to severe difficulties with Bayesianism. The former does not make the conclusion acceptable, and it entails the bizarre consequence that a great amount of non-black non-ravens substantially confirms the ravens hypothesis. The latter does not go far enough because there is a variant of the first premise which follows from Bayesianism and implies a weaker, but nevertheless untenable, variant of the conclusion.
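The quantitative asymmetry at issue in claim (i) can be made concrete in a toy model (entirely my own illustrative assumptions: fixed populations, equal priors, random sampling; the paper's criticism does not depend on these numbers). With 100 ravens, 300 non-black non-ravens, and the alternative hypothesis that exactly one raven is white, Bayes' theorem gives a much larger boost from a black raven than from a "white shoe" observation:

```python
from fractions import Fraction

def posterior(prior_h, like_h, like_alt):
    """Bayes' theorem for hypothesis H against a single alternative."""
    return prior_h * like_h / (prior_h * like_h + (1 - prior_h) * like_alt)

half = Fraction(1, 2)  # equal priors for H and the alternative

# E1: a randomly sampled raven turns out to be black.
# P(E1|H) = 1; under the alternative, 99 of the 100 ravens are black.
post_raven = posterior(half, 1, Fraction(99, 100))

# E2: a randomly sampled non-black object turns out to be a non-raven.
# P(E2|H) = 1; under the alternative there are 301 non-black objects,
# 300 of which are non-ravens.
post_shoe = posterior(half, 1, Fraction(300, 301))

# Both observations raise the probability of H, but the raven raises it
# far more: 100/199 versus 301/601.
```

This reproduces the standard Bayesian comparison that the paper goes on to criticize: the white-shoe evidence is confirmatory but only negligibly so.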
Against Hellman's (1997) recent claims, I argue that Bayesianism is unable to explain the value of generally successful aspects of scientific methodology, viz., deflecting blame from well-confirmed theories onto auxiliaries and preferring more-varied data. Such an explanation would require not just objectification of priors, but a reason to believe priors will generally fall on values that justify the practice. Given the track record on the objectification problem, adding further conditions on priors merely makes the Bayesian's problems even worse.
Many philosophers argue that Bayesian epistemology cannot help us with the traditional Humean problem of induction. I argue that this view is partially but not wholly correct. It is true that Bayesianism does not solve Hume’s problem, in the way that the classical and logical theories of probability aimed to do. However I argue that in one important respect, Hume’s sceptical challenge cannot simply be transposed to a probabilistic context, where beliefs come in degrees, rather than being a yes/no matter.
Objective Bayesianism says that the strengths of one’s beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting formalism to inductive logic. We show that the maximum entropy principle can be motivated largely in terms of minimising worst-case expected loss.
Inference to the Best Explanation and Bayesianism have both been proposed as descriptions of the way that people make inferences. This paper argues that one result from cognitive psychology, the "feminist bank teller" experiment, suggests that people use Inference to the Best Explanation rather than Bayesian techniques.
Bayesianism and Inference to the best explanation (IBE) are two different models of inference. Recently there has been some debate about the possibility of “bayesianizing” IBE. Firstly I explore several alternatives to include explanatory considerations in Bayes’s Theorem. Then I distinguish two different interpretations of prior probabilities: “IBE-Bayesianism” (IBE-Bay) and “frequentist-Bayesianism” (Freq-Bay). After detailing the content of the latter, I propose a rule for assessing the priors. I also argue that Freq-Bay: (i) endorses a role for explanatory value in the assessment of scientific hypotheses; (ii) avoids a purely subjectivist reading of prior probabilities; and (iii) fits better than IBE-Bayesianism with two basic facts about science, i.e., the prominent role played by empirical testing and the existence of many scientific theories in the past that failed to fulfil their promises and were subsequently abandoned.
Kyburg goes half-way towards objective Bayesianism. He accepts that frequencies constrain rational belief to an interval but stops short of isolating an optimal degree of belief within this interval. I examine the case for going the whole hog.
Bayesian probability is normally defined over a fixed language or event space. But in practice language is susceptible to change, and the question naturally arises as to how Bayesian degrees of belief should change as language changes. I argue here that this question poses a serious challenge to Bayesianism. The Bayesian may be able to meet this challenge, however, and I outline a practical method for changing degrees of belief over changes in finite propositional languages.
Crucial to bayesian contributions to the philosophy of science has been a characteristic psychology, according to which investigators harbor degree of confidence assignments that (insofar as the agents are rational) obey the axioms of the probability calculus. The rub is that, if the evidence of introspection is to be trusted, this fruitful psychology is false: actual investigators harbor no such assignments. The orthodox bayesian response has been to argue that the evidence of introspection is not to be trusted here; it is to investigators' dispositions--not to their felt convictions--that the psychology is meant to be (and succeeds in being) faithful. I argue that this response, in both its orthodox and convex-set bayesian forms, should be rejected--as should the regulative ideals that make the response seem so attractive. I offer a different variant of bayesianism, designed to give the evidence of introspection its due and thus realize (as I claim the other forms of bayesianism cannot) the prescriptive mission of the bayesian project.
In the last published round of his debate with Walter Block on economic methodology, Bryan Caplan introduces Bayes’ Rule as ‘a cure for methodological schizophrenia’. Block had raised the question ‘Why do economists react so violently to empirical evidence against the conventional view of the minimum wage’s effect?’ and answered it with the suggestion that economists do so because they are covert praxeologists. This means that they base most of their economic arguments on conclusions derived from their a priori understanding of human action, although, as methodologists, they prefer to maintain that their arguments are merely appropriately qualified generalisations of empirical observations. Against this, Caplan maintained that neoclassical economists are Bayesians with some strong prior beliefs, which lead them to ascribe low probability to any statement that goes against the strongly held consensus. Presumably, there is such a strongly held consensus with respect to the minimum wage effect. Caplan concluded that ‘[t]he Bayesian position stakes out a compelling middle ground between atheoretical positivism and praxeology. On the one hand, the Bayesian view emphasizes that few propositions are known with certainty, and that we should adjust our probabilities as new information comes in. On the other hand, the Bayesian view recognizes that the rational view is not an average of past empirical findings, much less a naïve faith in the last prominent study.’ Caplan’s references to Bayes should be considered carefully before we accept that Bayesianism makes for a middle ground—let alone a compelling one—between positivism and praxeology. The image of a middle ground may be soothing, but it is no more than a metaphor. Whether it makes sense in this context is an altogether different matter.
This is a review essay about David Corfield and Jon Williamson's anthology Foundations of Bayesianism. Taken together, the fifteen essays assembled in the book assess the state of the art in Bayesianism. Such an assessment is timely, because decision theory and formal epistemology have become disciplines that are no longer taught on a routine basis in good philosophy departments. Thus we need to ask: Quo vadis, Bayesianism? The subjects of the articles include Bayesian group decision theory, approaches to the concept of probability, Bayesian approaches in the philosophy of mathematics, reflections on the relationship between causation and probability, the Independence axiom, and a range of criticisms of Bayesianism, among other subjects. While critical of some of the arguments presented in the articles, this review recommends Corfield and Williamson's volume to anyone who is trying to stay abreast of Bayesian research.
The foundations of probability deal with the problem of modelling reasoning in the face of uncertainty by a mathematical calculus, usually the standard probability calculus. The three dominating schools in the foundations of probability interpret probabilities as limiting long-run frequencies conceived as an objective property of series of repeatable experiments, or as rational betting rates for an individual to bet on the unknown outcome of experiments depending on the individual’s prior assessments updated by evidence, or as rational betting rates to bet on the unknown outcome of experiments depending on evidence only, but not on subjective assessments. Apart from the interpretation of probability, frequentism and Bayesianism in particular also differ with respect to the advocated methodology for inference. Frequentists use tests, estimators, and confidence intervals. Bayesians usually start with a prior distribution and use the posterior distribution, which is obtained by conditioning on the evidence, in order to carry out inferences. The prior distribution either models the individual’s personal prior probability assessments or, in objective Bayesianism, is chosen according to some rules in order to allow the evidence to determine the posterior. All three approaches are riddled with difficulties. Frequentism is often accused of circularity, because the assumption of independent identically distributed outcomes is needed in order to connect observations to frequentist probabilities, but ‘iid’ is itself defined probabilistically. Subjective Bayesianism is attacked for being too …
Orthodox Bayesianism tells a story about the epistemic trajectory of an ideally rational agent. The agent begins with a ‘prior’ probability function; thereafter, it conditionalizes on its evidence as it comes in. Consider, then, such an agent at the very beginning of its trajectory. It is ideally rational, but completely ignorant of which world is actual. Call this agent ‘Superbaby’. Superbaby personifies the Bayesian story. We argue that it must believe ‘Moorish’ propositions of the form.
It is timely to assess Bayesian models, but Bayesianism is not a religion. Bayesian modeling is typically used as a tool to explain human data. Bayesian models are sometimes equivalent to other models, but have the advantage of explicitly integrating prior hypotheses with new observations. Any lack of representational or neural assumptions may be an advantage rather than a disadvantage.
This paper aims at giving a general outlook of Bayesianism as a set of meta-criteria for scientific methodology. In particular, it discusses Social Bayesianism, that is, the application of Bayesian meta-criteria to scientific institutions. From a Bayesian point of view, methodologies and institutions that simulate Bayesian belief updating are good ones, and those with more discriminatory power are better ones than those with less discriminatory power, other things being equal. This paper applies these ideas to a particular issue: diversity in science. Bayesian considerations reveal some conditions for epistemically desirable diversity in science.
Bayesian decision theory, in its classical or strict form, requires agents to have a determinate probability function. In recent years many decision theorists have come to think that this requirement should be weakened to allow for cases in which the agent makes indeterminate probability judgments. It has been claimed that this weakening makes the theory more realistic, and that it makes the theory more tenable as a normative ideal. This paper shows that the usual technique for weakening strict Bayesianism has neither of these claimed advantages.
In this paper I critique Peter Lipton’s attempt to deal with the threat of Bayesianism to the normative aspect of his project in Inference to the Best Explanation. I consider the five approaches Lipton proposes for reconciling the doxastic recommendations of Inference to the Best Explanation (IBE) with those of Bayesianism (BA): IBE gives a ‘boost’ to the posterior probability of particularly ‘lovely’ hypotheses after the Bayesian calculation is performed; IBE helps us to set the likelihood of evidence on a given hypothesis; IBE helps us to set the prior probabilities of hypotheses and evidence; IBE guides us in determining which evidence is relevant to a given hypothesis; IBE functions as a heuristic for otherwise difficult Bayesian calculations. I agree with Lipton in rejecting one of these approaches. However, I then go on to point out difficulties for the remaining four, all of which Lipton provisionally accepts. On one of them, the explanationist and the Bayesian both fall silent in the same situations, so in the final analysis it seems to be moot. Another devolves on two of the others, and since I reject both of those it is a bad option. And the last should be considered – if at all – only in unimportant cases; in vital ones, BA is clearly better. I then propose a sixth way in which IBE and BA could be seen as complementary. Yet this suggestion relegates IBE to a secondary, supporting rôle vis-à-vis BA. I then question whether this auxiliary status is the most the explanationist can hope for.
We pose and resolve several vexing decision theoretic puzzles. Some are variants of existing puzzles, such as 'Trumped' (Arntzenius and McCarthy 1997), 'Rouble trouble' (Arntzenius and Barrett 1999), 'The airtight Dutch book' (McGee 1999), and 'The two envelopes puzzle' (Broome 1995). Others are new. A unified resolution of the puzzles shows that Dutch book arguments have no force in infinite cases. It thereby provides evidence that reasonable utility functions may be unbounded and that reasonable credence functions need not be countably additive. The resolution also shows that when infinitely many decisions are involved, the difference between making the decisions simultaneously and making them sequentially can be the difference between riches and ruin. Finally, the resolution reveals a new way in which the ability to make binding commitments can save perfectly rational agents from sure losses.
Andrew Wayne discusses some recent attempts to account, within a Bayesian framework, for the "common methodological adage" that "diverse evidence better confirms a hypothesis than does the same amount of similar evidence". One of the approaches considered by Wayne is that suggested by Howson and Urbach and dubbed the "correlation approach" by Wayne. This approach is, indeed, incomplete, in that it neglects the role of the hypothesis under consideration in determining what diversity in a body of evidence is relevant diversity. In this paper, it is shown how this gap can be filled, resulting in a more satisfactory account of the evidential role of diversity of evidence. In addition, it is argued that Wayne's criticism of the correlation approach does not indicate a serious flaw in the approach.