The article is a plea for ethicists to regard probability as one of their most important concerns. It outlines a series of topics of central importance in ethical theory in which probability is implicated, often in a surprisingly deep way, and lists a number of open problems. Topics covered include: interpretations of probability in ethical contexts; the evaluative and normative significance of risk or uncertainty; uses and abuses of expected utility theory; veils of ignorance; Harsanyi’s aggregation theorem; population size problems; equality; fairness; giving priority to the worse off; continuity; incommensurability; nonexpected utility theory; evaluative measurement; aggregation; causal and evidential decision theory; act consequentialism; rule consequentialism; and deontology.
The present chapter describes a probabilistic framework of human reasoning based on probability logic. While there are several approaches to probability logic, we adopt the coherence-based approach.
An unknown process is generating a sequence of symbols, drawn from an alphabet, A. Given an initial segment of the sequence, how can one predict the next symbol? Ray Solomonoff’s theory of inductive reasoning rests on the idea that a useful estimate of a sequence’s true probability of being outputted by the unknown process is provided by its algorithmic probability (its probability of being outputted by a species of probabilistic Turing machine). However, algorithmic probability is a “semimeasure”: i.e., the sum, over all x∈A, of the conditional algorithmic probabilities of the next symbol being x, may be less than 1. Prevailing wisdom has it that algorithmic probability must be normalized, to eradicate this semimeasure property, before it can yield acceptable probability estimates. This paper argues, to the contrary, that the semimeasure property contributes substantially to the power and scope of an algorithmic-probability-based theory of induction, and that normalization is unnecessary.
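The semimeasure property can be illustrated with a toy example (not Solomonoff's actual construction): a hypothetical machine over the alphabet {0, 1} that, at each step, emits 0 or 1 with probability 1/4 each and fails to halt with the remaining probability 1/2, so the conditional next-symbol probabilities sum to 1/2 rather than 1.

```python
def m(s):
    # Toy semimeasure: each extra symbol costs probability 1/4;
    # the remaining 1/2 per step is "lost" to non-halting computations.
    return 0.25 ** len(s) if set(s) <= {'0', '1'} else 0.0

prefix = '0110'
# Conditional probabilities of the next symbol being '0' or '1':
leak = 1 - (m(prefix + '0') + m(prefix + '1')) / m(prefix)
# leak == 0.5: the conditional next-symbol probabilities sum to 1/2, not 1
```

Normalizing would rescale these conditionals to sum to 1; the paper's claim is that the leaked mass is doing useful work and should be left in place.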
Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (1) there are many well-established empirical findings (e.g., from the influential Tversky, Kahneman research tradition) that are hard to reconcile with CP principles; and (2) these same findings have natural and straightforward explanations with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order-dependent, individual states can be superposition states (that are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. However, our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings that motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.
This book is meant to be a primer, that is, an introduction, to probability logic, a subject that appears to be in its infancy. Probability logic is a subject envisioned by Hans Reichenbach and largely created by Adams. It treats conditionals as bearers of conditional probabilities and discusses an appropriate sense of validity for arguments with such conditionals, as well as ordinary statements, as premisses. This is a clear, well-written text on the subject of probability logic, suitable for advanced undergraduates or graduates, but also of interest to professional philosophers. There are well-thought-out exercises, and a number of advanced topics treated in appendices, while some are brought up in exercises and some are alluded to only in footnotes. By this means, it is hoped that the reader will at least be made aware of most of the important ramifications of the subject and its tie-ins with current research, and will have some indications concerning recent and relevant literature.
This is a study in the meaning of natural language probability operators, sentential operators such as probably and likely. We ask what sort of formal structure is required to model the logic and semantics of these operators. Along the way we investigate their deep connections to indicative conditionals and epistemic modals, probe their scalar structure, observe their sensitivity to contextually salient contrasts, and explore some of their scopal idiosyncrasies.
This paper motivates and develops a novel semantic framework for deontic modals. The framework is designed to shed light on two things: the relationship between deontic modals and substantive theories of practical rationality, and the interaction of deontic modals with conditionals, epistemic modals, and probability operators. I argue that, in order to model inferential connections between deontic modals and probability operators, we need more structure than is provided by classical intensional theories. In particular, we need probabilistic structure that interacts directly with the compositional semantics of deontic modals. However, I reject theories that provide this probabilistic structure by claiming that the semantics of deontic modals is linked to the Bayesian notion of expectation. I offer a probabilistic premise semantics that explains all the data that create trouble for the rival theories.
In this study we investigate the influence of reason-relation readings of indicative conditionals and ‘and’/‘but’/‘therefore’ sentences on various cognitive assessments. According to the Frege-Grice tradition, a dissociation is expected. Specifically, differences in the reason-relation reading of these sentences should affect participants’ evaluations of their acceptability but not of their truth value. In two experiments we tested this assumption by introducing a relevance manipulation into the truth-table task as well as in other tasks assessing the participants’ acceptability and probability evaluations. Across the two experiments a strong dissociation was found. The reason-relation reading of all four sentences strongly affected their probability and acceptability evaluations, but hardly affected their respective truth evaluations. Implications of this result for recent work on indicative conditionals are discussed.
This article outlines a theory of naive probability. According to the theory, individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an extensional way: They construct mental models of what is true in the various possibilities. Each model represents an equiprobable alternative unless individuals have beliefs to the contrary, in which case some models will have higher probabilities than others. The probability of an event depends on the proportion of models in which it occurs. The theory predicts several phenomena of reasoning about absolute probabilities, including typical biases. It correctly predicts certain cognitive illusions in inferences about relative probabilities. It accommodates reasoning based on numerical premises, and it explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem. Finally, it dispels some common misconceptions of probabilistic reasoning.
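The theory's core extensional rule, probability as the proportion of equiprobable mental models in which an event holds, can be sketched as follows (a minimal illustration with hypothetical propositions, not the author's implementation):

```python
from fractions import Fraction
from itertools import product

# Equiprobable "mental models": truth assignments to two propositions.
models = list(product([True, False], repeat=2))  # (raining, windy)

def prob(event):
    # Probability of an event = proportion of models in which it holds.
    holds = sum(1 for m in models if event(*m))
    return Fraction(holds, len(models))

p_rain = prob(lambda raining, windy: raining)                   # 1/2
p_rain_or_wind = prob(lambda raining, windy: raining or windy)  # 3/4
```

Unequal weights on models (when reasoners have beliefs to the contrary) would replace the simple proportion with a weighted sum, but the extensional idea is the same.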
De Finetti is one of the founding fathers of the subjective school of probability. He held that probabilities are subjective, coherent degrees of expectation, and he argued that none of the objective interpretations of probability make sense. While his theory has been influential in science and philosophy, it has encountered various objections. I argue that these objections overlook central aspects of de Finetti’s philosophy of probability and are largely unfounded. I propose a new interpretation of de Finetti’s theory that highlights these aspects and explains how they are an integral part of de Finetti’s instrumentalist philosophy of probability. I conclude by drawing an analogy between misconceptions about de Finetti’s philosophy of probability and common misconceptions about instrumentalism.
In this influential study of central issues in the philosophy of science, Paul Horwich elaborates on an important conception of probability, diagnosing the failure of previous attempts to resolve these issues as stemming from a too-rigid conception of belief. Adopting a Bayesian strategy, he argues for a probabilistic approach, yielding a more complete understanding of the characteristics of scientific reasoning and methodology. Presented in a fresh twenty-first-century series livery, and including a specially commissioned preface written by Colin Howson, illuminating its enduring importance and relevance to philosophical enquiry, this engaging work has been revived for a new generation of readers.
Order of information plays a crucial role in the process of updating beliefs across time. In fact, the presence of order effects makes a classical or Bayesian approach to inference difficult. As a result, the existing models of inference, such as the belief-adjustment model, merely provide an ad hoc explanation for these effects. We postulate a quantum inference model for order effects based on the axiomatic principles of quantum probability theory. The quantum inference model explains order effects by transforming a state vector with different sequences of operators for different orderings of information. We demonstrate this process by fitting the quantum model to data collected in a medical diagnostic task and a jury decision-making task. To further test the quantum inference model, a new jury decision-making experiment is developed. Using the results of this experiment, we compare the quantum inference model with two versions of the belief-adjustment model, the adding model and the averaging model. We show that both the quantum model and the adding model provide good fits to the data. To distinguish the quantum model from the adding model, we develop a new experiment involving extreme evidence. The results from this new experiment suggest that the adding model faces limitations when accounting for tasks involving extreme evidence, whereas the quantum inference model does not. Ultimately, we argue that the quantum model provides a more coherent account of order effects than was previously possible.
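The order-effect mechanism described here, applying different sequences of operators to a state vector, can be sketched in a minimal two-dimensional example (the vectors and angles below are hypothetical, not the fitted model from the paper):

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def proj(v, u):
    # Project v onto the unit vector u.
    c = sum(a * b for a, b in zip(v, u))
    return [c * u[0], c * u[1]]

# Initial belief state and two non-commuting evidence directions (hypothetical).
psi = [math.cos(math.radians(20)), math.sin(math.radians(20))]
uA = [1.0, 0.0]
uB = [math.cos(math.radians(45)), math.sin(math.radians(45))]

def sequential_prob(state, first, second):
    # Probability of accepting evidence `first` and then `second`:
    # project, renormalize (collapse), project again.
    p1 = norm(proj(state, first)) ** 2
    collapsed = proj(state, first)
    collapsed = [x / norm(collapsed) for x in collapsed]
    p2 = norm(proj(collapsed, second)) ** 2
    return p1 * p2

pAB = sequential_prob(psi, uA, uB)
pBA = sequential_prob(psi, uB, uA)
assert abs(pAB - pBA) > 1e-3  # order effect: A-then-B differs from B-then-A
```

Because the two projectors do not commute, the same evidence in a different order yields a different joint probability, which is exactly what a single classical joint distribution cannot do.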
Some have argued that chance and determinism are compatible in order to account for the objectivity of probabilities in theories that are compatible with determinism, like Classical Statistical Mechanics (CSM) and Evolutionary Theory (ET). Others have argued, to the contrary, that chance and determinism are incompatible, and so such probabilities are subjective. In this paper, I argue that both of these positions are unsatisfactory. I argue that the probabilities of theories like CSM and ET are not chances, but also that they are not subjective probabilities either. Rather, they are a third type of probability, which I call counterfactual probability. The main distinguishing feature of counterfactual probability is the role it plays in conveying important counterfactual information in explanations. This distinguishes counterfactual probability from chance as a second concept of objective probability.
David Wallace has given a decision-theoretic argument for the Born Rule in the context of Everettian quantum mechanics. This approach promises to resolve some long-standing problems with probability in EQM, but it has faced plenty of resistance. One kind of objection charges that the requisite notion of decision-theoretic uncertainty is unavailable in the Everettian picture, so that the argument cannot gain any traction; another kind of objection grants the proof’s applicability and targets the premises. In this article I propose some novel principles connecting the physics of EQM with the metaphysics of modality, and argue that in the resulting framework the incoherence problem does not arise. These principles also help to justify one of the most controversial premises of Wallace’s argument, ‘branching indifference’. Absent any a priori reason to align the metaphysics with the physics in some other way, the proposed principles can be adopted on grounds of theoretical utility. The upshot is that Everettians can, after all, make clear sense of objective probability.
I describe a realist, ontologically objective interpretation of probability, "far-flung frequency (FFF) mechanistic probability". FFF mechanistic probability is defined in terms of facts about the causal structure of devices and certain sets of frequencies in the actual world. Though defined partly in terms of frequencies, FFF mechanistic probability avoids many drawbacks of well-known frequency theories and helps causally explain stable frequencies, which will usually be close to the values of mechanistic probabilities. I also argue that it's a virtue rather than a failing of FFF mechanistic probability that it does not define single-case chances, and compare some aspects of my interpretation to a recent interpretation proposed by Strevens.
In probability discounting (or probability weighting), one multiplies the value of an outcome by one's subjective probability that the outcome will obtain in decision-making. The broader import of defending probability discounting is to help justify cost-benefit analyses in contexts such as climate change. This chapter defends probability discounting under risk both negatively, from arguments by Simon Caney (2008, 2009), and with a new positive argument. First, in responding to Caney, I argue that small costs and benefits need to be evaluated, and that viewing practices at the social level is too coarse-grained. Second, I argue for probability discounting, using a distinction between causal responsibility and moral responsibility. Moral responsibility can be cashed out in terms of blameworthiness and praiseworthiness, while causal responsibility obtains in full for any effect which is part of a causal chain linked to one's act. With this distinction in hand, unlike causal responsibility, moral responsibility can be seen as coming in degrees. My argument is that, because we can limit our deliberation and consideration to that for which we are morally responsible, and because our moral responsibility for outcomes is limited by our subjective probabilities, our subjective probabilities can ground probability discounting.
Adams’ thesis is generally agreed to be linguistically compelling for simple conditionals with factual antecedent and consequent. We propose a derivation of Adams’ thesis from the Lewis-Kratzer analysis of if-clauses as domain restrictors, applied to probability operators. We argue that Lewis’s triviality result may be seen as a result of inexpressibility of the kind familiar in generalized quantifier theory. Some implications of the Lewis-Kratzer analysis are presented concerning the assignment of probabilities to compounds of conditionals.
The Bayesian model has been used in psychology as the standard reference for the study of probability revision. In the first part of this paper we show that this traditional choice restricts the scope of the experimental investigation of revision to a stable universe. This is the situation that, technically, is known as focusing. We argue that it is essential for a better understanding of human probability revision to consider another situation called updating (Katsuno & Mendelzon, 1992), in which the universe is evolving. In that case the structure of the universe has definitely been transformed and the revision message conveys information on the resulting universe. The second part of the paper presents four experiments based on the Monty Hall puzzle that aim to show that updating is a natural frame for individuals to revise their beliefs.
There is a plethora of confirmation measures in the literature. Zalabardo considers four such measures: PD, PR, LD, and LR. He argues for LR and against each of PD, PR, and LD. First, he argues that PR is the better of the two probability measures. Next, he argues that LR is the better of the two likelihood measures. Finally, he argues that LR is superior to PR. I set aside LD and focus on the trio of PD, PR, and LR. The question I address is whether Zalabardo succeeds in showing that LR is superior to each of PD and PR. I argue that the answer is negative. I also argue, though, that measures such as PD and PR, on one hand, and measures such as LR, on the other hand, are naturally understood as explications of distinct senses of confirmation.
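On their standard readings (an assumption here, not Zalabardo's own formulations), the four measures are the probability difference PD = P(H|E) − P(H), the probability ratio PR = P(H|E)/P(H), the likelihood difference LD = P(E|H) − P(E|¬H), and the likelihood ratio LR = P(E|H)/P(E|¬H). A toy computation with hypothetical numbers:

```python
pH, pE_H, pE_notH = 0.2, 0.9, 0.3        # hypothetical prior and likelihoods
pE = pH * pE_H + (1 - pH) * pE_notH      # total probability of the evidence
pH_E = pH * pE_H / pE                    # posterior, by Bayes's theorem

PD = pH_E - pH          # probability difference
PR = pH_E / pH          # probability ratio
LD = pE_H - pE_notH     # likelihood difference
LR = pE_H / pE_notH     # likelihood ratio
```

Note that PD and PR compare posterior with prior, while LD and LR compare the two likelihoods, which is one way of seeing why the two families may explicate distinct senses of confirmation.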
Capturing the relationship between conditionals and conditional probability with a trivalent semantics. Journal of Applied Non-Classical Logics, Vol. 24 (Three-Valued Logics and their Applications), 2014, pp. 144–152. doi: 10.1080/11663081.2014.911535.
We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned probability zero (in other words: the probability functions are regular). We use a non-Archimedean field as the range of the probability function. As a result, the property of countable additivity in Kolmogorov’s axiomatization of probability is replaced by a different type of infinite additivity.
We provide a 'verisimilitudinarian' analysis of the well-known Linda paradox or conjunction fallacy, i.e., the fact that most people judge the conjunctive statement "Linda is a bank teller and is active in the feminist movement" (B & F) as more probable than the isolated statement "Linda is a bank teller" (B), contrary to an uncontroversial principle of probability theory. The basic idea is that experimental participants may judge B & F a better hypothesis about Linda as compared to B because they evaluate B & F as more verisimilar than B. In fact, the hypothesis "feminist bank teller", while less likely to be true than "bank teller", may well be a better approximation to the truth about Linda.
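The "uncontroversial principle" at issue is the conjunction rule, P(B ∧ F) ≤ P(B), which holds for any joint distribution because the conjunction picks out a subset of the outcomes that make B true. A sketch with hypothetical numbers:

```python
# Hypothetical joint probabilities over (bank_teller, feminist).
outcomes = {
    (True, True): 0.05,
    (True, False): 0.10,
    (False, True): 0.50,
    (False, False): 0.35,
}

p_B = sum(p for (b, f), p in outcomes.items() if b)              # P(B)
p_B_and_F = sum(p for (b, f), p in outcomes.items() if b and f)  # P(B and F)
assert p_B_and_F <= p_B  # the conjunction rule holds by construction
```

Whatever numbers are plugged in, the conjunction can never exceed its conjunct, which is why the majority judgment in the Linda task counts as a fallacy about probability even if, as the paper argues, it tracks verisimilitude.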
A definition of causation as probability-raising is threatened by two kinds of counterexample: first, when a cause lowers the probability of its effect; and second, when the probability of an effect is raised by a non-cause. In this paper, I present an account that deals successfully with problem cases of both these kinds. In doing so, I also explore some novel implications of incorporating into the metaphysical investigation considerations of causal psychology.
This paper analyzes concepts of independence and assumptions of convexity in the theory of sets of probability distributions. The starting point is Kyburg and Pittarelli’s discussion of “convex Bayesianism” (in particular their proposals concerning E-admissibility, independence, and convexity). The paper offers an organized review of the literature on independence for sets of probability distributions; new results on graphoid properties and on the justification of “strong independence” (using exchangeability) are presented. Finally, the connection between Kyburg and Pittarelli’s results and recent developments on the axiomatization of non-binary preferences, and its impact on “complete” independence, are described.
We explore ways in which purely qualitative belief change in the AGM tradition throws light on options in the treatment of conditional probability. First, by helping us see why it can be useful to go beyond the ratio rule defining conditional probability in terms of one-place probability. Second, by clarifying what is at stake in different ways of doing that. Third, by suggesting novel forms of conditional probability corresponding to familiar variants of qualitative belief change, and conversely. Likewise, we explain how recent work on the qualitative part of probabilistic inference leads to a very broad class of 'proto-probability' functions.
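The ratio rule in question can be sketched as follows (a minimal illustration over a finite sample space; the point of going beyond it is precisely that this definition is silent when P(B) = 0):

```python
def cond(P, A, B):
    # Ratio rule: P(A|B) = P(A ∩ B) / P(B), undefined when P(B) == 0.
    pB = sum(p for w, p in P.items() if w in B)
    pAB = sum(p for w, p in P.items() if w in A and w in B)
    if pB == 0:
        raise ZeroDivisionError("the ratio rule does not define P(A|B) when P(B) = 0")
    return pAB / pB

# Two fair coin tosses as a toy sample space.
P = {'hh': 0.25, 'ht': 0.25, 'th': 0.25, 'tt': 0.25}
p = cond(P, {'hh', 'ht'}, {'hh', 'th'})  # first toss heads, given second toss heads
```

Primitive two-place (Popper-style) conditional probability functions, and the qualitative belief-change analogues the paper discusses, are ways of assigning a value even in the zero-probability case that the ratio rule leaves open.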
A probability distribution is regular if no possible event is assigned probability zero. While some hold that probabilities should always be regular, three counter-arguments have been posed based on examples where, if regularity holds, then perfectly similar events must have different probabilities. Howson (2017) and Benci et al. (2016) have raised technical objections to these symmetry arguments, but we see here that their objections fail. Howson says that Williamson’s (2007) “isomorphic” events are not in fact isomorphic, but Howson is speaking of set-theoretic representations of events in a probability model. While those sets are not isomorphic, Williamson’s physical events are, in the relevant sense. Benci et al. claim that all three arguments rest on a conflation of different models, but they do not. They are founded on the premise that similar events should have the same probability in the same model, or in one case, on the assumption that a single rotation-invariant distribution is possible. Having failed to refute the symmetry arguments on such technical grounds, one could deny their implicit premises, which is a heavy cost, or adopt varying degrees of instrumentalism or pluralism about regularity, but that would not serve the project of accurately modelling chances.
This paper defends David Hume's "Of Miracles" from John Earman's (2000) Bayesian attack by showing that Earman misrepresents Hume's argument against believing in miracles and misunderstands Hume's epistemology of probable belief. It argues, moreover, that Hume's account of evidence is fundamentally non-mathematical and thus cannot be properly represented in a Bayesian framework. Hume's account of probability is shown to be consistent with a long and laudable tradition of evidential reasoning going back to ancient Roman law.
We offer a probabilistic model of rational consequence relations (Lehmann and Magidor, 1990) by appealing to the extension of the classical Ramsey-Adams test proposed by Vann McGee in (McGee, 1994). Previous and influential models of nonmonotonic consequence relations have been produced in terms of the dynamics of expectations (Gärdenfors and Makinson, 1994; Gärdenfors, 1993). 'Expectation' is a term of art in these models, which should not be confused with the notion of expected utility. The expectations of an agent are some form of belief weaker than absolute certainty. Our model offers a modified and extended version of an account of qualitative belief in terms of conditional probability, first presented in (van Fraassen, 1995). We use this model to relate probabilistic and qualitative models of non-monotonic relations in terms of expectations. In doing so we propose a probabilistic model of the notion of expectation. We provide characterization results both for logically finite languages and for logically infinite, but countable, languages. The latter case shows the relevance of the axiom of countable additivity for our probability functions. We show that a rational logic defined over a logically infinite language can only be fully characterized in terms of finitely additive conditional probability.
The paper provides a new critical perspective on the propensity interpretation of fitness (PIF), by investigating its relationship to the propensity interpretation of probability. Two main conclusions are drawn. First, the claim that fitness is a propensity cannot be understood properly: fitness is not a propensity in the sense prescribed by the propensity interpretation of probability. Second, this interpretation of probability is inessential for explanations proposed by the PIF in evolutionary biology. Consequently, interpreting the probabilistic dimension of fitness in terms of propensities is neither a strong motivation in favor of this interpretation, nor a possible target for substantial criticism.
Johannes von Kries’s Spielraum-theory is regarded as one of the most important philosophical contributions of the nineteenth century to an objective interpretation of probability. This paper aims at a critical and contextual analysis of von Kries’s approach: It is contextual insofar as it reconstructs the Spielraum-theory in the historical setting that formed his scientific and philosophical outlook. It is critical insofar as it unfolds systematic tensions and inconsistencies which are rooted in this context, especially in the profound transformation of mechanism that took place in the late nineteenth century. In this regard, the paper focuses on von Kries’s understanding of natural laws and nomological knowledge in relation to his concept of objective probability. While the formal approach of the Spielraum-theory—as far as developed by von Kries—seems sound, his epistemological claims with respect to nomological knowledge sustain classical mechanism and are hence difficult to substantiate from the point of view of modern science.
NOTE: This paper is a reworking of some aspects of an earlier paper – ‘What else justification could be’ – and also an early draft of chapter 2 of Between Probability and Certainty. I'm leaving it online as it has a couple of citations and there is some material here which didn't make it into the book (and which I may yet try to explore elsewhere). My concern in this paper is with a certain, pervasive picture of epistemic justification. On this picture, acquiring justification for believing something is essentially a matter of minimising one’s risk of error – so one is justified in believing something just in case it is sufficiently likely, given one’s evidence, to be true. This view is motivated by an admittedly natural thought: If we want to be fallibilists about justification then we shouldn’t demand that something be certain – that we completely eliminate error risk – before we can be justified in believing it. But if justification does not require the complete elimination of error risk, then what could it possibly require if not its minimisation? If justification does not require epistemic certainty then what could it possibly require if not epistemic likelihood? When all is said and done, I’m not sure that I can offer satisfactory answers to these questions – but I will attempt to trace out some possible answers here. The alternative picture that I’ll outline makes use of a notion of normalcy that I take to be irreducible to notions of statistical frequency or predominance.
The foundations of probability are viewed through the lens of the subjectivist interpretation. This article surveys conditional probability, arguments for probabilism, probability dynamics, and the evidential and subjective interpretations of probability.
Inductive probability is the logical concept of probability in ordinary language. It is vague but it can be explicated by defining a clear and precise concept that can serve some of the same purposes. This paper presents a general method for doing such an explication and then a particular explication due to Carnap. Common criticisms of Carnap's inductive logic are examined; it is shown that most of them are spurious and the others are not fundamental.
This chapter is a philosophical survey of some leading approaches in formal epistemology in the so-called ‘Bayesian’ tradition. According to them, a rational agent’s degrees of belief—credences—at a time are representable with probability functions. We also canvas various further putative ‘synchronic’ rationality norms on credences. We then consider ‘diachronic’ norms that are thought to constrain how credences should respond to evidence. We discuss some of the main lines of recent debate, and conclude with some prospects for future research.
This paper is a response to Tyler Wunder’s ‘The modality of theism and probabilistic natural theology: a tension in Alvin Plantinga's philosophy’ (this journal). In his article, Wunder argues that if the proponent of the Evolutionary Argument Against Naturalism (EAAN) holds theism to be non-contingent and frames the argument in terms of objective probability, then the EAAN is either unsound or theism is necessarily false. I argue that a modest revision of the EAAN renders Wunder’s objection irrelevant, and that this revision actually widens the scope of the argument.
We generalize the Kolmogorov axioms for probability calculus to obtain conditions defining, for any given logic, a class of probability functions relative to that logic, coinciding with the standard probability functions in the special case of classical logic but allowing consideration of other classes of "essentially Kolmogorovian" probability functions relative to other logics. We take a broad view of the Bayesian approach as dictating inter alia that from the perspective of a given logic, rational degrees of belief are those representable by probability functions from the class appropriate to that logic. Classical Bayesianism, which fixes the logic as classical logic, is only one version of this general approach. Another, which we call Intuitionistic Bayesianism, selects intuitionistic logic as the preferred logic and the associated class of probability functions as the right class of candidate representations of epistemic states (rational allocations of degrees of belief). Various objections to classical Bayesianism are, we argue, best met by passing to intuitionistic Bayesianism—in which the probability functions are taken relative to intuitionistic logic—rather than by adopting a radically non-Kolmogorovian, for example, nonadditive, conception of (or substitute for) probability functions, in spite of the popularity of the latter response among those who have raised these objections. The interest of intuitionistic Bayesianism is further enhanced by the availability of a Dutch Book argument justifying the selection of intuitionistic probability functions as guides to rational betting behavior when due consideration is paid to the fact that bets are settled only when/if the outcome bet on becomes known.
Maxwell's deduction of the probability distribution over the velocity of gas molecules—one of the most important passages in physics (Truesdell)—presents a riddle: a physical discovery of the first importance was made in a single inferential leap without any apparent recourse to empirical evidence. Tychomancy proposes that Maxwell's derivation was not made a priori; rather, he inferred his distribution from non-probabilistic facts about the dynamics of intermolecular collisions. Further, the inference is of the same sort as everyday reasoning about the physical probabilities attached to such canonical chance setups as tossed coins or rolled dice. The structure of this reasoning is investigated and some simple rules for inferring physical probabilities from symmetries and other causally relevant properties of physical systems are proposed. Not only physics but evolutionary biology and population ecology, the science of measurement error, and climate modeling have benefited enormously from the same kind of reasoning, the book goes on to argue. Inferences from dynamics to probability are so obvious to us, however, that their methodological importance has been largely overlooked.
Epistemic closure under known implication is the principle that knowledge of p and knowledge of p → q, together, imply knowledge of q. This principle is intuitive, yet several putative counterexamples have been formulated against it. This paper addresses the question: why is epistemic closure both intuitive and prone to counterexamples? In particular, the paper examines whether probability theory can offer an answer to this question, based on four strategies. The first probability-based strategy rests on the accumulation of risks. The problem with this strategy is that risk accumulation cannot accommodate certain counterexamples to epistemic closure. The second strategy is based on the idea of evidential support, that is, a piece of evidence supports a proposition whenever it increases the probability of the proposition. This strategy makes progress and can accommodate certain putative counterexamples to closure. However, it also gives rise to a number of counterintuitive results. Finally, there are two broadly probabilistic strategies, one based on the idea of resilient probability and the other on the idea of assumptions that are taken for granted. These strategies are promising but are prone to some of the shortcomings of the second strategy. All in all, I conclude that each strategy fails. Probability theory, then, is unlikely to offer the account we need.
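The risk-accumulation point behind the first strategy can be made concrete with a toy probability model. The worlds and numbers below are my own illustration, not taken from the paper: both premises of a closure inference sit at probability 0.95, yet the conclusion drops to 0.90.

```python
from fractions import Fraction as F

# Hypothetical worlds as (p, q) truth-value pairs with probability masses.
worlds = {
    (True, True): F(9, 10),    # p and q
    (True, False): F(1, 20),   # p but not q
    (False, False): F(1, 20),  # neither
}

def pr(pred):
    """Probability of the event picked out by a predicate on worlds."""
    return sum(mass for tv, mass in worlds.items() if pred(*tv))

pr_p = pr(lambda p, q: p)               # Pr(p)
pr_imp = pr(lambda p, q: (not p) or q)  # Pr(p -> q), material reading
pr_q = pr(lambda p, q: q)               # Pr(q)

print(pr_p, pr_imp, pr_q)  # 19/20 19/20 9/10
```

Each premise individually clears a 0.95 threshold, but the risks accumulate: the conclusion is only guaranteed probability 2(0.95) − 1 = 0.90, exactly what this model delivers.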
We prove that the generalized cancellation axiom for incomplete comparative probability relations, introduced by Rios Insua and by Alon and Lehrer, is stronger than the standard cancellation axiom for complete comparative probability relations introduced by Scott, relative to their other axioms for comparative probability, in both the finite and infinite cases. This result has been suggested, but not proved, in the previous literature.
Iterated conditionals of the form "If p, then if q, r" are an important topic in philosophical logic. In recent years, psychologists have gained much knowledge about how people understand simple conditionals, but there are virtually no published psychological studies of iterated conditionals. This paper presents experimental evidence from a study comparing the iterated form, "If p, then if q, r," with the "imported," noniterated form, "If p and q, then r," using a probability evaluation task and a truth-table task, and taking into account qualitative individual differences. This allows us to critically contrast philosophical and psychological approaches that make diverging predictions regarding the interpretation of these forms. The results strongly support the probabilistic Adams conditional and the "new paradigm" that takes this conditional as a starting point.
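The gap between the Adams reading of the noniterated form, on which P(If p and q, then r) = P(r | p ∧ q), and a material-conditional reading can be seen in a small worked example. The distribution below is hypothetical, chosen for illustration, and is not data from the study:

```python
from fractions import Fraction as F

# Hypothetical distribution over worlds (p, q, r); illustration only.
worlds = {
    (True, True, True): F(1, 50),     # p, q, r
    (True, True, False): F(2, 25),    # p, q, not r
    (False, False, False): F(9, 10),  # neither p nor q
}

def pr(pred):
    return sum(mass for tv, mass in worlds.items() if pred(*tv))

# Adams conditional: P(If p and q, then r) = P(r | p and q)
adams = pr(lambda p, q, r: p and q and r) / pr(lambda p, q, r: p and q)

# Material conditional: P((p and q) -> r) = P(not (p and q), or r)
material = pr(lambda p, q, r: (not (p and q)) or r)

print(adams, material)  # 1/5 23/25
```

Because the antecedent p ∧ q is rare, the material conditional is almost certainly true (23/25) while the Adams conditional probability is low (1/5), which is why the two readings make diverging predictions in probability evaluation tasks.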
There are many scientific and everyday cases where each of Pr(H1 | E) and Pr(H2 | H1) is high and it seems that Pr(H2 | E) is high. But high probability is not transitive, and so it might be in such cases that each of Pr(H1 | E) and Pr(H2 | H1) is high while in fact Pr(H2 | E) is not high. There is no issue in the special case where the following condition, which I call "C1," holds: H1 entails H2. This condition is sufficient for transitivity in high probability. But many of the scientific and everyday cases referred to above are cases where H1 does not entail H2. I consider whether there are additional conditions sufficient for transitivity in high probability. I consider three candidate conditions, which I call "C2," "C3," and "C2&3." I argue that C2&3, but neither C2 nor C3, is sufficient for transitivity in high probability. I then set out some further results and relate the discussion to the Bayesian requirement of coherence.
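A minimal model shows the failure of transitivity in the conditional-probability pattern Pr(H1 | E), Pr(H2 | H1), Pr(H2 | E). The worlds and masses below are my own construction, not from the paper; note that C1 fails here, since one world makes H1 true and H2 false.

```python
from fractions import Fraction as F

# Hand-picked worlds (E, H1, H2) with probability masses; illustration only.
worlds = {
    (True, True, False): F(9, 100),    # E and H1, not H2 (so H1 does not entail H2)
    (True, False, False): F(1, 100),   # E only
    (False, True, True): F(81, 100),   # H1 and H2, outside E
    (False, False, False): F(9, 100),  # none
}

def pr(pred):
    return sum(mass for tv, mass in worlds.items() if pred(*tv))

def cond(pred, given):
    """Conditional probability Pr(pred | given)."""
    return pr(lambda *tv: pred(*tv) and given(*tv)) / pr(given)

pr_h1_given_e = cond(lambda e, h1, h2: h1, lambda e, h1, h2: e)    # 9/10
pr_h2_given_h1 = cond(lambda e, h1, h2: h2, lambda e, h1, h2: h1)  # 9/10
pr_h2_given_e = cond(lambda e, h1, h2: h2, lambda e, h1, h2: e)    # 0

print(pr_h1_given_e, pr_h2_given_h1, pr_h2_given_e)
```

Both premise probabilities are 0.9, yet Pr(H2 | E) = 0, because all the H2-mass lies outside E. If C1 held (H1 entails H2), this collapse would be impossible, since Pr(H2 | E) would be at least Pr(H1 | E).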
Quantum mechanics and probability theory share one peculiarity: both have well-established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: inferential probability, ensemble probability, and propensity. Inferential probability is the basis of inductive logic; ensemble probability deals with the frequencies of events in repeatable experiments; propensity describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes' theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable: examples show that a density matrix fails to represent knowledge.
Understanding probabilities as something other than point values has often been motivated by the need to find more realistic models for degree of belief, and in particular the idea that degree of belief should have an objective basis in "statistical knowledge of the world." I offer here another motivation, growing out of efforts to understand how chance evolves as a function of time. If the world is "chancy," in that there are non-trivial, objective, physical probabilities at the macro-level, then the chance of an event e that happens at a given time is less than one until it happens. But whether the chance of e goes to one continuously or not is left open. Discontinuities in such chance trajectories can have surprising and troubling consequences for probabilistic analyses of causation and accounts of how events occur in time. This, coupled with the compelling evidence for quantum discontinuities in chance's evolution, gives rise to a "continuity bind" with respect to chance probability trajectories. I argue that a viable option for circumventing the continuity bind is to understand the probabilities "imprecisely," that is, as intervals rather than point values. I then develop and motivate an alternative kind of continuity appropriate for interval-valued chance probability trajectories.
The book contains the transcription of a course on the foundations of probability given by the Italian mathematician Bruno de Finetti in 1979 at the "National Institute of Advanced Mathematics" in Rome.
We study probabilistically informative (weak) versions of transitivity by using suitable definitions of defaults and negated defaults in the setting of coherence and imprecise probabilities. We represent p-consistent sequences of defaults and/or negated defaults by g-coherent imprecise probability assessments on the respective sequences of conditional events. Moreover, we prove the coherent probability propagation rules for Weak Transitivity and the validity of selected inference patterns by proving p-entailment of the associated knowledge bases. Finally, we apply our results to study selected probabilistic versions of classical categorical syllogisms and construct a new version of the square of opposition in terms of defaults and negated defaults.
We describe a computational model of two central aspects of people's probabilistic reasoning: descriptive probability estimation and inferential probability judgment. This model assumes that people's reasoning follows standard frequentist probability theory, but is subject to random noise. This random noise has a regressive effect in descriptive probability estimation, moving probability estimates away from normative probabilities and toward the center of the probability scale. This random noise has an anti-regressive effect in inferential judgment, however. These regressive and anti-regressive effects explain various reliable and systematic biases seen in people's descriptive probability estimation and inferential probability judgment. The model predicts that these contrary effects will tend to cancel out in tasks that involve both descriptive estimation and inferential judgment, leading to unbiased responses in those tasks. We test this model by applying it to one such task, described by Gallistel et al. Participants' median responses in this task were unbiased, agreeing with normative probability theory over the full range of responses. Our model captures the pattern of unbiased responses in this task, while simultaneously explaining systematic biases away from normatively correct probabilities seen in other tasks.
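The regressive effect of random noise on descriptive estimation can be sketched with a simple simulation. The flip-noise mechanism below is my own simplification, not the authors' full model: each observed outcome is misread with probability d, which pulls the expected estimate of a true probability p toward 0.5 by the factor (1 − 2d).

```python
import random

def regressed_mean(p, d):
    """Expected noisy estimate of p when each outcome is misread with prob d."""
    return p * (1 - d) + (1 - p) * d  # equals p + d * (1 - 2 * p)

def noisy_estimate(p, d, n, rng):
    """Simulate estimating p from n outcomes, each independently flipped with prob d."""
    hits = 0
    for _ in range(n):
        outcome = rng.random() < p
        if rng.random() < d:  # noise: the outcome is misread
            outcome = not outcome
        hits += outcome
    return hits / n

rng = random.Random(0)
for p in (0.1, 0.5, 0.9):
    # Estimates regress toward the center of the probability scale.
    print(p, round(regressed_mean(p, 0.2), 3), round(noisy_estimate(p, 0.2, 100_000, rng), 3))
```

With d = 0.2, a true probability of 0.1 is estimated near 0.26 and 0.9 near 0.74, while 0.5 stays put: the regressive bias the model attributes to noise in descriptive estimation.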
In Thinking about Acting, John Pollock offers some criticisms of Bayesian epistemology and defends an alternative understanding of the role of probability in epistemology. Here, I defend the Bayesian against some of Pollock's criticisms, and I discuss a potential problem for Pollock's alternative account.
In this article, I present some new group-level interpretations of probability, and champion one in particular: a consensus-based variant where group degrees of belief are construed as agreed-upon betting quotients rather than shared personal degrees of belief. One notable feature of the account is that it allows us to treat consensus between experts on some matter as being conditional on the union of their relevant background information. In the course of the discussion, I also introduce a novel distinction between intersubjective and interobjective interpretations of probability.
Leibniz’s account of probability has come into better focus over the past decades. However, less attention has been paid to a certain domain of application of that account: the moral or ethical domain, the sphere of action, choice, and practice. This is significant, as Leibniz had some things to say about applying probability theory to the moral domain, and thought the matter quite relevant. Leibniz’s work in this area is conducted at a high level of abstraction. It establishes a proof of concept rather than concrete guidelines for how to apply calculations to specific cases. Still, this highly abstract material does allow us to begin to construct a framework for thinking about Leibniz’s approach to the ethical side of probability.
Rather than entailing that a particular outcome will occur, many scientific theories entail only that an outcome will occur with a certain probability. Because scientific evidence inevitably falls short of conclusive proof, when choosing between different theories it is standard to make reference to how probable the various options are in light of the evidence. A full understanding of probability in science needs to address both the role of probabilities in theories, or chances, and the role of probabilistic judgment in theory choice. In this chapter, the author introduces the two sorts of probability, distinguishes them from one another, and attempts to offer a satisfactory characterization of how the different uses for probability in science are to be understood. A closing section turns to the question of how views about the chance of some outcome should guide our confidence in that outcome.