References
  • Learning not to be Naïve: A comment on the exchange between Perrine/Wykstra and Draper.Lara Buchak - 2014 - In Justin McBrayer & Trent Dougherty (eds.), Skeptical Theism: New Essays. Oxford University Press.
    Does postulating skeptical theism undermine the claim that evil strongly confirms atheism over theism? According to Perrine and Wykstra, it does undermine the claim, because evil is no more likely on atheism than on skeptical theism. According to Draper, it does not undermine the claim, because evil is much more likely on atheism than on theism in general. I show that the probability facts alone do not resolve their disagreement, which ultimately rests on which updating procedure – conditionalizing or updating (...)
  • Beyond Uncertainty: Reasoning with Unknown Possibilities.Katie Steele & H. Orri Stefánsson - 2021 - Cambridge University Press.
    The main aim of this book is to introduce the topic of limited awareness, and changes in awareness, to those interested in the philosophy of decision-making and uncertain reasoning.
  • Updating, Undermining, and Independence.Jonathan Weisberg - 2015 - British Journal for the Philosophy of Science 66 (1):121-159.
    Sometimes appearances provide epistemic support that gets undercut later. In an earlier paper I argued that standard Bayesian update rules are at odds with this phenomenon because they are ‘rigid’. Here I generalize and bolster that argument. I first show that the update rules of Dempster–Shafer theory and ranking theory are rigid too, hence also at odds with the defeasibility of appearances. I then rebut three Bayesian attempts to solve the problem. I conclude that defeasible appearances pose a more difficult (...)
  • Commutativity or Holism? A Dilemma for Conditionalizers.Jonathan Weisberg - 2009 - British Journal for the Philosophy of Science 60 (4):793-812.
    Conditionalization and Jeffrey Conditionalization cannot simultaneously satisfy two widely held desiderata on rules for empirical learning. The first desideratum is confirmational holism, which says that the evidential import of an experience is always sensitive to our background assumptions. The second desideratum is commutativity, which says that the order in which one acquires evidence shouldn't affect what conclusions one draws, provided the same total evidence is gathered in the end. (Jeffrey) Conditionalization cannot satisfy either of these desiderata without violating the other. (...)
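    Schematically (an illustration of the desideratum, not Weisberg's own formalism): writing U(P, e) for the credence function that results from updating P on evidence e, commutativity requires
      U(U(P, e1), e2) = U(U(P, e2), e1)
    whenever e1 and e2 together make up the same total evidence.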
  • Conditional Learning Through Causal Models.Jonathan Vandenburgh - 2020 - Synthese (1-2):2415-2437.
    Conditional learning, where agents learn a conditional sentence ‘If A, then B,’ is difficult to incorporate into existing Bayesian models of learning. This is because conditional learning is not uniform: in some cases, learning a conditional requires decreasing the probability of the antecedent, while in other cases, the antecedent probability stays constant or increases. I argue that how one learns a conditional depends on the causal structure relating the antecedent and the consequent, leading to a causal model of conditional learning. (...)
  • Belief Revision for Growing Awareness.Katie Steele & H. Orri Stefánsson - 2021 - Mind 130 (520):1207–1232.
    The Bayesian maxim for rational learning could be described as conservative change from one probabilistic belief or credence function to another in response to new information. Roughly: ‘Hold fixed any credences that are not directly affected by the learning experience.’ This is precisely articulated for the case when we learn that some proposition that we had previously entertained is indeed true (the rule of conditionalisation). But can this conservative-change maxim be extended to revising one’s credences in response to entertaining propositions or (...)
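    For reference, the rule of conditionalisation mentioned here is standardly written as follows (a generic formulation, not the authors' notation): on learning that E is true, where P_old(E) > 0,
      P_new(A) = P_old(A | E) = P_old(A & E) / P_old(E),
    so that credences conditional on E are held fixed: P_new(A | E) = P_old(A | E).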
  • Modus Ponens and Modus Tollens for Conditional Probabilities, and Updating on Uncertain Evidence.Jordan Howard Sobel - 2009 - Theory and Decision 66 (2):103 - 148.
    There are narrowest bounds for P(h) when P(e) = y and P(h/e) = x, which bounds collapse to x as y goes to 1. A theorem for these bounds -- bounds for probable modus ponens -- entails a principle for updating on possibly uncertain evidence subject to these bounds that is a generalization of the principle for updating by conditioning on certain evidence. This way of updating on possibly uncertain evidence is appropriate when updating by ’probability kinematics’ or ’Jeffrey-conditioning’ is, (...)
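    For orientation, the bounds can be recovered with elementary probability (a sketch under the abstract's assumptions, not Sobel's own derivation): if P(e) = y and P(h/e) = x, then
      x·y = P(h & e) ≤ P(h) ≤ P(h & e) + P(~e) = x·y + (1 − y),
    and both bounds converge to x as y goes to 1.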
  • Expert deference as a belief revision schema.Joe Roussos - 2020 - Synthese (1-2):1-28.
    When an agent learns of an expert's credence in a proposition about which they are an expert, the agent should defer to the expert and adopt that credence as their own. This is a popular thought about how agents ought to respond to (ideal) experts. In a Bayesian framework, it is often modelled by endowing the agent with a set of priors that achieves this result. But this model faces a number of challenges, especially when applied to non-ideal agents (who (...)
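    The deference principle being modelled is commonly written as a conditional-credence constraint (a standard formulation assumed here, not quoted from the paper): for an expert with credence function Cr_exp,
      P(A | Cr_exp(A) = x) = x.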
  • Peer Disagreement: A Call for the Revision of Prior Probabilities.Sven Rosenkranz & Moritz Schulz - 2015 - Dialectica 69 (4):551-586.
    The current debate about peer disagreement has so far mainly focused on the question of whether peer disagreements provide genuine counterevidence to which we should respond by revising our credences. By contrast, comparatively little attention has been devoted to the question by which process, if any, such revision should be brought about. The standard assumption is that we update our credences by conditionalizing on the evidence that peer disagreements provide. In this paper, we argue that non-dogmatist views have good reasons (...)
  • A new resolution of the Judy Benjamin Problem.Igor Douven & Jan-Willem Romeijn - 2011 - Mind 120 (479):637 - 670.
    A paper on how to adapt your probabilistic beliefs when learning a conditional.
  • Having a look at the Bayes Blind Spot.Miklós Rédei & Zalán Gyenis - 2019 - Synthese 198 (4):3801-3832.
    The Bayes Blind Spot of a Bayesian Agent is, by definition, the set of probability measures on a Boolean σ-algebra that are absolutely continuous with respect to the background probability measure of a Bayesian Agent on the algebra and which the Bayesian Agent cannot learn by a single conditionalization no matter what evidence he has about the elements in the Boolean σ-algebra (...)
  • How should your beliefs change when your awareness grows?Richard Pettigrew - forthcoming - Episteme:1-25.
    Epistemologists who study credences have a well-developed account of how you should change them when you learn new evidence; that is, when your body of evidence grows. What's more, they boast a diverse range of epistemic and pragmatic arguments that support that account. But they do not have a satisfactory account of when and how you should change your credences when you become aware of possibilities and propositions you have not entertained before; that is, when your awareness grows. In this (...)
  • Simultaneous belief updates via successive Jeffrey conditionalization.Ilho Park - 2013 - Synthese 190 (16):3511-3533.
    This paper discusses simultaneous belief updates. I argue here that modeling such belief updates using the Principle of Minimum Information can be regarded as applying Jeffrey conditionalization successively, and so that, contrary to what many probabilists have thought, the simultaneous belief updates can be successfully modeled by means of Jeffrey conditionalization.
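    For reference, Jeffrey conditionalization on a partition {E_1, ..., E_n} with new weights q_1, ..., q_n (a standard formulation, not Park's notation) sets
      P_new(A) = Σ_i q_i · P_old(A | E_i),
    which reduces to ordinary conditionalization when some q_i = 1.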
  • Correcting credences with chances.Ilho Park - 2018 - Synthese 198 (1):509-536.
    Lewis’s Principal Principle is widely recognized as a rationality constraint that our credences should satisfy throughout our epistemic life. In practice, however, our credences often fail to satisfy this principle because of our various epistemic limitations. Facing such violations, we should correct our credences in accordance with this principle. In this paper, I will formulate a way of correcting our credences, which will be called the Adams Correcting Rules and then show that such a rule yields non-commutativity between conditionalizing and (...)
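    Lewis’s Principal Principle, in its usual schematic form (stated here as background, not in Park's notation), requires that credence defer to known chance:
      Cr(A | ch(A) = x & E) = x, for any admissible evidence E.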
  • Independence Day?Matthew Mandelkern & Daniel Rothschild - 2019 - Journal of Semantics 36 (2):193-210.
    Two recent and influential papers, van Rooij 2007 and Lassiter 2012, propose solutions to the proviso problem that make central use of related notions of independence—qualitative in the first case, probabilistic in the second. We argue here that, if these solutions are to work, they must incorporate an implicit assumption about presupposition accommodation, namely that accommodation does not interfere with existing qualitative or probabilistic independencies. We show, however, that this assumption is implausible, as updating beliefs with conditional information does not (...)
  • The principle of maximum entropy and a problem in probability kinematics.Stefan Lukits - 2014 - Synthese 191 (7):1-23.
    Sometimes we receive evidence in a form that standard conditioning (or Jeffrey conditioning) cannot accommodate. The principle of maximum entropy (MAXENT) provides a unique solution for the posterior probability distribution based on the intuition that the information gain consistent with assumptions and evidence should be minimal. Opponents of objective methods to determine these probabilities prominently cite van Fraassen’s Judy Benjamin case to undermine the generality of maxent. This article shows that an intuitive approach to Judy Benjamin’s case supports maxent. This (...)
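    The MAXENT update is standardly implemented by minimizing relative entropy to the prior subject to the constraints the evidence imposes (a generic statement, not Lukits’s own formulation): choose the posterior Q that minimizes
      D(Q ‖ P) = Σ_w Q(w) · log[Q(w) / P(w)]
    over all Q satisfying the evidential constraints.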
  • An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy.Hannes Leitgeb & Richard Pettigrew - 2010 - Philosophy of Science 77 (2):236-272.
    One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism (...)
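    One standard way to make the inaccuracy of a credence function b at a world w precise is the quadratic (Brier-style) score (given here for orientation; the paper argues for a particular measure in this family):
      I(b, w) = Σ_A (v_w(A) − b(A))²,
    where v_w(A) is 1 if A is true at w and 0 otherwise.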
  • Merging of opinions and probability kinematics.Simon M. Huttegger - 2015 - Review of Symbolic Logic 8 (4):611-648.
    We explore the question of whether sustained rational disagreement is possible from a broadly Bayesian perspective. The setting is one where agents update on the same information, with special consideration being given to the case of uncertain information. The classical merging of opinions theorem of Blackwell and Dubins shows when updated beliefs come and stay closer for Bayesian conditioning. We extend this result to a type of Jeffrey conditioning where agents update on evidence that is uncertain but solid. However, merging (...)
  • Inductive Learning in Small and Large Worlds.Simon M. Huttegger - 2017 - Philosophy and Phenomenological Research 95 (1):90-116.
  • On Indeterminate Updating of Credences.Leendert Huisman - 2014 - Philosophy of Science 81 (4):537-557.
    The strategy of updating credences by minimizing the relative entropy has been questioned by many authors, most strongly by means of the Judy Benjamin puzzle. I present a new analysis of Judy Benjamin–like forms of new information and defend the thesis that in general the rational posterior is indeterminate, meaning that a family of posterior credence functions rather than a single one is the rational response when that type of information becomes available. The proposed thesis extends naturally to all cases (...)
  • The value of cost-free uncertain evidence.Patryk Dziurosz-Serafinowicz & Dominika Dziurosz-Serafinowicz - 2021 - Synthese 199 (5-6):13313-13343.
    We explore the question of whether cost-free uncertain evidence is worth waiting for in advance of making a decision. A classical result in Bayesian decision theory, known as the value of evidence theorem, says that, under certain conditions, when you update your credences by conditionalizing on some cost-free and certain evidence, the subjective expected utility of obtaining this evidence is never less than the subjective expected utility of not obtaining it. We extend this result to a type of update method, (...)
  • Justifying Lewis’s Kinematics of Chance.Patryk Dziurosz-Serafinowicz - 2021 - British Journal for the Philosophy of Science 72 (2):439-463.
    In his ‘A Subjectivist’s Guide to Objective Chance’, Lewis argued that a particular kinematical model for chances follows from his principal principle. According to this model, any later chance function is equal to an earlier chance function conditional on the complete intervening history of non-modal facts. This article first investigates the conditions that any kinematical model for chance needs to satisfy to count as Lewis’s kinematics of chance. Second, it presents Lewis’s justification for his kinematics of chance and explains why (...)
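    Schematically, the kinematical model described here can be written as (notation assumed for illustration):
      ch_t'(A) = ch_t(A | H_t,t'),
    where t < t' and H_t,t' is the complete history of non-modal facts between t and t'.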
  • Probabilistic Alternatives to Bayesianism: The Case of Explanationism.Igor Douven & Jonah N. Schupbach - 2015 - Frontiers in Psychology 6.
    There has been a probabilistic turn in contemporary cognitive science. Far and away, most of the work in this vein is Bayesian, at least in name. Coinciding with this development, philosophers have increasingly promoted Bayesianism as the best normative account of how humans ought to reason. In this paper, we make a push for exploring the probabilistic terrain outside of Bayesianism. Non-Bayesian, but still probabilistic, theories provide plausible competitors both to descriptive and normative Bayesian accounts. We argue for this general (...)
  • Learning Conditional Information.Igor Douven - 2012 - Mind and Language 27 (3):239-263.
    Some of the information we receive comes to us in an explicitly conditional form. It is an open question how to model the accommodation of such information in a Bayesian framework. This paper presents data suggesting that there may be no strictly Bayesian account of updating on conditionals. Specifically, the data seem to indicate that such updating at least sometimes proceeds on the basis of explanatory considerations, which famously have no home in standard Bayesian epistemology. The paper also proposes a (...)
  • Ramsey’s test, Adams’ thesis, and left-nested conditionals.Richard Dietz & Igor Douven - 2010 - Review of Symbolic Logic 3 (3):467-484.
    Adams famously suggested that the acceptability of any indicative conditional whose antecedent and consequent are both factive sentences amounts to the subjective conditional probability of the consequent given the antecedent. The received view has it that this thesis offers an adequate partial explication of Ramsey’s test, which characterizes graded acceptability for conditionals in terms of hypothetical updates on the antecedent. Some results in van Fraassen may raise hope that this explicatory approach to Ramsey’s test is extendible to left-nested conditionals, that (...)
  • Higher-Order Beliefs and the Undermining Problem for Bayesianism.Lisa Cassell - 2019 - Acta Analytica 34 (2):197-213.
    Jonathan Weisberg has argued that Bayesianism’s rigid updating rules make Bayesian updating incompatible with undermining defeat. In this paper, I argue that when we attend to the higher-order beliefs we must ascribe to agents in the kinds of cases Weisberg considers, the problem he raises disappears. Once we acknowledge the importance of higher-order beliefs to the undermining story, we are led to a different understanding of how these cases arise. And on this different understanding of things, the rigid nature of (...)
  • Commutativity, Normativity, and Holism: Lange Revisited.Lisa Cassell - 2020 - Canadian Journal of Philosophy 50 (2):159-173.
    Lange (2000) famously argues that although Jeffrey Conditionalization is non-commutative over evidence, it’s not defective in virtue of this feature. Since reversing the order of the evidence in a sequence of updates that don’t commute does not reverse the order of the experiences that underwrite these revisions, the conditions required to generate commutativity failure at the level of experience will fail to hold in cases where we get commutativity failure at the level of evidence. If our interest in commutativity is, (...)
  • Bayesian coherentism.Lisa Cassell - 2020 - Synthese 198 (10):9563-9590.
    This paper considers a problem for Bayesian epistemology and proposes a solution to it. On the traditional Bayesian framework, an agent updates her beliefs by Bayesian conditioning, a rule that tells her how to revise her beliefs whenever she gets evidence that she holds with certainty. In order to extend the framework to a wider range of cases, Jeffrey (1965) proposed a more liberal version of this rule that has Bayesian conditioning as a special case. Jeffrey conditioning is a rule (...)
  • Revisiting McGee’s Probabilistic Analysis of Conditionals.John Cantwell - 2022 - Journal of Philosophical Logic (5):1-45.
    This paper calls for a re-appraisal of McGee's analysis of the semantics, logic and probabilities of indicative conditionals presented in his 1989 paper ‘Conditional probabilities and compounds of conditionals’. The probabilistic measures introduced by McGee are given a new axiomatisation built on the principle that the antecedent of a conditional is probabilistically independent of the conditional and a more transparent method of constructing such measures is provided. McGee's Dutch book argument is restructured to more clearly reveal that it introduces a (...)
  • The kinematics of belief and desire.Richard Bradley - 2007 - Synthese 156 (3):513-535.
    Richard Jeffrey regarded the version of Bayesian decision theory he floated in ‘The Logic of Decision’ and the idea of a probability kinematics—a generalisation of Bayesian conditioning to contexts in which the evidence is ‘uncertain’—as his two most important contributions to philosophy. This paper aims to connect them by developing kinematical models for the study of preference change and practical deliberation. Preference change is treated in a manner analogous to Jeffrey’s handling of belief change: not as mechanical outputs of combinations (...)
  • Proposition-valued random variables as information.Richard Bradley - 2010 - Synthese 175 (1):17 - 38.
    The notion of a proposition as a set of possible worlds or states occupies central stage in probability theory, semantics and epistemology, where it serves as the fundamental unit both of information and meaning. But this fact should not blind us to the existence of prospects with a different structure. In the paper I examine the use of random variables—in particular, proposition-valued random variables— in these fields and argue that we need a general account of rational attitude formation with respect (...)
  • Bayesianism and self-doubt.Darren Bradley - 2020 - Synthese 199 (1-2):2225-2243.
    How should we respond to evidence when our evidence indicates that we are rationally impaired? I will defend a novel answer based on the analogy between self-doubt and memory loss. To believe that one is now impaired and previously was not is to believe that one’s epistemic position has deteriorated. Memory loss is also a form of epistemic deterioration. I argue that agents who suffer from epistemic deterioration should return to the priors they had at an earlier time. I develop (...)
  • Attitudes, deliberation and decisions.Richard Bradley - 2022 - Synthese 200 (1):1-18.
    In this paper I discuss the challenges of several authors to the claims I make in Decision Theory with a Human Face regarding the relation between preference and choice, the nature of conditional desire, the semantics of conditionals, attitudes to chances and their role in individuating prospects, belief change under growing awareness and choice under ambiguity.
  • Judy Benjamin is a Sleeping Beauty.Luc Bovens - 2010 - Analysis 70 (1):23-26.
    I argue that van Fraassen's Judy Benjamin Problem and Elga's Sleeping Beauty Problem have the same structure.
  • Weighted averaging, Jeffrey conditioning and invariance.Denis Bonnay & Mikaël Cozic - 2018 - Theory and Decision 85 (1):21-39.
    Jeffrey conditioning tells an agent how to update her priors so as to grant a given probability to a particular event. Weighted averaging tells an agent how to update her priors on the basis of testimonial evidence, by changing to a weighted arithmetic mean of her priors and another agent’s priors. We show that, in their respective settings, these two seemingly so different updating rules are axiomatized by essentially the same invariance condition. As a by-product, this sheds new light on (...)
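    For reference, weighted averaging sets the updated credence to
      P_new(A) = λ · P_self(A) + (1 − λ) · P_other(A)
    for some fixed weight λ in [0, 1] (the particular λ is assumed for illustration; the paper's result concerns the rule's axiomatization, not a specific weight).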
  • General properties of general Bayesian learning.Miklós Rédei & Zalán Gyenis - unknown
    We investigate the general properties of general Bayesian learning, where “general Bayesian learning” means inferring a state from another that is regarded as evidence, and where the inference is conditionalizing the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of a Bayesian Agent performing the inference. States are linear functionals that encode probability measures by assigning expectation values to random variables via integrating them with respect to the probability measure. If (...)
  • How to Learn Concepts, Consequences, and Conditionals.Franz Huber - 2015 - Analytica: an electronic, open-access journal for philosophy of science 1 (1):20-36.
    In this brief note I show how to model conceptual change, logical learning, and revision of one's beliefs in response to conditional information such as indicative conditionals that do not express propositions.
  • The Bayes Blind Spot of a finite Bayesian Agent is a large set.Zalán Gyenis & Miklós Rédei - unknown
    The Bayes Blind Spot of a Bayesian Agent is the set of probability measures on a Boolean algebra that are absolutely continuous with respect to the background probability measure of a Bayesian Agent on the algebra and which the Bayesian Agent cannot learn by conditionalizing no matter what evidence he has about the elements in the Boolean algebra. It is shown that if the Boolean algebra is finite, then the Bayes Blind Spot is a very large set: it has the (...)
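    A gloss on why the finite case is stark (an illustration using notation assumed here, not the authors' proof): on a finite Boolean algebra with prior P, a single conditionalization can only yield posteriors of the form P(· | E) for some E with P(E) > 0, and there are only finitely many of these; the probability measures absolutely continuous with respect to P (for a prior not concentrated on a single atom), by contrast, form a continuum, so all but finitely many of them lie in the Blind Spot.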