About this topic
Summary
Scoring rules play an important role in statistics, decision theory, and formal epistemology. They underpin techniques for eliciting a person's credences in statistics, and they have been exploited in epistemology to give arguments for various norms thought to govern credences, such as Probabilism, Conditionalization, the Reflection Principle, the Principal Principle, and Principles of Indifference, as well as accounts of peer disagreement and the Sleeping Beauty puzzle.

A scoring rule is a function that assigns a penalty to an agent's credence (or partial belief, or degree of belief) in a given proposition. The penalty depends on whether the proposition is true or false. Typically, if the proposition is true, the penalty increases as the credence decreases (the less confident you are in a true proposition, the more you are penalised); and if the proposition is false, the penalty increases as the credence increases (the more confident you are in a false proposition, the more you are penalised).

In statistics and the theory of eliciting credences, the penalty that a scoring rule assigns to a credence is usually interpreted as the monetary loss incurred by an agent with that credence. In epistemology, it is sometimes interpreted as the so-called 'gradational inaccuracy' of the agent's credence: just as a full belief in a true proposition is more accurate than a full disbelief in that proposition, a higher credence in a true proposition is more accurate than a lower one; and just as a full disbelief in a false proposition is more accurate than a full belief, a lower credence in a false proposition is more accurate than a higher one.
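To fix ideas, here is a minimal Python sketch of the behaviour just described, using the quadratic (Brier) score, one standard example of a scoring rule; the function name and the illustrative credence values are our own.

```python
def brier_penalty(credence: float, truth: bool) -> float:
    """Quadratic (Brier) penalty: squared distance between the credence
    and the truth value of the proposition (1 if true, 0 if false)."""
    return (float(truth) - credence) ** 2

# The less confident you are in a true proposition, the larger the penalty:
assert brier_penalty(0.9, True) < brier_penalty(0.5, True)
# The more confident you are in a false proposition, the larger the penalty:
assert brier_penalty(0.9, False) > brier_penalty(0.5, False)
```

The Brier score is 'proper' in the sense that an agent minimises her expected penalty by reporting her actual credence; other proper scoring rules, such as the logarithmic and spherical rules discussed in the entries below, share this property while penalising errors differently.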
Sometimes, in epistemology, the penalty given by a scoring rule is interpreted more generally: it is taken to be the loss in so-called 'cognitive utility' incurred by an agent with that credence, where this is intended to incorporate a measure of the accuracy of the credence along with measures of any other doxastic virtues it might have.

Scoring rules assign losses or penalties to individual credences, but we can use them to define loss or penalty functions for entire credence functions as well: the loss assigned to a credence function is simply the sum of the losses assigned to the individual credences it gives. Using this, we can argue for doxastic norms such as Probabilism and Conditionalization. For instance, for a large collection of scoring rules, the following holds: if a credence function violates Probabilism, then there is a credence function satisfying Probabilism that incurs a lower penalty regardless of how the world turns out. That is, any non-probabilistic credence function is dominated by a probabilistic one. For the same large collection of scoring rules, the following also holds: if one's current credence function is a probability function, one will expect updating by Conditionalization to incur a lower penalty than updating by any other rule. There is a substantial and growing body of work on how scoring rules can be used to establish other doxastic norms.
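The dominance claim can be illustrated with a small Python sketch under the Brier score. The credence values below are purely illustrative: an agent who assigns 0.6 both to a proposition A and to its negation violates Probabilism (her credences sum to 1.2), and the probabilistic alternative (0.5, 0.5) incurs a strictly lower total penalty whether A is true or false.

```python
def total_brier(credences, world):
    # Sum of quadratic penalties over the propositions A and not-A;
    # `world` gives their truth values (1 or 0).
    return sum((t - c) ** 2 for c, t in zip(credences, world))

incoherent = (0.6, 0.6)    # credences in A and not-A sum to 1.2:
                           # Probabilism is violated
coherent = (0.5, 0.5)      # a probabilistic alternative

worlds = [(1, 0), (0, 1)]  # A true, A false

# The coherent credences incur a lower penalty at every world:
for w in worlds:
    assert total_brier(coherent, w) < total_brier(incoherent, w)
```

At each world the incoherent credences score 0.52 while the coherent ones score 0.50, so the former are dominated; the theorems cited below (e.g. Predd et al. 2009) show this is no accident of the example but holds for every non-probabilistic credence function and a wide class of proper scoring rules.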
Key works
Leonard Savage (Savage 1971) and Bruno de Finetti (de Finetti 1970) introduced the notion of a scoring rule independently. The notion was introduced into epistemology by Jim Joyce (Joyce 1998) and Graham Oddie (Oddie 1997): Joyce used it to justify Probabilism; Oddie used it to justify Conditionalization. Since then, authors have improved and generalized both arguments. Improved arguments for Probabilism can be found in (Joyce 2009), (Leitgeb & Pettigrew 2010a), (Leitgeb & Pettigrew 2010b), (Predd et al. 2009), and (Schervish et al. manuscript). Improved arguments for Conditionalization can be found in (Greaves & Wallace 2006) and (Easwaran 2013). Furthermore, other norms have been considered, such as the Principal Principle: (Pettigrew 2012), (Pettigrew 2013).
Introductions
Pettigrew, Richard (2011). 'Epistemic Utility Arguments for Probabilism', Stanford Encyclopedia of Philosophy.
  1. Davide P. Cervone, William V. Gehrlein & William S. Zwicker (2005). Which Scoring Rule Maximizes Condorcet Efficiency? [REVIEW] Theory and Decision 58:409-410.
  2. Jake Chandler (2013). Acceptance, Aggregation and Scoring Rules. Erkenntnis 78 (1):201 - 217.
    As the ongoing literature on the paradoxes of the Lottery and the Preface reminds us, the nature of the relation between probability and rational acceptability remains far from settled. This article provides a novel perspective on the matter by exploiting a recently noted structural parallel with the problem of judgment aggregation. After offering a number of general desiderata on the relation between finite probability models and sets of accepted sentences in a Boolean sentential language, it is noted that a number (...)
  3. Bruno de Finetti (1981). The Role of 'Dutch Books' and of 'Proper Scoring Rules'. British Journal for the Philosophy of Science 32 (1):55-56.
  5. Bruno de Finetti (1972). Probability, Induction, and Statistics. New York: John Wiley.
  6. Bruno de Finetti (1970). Theory of Probability. New York: John Wiley.
  7. Igor Douven (2013). Inference to the Best Explanation, Dutch Books, and Inaccuracy Minimisation. Philosophical Quarterly 63 (252):428-444.
    Bayesians have traditionally taken a dim view of the Inference to the Best Explanation (IBE), arguing that, if IBE is at variance with Bayes' rule, then it runs afoul of the dynamic Dutch book argument. More recently, Bayes' rule has been claimed to be superior on grounds of conduciveness to our epistemic goal. The present paper aims to show that neither of these arguments succeeds in undermining IBE.
  8. Kenny Easwaran (2013). Expected Accuracy Supports Conditionalization—and Conglomerability and Reflection. Philosophy of Science 80 (1):119-142.
  9. Kenny Easwaran & Branden Fitelson (2012). An 'Evidentialist' Worry About Joyce's Argument for Probabilism. Dialectica 66 (3):425-433.
    To the extent that we have reasons to avoid these “bad B -properties”, these arguments provide reasons not to have an incoherent credence function b — and perhaps even reasons to have a coherent one. But, note that these two traditional arguments for probabilism involve what might be called “pragmatic” reasons (not) to be (in)coherent. In the case of the Dutch Book argument, the “bad” property is pragmatically bad (to the extent that one values money). But, it is not clear (...)
  10. Don Fallis (2007). Attitudes Toward Epistemic Risk and the Value of Experiments. Studia Logica 86 (2):215 - 246.
    Several different Bayesian models of epistemic utilities (see, e. g., [37], [24], [40], [46]) have been used to explain why it is rational for scientists to perform experiments. In this paper, I argue that a model-suggested independently by Patrick Maher [40] and Graham Oddie [46]-that assigns epistemic utility to degrees of belief in hypotheses provides the most comprehensive explanation. This is because this proper scoring rule (PSR) model captures a wider range of scientifically acceptable attitudes toward epistemic risk than the (...)
  11. Don Fallis (2002). Goldman on Probabilistic Inference. Philosophical Studies 109 (3):223 - 240.
    In his recent book, Knowledge in a Social World, Alvin Goldman claims to have established that if a reasoner starts with accurate estimates of the reliability of new evidence and conditionalizes on this evidence, then this reasoner is objectively likely to end up closer to the truth. In this paper, I argue that Goldman's result is not nearly as philosophically significant as he would have us believe. First, accurately estimating the reliability of evidence – in the sense that Goldman requires (...)
  12. Branden Fitelson, Accuracy & Coherence.
    This talk is (mainly) about the relationship between two types of epistemic norms: accuracy norms and coherence norms, approached via a simple example that everyone will be familiar with.
  13. Branden Fitelson, Accuracy & Coherence II.
    Comparative. Let C be the full set of S’s comparative judgments over B × B. The inaccuracy of C at a world w is given by the number of incorrect judgments in C at w.
  14. Branden Fitelson, Accuracy & Coherence III.
    In this talk, I will explain why only one of Miller’s two types of language-dependence-of-verisimilitude problems is a (potential) threat to the sorts of accuracy-dominance approaches to coherence that I’ve been discussing.
  15. Branden Fitelson (2012). Accuracy, Language Dependence, and Joyce's Argument for Probabilism. Philosophy of Science 79 (1):167-174.
  16. Branden Fitelson & Lara Buchak, Separability Assumptions in Scoring-Rule-Based Arguments for Probabilism.
    In decision theory, an agent is deciding how to value a gamble that results in different outcomes in different states. Each outcome gets a utility value for the agent.
  17. Branden Fitelson & Lara Buchak, Advice-Giving and Scoring-Rule-Based Arguments for Probabilism.
    Dutch Book Arguments. B is susceptibility to sure monetary loss (in a certain betting set-up), and F is the formal role played by non-Pr b’s in the DBT and the Converse DBT. Representation Theorem Arguments. B is having preferences that violate some of Savage’s axioms (and/or being unrepresentable as an expected utility maximizer), and F is the formal role played by non-Pr b’s in the RT.
  18. Branden Fitelson & Kenny Easwaran, Partial Belief, Full Belief, and Accuracy–Dominance.
    Arguments for probabilism aim to undergird/motivate a synchronic probabilistic coherence norm for partial beliefs. Standard arguments for probabilism are all of the form: An agent S has a non-probabilistic partial belief function b iff (⇐⇒) S has some “bad” property B (in virtue of the fact that their p.b.f. b has a certain kind of formal property F). These arguments rest on Theorems (⇒) and Converse Theorems (⇐): b is non-Pr ⇐⇒ b has formal property F.
  19. Hilary Greaves & David Wallace (2006). Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility. Mind 115 (459):607-632.
    According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization: that is, her posterior subjective probability after taking account of evidence X, pnew, is to be set equal to her prior conditional probability pold(·|X). Bayesians can be challenged to provide a justification for their claim that conditionalization is recommended by rationality—whence the normative force of the injunction to conditionalize? There are several existing justifications for conditionalization, but none directly addresses the idea that conditionalization will be epistemically rational (...)
  20. Alan Hájek (2008). Arguments for–or Against–Probabilism? British Journal for the Philosophy of Science 59 (4):793 - 819.
    Four important arguments for probabilism—the Dutch Book, representation theorem, calibration, and gradational accuracy arguments—have a strikingly similar structure. Each begins with a mathematical theorem, a conditional with an existentially quantified consequent, of the general form: if your credences are not probabilities, then there is a way in which your rationality is impugned. Each argument concludes that rationality requires your credences to be probabilities. I contend that each argument is invalid as formulated. In each case there is a mirror-image theorem and (...)
  21. Robin Hanson, Eliciting Objective Probabilities Via Lottery Insurance Games.
    Since utilities and probabilities jointly determine choices, event-dependent utilities complicate the elicitation of subjective event probabilities. However, for the usual purpose of obtaining the information embodied in agent beliefs, it is sufficient to elicit objective probabilities, i.e., probabilities obtained by updating a known common prior with that agent’s further information. Bayesians who play a Nash equilibrium of a certain insurance game before they obtain relevant information will afterward act regarding lottery ticket payments as if they had event-independent risk-neutral utility and (...)
  22. Robin Hanson, Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation.
    In practice, scoring rules elicit good probability estimates from individuals, while betting markets elicit good consensus estimates from groups. Market scoring rules combine these features, eliciting estimates from individuals or groups, with groups costing no more than individuals. Regarding a bet on one event given another event, only logarithmic versions preserve the probability of the given event. Logarithmic versions also preserve the conditional probabilities of other events, and so preserve conditional independence relations. Given logarithmic rules that elicit relative probabilities of (...)
  24. D. J. Johnstone (2007). The Value of a Probability Forecast From Portfolio Theory. Theory and Decision 63 (2):153-203.
    A probability forecast scored ex post using a probability scoring rule (e.g. Brier) is analogous to a risky financial security. With only superficial adaptation, the same economic logic by which securities are valued ex ante – in particular, portfolio theory and the capital asset pricing model (CAPM) – applies to the valuation of probability forecasts. Each available forecast of a given event is valued relative to each other and to the “market” (all available forecasts). A forecast is seen to be (...)
  25. Victor Richmond Jose (2009). A Characterization for the Spherical Scoring Rule. Theory and Decision 66 (3):263-281.
    Strictly proper scoring rules have been studied widely in statistical decision theory and recently in experimental economics because of their ability to encourage assessors to honestly provide their true subjective probabilities. In this article, we study the spherical scoring rule by analytically examining some of its properties and providing some new geometric interpretations for this rule. Moreover, we state a theorem which provides an axiomatic characterization for the spherical scoring rule. The objective of this analysis is to provide a better (...)
  26. James Joyce (2009). Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief. In Franz Huber & Christoph Schmidt-Petri (eds.), Degrees of Belief. Synthese. 263-297.
  27. James Joyce (1999). The Foundations of Causal Decision Theory. Cambridge University Press.
  28. James M. Joyce (1998). A Nonpragmatic Vindication of Probabilism. Philosophy of Science 65 (4):575-603.
    The pragmatic character of the Dutch book argument makes it unsuitable as an "epistemic" justification for the fundamental probabilist dogma that rational partial beliefs must conform to the axioms of probability. To secure an appropriately epistemic justification for this conclusion, one must explain what it means for a system of partial beliefs to accurately represent the state of the world, and then show that partial beliefs that violate the laws of probability are invariably less accurate than they could be otherwise. (...)
  29. Brian Kierland & Bradley Monton (2005). Minimizing Inaccuracy for Self-Locating Beliefs. Philosophy and Phenomenological Research 70 (2):384-395.
    One's inaccuracy for a proposition is defined as the squared difference between the truth value (1 or 0) of the proposition and the credence (or subjective probability, or degree of belief) assigned to the proposition. One should have the epistemic goal of minimizing the expected inaccuracies of one's credences. We show that the method of minimizing expected inaccuracy can be used to solve certain probability problems involving information loss and self-locating beliefs (where a self-locating belief of a temporal part of (...)
  30. Frank Lad (1984). The Calibration Question. British Journal for the Philosophy of Science 35 (3):213-221.
    Recent discussion of the calibration of probability assessments is related to the earlier influential attitudes of Fréchet. The limiting frequency criterion of good calibration is criticised as being of no relevance to the evaluation of the probability of any event. An operational definition of good calibration is proposed which treats calibration properties as characteristics of the assessor's entire body of opinion, not of opinion about some particular event or events. In these terms a result is shown which says that every (...)
  31. Barry Lam (2013). Calibrated Probabilities and the Epistemology of Disagreement. Synthese 190 (6):1079-1098.
    This paper assesses the comparative reliability of two belief-revision rules relevant to the epistemology of disagreement, the Equal Weight and Stay the Course rules. I use two measures of reliability for probabilistic belief-revision rules, calibration and Brier Scoring, to give a precise account of epistemic peerhood and epistemic reliability. On the calibration measure of reliability, epistemic peerhood is easy to come by, and employing the Equal Weight rule generally renders you less reliable than Staying the Course. On the Brier-Score measure (...)
  32. Marc Lange (1999). Calibration and the Epistemological Role of Bayesian Conditionalization. Journal of Philosophy 96 (6):294-324.
  33. Hannes Leitgeb & Richard Pettigrew (2010). An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy. Philosophy of Science 77 (2):236-272.
    One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism (...)
  34. Hannes Leitgeb & Richard Pettigrew (2010). An Objective Justification of Bayesianism I: Measuring Inaccuracy. Philosophy of Science 77 (2):201-235.
    One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts (...)
  35. Dominique Lepelley, Patrick Pierron & Fabrice Valognes (2000). Scoring Rules, Condorcet Efficiency and Social Homogeneity. Theory and Decision 49 (2):175-196.
    In a three-candidate election, a scoring rule s (s in [0,1]) assigns 1, s, and 0 points (respectively) to each first, second and third place in the individual preference rankings. The Condorcet efficiency of a scoring rule is defined as the conditional probability that this rule selects the winner in accordance with Condorcet criteria (three Condorcet criteria are considered in the paper). We are interested in the following question: What rule s has the greatest Condorcet efficiency? After recalling the known (...)
  36. Benjamin Anders Levinstein (2012). Leitgeb and Pettigrew on Accuracy and Updating. Philosophy of Science 79 (3):413-424.
  37. Patrick Maher (2002). Joyce's Argument for Probabilism. Philosophy of Science 69 (1):73-81.
    James Joyce's 'Nonpragmatic Vindication of Probabilism' gives a new argument for the conclusion that a person's credences ought to satisfy the laws of probability. The premises of Joyce's argument include six axioms about what counts as an adequate measure of the distance of a credence function from the truth. This paper shows that (a) Joyce's argument for one of these axioms is invalid, (b) his argument for another axiom has a false premise, (c) neither axiom is plausible, and (d) without (...)
  38. Sarah Moss (2011). Scoring Rules and Epistemic Compromise. Mind 120 (480):1053-1069.
    It is commonly assumed that when we assign different credences to a proposition, a perfect compromise between our opinions simply ‘splits the difference’ between our credences. I introduce and defend an alternative account, namely that a perfect compromise maximizes the average of the expected epistemic values that we each assign to alternative credences in the disputed proposition. I compare the compromise strategy I introduce with the traditional strategy of compromising by splitting the difference, and I argue that my strategy is (...)
  39. Ilkka Niiniluoto (2011). Revising Beliefs Towards the Truth. Erkenntnis 75 (2):165-181.
    Belief revision (BR) and truthlikeness (TL) emerged independently as two research programmes in formal methodology in the 1970s. A natural way of connecting BR and TL is to ask under what conditions the revision of a belief system by new input information leads the system towards the truth. It turns out that, for the AGM model of belief revision, the only safe case is the expansion of true beliefs by true input, but this is not very interesting or realistic as (...)
  40. Graham Oddie (1997). Conditionalization, Cogency, and Cognitive Value. British Journal for the Philosophy of Science 48 (4):533-541.
  41. Philip Percival (2002). Epistemic Consequentialism. Aristotelian Society Supplementary Volume 76 (1):121–151.
    [Philip Percival] I aim to illuminate foundational epistemological issues by reflecting on 'epistemic consequentialism'-the epistemic analogue of ethical consequentialism. Epistemic consequentialism employs a concept of cognitive value playing a role in epistemic norms governing belief-like states that is analogous to the role goodness plays in act-governing moral norms. A distinction between 'direct' and 'indirect' versions of epistemic consequentialism is held to be as important as the familiar ethical distinction on which it is based. These versions are illustrated, respectively, by cognitive (...)
  42. Richard Pettigrew, Self-Locating Belief and the Goal of Accuracy.
    The goal of a partial belief is to be accurate, or close to the truth. By appealing to this norm, I seek norms for partial beliefs in self-locating and non-self-locating propositions. My aim is to find norms that are analogous to the Bayesian norms, which, I argue, only apply unproblematically to partial beliefs in non-self-locating propositions. I argue that the goal of a set of partial beliefs is to minimize the expected inaccuracy of those beliefs. However, in the self-locating framework, (...)
  43. Richard Pettigrew (2013). A New Epistemic Utility Argument for the Principal Principle. Episteme 10 (1):19-35.
    Jim Joyce has presented an argument for Probabilism based on considerations of epistemic utility [Joyce, 1998]. In a recent paper, I adapted this argument to give an argument for Probabilism and the Principal Principle based on similar considerations [Pettigrew, 2012]. Joyce’s argument assumes that a credence in a true proposition is better the closer it is to maximal credence, whilst a credence in a false proposition is better the closer it is to minimal credence. By contrast, my argument in that (...)
  44. Richard Pettigrew (2013). What Chance‐Credence Norms Should Not Be. Noûs 47 (3).
    A chance-credence norm states how an agent's credences in propositions concerning objective chances ought to relate to her credences in other propositions. The most famous such norm is the Principal Principle (PP), due to David Lewis. However, Lewis noticed that PP is too strong when combined with many accounts of chance that attempt to reduce chance facts to non-modal facts. Those who defend such accounts of chance have offered two alternative chance-credence norms: the first is Hall's and Thau's New Principle (...)
  45. Richard Pettigrew (2012). Accuracy, Chance, and the Principal Principle. Philosophical Review 121 (2):241-275.
    In ‘A Non-Pragmatic Vindication of Probabilism’, Jim Joyce attempts to ‘depragmatize’ de Finetti’s prevision argument for the claim that our partial beliefs ought to satisfy the axioms of probability calculus. In this paper, I adapt Joyce’s argument to give a non-pragmatic vindication of various versions of David Lewis’ Principal Principle, such as the version based on Isaac Levi's account of admissibility, Michael Thau and Ned Hall's New Principle, and Jenann Ismael's Generalized Principal Principle. Joyce enumerates properties that must be had (...)
  46. Richard Pettigrew, Epistemic Utility Arguments for Probabilism. Stanford Encyclopedia.
  47. Richard Pettigrew (2011). An Improper Introduction to Epistemic Utility Theory. In Henk de Regt, Samir Okasha & Stephan Hartmann (eds.), Proceedings of EPSA: Amsterdam '09. Springer. 287--301.
    Beliefs come in different strengths. What are the norms that govern these strengths of belief? Let an agent's belief function at a particular time be the function that assigns, to each of the propositions about which she has an opinion, the strength of her belief in that proposition at that time. Traditionally, philosophers have claimed that an agent's belief function at any time ought to be a probability function (Probabilism), and that she ought to update her belief function upon obtaining (...)
  48. Joel Predd, Robert Seiringer, Elliott Lieb, Daniel Osherson, H. Vincent Poor & Sanjeev Kulkarni (2009). Probabilistic Coherence and Proper Scoring Rules. IEEE Transactions on Information Theory 55 (10):4786-4792.
    We provide self-contained proof of a theorem relating probabilistic coherence of forecasts to their non-domination by rival forecasts with respect to any proper scoring rule. The theorem recapitulates insights achieved by other investigators, and clarifi es the connection of coherence and proper scoring rules to Bregman divergence.
  49. Leonard Savage (1971). Elicitation of Personal Probabilities and Expectations. Journal of the American Statistical Association 66 (336):783-801.
  50. Mark Schervish, Teddy Seidenfeld & Joseph Kadane, Coherence with Proper Scoring Rules.
    • Coherence1 for previsions of random variables with generalized betting; • Coherence2 for probability forecasts of events with Brier score penalty; • Coherence3 for probability forecasts of events with various proper scoring rules.