About this topic
Summary Scoring rules play an important role in statistics, decision theory, and formal epistemology. They underpin techniques for eliciting a person's credences in statistics. And they have been exploited in epistemology to give arguments for various norms that are thought to govern credences, such as Probabilism, Conditionalization, the Reflection Principle, the Principal Principle, and Principles of Indifference, as well as accounts of peer disagreement and the Sleeping Beauty puzzle.

A scoring rule is a function that assigns a penalty to an agent's credence (or partial belief or degree of belief) in a given proposition. The penalty depends on whether the proposition is true or false. Typically, if the proposition is true then the penalty increases as the credence decreases (the less confident you are in a true proposition, the more you will be penalised); and if the proposition is false then the penalty increases as the credence increases (the more confident you are in a false proposition, the more you will be penalised).

In statistics and the theory of eliciting credences, we usually interpret the penalty assigned to a credence by a scoring rule as the monetary loss incurred by an agent with that credence. In epistemology, we sometimes interpret it as the so-called 'gradational inaccuracy' of the agent's credence: just as a full belief in a true proposition is more accurate than a full disbelief in that proposition, a higher credence in a true proposition is more accurate than a lower one; and just as a full disbelief in a false proposition is more accurate than a full belief, a lower credence in a false proposition is more accurate than a higher one. Sometimes, in epistemology, we interpret the penalty given by a scoring rule more generally: we take it to be the loss in so-called 'cognitive utility' incurred by an agent with that credence, where this is intended to incorporate a measure of the accuracy of the credence, but also measures of all other doxastic virtues it might have as well.

Scoring rules assign losses or penalties to individual credences. But we can use them to define loss or penalty functions for credence functions as well. The loss assigned to a credence function is just the sum of the losses assigned to the individual credences it gives. Using this, we can argue for such doxastic norms as Probabilism, Conditionalization, the Principal Principle, the Principle of Indifference, the Reflection Principle, norms for resolving peer disagreement, norms for responding to higher-order evidence, and so on. For instance, for a large collection of scoring rules, the following holds: if a credence function violates Probabilism, then there is a credence function that satisfies Probabilism that incurs a lower penalty regardless of how the world turns out. That is, any non-probabilistic credence function is dominated by a probabilistic one. Also, for the same large collection of scoring rules, the following holds: if one's current credence function is a probability function, one will expect updating by Conditionalization to incur a lower penalty than updating by any other rule. There is a substantial and growing body of work on how scoring rules can be used to establish other doxastic norms.
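To make the dominance claim just stated concrete: below is a minimal sketch in Python of the quadratic ('Brier') scoring rule, one member of the large collection of rules mentioned above. The particular credence values are illustrative assumptions of mine, not an example taken from the literature cited below.

```python
# A minimal sketch (assumptions mine, not drawn from the works below):
# the quadratic ("Brier") scoring rule, and a dominance check for a toy
# credence function that violates Probabilism.

def brier_loss(credences, truth_values):
    """Sum of squared gaps between credences and truth values (1 or 0)."""
    return sum((v - c) ** 2 for c, v in zip(credences, truth_values))

# Credences in the pair (X, not-X). The first sums to 1.2, violating
# Probabilism; the second is its probabilistic stand-in.
incoherent = (0.6, 0.6)
coherent = (0.5, 0.5)

# The two ways the world can turn out: X true, or X false.
for world in [(1, 0), (0, 1)]:
    print(world, brier_loss(incoherent, world), brier_loss(coherent, world))
# In both worlds the coherent credences score 0.5 against 0.52, so the
# incoherent function is accuracy-dominated, as the theorem asserts.
```

Predd et al. (2009) show, roughly, that the same pattern holds for any continuous strictly proper scoring rule: every credence function that violates Probabilism is dominated by one that satisfies it.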
Key works Leonard Savage (Savage 1971) and Bruno de Finetti (de Finetti 1970) introduced the notion of a scoring rule independently. The notion was introduced into epistemology by Jim Joyce (Joyce 1998) and Graham Oddie (Oddie 1997). Joyce used it to justify Probabilism; Oddie used it to justify Conditionalization. Since then, authors have improved and generalized both arguments. Improved arguments for Probabilism can be found in (Joyce 2009), (Leitgeb & Pettigrew 2010a), (Leitgeb & Pettigrew 2010b), (Predd et al. 2009), (Schervish et al. manuscript), and (Pettigrew 2016). Improved arguments for Conditionalization can be found in (Greaves & Wallace 2006), (Easwaran 2013), (Schoenfield forthcoming), and (Pettigrew 2016). Furthermore, other norms have been considered, such as the Principal Principle (Pettigrew 2012), (Pettigrew 2013), the Principle of Indifference (Pettigrew 2016), the Reflection Principle (Huttegger 2013), norms for resolving peer disagreement (Moss 2011), (Levinstein 2015), (Levinstein 2017), and norms for responding to higher-order evidence (Schoenfield 2016).
Introductions Pettigrew, Richard (2011) 'Epistemic Utility Arguments for Probabilism', Stanford Encyclopedia of Philosophy.
  1. Scoring and Keying Multiple Choice Tests: A Case Study in Irrationality. [REVIEW]Maya Bar-Hillel, David Budescu & Yigal Attali - 2005 - Mind and Society 4 (1):3-12.
    We offer a case-study in irrationality, showing that even in a high stakes context, intelligent and well trained professionals may adopt dominated practices. In multiple-choice tests one cannot distinguish lucky guesses from answers based on knowledge. Test-makers have dealt with this problem by lowering the incentive to guess, through penalizing errors (called formula scoring), and by eliminating various cues for outperforming random guessing (e.g., a preponderance of correct answers in middle positions), through key balancing. These policies, though widespread and intuitively (...)
  2. The Arrangement of Successive Convergents in Order of Accuracy.Alexander Brown - 1915 - Transactions of the Royal Society of South Africa 5 (1):653-657.
  3. Accuracy in Annotating.Francis Bywater - 1988 - The Chesterton Review 14 (4):645-645.
  4. Calibration and Probabilism.Michael Caie - 2014 - Ergo, an Open Access Journal of Philosophy 1.
  5. Rational Probabilistic Incoherence? A Reply to Michael Caie.Catrin Campbell-Moore - 2015 - Philosophical Review 124 (3):393-406.
    In Michael Caie's article “Rational Probabilistic Incoherence,” Caie argues that in light of certain situations involving self-reference, it is sometimes rational to have probabilistically incoherent credences. This essay further considers his arguments. It shows that probabilism isn't to blame for the failure of rational introspection and that Caie's modified accuracy criterion conflicts with Dutch book considerations, is scoring rule dependent, and leads to the failure of rational introspection.
  6. Chancy Accuracy and Imprecise Credence.Jennifer Carr - 2015 - Philosophical Perspectives 29 (1):67-81.
  7. Epistemic Expansions.Jennifer Carr - 2015 - Res Philosophica 92 (2):217-236.
  8. Theory and Decision 58 (2005): 409-410 (volume contents). [REVIEW] Mohammed Abdellaoui, 'Editorial Statement', 1-2; Mohammed Abdellaoui & Peter P. Wakker, 'The Likelihood Method for Decision Under Uncertainty', 3-76; A. A. J. Marley & R. Duncan Luce, 'Independence Properties Vis-à-Vis Several Utility Representations', 77-143; Davide P. Cervone, William V. Gehrlein & William S. Zwicker, 'Which Scoring Rule Maximizes Condorcet ...'; Marcello Basili, Alain Chateauneuf & Fulvio Fontini.
  9. Acceptance, Aggregation and Scoring Rules.Jake Chandler - 2013 - Erkenntnis 78 (1):201-217.
    As the ongoing literature on the paradoxes of the Lottery and the Preface reminds us, the nature of the relation between probability and rational acceptability remains far from settled. This article provides a novel perspective on the matter by exploiting a recently noted structural parallel with the problem of judgment aggregation. After offering a number of general desiderata on the relation between finite probability models and sets of accepted sentences in a Boolean sentential language, it is noted that a number (...)
  10. Standard Issue Scoring Manual.Anne Colby - 1987 - In The Measurement of Moral Judgment. Cambridge University Press.
  11. Epistemic Accuracy and Subjective Probability.Marcello D'Agostino & Corrado Sinigaglia - 2010 - In M. Dorato & M. Suárez (eds.), EPSA Epistemology and Methodology of Science. Springer. pp. 95-105.
  12. The Role of 'Dutch Books' and of 'Proper Scoring Rules'.Bruno de Finetti - 1981 - British Journal for the Philosophy of Science 32 (1):55-56.
  13. Probability, Induction, and Statistics.Bruno de Finetti - 1972 - New York: John Wiley.
  14. Theory of Probability.Bruno de Finetti - 1970 - New York: John Wiley.
  15. Goldman on Probabilistic Inference.Don Fallis - 2002 - Philosophical Studies 109 (3):223 - 240.
    In his recent book, Knowledge in a Social World, Alvin Goldman claims to have established that if a reasoner starts with accurate estimates of the reliability of new evidence and conditionalizes on this evidence, then this reasoner is objectively likely to end up closer to the truth. In this paper, I argue that Goldman's result is not nearly as philosophically significant as he would have us believe. First, accurately estimating the reliability of evidence – in the sense that Goldman requires (...)
  16. Inference to the Best Explanation, Dutch Books, and Inaccuracy Minimisation.Igor Douven - 2013 - Philosophical Quarterly 63 (252):428-444.
    Bayesians have traditionally taken a dim view of Inference to the Best Explanation (IBE), arguing that, if IBE is at variance with Bayes' rule, then it runs afoul of the dynamic Dutch book argument. More recently, Bayes' rule has been claimed to be superior on grounds of conduciveness to our epistemic goal. The present paper aims to show that neither of these arguments succeeds in undermining IBE.
  17. Reliability for Degrees of Belief.Jeff Dunn - 2015 - Philosophical Studies 172 (7):1929-1952.
    We often evaluate belief-forming processes, agents, or entire belief states for reliability. This is normally done with the assumption that beliefs are all-or-nothing. How does such evaluation go when we’re considering beliefs that come in degrees? I consider a natural answer to this question that focuses on the degree of truth-possession had by a set of beliefs. I argue that this natural proposal is inadequate, but for an interesting reason. When we are dealing with all-or-nothing belief, high reliability leads to (...)
  18. Expected Accuracy Supports Conditionalization—and Conglomerability and Reflection.Kenny Easwaran - 2013 - Philosophy of Science 80 (1):119-142.
  19. An ‘Evidentialist’ Worry About Joyce's Argument for Probabilism.Kenny Easwaran & Branden Fitelson - 2012 - Dialectica 66 (3):425-433.
    To the extent that we have reasons to avoid these “bad B-properties”, these arguments provide reasons not to have an incoherent credence function b — and perhaps even reasons to have a coherent one. But, note that these two traditional arguments for probabilism involve what might be called “pragmatic” reasons (not) to be (in)coherent. In the case of the Dutch Book argument, the “bad” property is pragmatically bad (to the extent that one values money). But, it is not clear (...)
  21. Attitudes Toward Epistemic Risk and the Value of Experiments.Don Fallis - 2007 - Studia Logica 86 (2):215-246.
    Several different Bayesian models of epistemic utilities (see, e.g., [37], [24], [40], [46]) have been used to explain why it is rational for scientists to perform experiments. In this paper, I argue that a model, suggested independently by Patrick Maher [40] and Graham Oddie [46], that assigns epistemic utility to degrees of belief in hypotheses provides the most comprehensive explanation. This is because this proper scoring rule (PSR) model captures a wider range of scientifically acceptable attitudes toward epistemic risk than the (...)
  22. The Brier Rule Is Not a Good Measure of Epistemic Utility.Don Fallis & Peter J. Lewis - 2016 - Australasian Journal of Philosophy 94 (3):576-590.
    Measures of epistemic utility are used by formal epistemologists to make determinations of epistemic betterness among cognitive states. The Brier rule is the most popular choice among formal epistemologists for such a measure. In this paper, however, we show that the Brier rule is sometimes seriously wrong about whether one cognitive state is epistemically better than another. In particular, there are cases where an agent gets evidence that definitively eliminates a false hypothesis, but where the Brier rule says that things (...)
  23. Epistemic Utility and Theory Acceptance: Comments on Hempel.Robert Feleppa - 1981 - Synthese 46 (3):413 - 420.
  25. Accuracy & Coherence.Branden Fitelson - unknown
    This talk is (mainly) about the relationship between two types of epistemic norms: accuracy norms and coherence norms. A simple example that everyone will be familiar with.
  26. Accuracy & Coherence II.Branden Fitelson - unknown
    Comparative. Let C be the full set of S’s comparative judgments over B × B. The inaccuracy of C at a world w is given by the number of incorrect judgments in C at w.
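The counting measure sketched in this abstract is easy to make concrete. The following is my own illustration, not Fitelson's, and it builds in one natural reading as an assumption: a judgment 'X is strictly more likely than Y' counts as incorrect at a world w exactly when X is false and Y is true there.

```python
# A hedged sketch of the counting measure above; the reading of
# "incorrect" is an assumption of mine: a judgment "X is strictly more
# likely than Y" is incorrect at world w iff X is false and Y is true.

def comparative_inaccuracy(judgments, world):
    """judgments: pairs (X, Y) read as 'X is strictly more likely than
    Y'; world: dict from proposition names to truth values."""
    return sum(1 for x, y in judgments if not world[x] and world[y])

judgments = [("rain", "snow"), ("snow", "hail")]
world = {"rain": False, "snow": True, "hail": False}
print(comparative_inaccuracy(judgments, world))  # 1 incorrect judgment
```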
  27. Accuracy & Coherence III.Branden Fitelson - unknown
    In this talk, I will explain why only one of Miller’s two types of language-dependence-of-verisimilitude problems is a (potential) threat to the sorts of accuracy-dominance approaches to coherence that I’ve been discussing.
  28. Accuracy, Language Dependence, and Joyce's Argument for Probabilism.Branden Fitelson - 2012 - Philosophy of Science 79 (1):167-174.
  29. Separability Assumptions in Scoring-Rule-Based Arguments for Probabilism.Branden Fitelson & Lara Buchak - unknown
    - In decision theory, an agent is deciding how to value a gamble that results in different outcomes in different states. Each outcome gets a utility value for the agent.
  30. Advice-Giving and Scoring-Rule-Based Arguments for Probabilism.Branden Fitelson & Lara Buchak - unknown
    Dutch Book Arguments. B is susceptibility to sure monetary loss (in a certain betting set-up), and F is the formal role played by non-Pr b’s in the DBT and the Converse DBT. Representation Theorem Arguments. B is having preferences that violate some of Savage’s axioms (and/or being unrepresentable as an expected utility maximizer), and F is the formal role played by non-Pr b’s in the RT.
  31. Partial Belief, Full Belief, and Accuracy–Dominance.Branden Fitelson & Kenny Easwaran - manuscript
    Arguments for probabilism aim to undergird/motivate a synchronic probabilistic coherence norm for partial beliefs. Standard arguments for probabilism are all of the form: An agent S has a non-probabilistic partial belief function b iff (⇐⇒) S has some “bad” property B (in virtue of the fact that their p.b.f. b has a certain kind of formal property F). These arguments rest on Theorems (⇒) and Converse Theorems (⇐): b is non-Pr ⇐⇒ b has formal property F.
  32. Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility.Hilary Greaves & David Wallace - 2006 - Mind 115 (459):607-632.
    According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization: that is, her posterior subjective probability after taking account of evidence X, p_new, is to be set equal to her prior conditional probability p_old(·|X). Bayesians can be challenged to provide a justification for their claim that conditionalization is recommended by rationality—whence the normative force of the injunction to conditionalize? There are several existing justifications for conditionalization, but none directly addresses the idea that conditionalization will be epistemically rational (...)
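The expected-inaccuracy comparison driving results of this kind can be illustrated numerically. The sketch below uses assumptions of my own choosing (four equiprobable worlds, a two-cell evidence partition, Brier inaccuracy) rather than Greaves and Wallace's own apparatus; it compares conditionalization with a rival rule that ignores the evidence.

```python
# A minimal numeric sketch (assumptions mine, not Greaves & Wallace's
# own example): the expected Brier inaccuracy of an update rule, taken
# from the standpoint of the agent's current probabilistic credences.

worlds = ["w1", "w2", "w3", "w4"]
prior = {w: 0.25 for w in worlds}          # uniform prior credences
E = {"w1", "w2"}                           # evidence partition {E, not-E}

def brier(cred, actual):
    """Brier inaccuracy of a credence function over worlds at `actual`."""
    return sum((1.0 - cred[w]) ** 2 if w == actual else cred[w] ** 2
               for w in worlds)

def conditionalize(evidence):
    total = sum(prior[w] for w in evidence)
    return {w: prior[w] / total if w in evidence else 0.0 for w in worlds}

def stay_put(evidence):
    return dict(prior)                     # rival rule: ignore the evidence

for rule in (conditionalize, stay_put):
    expected = sum(
        prior[w] * brier(rule(E if w in E else set(worlds) - E), w)
        for w in worlds)
    print(rule.__name__, expected)
# conditionalize 0.5, stay_put 0.75: by the agent's own expectation,
# conditionalizing incurs the lower penalty, as the theorem predicts.
```

Greaves and Wallace's theorem generalizes the pattern: relative to any suitable (proper) measure of epistemic utility, a probabilistically coherent agent expects conditionalization to do at least as well as any rival update rule.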
  33. Note on Bem and Funder's Scheme for Scoring Q Sorts.Bert F. Green - 1980 - Psychological Review 87 (2):212-214.
  34. Arguments for–or Against–Probabilism?Alan Hájek - 2008 - British Journal for the Philosophy of Science 59 (4):793 - 819.
    Four important arguments for probabilism—the Dutch Book, representation theorem, calibration, and gradational accuracy arguments—have a strikingly similar structure. Each begins with a mathematical theorem, a conditional with an existentially quantified consequent, of the general form: if your credences are not probabilities, then there is a way in which your rationality is impugned. Each argument concludes that rationality requires your credences to be probabilities. I contend that each argument is invalid as formulated. In each case there is a mirror-image theorem and (...)
  35. Arguments For—Or Against—Probabilism?Alan Hájek - 2009 - In Franz Huber & Christoph Schmidt-Petri (eds.), Degrees of Belief. Springer. pp. 229-251.
    Four important arguments for probabilism—the Dutch Book, representation theorem, calibration, and gradational accuracy arguments—have a strikingly similar structure. Each begins with a mathematical theorem, a conditional with an existentially quantified consequent, of the general form: if your credences are not probabilities, then there is a way in which your rationality is impugned. Each argument concludes that rationality requires your credences to be probabilities. I contend that each argument is invalid as formulated. In each case there is a mirror-image theorem and (...)
  36. The Effect of Presenting Various Numbers of Discrete Steps on Scale Reading Accuracy.Harold W. Hake & W. R. Garner - 1951 - Journal of Experimental Psychology 42 (5):358.
  37. Eliciting Objective Probabilities Via Lottery Insurance Games.Robin Hanson - unknown
    Since utilities and probabilities jointly determine choices, event-dependent utilities complicate the elicitation of subjective event probabilities. However, for the usual purpose of obtaining the information embodied in agent beliefs, it is sufficient to elicit objective probabilities, i.e., probabilities obtained by updating a known common prior with that agent’s further information. Bayesians who play a Nash equilibrium of a certain insurance game before they obtain relevant information will afterward act regarding lottery ticket payments as if they had event-independent risk-neutral utility and (...)
  38. Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation.Robin Hanson - unknown
    In practice, scoring rules elicit good probability estimates from individuals, while betting markets elicit good consensus estimates from groups. Market scoring rules combine these features, eliciting estimates from individuals or groups, with groups costing no more than individuals. Regarding a bet on one event given another event, only logarithmic versions preserve the probability of the given event. Logarithmic versions also preserve the conditional probabilities of other events, and so preserve conditional independence relations. Given logarithmic rules that elicit relative probabilities of (...)
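The cost-function formulation of Hanson's market scoring rule is standard, and a small sketch makes the mechanism visible; the parameter and variable names below are mine, and the sketch is illustrative rather than a reference implementation.

```python
# A sketch of Hanson's logarithmic market scoring rule (LMSR) in its
# standard cost-function formulation; the parameter values are mine.
import math

def cost(q, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def prices(q, b=100.0):
    """Market probabilities implied by the outstanding shares q."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

q_old = [0.0, 0.0]            # two-outcome market before any trades
q_new = [50.0, 0.0]           # a trader buys 50 shares on outcome 0
print(prices(q_old))          # [0.5, 0.5]
print(prices(q_new))          # outcome 0 now priced at about 0.62
print(cost(q_new) - cost(q_old))  # the trade's cost, about 28.1
# If outcome 0 occurs the shares pay 50, so the trader nets about 21.9;
# her expected payoff is maximized by moving the market's prices to her
# own probabilities, which is what makes this a (market) scoring rule.
```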
  39. Immoderately Rational.Sophie Horowitz - 2014 - Philosophical Studies 167 (1):41-56.
    Believing rationally is epistemically valuable, or so we tend to think. It’s something we strive for in our own beliefs, and we criticize others for falling short of it. We theorize about rationality, in part, because we want to be rational. But why? I argue that how we answer this question depends on how permissive our theory of rationality is. Impermissive and extremely permissive views can give good answers; moderately permissive views cannot.
  40. What Probability Probably Isn't.C. Howson - 2015 - Analysis 75 (1):53-59.
    Joyce and others have claimed that degrees of belief are estimates of truth-values and that the probability axioms are conditions of admissibility for these estimates with respect to a scoring rule penalizing inaccuracy. In this article, I argue that the claim that the rules of probability are truth-directed in this way depends on an assumption that is both implausible and lacks any supporting evidence, strongly suggesting that the probability axioms have nothing intrinsically to do with truth-directedness.
  41. Dutch-Book Arguments Against Using Conditional Probabilities for Conditional Bets.Keith Hutchison - 2012 - Open Journal of Philosophy 2 (3):195.
    We consider here an important family of conditional bets, those that proceed to settlement if and only if some agreed evidence is received that a condition has been met. Despite an opinion widespread in the literature, we observe that when the evidence is strong enough to generate certainty as to whether the condition has been met or not, using traditional conditional probabilities for such bets will NOT preserve a gambler from having a synchronic Dutch Book imposed upon him. On the (...)
  42. The Value of a Probability Forecast From Portfolio Theory.D. J. Johnstone - 2007 - Theory and Decision 63 (2):153-203.
    A probability forecast scored ex post using a probability scoring rule (e.g. Brier) is analogous to a risky financial security. With only superficial adaptation, the same economic logic by which securities are valued ex ante – in particular, portfolio theory and the capital asset pricing model (CAPM) – applies to the valuation of probability forecasts. Each available forecast of a given event is valued relative to each other and to the “market” (all available forecasts). A forecast is seen to be (...)
  43. Economic Darwinism: Who has the Best Probabilities? [REVIEW]David Johnstone - 2007 - Theory and Decision 62 (1):47-96.
    Simulation evidence obtained within a Bayesian model of price-setting in a betting market, where anonymous gamblers queue to bet against a risk-neutral bookmaker, suggests that a gambler who wants to maximize future profits should trade on the advice of the analyst cum probability forecaster who records the best probability score, rather than the highest trading profits, during the preceding observation period. In general, probability scoring rules, specifically the log score and better known “Brier” (quadratic) score, are found to have higher (...)
  44. A Characterization for the Spherical Scoring Rule.Victor Richmond Jose - 2009 - Theory and Decision 66 (3):263-281.
    Strictly proper scoring rules have been studied widely in statistical decision theory and recently in experimental economics because of their ability to encourage assessors to honestly provide their true subjective probabilities. In this article, we study the spherical scoring rule by analytically examining some of its properties and providing some new geometric interpretations for this rule. Moreover, we state a theorem which provides an axiomatic characterization for the spherical scoring rule. The objective of this analysis is to provide a better (...)
  45. Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief.James Joyce - 2009 - In Franz Huber & Christoph Schmidt-Petri (eds.), Degrees of Belief. Springer. pp. 263-297.
  46. The Foundations of Causal Decision Theory.James M. Joyce - 1999 - Cambridge University Press.
    This book defends the view that any adequate account of rational decision making must take a decision maker's beliefs about causal relations into account. The early chapters of the book introduce the non-specialist to the rudiments of expected utility theory. The major technical advance offered by the book is a 'representation theorem' that shows that both causal decision theory and its main rival, Richard Jeffrey's logic of decision, are both instances of a more general conditional decision theory. The book solves (...)
  47. A Nonpragmatic Vindication of Probabilism.James M. Joyce - 1998 - Philosophy of Science 65 (4):575-603.
    The pragmatic character of the Dutch book argument makes it unsuitable as an "epistemic" justification for the fundamental probabilist dogma that rational partial beliefs must conform to the axioms of probability. To secure an appropriately epistemic justification for this conclusion, one must explain what it means for a system of partial beliefs to accurately represent the state of the world, and then show that partial beliefs that violate the laws of probability are invariably less accurate than they could be otherwise. (...)
  50. Minimizing Inaccuracy for Self-Locating Beliefs.Brian Kierland & Bradley Monton - 2005 - Philosophy and Phenomenological Research 70 (2):384-395.
    One's inaccuracy for a proposition is defined as the squared difference between the truth value (1 or 0) of the proposition and the credence (or subjective probability, or degree of belief) assigned to the proposition. One should have the epistemic goal of minimizing the expected inaccuracies of one's credences. We show that the method of minimizing expected inaccuracy can be used to solve certain probability problems involving information loss and self-locating beliefs (where a self-locating belief of a temporal part of (...)
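The squared-difference measure defined in this abstract is the quadratic (Brier) score from earlier entries, and the method of minimizing expected inaccuracy turns on its 'propriety': by the lights of any probability, expected inaccuracy is minimized by setting one's credence equal to that probability. A quick sketch, my own illustration rather than the authors' calculation:

```python
# A small illustration (mine, not the authors' calculation): with the
# squared-difference measure, the expected inaccuracy of credence c in a
# proposition, by the lights of probability p, is p*(1-c)**2 + (1-p)*c**2.

def expected_inaccuracy(c, p):
    return p * (1 - c) ** 2 + (1 - p) * c ** 2

p = 0.7
grid = [i / 100 for i in range(101)]
best = min(grid, key=lambda c: expected_inaccuracy(c, p))
print(best)  # 0.7: the minimizer is p itself, so the rule is "proper",
             # and minimizing expected inaccuracy never pushes a
             # probabilistically coherent agent away from her credences.
```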