About this topic
Summary Scoring rules play an important role in statistics, decision theory, and formal epistemology. In statistics, they underpin techniques for eliciting a person's credences. In epistemology, they have been used to give arguments for various norms thought to govern credences, such as Probabilism, Conditionalization, the Reflection Principle, the Principal Principle, and Principles of Indifference, as well as accounts of peer disagreement and the Sleeping Beauty puzzle.

A scoring rule is a function that assigns a penalty to an agent's credence (or partial belief, or degree of belief) in a given proposition. The penalty depends on whether the proposition is true or false. Typically, if the proposition is true, the penalty increases as the credence decreases (the less confident you are in a true proposition, the more you are penalised); and if the proposition is false, the penalty increases as the credence increases (the more confident you are in a false proposition, the more you are penalised).

In statistics and the theory of eliciting credences, we usually interpret the penalty assigned to a credence by a scoring rule as the monetary loss incurred by an agent with that credence. In epistemology, we sometimes interpret it as the so-called 'gradational inaccuracy' of the agent's credence: just as a full belief in a true proposition is more accurate than a full disbelief in that proposition, a higher credence in a true proposition is more accurate than a lower one; and just as a full disbelief in a false proposition is more accurate than a full belief, a lower credence in a false proposition is more accurate than a higher one. Sometimes we interpret the penalty more generally still, as the loss in so-called 'cognitive utility' incurred by an agent with that credence, where this is intended to incorporate a measure of the accuracy of the credence together with measures of any other doxastic virtues it might have.

Scoring rules assign losses or penalties to individual credences, but we can use them to define loss or penalty functions for whole credence functions as well: the loss assigned to a credence function is the sum of the losses assigned to the individual credences it gives. Using this, we can argue for doxastic norms such as Probabilism, Conditionalization, the Principal Principle, the Principle of Indifference, the Reflection Principle, norms for resolving peer disagreement, and norms for responding to higher-order evidence. For instance, for a large class of scoring rules, the following holds: if a credence function violates Probabilism, then there is a credence function satisfying Probabilism that incurs a lower penalty however the world turns out; that is, any non-probabilistic credence function is dominated by a probabilistic one. For the same class of scoring rules, the following also holds: if one's current credence function is a probability function, one will expect updating by Conditionalization to incur a lower penalty than updating by any other rule. There is a substantial and growing body of work on how scoring rules can be used to establish other doxastic norms.
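The dominance claim above is easy to illustrate with the Brier score, one standard scoring rule. The sketch below is only an illustration of the idea, not a reconstruction of any argument in the works listed here; the function name brier_penalty and the example credences are hypothetical choices made for this example.

```python
# Illustration of accuracy dominance using the Brier score (a quadratic
# scoring rule): the penalty for a credence c in a proposition is
# (v - c)^2, where v is 1 if the proposition is true and 0 if it is false.
# The penalty for a whole credence function is the sum over propositions.

def brier_penalty(credences, world):
    """Total Brier penalty of a credence function at a possible world.

    credences: dict mapping proposition names to credences in [0, 1].
    world:     dict mapping proposition names to truth values (bool).
    """
    return sum((float(world[p]) - c) ** 2 for p, c in credences.items())

# A non-probabilistic credence function: the credences in X and its
# negation sum to 1.2, violating Probabilism.
c_incoherent = {"X": 0.6, "not-X": 0.6}
# A probabilistic alternative.
c_coherent = {"X": 0.5, "not-X": 0.5}

# The two ways the world might be.
worlds = [{"X": True, "not-X": False}, {"X": False, "not-X": True}]

for w in worlds:
    print(round(brier_penalty(c_incoherent, w), 4),
          round(brier_penalty(c_coherent, w), 4))
# Prints 0.52 0.5 in both worlds: the probabilistic credence function
# incurs a strictly lower penalty however the world turns out, so it
# accuracy-dominates the non-probabilistic one.
```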
Key works Leonard Savage (Savage 1971) and Bruno de Finetti (de Finetti 1970) introduced the notion of a scoring rule independently. The notion was introduced into epistemology by Jim Joyce (Joyce 1998) and Graham Oddie (Oddie 1997): Joyce used it to justify Probabilism, while Oddie used it to justify Conditionalization. Since then, authors have improved and generalized both arguments. Improved arguments for Probabilism can be found in (Joyce 2009), (Leitgeb & Pettigrew 2010), (Predd et al. 2009), (Schervish et al. manuscript), and (Pettigrew 2016). Improved arguments for Conditionalization can be found in (Greaves & Wallace 2006), (Easwaran 2013), (Schoenfield 2017), and (Pettigrew 2016). Other norms have also been considered: the Principal Principle (Pettigrew 2012), (Pettigrew 2013); the Principle of Indifference (Pettigrew 2016); the Reflection Principle (Huttegger 2013); norms for resolving peer disagreement (Moss 2011), (Levinstein 2015), (Levinstein 2017); and norms for responding to higher-order evidence (Schoenfield 2018).
Introductions Pettigrew, Richard (2011) 'Epistemic Utility Arguments for Probabilism', Stanford Encyclopedia of Philosophy (Pettigrew 2011)
1 — 50 / 180
  1. What is Justified Credence? Richard Pettigrew - 2021 - Episteme 18 (1):16-30.
    In this paper, we seek a reliabilist account of justified credence. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Existing accounts of reliabilism about justified credence come in the same two varieties: Jeff Dunn (2015) proposes a version of process reliabilism, while Weng Hong Tang (2016) offers a version of indicator reliabilism. As we will see, both face the same objection. If they are right about what justification is, it (...)
    9 citations
  2. On the Pragmatic and Epistemic Virtues of Inference to the Best Explanation. Richard Pettigrew - 2021 - Synthese 199 (5-6):12407-12438.
    In a series of papers over the past twenty years, and in a new book, Igor Douven has argued that Bayesians are too quick to reject versions of inference to the best explanation that cannot be accommodated within their framework. In this paper, I survey their worries and attempt to answer them using a series of pragmatic and purely epistemic arguments that I take to show that Bayes’ Rule really is the only rational way to respond to your evidence.
    2 citations
  3. Partial Belief, Full Belief, and Accuracy–Dominance. Branden Fitelson & Kenny Easwaran - manuscript
    Arguments for probabilism aim to undergird/motivate a synchronic probabilistic coherence norm for partial beliefs. Standard arguments for probabilism are all of the form: An agent S has a non-probabilistic partial belief function b iff (⇐⇒) S has some “bad” property B (in virtue of the fact that their p.b.f. b has a certain kind of formal property F). These arguments rest on Theorems (⇒) and Converse Theorems (⇐): b is non-Pr ⇐⇒ b has formal property F.
  4. Self-Locating Belief and the Goal of Accuracy. Richard Pettigrew - manuscript
    The goal of a partial belief is to be accurate, or close to the truth. By appealing to this norm, I seek norms for partial beliefs in self-locating and non-self-locating propositions. My aim is to find norms that are analogous to the Bayesian norms, which, I argue, only apply unproblematically to partial beliefs in non-self-locating propositions. I argue that the goal of a set of partial beliefs is to minimize the expected inaccuracy of those beliefs. However, in the self-locating framework, (...)
  5. Pooling: A User's Guide. Richard Pettigrew & Jonathan Weisberg - manuscript
    We often learn the credences of others without getting to hear the evidence on which they’re based. And, in these cases, it is often unfeasible or overly onerous to update on this social evidence by conditionalizing on it. How, then, should we respond to it? We consider four methods for aggregating your credences with the credences of others: arithmetic, geometric, multiplicative, and harmonic pooling. Each performs well for some purposes and poorly for others. We describe these in Sections 1-4. In (...)
  6. Higher-Order Evidence and the Dynamics of Self-Location: An Accuracy-Based Argument for Calibrationism. Brett Topey - manuscript
    The thesis that agents should calibrate their beliefs in the face of higher-order evidence—i.e., should adjust their first-order beliefs in response to evidence suggesting that the reasoning underlying those beliefs is faulty—is sometimes thought to be in tension with Bayesian approaches to belief update: in order to obey Bayesian norms, it's claimed, agents must remain steadfast in the face of higher-order evidence. But I argue that this claim is incorrect. In particular, I motivate a minimal constraint on a reasonable treatment (...)
    1 citation
  7. A Non-Pragmatic Dominance Argument for Conditionalization. Robert Williams - manuscript
    In this paper, I provide an accuracy-based argument for conditionalization (via reflection) that does not rely on norms of maximizing expected accuracy. (This is a draft of a paper that I wrote in 2013. It stalled for no very good reason. I still believe the content is right.)
  8. The Foundations of Epistemic Decision Theory. Jason Konek & Ben Levinstein - 2017
    According to accuracy-first epistemology, accuracy is the fundamental epistemic good. Epistemic norms — Probabilism, Conditionalization, the Principal Principle, etc. — have their binding force in virtue of helping to secure this good. To make this idea precise, accuracy-firsters invoke Epistemic Decision Theory (EpDT) to determine which epistemic policies are the best means toward the end of accuracy. Hilary Greaves and others have recently challenged the tenability of this programme. Their arguments purport to show that EpDT encourages obviously epistemically irrational behavior. (...)
    16 citations
  9. Downwards Propriety in Epistemic Utility Theory. Alejandro Pérez Carballo - forthcoming - Mind.
    Epistemic Utility Theory is often identified with the project of *axiology-first epistemology*—the project of vindicating norms of epistemic rationality purely in terms of epistemic value. One of the central goals of axiology-first epistemology is to provide a justification of the central norm of Bayesian epistemology, Probabilism. The first part of this paper presents a new challenge to axiology first epistemology: I argue that in order to justify Probabilism in purely axiological terms, proponents of axiology first epistemology need to justify a (...)
  10. Epistemic Consequentialism, Veritism, and Scoring Rules. Marc-Kevin Daoust & Charles Côté-Bouchard - forthcoming - Erkenntnis:1-25.
    We argue that there is a tension between two monistic claims that are the core of recent work in epistemic consequentialism. The first is a form of monism about epistemic value, commonly known as veritism: accuracy is the sole final objective to be promoted in the epistemic domain. The other is a form of monism about a class of epistemic scoring rules: that is, strictly proper scoring rules are the only legitimate measures of inaccuracy. These two monisms, we argue, are (...)
  11. Updating Without Evidence. Yoaav Isaacs & Jeffrey Sanford Russell - forthcoming - Noûs.
    Sometimes you are unreliable at fulfilling your doxastic plans: for example, if you plan to be fully confident in all truths, probably you will end up being fully confident in some falsehoods by mistake. In some cases, there is information that plays the classical role of *evidence*—your beliefs are perfectly discriminating with respect to some possible facts about the world—and there is a standard expected-accuracy-based justification for planning to *conditionalize* on this evidence. This planning-oriented justification extends to some cases where (...)
  12. On Accuracy and Coherence with Infinite Opinion Sets. Mikayla Kelley - forthcoming - Philosophy of Science.
    There is a well-known equivalence between avoiding accuracy dominance and having probabilistically coherent credences (see, e.g., de Finetti 1974, Joyce 2009, Predd et al. 2009, Pettigrew 2016). However, this equivalence has been established only when the set of propositions on which credence functions are defined is finite. In this paper, I establish connections between accuracy dominance and coherence when credence functions are defined on an infinite set of propositions. In particular, I establish the necessary results to extend the classic accuracy (...)
    1 citation
  13. Epistemic Conservativity and Imprecise Credence. Jason Konek - forthcoming - Philosophy and Phenomenological Research.
    Unspecific evidence calls for imprecise credence. My aim is to vindicate this thought. First, I will pin down what it is that makes one's imprecise credences more or less epistemically valuable. Then I will use this account of epistemic value to delineate a class of reasonable epistemic scoring rules for imprecise credences. Finally, I will show that if we plump for one of these scoring rules as our measure of epistemic value or utility, then a popular family of decision rules (...)
    22 citations
  14. On the Best Accuracy Arguments for Probabilism. Michael Nielsen - forthcoming - Philosophy of Science:1-9.
    In a recent paper, Pettigrew (2021) reports a generalization of the celebrated accuracy-dominance theorem due to Predd et al. (2009). But Pettigrew’s proof is incorrect. I will explain the mistakes and provide a correct proof.
  15. Bayesian Updating When What You Learn Might Be False. Richard Pettigrew - forthcoming - Erkenntnis:1-16.
    Michael Rescorla (2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever you take yourself to learn something with certainty, it's true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla's new argument by giving a very general Dutch Book argument that applies to many cases of updating (...)
    2 citations
  16. Epistemic Risk and the Demands of Rationality. Richard Pettigrew - forthcoming - Oxford, UK: Oxford University Press.
    The short abstract: Epistemic utility theory + permissivism about attitudes to epistemic risk => permissivism about rational credences. The longer abstract: I argue that epistemic rationality is permissive. More specifically, I argue for two claims. First, a radical version of interpersonal permissivism about rational credence: for many bodies of evidence, there is a wide range of credal states for which there is some individual who might rationally adopt that state in response to that evidence. Second, a slightly less radical version (...)
    1 citation
  17. Best Laid Plans: Idealization and the Rationality–Accuracy Bridge. Brett Topey - forthcoming - British Journal for the Philosophy of Science.
    Hilary Greaves and David Wallace argue that conditionalization maximizes expected accuracy and so is a rational requirement, but their argument presupposes a particular picture of the bridge between rationality and accuracy: the Best-Plan-to-Follow picture. And theorists such as Miriam Schoenfield and Robert Steel argue that it's possible to motivate an alternative picture—the Best-Plan-to-Make picture—that does not vindicate conditionalization. I show that these theorists are mistaken: it turns out that, if an update procedure maximizes expected accuracy on the Best-Plan-to-Follow picture, it's (...)
    1 citation
  18. Dilating and Contracting Arbitrarily. David Builes, Sophie Horowitz & Miriam Schoenfield - 2022 - Noûs 56 (1):3-20.
    Standard accuracy-based approaches to imprecise credences have the consequence that it is rational to move between precise and imprecise credences arbitrarily, without gaining any new evidence. Building on the Educated Guessing Framework of Horowitz (2019), we develop an alternative accuracy-based approach to imprecise credences that does not have this shortcoming. We argue that it is always irrational to move from a precise state to an imprecise state arbitrarily; however, it can be rational to move from an imprecise state to a (...)
    3 citations
  19. Accuracy-First Epistemology Without Additivity. Richard Pettigrew - 2022 - Philosophy of Science 89 (1):128-151.
    Accuracy arguments for the core tenets of Bayesian epistemology differ mainly in the conditions they place on the legitimate ways of measuring the inaccuracy of our credences. The best existing arguments rely on three conditions: Continuity, Additivity, and Strict Propriety. In this paper, I show how to strengthen the arguments based on these conditions by showing that the central mathematical theorem on which each depends goes through without assuming Additivity.
    1 citation
  20. Strict Propriety is Weak. Catrin Campbell-Moore & Benjamin A. Levinstein - 2021 - Analysis 81 (1):8-13.
    Considerations of accuracy – the epistemic good of having credences close to truth-values – have led to the justification of a host of epistemic norms. These arguments rely on specific ways of measuring accuracy. In particular, the accuracy measure should be strictly proper. However, the main argument for strict propriety supports only weak propriety. But strict propriety follows from weak propriety given strict truth directedness and additivity. So no further argument is necessary.
    4 citations
  21. Scoring, Truthlikeness, and Value. Igor Douven - 2021 - Synthese 199 (3-4):8281-8298.
    There is an ongoing debate about which rule we ought to use for scoring probability estimates. Much of this debate has been premised on scoring-rule monism, according to which there is exactly one best scoring rule. In previous work, I have argued against this position. The argument given there was based on purely a priori considerations, notably the intuition that scoring rules should be sensitive to truthlikeness relations if, and only if, such relations are present among whichever hypotheses are at (...)
  22. Symmetry and partial belief geometry. Stefan Lukits - 2021 - European Journal for Philosophy of Science 11 (3):1-24.
    When beliefs are quantified as credences, they are related to each other in terms of closeness and accuracy. The “accuracy first” approach in formal epistemology wants to establish a normative account for credences based entirely on the alethic properties of the credence: how close it is to the truth. To pull off this project, there is a need for a scoring rule. There is widespread agreement about some constraints on this scoring rule, but not whether a unique scoring rule stands (...)
  23. XIII—Dutch Book and Accuracy Theorems. Anna Mahtani - 2021 - Proceedings of the Aristotelian Society 120 (3):309-327.
    Dutch book and accuracy arguments are used to justify certain rationality constraints on credence functions. Underlying these Dutch book and accuracy arguments are associated theorems, and I show that the interpretation of these theorems can vary along a range of dimensions. Given that the theorems can be interpreted in a variety of different ways, what is the status of the associated arguments? I consider three possibilities: we could aggregate the results of the differently interpreted theorems in some way, and motivate (...)
  24. Accuracy-dominance and conditionalization. Michael Nielsen - 2021 - Philosophical Studies 178 (10):3217-3236.
    Epistemic decision theory produces arguments with both normative and mathematical premises. I begin by arguing that philosophers should care about whether the mathematical premises (1) are true, (2) are strong, and (3) admit simple proofs. I then discuss a theorem that Briggs and Pettigrew (2020) use as a premise in a novel accuracy-dominance argument for conditionalization. I argue that the theorem and its proof can be improved in a number of ways. First, I present a counterexample that shows that one (...)
    4 citations
  25. Accuracy and Credal Imprecision. Dominik Berger & Nilanjan Das - 2020 - Noûs 54 (3):666-703.
    Many have claimed that epistemic rationality sometimes requires us to have imprecise credal states (i.e. credal states representable only by sets of credence functions) rather than precise ones (i.e. credal states representable by single credence functions). Some writers have recently argued that this claim conflicts with accuracy-centered epistemology, i.e., the project of justifying epistemic norms by appealing solely to the overall accuracy of the doxastic states they recommend. But these arguments are far from decisive. In this essay, we prove some (...)
    3 citations
  26. A Deference Model of Epistemic Authority. Sofia Ellinor Bokros - 2020 - Synthese 198 (12):12041-12069.
    How should we adjust our beliefs in light of the testimony of those who are in a better epistemic position than ourselves, such as experts and other epistemic superiors? In this paper, I develop and defend a deference model of epistemic authority. The paper attempts to resolve the debate between the preemption view and the total evidence view of epistemic authority by taking an accuracy-first approach to the issue of how we should respond to authoritative and expert testimony. I argue (...)
  27. An Accuracy-Dominance Argument for Conditionalization. R. A. Briggs & Richard Pettigrew - 2020 - Noûs 54 (1):162-181.
  28. Time-Slice Rationality and Self-Locating Belief. David Builes - 2020 - Philosophical Studies 177 (10):3033-3049.
    The epistemology of self-locating belief concerns itself with how rational agents ought to respond to certain kinds of indexical information. I argue that those who endorse the thesis of Time-Slice Rationality ought to endorse a particular view about the epistemology of self-locating belief, according to which ‘essentially indexical’ information is never evidentially relevant to non-indexical matters. I close by offering some independent motivations for endorsing Time-Slice Rationality in the context of the epistemology of self-locating belief.
    2 citations
  29. Accuracy Monism and Doxastic Dominance: Reply to Steinberger. Matt Hewson - 2020 - Analysis 80 (3):450-456.
    Given the standard dominance conditions used in accuracy theories for outright belief, epistemologists must invoke epistemic conservatism if they are to avoid licensing belief in both a proposition and its negation. Florian Steinberger (2019) charges the committed accuracy monist — the theorist who thinks that the only epistemic value is accuracy — with being unable to motivate this conservatism. I show that the accuracy monist can avoid Steinberger’s charge by moving to a subtly different set of dominance conditions. Having done (...)
  30. On the Accuracy of Group Credences. Richard Pettigrew - 2020 - Oxford Studies in Epistemology 6.
    To appear in Szabó Gendler, T. & J. Hawthorne (eds.), Oxford Studies in Epistemology, volume 6. We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And (...)
    8 citations
  31. What is conditionalization, and why should we do it? Richard Pettigrew - 2020 - Philosophical Studies 177 (11):3427-3463.
    Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend (...)
    6 citations
  32. An Accuracy Argument in Favor of Ranking Theory. Eric Raidl & Wolfgang Spohn - 2020 - Journal of Philosophical Logic 49 (2):283-313.
    Fitelson and McCarthy have proposed an accuracy measure for confidence orders which favors probability measures and Dempster-Shafer belief functions as accounts of degrees of belief and excludes ranking functions. Their accuracy measure only penalizes mistakes in confidence comparisons. We propose an alternative accuracy measure that also rewards correct confidence comparisons. Thus we conform to both of William James’ maxims: “Believe truth! Shun error!” We combine the two maxims, penalties and rewards, into one criterion that we call prioritized accuracy optimization. That (...)
  33. The Uniqueness of Local Proper Scoring Rules: The Logarithmic Family. Jingni Yang - 2020 - Theory and Decision 88 (2):315-322.
    Local proper scoring rules provide convenient tools for measuring subjective probabilities. Savage (1971, 783–801) has shown that the only local proper scoring rule for more than two exclusive events is the logarithmic family. We generalize Savage by relaxing the properness and the domain, and provide a simpler proof.
  34. A Theory of Epistemic Risk. Boris Babic - 2019 - Philosophy of Science 86 (3):522-550.
    I propose a general alethic theory of epistemic risk according to which the riskiness of an agent’s credence function encodes her relative sensitivity to different types of graded error. After motivating and mathematically developing this approach, I show that the epistemic risk function is a scaled reflection of expected inaccuracy. This duality between risk and information enables us to explore the relationship between attitudes to epistemic risk, the choice of scoring rules in epistemic utility theory, and the selection of priors (...)
    6 citations
  35. Accuracy and Ur-Prior Conditionalization. Nilanjan Das - 2019 - Review of Symbolic Logic 12 (1):62-96.
    Recently, several epistemologists have defended an attractive principle of epistemic rationality, which we shall call Ur-Prior Conditionalization. In this essay, I ask whether we can justify this principle by appealing to the epistemic goal of accuracy. I argue that any such accuracy-based argument will be in tension with Evidence Externalism, i.e., the view that an agent's evidence may entail non-trivial propositions about the external world. This is because any such argument will crucially require the assumption that, independently of all empirical evidence, (...)
    4 citations
  36. Lockeans Maximize Expected Accuracy. Kevin Dorst - 2019 - Mind 128 (509):175-211.
    The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of (...)
    55 citations
  37. Accuracy, Conditionalization, and Probabilism. Don Fallis & Peter J. Lewis - 2019 - Synthese 198 (5):4017-4033.
    Accuracy-based arguments for conditionalization and probabilism appear to have a significant advantage over their Dutch Book rivals. They rely only on the plausible epistemic norm that one should try to decrease the inaccuracy of one’s beliefs. Furthermore, conditionalization and probabilism apparently follow from a wide range of measures of inaccuracy. However, we argue that there is an under-appreciated diachronic constraint on measures of inaccuracy which limits the measures from which one can prove conditionalization, and none of the remaining measures allow (...)
  38. Accuracy and the Imps. James M. Joyce & Brian Weatherson - 2019 - Logos and Episteme 10 (3):263-282.
    Recently several authors have argued that accuracy-first epistemology ends up licensing problematic epistemic bribes. They charge that it is better, given the accuracy-first approach, to deliberately form one false belief if this will lead to forming many other true beliefs. We argue that this is not a consequence of the accuracy-first view. If one forms one false belief and a number of other true beliefs, then one is committed to many other false propositions, e.g., the conjunction of that false belief (...)
    1 citation
  39. IP Scoring Rules: Foundations and Applications. Jason Konek - 2019 - Proceedings of Machine Learning Research 103:256-264.
    3 citations
  40. In Favor of Logarithmic Scoring. Randall G. McCutcheon - 2019 - Philosophy of Science 86 (2):286-303.
    Shuford, Albert and Massengill proved, a half century ago, that the logarithmic scoring rule is the only proper measure of inaccuracy determined by a differentiable function of the probability assigned to the actual cell of a scored partition. In spite of this, the log rule has gained less traction in applied disciplines and among formal epistemologists than one might expect. In this paper we show that the differentiability criterion in the Shuford et al. result is unnecessary and use the resulting simplified characterization (...)
    3 citations
  41. What Accuracy Could Not Be. Graham Oddie - 2019 - British Journal for the Philosophy of Science 70 (2):551-580.
    Two different programmes are in the business of explicating accuracy—the truthlikeness programme and the epistemic utility programme. Both assume that truth is the goal of inquiry, and that among inquiries that fall short of realizing the goal some get closer to it than others. Truthlikeness theorists have been searching for an account of the accuracy of propositions. Epistemic utility theorists have been searching for an account of the accuracy of credal states. Both assume we can make cognitive progress in an (...)
    18 citations
  42. On the Accuracy of Group Credences. Richard Pettigrew - 2019 - Oxford Studies in Epistemology 6.
    We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising has increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular (...)
    11 citations
  43. Repugnant Accuracy. Brian Talbot - 2019 - Noûs 53 (3):540-563.
    Accuracy‐first epistemology is an approach to formal epistemology which takes accuracy to be a measure of epistemic utility and attempts to vindicate norms of epistemic rationality by showing how conformity with them is beneficial. If accuracy‐first epistemology can actually vindicate any epistemic norms, it must adopt a plausible account of epistemic value. Any such account must avoid the epistemic version of Derek Parfit's “repugnant conclusion.” I argue that the only plausible way of doing so is to say that accurate credences (...)
    7 citations
  44. Lying, Accuracy and Credence. Matthew A. Benton - 2018 - Analysis 78 (2):195-198.
    Traditional definitions of lying require that a speaker believe that what she asserts is false. Sam Fox Krauss seeks to jettison the traditional belief requirement in favour of a necessary condition given in a credence-accuracy framework, on which the liar expects to impose the risk of increased inaccuracy on the hearer. He argues that this necessary condition importantly captures nearby cases as lies which the traditional view neglects. I argue, however, that Krauss's own account suffers from an identical drawback of (...)
    7 citations
  45. Ideal Counterpart Theorizing and the Accuracy Argument for Probabilism. Clinton Castro & Olav Vassend - 2018 - Analysis 78 (2):207-216.
    One of the main goals of Bayesian epistemology is to justify the rational norms credence functions ought to obey. Accuracy arguments attempt to justify these norms from the assumption that the source of value for credences relevant to their epistemic status is their accuracy. This assumption and some standard decision-theoretic principles are used to argue for norms like Probabilism, the thesis that an agent’s credence function is rational only if it obeys the probability axioms. We introduce an example that shows (...)
    1 citation
  46. Can All-Accuracy Accounts Justify Evidential Norms? Christopher J. G. Meacham - 2018 - In Kristoffer Ahlstrom-Vij & Jeff Dunn (eds.), Epistemic Consequentialism. Oxford: Oxford University Press.
    Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian (...)
    4 citations
  47. Disagreement, Credences, and Outright Belief. Michele Palmira - 2018 - Ratio 31 (2):179-196.
    This paper addresses a largely neglected question in ongoing debates over disagreement: what is the relation, if any, between disagreements involving credences and disagreements involving outright beliefs? The first part of the paper offers some desiderata for an adequate account of credal and full disagreement. The second part of the paper argues that both phenomena can be subsumed under a schematic definition which goes as follows: A and B disagree if and only if the accuracy conditions of A's doxastic attitude (...)
    6 citations
  48. Précis of Accuracy and the Laws of Credence. Richard Pettigrew - 2018 - Philosophy and Phenomenological Research 96 (3):749-754.
  49. The Population Ethics of Belief: In Search of an Epistemic Theory X. Richard Pettigrew - 2018 - Noûs 52 (2):336-372.
    Consider Phoebe and Daphne. Phoebe has credences in 1 million propositions. Daphne, on the other hand, has credences in all of these propositions, but she's also got credences in 999 million other propositions. Phoebe's credences are all very accurate. Each of Daphne's credences, in contrast, is not very accurate at all; each is a little more accurate than it is inaccurate, but not by much. Whose doxastic state is better, Phoebe's or Daphne's? It is clear that this question is analogous (...)
    10 citations
  50. Information and Inaccuracy. William Roche & Tomoji Shogenji - 2018 - British Journal for the Philosophy of Science 69 (2):577-604.
    This article proposes a new interpretation of mutual information. We examine three extant interpretations of MI by reduction in doubt, by reduction in uncertainty, and by divergence. We argue that the first two are inconsistent with the epistemic value of information assumed in many applications of MI: the greater is the amount of information we acquire, the better is our epistemic position, other things being equal. The third interpretation is consistent with EVI, but it is faced with the problem of (...)
    6 citations