About this topic
Summary
Scoring rules play an important role in statistics, decision theory, and formal epistemology. They underpin techniques for eliciting a person's credences in statistics. And they have been exploited in epistemology to give arguments for various norms that are thought to govern credences, such as Probabilism, Conditionalization, the Reflection Principle, the Principal Principle, and Principles of Indifference, as well as accounts of peer disagreement and the Sleeping Beauty puzzle.

A scoring rule is a function that assigns a penalty to an agent's credence (or partial belief or degree of belief) in a given proposition. The penalty depends on whether the proposition is true or false. Typically, if the proposition is true then the penalty increases as the credence decreases (the less confident you are in a true proposition, the more you will be penalised); and if the proposition is false then the penalty increases as the credence increases (the more confident you are in a false proposition, the more you will be penalised).

In statistics and the theory of eliciting credences, we usually interpret the penalty assigned to a credence by a scoring rule as the monetary loss incurred by an agent with that credence. In epistemology, we sometimes interpret it as the so-called 'gradational inaccuracy' of the agent's credence: just as a full belief in a true proposition is more accurate than a full disbelief in that proposition, a higher credence in a true proposition is more accurate than a lower one; and just as a full disbelief in a false proposition is more accurate than a full belief, a lower credence in a false proposition is more accurate than a higher one.
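For concreteness, here is a minimal sketch (not drawn from any of the works indexed below) of the best-known example, the quadratic or Brier score, which penalises a credence x with (1 - x)^2 if the proposition is true and with x^2 if it is false:

```python
def brier_score(credence: float, truth: bool) -> float:
    """Quadratic (Brier) penalty for a single credence.

    A true proposition penalises low credence: (1 - credence)^2.
    A false proposition penalises high credence: credence^2.
    """
    if not 0.0 <= credence <= 1.0:
        raise ValueError("a credence must lie in [0, 1]")
    return (1.0 - credence) ** 2 if truth else credence ** 2

# The less confident you are in a true proposition, the larger the penalty:
assert brier_score(0.9, True) < brier_score(0.3, True)
# The more confident you are in a false proposition, the larger the penalty:
assert brier_score(0.9, False) > brier_score(0.3, False)
```

The Brier score is strictly proper: a credence expects itself, and no other, to minimise penalty. Other scoring rules with this property behave analogously.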
Sometimes, in epistemology, we interpret the penalty given by a scoring rule more generally: we take it to be the loss in so-called 'cognitive utility' incurred by an agent with that credence, where this is intended to incorporate a measure of the accuracy of the credence, but also measures of all other doxastic virtues it might have as well.

Scoring rules assign losses or penalties to individual credences. But we can use them to define loss or penalty functions for credence functions as well. The loss assigned to a credence function is just the sum of the losses assigned to the individual credences it gives. Using this, we can argue for such doxastic norms as Probabilism, Conditionalization, the Principal Principle, the Principle of Indifference, the Reflection Principle, norms for resolving peer disagreement, norms for responding to higher-order evidence, and so on.

For instance, for a large collection of scoring rules, the following holds: if a credence function violates Probabilism, then there is a credence function that satisfies Probabilism that incurs a lower penalty regardless of how the world turns out. That is, any non-probabilistic credence function is dominated by a probabilistic one. Also, for the same large collection of scoring rules, the following holds: if one's current credence function is a probability function, one will expect updating by conditionalization to incur a lower penalty than updating by any other rule. There is a substantial and growing body of work on how scoring rules can be used to establish other doxastic norms.
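The dominance result can be checked by hand in a toy case. The sketch below (assuming the Brier score; the names and numbers are illustrative, not from any of the works indexed here) scores credences in a proposition p and its negation, and shows that an incoherent pair of credences, one summing to more than 1, is beaten in every possible world by its probabilistic repair:

```python
def total_brier(credences, world):
    """Total Brier penalty: the sum of quadratic penalties over the
    two propositions p and not-p.

    credences: (credence in p, credence in not-p)
    world:     (truth value of p, truth value of not-p)
    """
    return sum((1.0 - c) ** 2 if true else c ** 2
               for c, true in zip(credences, world))

# An incoherent credence function: credences in p and not-p sum to 1.4.
incoherent = (0.8, 0.6)
# Its probabilistic repair: subtract half the excess from each credence.
coherent = (0.6, 0.4)

# The repair incurs a strictly lower penalty however the world turns out.
for world in [(True, False), (False, True)]:
    assert total_brier(coherent, world) < total_brier(incoherent, world)
```

Here the repair is the point on the probabilistic line closest to the incoherent pair in squared-error terms; with the Brier score, such a projection always dominates.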
Key works
Leonard Savage (Savage 1971) and Bruno de Finetti (de Finetti 1970) introduced the notion of a scoring rule independently. The notion was introduced into epistemology by Jim Joyce (Joyce 1998) and Graham Oddie (Oddie 1997). Joyce used it to justify Probabilism; Oddie used it to justify Conditionalization. Since then, authors have improved and generalized both arguments. Improved arguments for Probabilism can be found in (Joyce 2009), (Leitgeb & Pettigrew 2010), (Predd et al. 2009), (Schervish et al. manuscript), and (Pettigrew 2016). Improved arguments for Conditionalization can be found in (Greaves & Wallace 2005), (Easwaran 2013), (Schoenfield 2017), and (Pettigrew 2016). Furthermore, other norms have been considered, such as the Principal Principle (Pettigrew 2012), (Pettigrew 2013), the Principle of Indifference (Pettigrew 2016), the Reflection Principle (Huttegger 2013), norms for resolving peer disagreement (Moss 2011), (Levinstein 2015), (Levinstein 2017), and norms for responding to higher-order evidence (Schoenfield 2018).
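The expected-accuracy argument for Conditionalization in the style of Greaves and Wallace can likewise be illustrated in miniature. The sketch below (again assuming the Brier score; the prior and evidence partition are made up for illustration) computes, from the agent's own prior, the expected inaccuracy of updating by conditionalization versus simply retaining the prior, and confirms that the agent expects conditionalization to do better:

```python
def brier(credences, actual):
    """Brier inaccuracy of a credence function over world-propositions,
    evaluated at the world with index `actual`."""
    return sum((c - (1.0 if i == actual else 0.0)) ** 2
               for i, c in enumerate(credences))

def expected_inaccuracy(prior, rule):
    """The prior expectation of the inaccuracy of an update rule.

    `rule` maps a world index to the credence function the agent would
    adopt were that world actual (via the evidence cell it falls in)."""
    return sum(p * brier(rule(i), i) for i, p in enumerate(prior))

prior = [0.5, 0.3, 0.2]   # credences in three mutually exclusive worlds
cells = [{0, 1}, {2}]     # evidence partition: E = {w0, w1}, not-E = {w2}

def conditionalize(i):
    """Conditionalize the prior on the evidence cell containing world i."""
    cell = next(c for c in cells if i in c)
    total = sum(prior[j] for j in cell)
    return [prior[j] / total if j in cell else 0.0 for j in range(len(prior))]

def retain_prior(i):
    """A rival rule: ignore the evidence and keep the prior."""
    return prior

assert expected_inaccuracy(prior, conditionalize) < \
       expected_inaccuracy(prior, retain_prior)
```

Retaining the prior stands in here for an arbitrary rival rule; the Greaves and Wallace result says that, by the agent's own expectation, conditionalization beats every alternative.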
Introductions
Pettigrew, Richard (2011) 'Epistemic Utility Arguments for Probabilism', Stanford Encyclopedia of Philosophy (Pettigrew 2011)
Related categories

145 found
1 — 50 / 145
  1. added 2018-12-06
    Accuracy and Credal Imprecision.Dominik Berger & Nilanjan Das - forthcoming - Noûs.
    Many have claimed that epistemic rationality sometimes requires us to have imprecise credal states (i.e. credal states representable only by sets of credence functions) rather than precise ones (i.e. credal states representable by single credence functions). Some writers have recently argued that this claim conflicts with accuracy-centered epistemology, i.e., the project of justifying epistemic norms by appealing solely to the overall accuracy of the doxastic states they recommend. But these arguments are far from decisive. In this essay, we prove some (...)
  2. added 2018-11-22
    Disagreement, Credences, and Outright Belief.Michele Palmira - 2018 - Ratio 31 (2):179-196.
    This paper addresses a largely neglected question in ongoing debates over disagreement: what is the relation, if any, between disagreements involving credences and disagreements involving outright beliefs? The first part of the paper offers some desiderata for an adequate account of credal and full disagreement. The second part of the paper argues that both phenomena can be subsumed under a schematic definition which goes as follows: A and B disagree if and only if the accuracy conditions of A's doxastic attitude (...)
  3. added 2018-11-12
    A Non-Pragmatic Dominance Argument for Conditionalization.Robert Williams - manuscript
    In this paper, I provide an accuracy-based argument for conditionalization (via reflection) that does not rely on norms of maximizing expected accuracy. -/- (This is a draft of a paper that I wrote in 2013. It stalled for no very good reason. I still believe the content is right).
  4. added 2018-09-14
    An Accuracy Based Approach to Higher Order Evidence.Miriam Schoenfield - 2018 - Philosophy and Phenomenological Research 96 (3):690-715.
    The aim of this paper is to apply the accuracy based approach to epistemology to the case of higher order evidence: evidence that bears on the rationality of one's beliefs. I proceed in two stages. First, I show that the accuracy based framework that is standardly used to motivate rational requirements supports steadfastness—a position according to which higher order evidence should have no impact on one's doxastic attitudes towards first order propositions. The argument for this will require a generalization of (...)
  5. added 2018-09-10
    A Theory of Epistemic Risk.Boris Babic - forthcoming - Philosophy of Science.
    I propose a general alethic theory of epistemic risk according to which the riskiness of an agent's credence function encodes their relative sensitivity to different types of graded error. After motivating and mathematically developing this approach, I show that the epistemic risk function is a scaled reflection of expected inaccuracy. This duality between risk and information enables us to explore the relationship between attitudes to epistemic risk, the choice of scoring rule in epistemic utility theory, and the selection of priors (...)
  6. added 2018-09-06
    Précis of Accuracy and the Laws of Credence.Richard Pettigrew - 2018 - Philosophy and Phenomenological Research 96 (3):749-754.
  7. added 2018-07-30
    Accuracy, Conditionalization, and Probabilism.Peter J. Lewis & Don Fallis - manuscript
    Accuracy-based arguments for conditionalization and probabilism appear to have a significant advantage over their Dutch Book rivals. They rely only on the plausible epistemic norm that one should try to decrease the inaccuracy of one's beliefs. Furthermore, it seems that conditionalization and probabilism follow from a wide range of measures of inaccuracy. However, we argue that among the measures in the literature, there are some from which one can prove conditionalization, others from which one can prove probabilism, and none from (...)
  8. added 2018-06-25
    In Favor of Logarithmic Scoring.Randall G. McCutcheon - forthcoming - Philosophy of Science.
    Shuford, Albert and Massengill proved, a half century ago, that the logarithmic scoring rule is the only proper measure of inaccuracy determined by a differentiable function of the probability assigned to the actual cell of a scored partition. In spite of this, the log rule has gained less traction in applied disciplines and among formal epistemologists than one might expect. In this paper we show that the differentiability criterion in the Shuford et al. result is unnecessary and use the resulting simplified characterization (...)
  9. added 2018-06-25
    An Accuracy‐Dominance Argument for Conditionalization.R. A. Briggs & Richard Pettigrew - forthcoming - Noûs.
  10. added 2018-06-25
    Information and Inaccuracy.William Roche & Tomoji Shogenji - 2018 - British Journal for the Philosophy of Science 69 (2):577-604.
    This article proposes a new interpretation of mutual information (MI). We examine three extant interpretations of MI by reduction in doubt, by reduction in uncertainty, and by divergence. We argue that the first two are inconsistent with the epistemic value of information (EVI) assumed in many applications of MI: the greater is the amount of information we acquire, the better is our epistemic position, other things being equal. The third interpretation is consistent with EVI, but it is faced with the problem of (...)
  11. added 2018-06-25
    Information and Inaccuracy.William Roche & Tomoji Shogenji - 2016 - British Journal for the Philosophy of Science:axw025.
    This paper proposes a new interpretation of mutual information (MI). We examine three extant interpretations of MI by reduction in doubt, by reduction in uncertainty, and by divergence. We argue that the first two are inconsistent with the epistemic value of information (EVI) assumed in many applications of MI: the greater is the amount of information we acquire, the better is our epistemic position, other things being equal. The third interpretation is consistent with EVI, but it is faced with the (...)
  12. added 2018-05-29
    What is Justified Credence?Richard Pettigrew - manuscript
    In this paper, we seek a reliabilist account of justified credence. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Existing accounts of reliabilism about justified credence come in the same two varieties: Jeff Dunn (2015) proposes a version of process reliabilism, while Weng Hong Tang (2016) offers a version of indicator reliabilism. As we will see, both face the same objection. If they are right about what justification is, it (...)
  13. added 2018-02-20
    Accuracy and Ur-Prior Conditionalization.Nilanjan Das - forthcoming - Review of Symbolic Logic:1-35.
    Recently, several epistemologists have defended an attractive principle of epistemic rationality, which we shall call Ur-Prior Conditionalization. In this essay, I ask whether we can justify this principle by appealing to the epistemic goal of accuracy. I argue that any such accuracy-based argument will be in tension with Evidence Externalism, i.e., the view that an agent's evidence may entail non-trivial propositions about the external world. This is because any such argument will crucially require the assumption that, independently of all empirical evidence, (...)
  14. added 2018-02-17
    Arguments for–or Against–Probabilism?A. Hajek - 2008 - British Journal for the Philosophy of Science 59 (4):793-819.
    Four important arguments for probabilism--the Dutch Book, representation theorem, calibration, and gradational accuracy arguments--have a strikingly similar structure. Each begins with a mathematical theorem, a conditional with an existentially quantified consequent, of the general form: if your credences are not probabilities, then there is a way in which your rationality is impugned. Each argument concludes that rationality requires your credences to be probabilities. I contend that each argument is invalid as formulated. In each case there is a mirror-image theorem and (...)
  15. added 2018-01-11
    The Accuracy and Rationality of Imprecise Credences.Miriam Schoenfield - 2017 - Noûs 51 (4):667-685.
    It has been claimed that, in response to certain kinds of evidence, agents ought to adopt imprecise credences: doxastic states that are represented by sets of credence functions rather than single ones. In this paper I argue that, given some plausible constraints on accuracy measures, accuracy-centered epistemologists must reject the requirement to adopt imprecise credences. I then show that even the claim that imprecise credences are permitted is problematic for accuracy-centered epistemology. It follows that if imprecise credal states are permitted (...)
  16. added 2017-11-06
    Accuracy Uncomposed: Against Calibrationism.Ben Levinstein - 2017 - Episteme 14 (1):59-69.
    Pettigrew offers new axiomatic constraints on legitimate measures of inaccuracy. His axiom called ‘Decomposition’ stipulates that legitimate measures of inaccuracy evaluate a credence function in part based on its level of calibration at a world. I argue that if calibration is valuable, as Pettigrew claims, then this fact is an explanandum for accuracy-first epistemologists, not an explanans, for three reasons. First, the intuitive case for the importance of calibration isn’t as strong as Pettigrew believes. Second, calibration is a perniciously global (...)
  17. added 2017-10-16
    A Pragmatist’s Guide to Epistemic Utility.Benjamin Anders Levinstein - 2017 - Philosophy of Science 84 (4):613-638.
    We use a theorem from M. J. Schervish to explore the relationship between accuracy and practical success. If an agent is pragmatically rational, she will quantify the expected loss of her credence with a strictly proper scoring rule. Which scoring rule is right for her will depend on the sorts of decisions she expects to face. We relate this pragmatic conception of inaccuracy to the purely epistemic one popular among epistemic utility theorists.
  18. added 2017-09-19
    Permissive Rationality and Sensitivity.Benjamin Anders Levinstein - 2017 - Philosophy and Phenomenological Research 94 (2):342-370.
    Permissivism about rationality is the view that there is sometimes more than one rational response to a given body of evidence. In this paper I discuss the relationship between permissivism, deference to rationality, and peer disagreement. I begin by arguing that—contrary to popular opinion—permissivism supports at least a moderate version of conciliationism. I then formulate a worry for permissivism. I show that, given a plausible principle of rational deference, permissive rationality seems to become unstable and to collapse into unique rationality. (...)
  19. added 2017-08-30
    Lockeans Maximize Expected Accuracy.Kevin Dorst - forthcoming - Mind:fzx028.
    The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of (...)
  20. added 2017-05-30
    Direct Inference From Imprecise Frequencies.Paul D. Thorn - 2017 - In Michela Massimi, Jan-Willem Romeijn & Gerhard Schurz (eds.), EPSA15 Selected Papers - The 5th conference of the European Philosophy of Science. Springer. pp. 347-358.
    It is well known that there are, at least, two sorts of cases where one should not prefer a direct inference based on a narrower reference class, in particular: cases where the narrower reference class is gerrymandered, and cases where one lacks an evidential basis for forming a precise-valued frequency judgment for the narrower reference class. I here propose (1) that the preceding exceptions exhaust the circumstances where one should not prefer direct inference based on a narrower reference class, and (...)
  21. added 2017-03-06
    Reward Versus Risk in Uncertain Inference: Theorems and Simulations.Gerhard Schurz & Paul D. Thorn - 2012 - Review of Symbolic Logic 5 (4):574-612.
    Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions that express high conditional probabilities. In this paper we investigate four prominent LP systems, the systems O, P, Z, and QC. These systems differ in the number of inferences they license. LP systems that license more inferences enjoy the possible reward of deriving more true and informative conclusions, but with this possible reward comes the risk of drawing more false or uninformative conclusions. In the first part of the paper, we (...)
  22. added 2017-02-15
    Calibration and Probabilism.Michael Caie - 2014 - Ergo: An Open Access Journal of Philosophy 1.
  23. added 2017-02-14
    Orderings Based on the Banks Set: Some New Scoring Methods for Multicriteria Decision Making.Scott Moser - 2015 - Complexity 20 (5):63-76.
  24. added 2017-02-13
    Test-Taking Behavior Under Formula and Number-Right Scoring Conditions.Barbara S. Plake, Steven L. Wise & Anne L. Harvey - 1988 - Bulletin of the Psychonomic Society 26 (4):316-318.
  25. added 2017-02-11
    Accuracy of Report and Central Readiness.Frank Leavitt - 1969 - Journal of Experimental Psychology 81 (3):542.
  26. added 2017-02-11
    The Effect of Presenting Various Numbers of Discrete Steps on Scale Reading Accuracy.Harold W. Hake & W. R. Garner - 1951 - Journal of Experimental Psychology 42 (5):358.
  27. added 2017-02-07
    Parameterizing and Scoring Mixed Ancestral Graphs.Thomas Richardson & Peter Spirtes - unknown
  28. added 2017-01-29
    6. Accuracy: A Sense of Reality.Bernard Williams - 2010 - In Truth and Truthfulness: An Essay in Genealogy. Princeton University Press. pp. 123-148.
  29. added 2017-01-26
    A Nonpragmatic Vindication of Probabilism.James M. Joyce - 1998 - Philosophy of Science 65 (4):575-603.
  30. added 2017-01-25
    Accuracy in Annotating.Francis Bywater - 1988 - The Chesterton Review 14 (4):645-645.
  31. added 2017-01-22
    The Role of 'Dutch Books' and of 'Proper Scoring Rules'.Bruno De Finetti - 1981 - British Journal for the Philosophy of Science 32 (1):55 - 56.
  32. added 2017-01-21
    Scoring and Keying Multiple Choice Tests: A Case Study in Irrationality. [REVIEW]Maya Bar-Hillel, David Budescu & Yigal Attali - 2005 - Mind and Society 4 (1):3-12.
    We offer a case-study in irrationality, showing that even in a high stakes context, intelligent and well trained professionals may adopt dominated practices. In multiple-choice tests one cannot distinguish lucky guesses from answers based on knowledge. Test-makers have dealt with this problem by lowering the incentive to guess, through penalizing errors (called formula scoring), and by eliminating various cues for outperforming random guessing (e.g., a preponderance of correct answers in middle positions), through key balancing. These policies, though widespread and intuitively (...)
  33. added 2017-01-20
    Standard Issue Scoring Manual.Anne Colby - 1987 - In The Measurement of Moral Judgment. Cambridge University Press.
  34. added 2017-01-19
    Epistemic Utility and Theory Acceptance: Comments on Hempel.Robert Feleppa - 1981 - Synthese 46 (3):413 - 420.
  35. added 2017-01-16
    Stability of Cooperation Under Image Scoring in Group Interactions.Heinrich H. Nax, Matjaž Perc, Attila Szolnoki & Dirk Helbing - unknown
    Image scoring sustains cooperation in the repeated two-player prisoner’s dilemma through indirect reciprocity, even though defection is the uniquely dominant selfish behaviour in the one-shot game. Many real-world dilemma situations, however, firstly, take place in groups and, secondly, lack the necessary transparency to inform subjects reliably of others’ individual past actions. Instead, there is revelation of information regarding groups, which allows for ‘group scoring’ but not for image scoring. Here, we study how sensitive the positive results related to image scoring (...)
  36. added 2017-01-16
    Note on Bem and Funder's Scheme for Scoring Q Sorts.Bert F. Green - 1980 - Psychological Review 87 (2):212-214.
  37. added 2017-01-16
    The Arrangement of Successive Convergents in Order of Accuracy.Alexander Brown - 1915 - Transactions of the Royal Society of South Africa 5 (1):653-657.
  38. added 2017-01-15
    The Brier Rule Is Not a Good Measure of Epistemic Utility.Don Fallis & Peter J. Lewis - 2016 - Australasian Journal of Philosophy 94 (3):576-590.
    Measures of epistemic utility are used by formal epistemologists to make determinations of epistemic betterness among cognitive states. The Brier rule is the most popular choice among formal epistemologists for such a measure. In this paper, however, we show that the Brier rule is sometimes seriously wrong about whether one cognitive state is epistemically better than another. In particular, there are cases where an agent gets evidence that definitively eliminates a false hypothesis, but where the Brier rule says that things (...)
  39. added 2017-01-14
    Scoring Rules and Social Choice Properties: Some Characterizations.Bonifacio Llamazares & Teresa Peña - 2015 - Theory and Decision 78 (3):429-450.
  40. added 2017-01-14
    Strictly Proper Scoring Rules.Juergen Landes - unknown
    Epistemic scoring rules are the en vogue tool for justifications of the probability norm and further norms of rational belief formation. They are different in kind and application from statistical scoring rules from which they arose. In the first part of the paper I argue that statistical scoring rules, properly understood, are in principle better suited to justify the probability norm than their epistemic brethren. Furthermore, I give a justification of the probability norm applying statistical scoring rules. In the second (...)
  41. added 2017-01-14
    Dominating Countably Many Forecasts.Mark J. Schervish, Teddy Seidenfeld & Joseph B. Kadane - unknown
    We investigate differences between a simple Dominance Principle applied to sums of fair prices for variables and dominance applied to sums of forecasts for variables scored by proper scoring rules. In particular, we consider differences when fair prices and forecasts correspond to finitely additive expectations and dominance is applied with infinitely many prices and/or forecasts.
  42. added 2016-12-12
    What Probability Probably Isn't.C. Howson - 2015 - Analysis 75 (1):53-59.
    Joyce and others have claimed that degrees of belief are estimates of truth-values and that the probability axioms are conditions of admissibility for these estimates with respect to a scoring rule penalizing inaccuracy. In this article, I argue that the claim that the rules of probability are truth-directed in this way depends on an assumption that is both implausible and lacks any supporting evidence, strongly suggesting that the probability axioms have nothing intrinsically to do with truth-directedness.
  43. added 2016-12-08
    Attitudes Toward Epistemic Risk and the Value of Experiments.Don Fallis - 2007 - Studia Logica 86 (2):215-246.
    Several different Bayesian models of epistemic utilities (see, e.g., [37], [24], [40], [46]) have been used to explain why it is rational for scientists to perform experiments. In this paper, I argue that a model, suggested independently by Patrick Maher [40] and Graham Oddie [46], that assigns epistemic utility to degrees of belief in hypotheses provides the most comprehensive explanation. This is because this proper scoring rule (PSR) model captures a wider range of scientifically acceptable attitudes toward epistemic risk than the (...)
  44. added 2016-12-08
    Goldman on Probabilistic Inference.Don Fallis - 2002 - Philosophical Studies 109 (3):223 - 240.
    In his recent book, Knowledge in a Social World, Alvin Goldman claims to have established that if a reasoner starts with accurate estimates of the reliability of new evidence and conditionalizes on this evidence, then this reasoner is objectively likely to end up closer to the truth. In this paper, I argue that Goldman's result is not nearly as philosophically significant as he would have us believe. First, accurately estimating the reliability of evidence – in the sense that Goldman requires (...)
  45. added 2016-11-29
    Scoring Rules, Condorcet Efficiency and Social Homogeneity.Dominique Lepelley, Patrick Pierron & Fabrice Valognes - 2000 - Theory and Decision 49 (2):175-196.
    In a three-candidate election, a scoring rule s (s in [0,1]) assigns 1, s, and 0 points (respectively) to each first, second and third place in the individual preference rankings. The Condorcet efficiency of a scoring rule is defined as the conditional probability that this rule selects the winner in accordance with Condorcet criteria (three Condorcet criteria are considered in the paper). We are interested in the following question: What rule s has the greatest Condorcet efficiency? After recalling the known (...)
  46. added 2016-11-26
    Epistemic Conservativity and Imprecise Credence.Jason Konek - forthcoming - Philosophy and Phenomenological Research.
    Unspecific evidence calls for imprecise credence. My aim is to vindicate this thought. First, I will pin down what it is that makes one's imprecise credences more or less epistemically valuable. Then I will use this account of epistemic value to delineate a class of reasonable epistemic scoring rules for imprecise credences. Finally, I will show that if we plump for one of these scoring rules as our measure of epistemic value or utility, then a popular family of decision rules (...)
  47. added 2016-11-26
    The Population Ethics of Belief: In Search of an Epistemic Theory X.Richard Pettigrew - 2018 - Noûs 52 (2):336-372.
    Consider Phoebe and Daphne. Phoebe has credences in 1 million propositions. Daphne, on the other hand, has credences in all of these propositions, but she's also got credences in 999 million other propositions. Phoebe's credences are all very accurate. Each of Daphne's credences, in contrast, is not very accurate at all; each is a little more accurate than it is inaccurate, but not by much. Whose doxastic state is better, Phoebe's or Daphne's? It is clear that this question is analogous (...)
  48. added 2016-11-26
    Conditionalization Does Not Maximize Expected Accuracy.Miriam Schoenfield - 2017 - Mind 126 (504):1155-1187.
    Greaves and Wallace argue that conditionalization maximizes expected accuracy. In this paper I show that their result only applies to a restricted range of cases. I then show that the update procedure that maximizes expected accuracy in general is one in which, upon learning P, we conditionalize, not on P, but on the proposition that we learned P. After proving this result, I provide further generalizations and show that much of the accuracy-first epistemology program is committed to KK-like iteration principles (...)
  49. added 2016-11-26
    What Accuracy Could Not Be.Graham Oddie - 2017 - British Journal for the Philosophy of Science:axx032.
    Two different programs are in the business of explicating accuracy—the truthlikeness program and the epistemic utility program. Both assume that truth is the goal of inquiry, and that among inquiries that fall short of realizing the goal some get closer to it than others. TL theorists have been searching for an account of the accuracy of propositions. EU theorists have been searching for an account of the accuracy of credal states. Both assume we can make cognitive progress in an inquiry (...)
  50. added 2016-11-26
    Jamesian Epistemology Formalised: An Explication of ‘the Will to Believe’.Richard Pettigrew - 2016 - Episteme 13 (3):253-268.
    Famously, William James held that there are two commandments that govern our epistemic life: Believe truth! Shun error! In this paper, I give a formal account of James' claim using the tools of epistemic utility theory. I begin by giving the account for categorical doxastic states, and then extend it to graded doxastic states, that is, credences. The latter part of the paper thus answers a question left open in Pettigrew.