
Imprecise evidence without imprecise credences


Abstract

Does rationality require imprecise credences? Many hold that it does: imprecise evidence requires correspondingly imprecise credences. I argue that this is false. The imprecise view faces the same arbitrariness worries that were meant to motivate it in the first place. It faces these worries because it incorporates a certain idealization. But doing away with this idealization effectively collapses the imprecise view into a particular kind of precise view. On this alternative, our attitudes should reflect a kind of normative uncertainty: uncertainty about what to believe. This view refutes the claim that precise credences are inappropriately informative or committal. Some argue that indeterminate evidential support requires imprecise credences; but I argue that indeterminate evidential support instead places indeterminate requirements on credences, and is compatible with the claim that rational credences may always be precise.


Notes

  1. See e.g. Jeffrey (1983) and van Fraassen (1990).

  2. See e.g. Levi (1974, 1980, 1985), Walley (1991), Joyce (2005, 2010), Weatherson (2008), Sturgeon (2008), Hájek and Smithson (2012), and Moss (2014).

  3. This might be interpreted as a question about epistemic norms for cognitively idealized agents. I’d rather interpret it as a question about what evidentialist epistemology requires.

  4. Some have associated certain patterns of preferences or behaviors with imprecise credences: for example, having distinct buying and selling prices for gambles (Walley 1991), or being willing to forgo sure gains in particular diachronic betting contexts (Elga 2010). But treating these as symptomatic of imprecise credences, rather than precise credences, depends on specific assumptions about how precise credences must be manifested in behavior: for example, that agents with precise credences are expected utility maximizers.
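    In the standard lower/upper prevision gloss (a schematic statement of this behavioral picture, in my notation rather than Walley’s): where \(\Gamma\) is the agent’s credal set, her highest acceptable buying price and lowest acceptable selling price for a bet paying 1 if \(A\) are

    $$\underline{P}(A) = \min_{c \in \Gamma} c(A), \qquad \overline{P}(A) = \max_{c \in \Gamma} c(A),$$

    and these come apart exactly when \(\Gamma\) assigns \(A\) a non-degenerate range of values.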

  5. E.g. Fine (1973).

  6. E.g. Kyburg (1983).

  7. Beyond this sufficient condition, there’s some controversy among proponents of imprecise credences about what practical rationality requires of agents with imprecise credences. See e.g. Seidenfeld (2004), Weatherson (2008), Joyce (2010), Williams (2014), and Moss (2014).

  8. “Unspecific bodies of evidence” may include empty bodies of evidence.

  9. E.g. in Elga (2010).

  10. White (2009); see Pedersen and Wheeler (2014) for discussion.

  11. Imprecise credences are often called “mushy” credences.

  12. In particular, Bertrand’s (1889) paradoxes; see e.g. the hollow cube example in Seidenfeld (1978) and van Fraassen (1989). For a defense of the principle of indifference, see White (2009).

  13. There are other kinds of motivation for rationally permissible imprecise credences. One is the view that credences intuitively needn’t obey Trichotomy, the claim that for all propositions A and B, c(A) is either greater than, less than, or equal to c(B). (See e.g. Schoenfield 2012.) Moss (2014) argues that imprecise credences provide a good way to model rational changes of heart (in a distinctly epistemic sense). Hájek and Smithson (2012) suggest imprecise credences as a way of representing rational attitudes towards events with undefined expected value. Finally, there’s the empirical possibility of indeterminate chances, also discussed in Hájek and Smithson (2012): if there are set-valued chances, the Principal Principle seems to demand set-valued credences. Only the last of these suggests that imprecise credences are rationally required; I’ll return to it in Sect. 4.

  14. See e.g. Joyce (2010): “Since the data we receive is often incomplete, imprecise or equivocal, the epistemically right response is often to have opinions that are similarly incomplete, imprecise or equivocal.”

  15. More cautiously, we might distinguish between first- and higher-order objective chances. Suppose a coin has been chosen at random from an urn containing 50 coins biased 3/4 toward heads and 50 coins biased 3/4 toward tails. The first-order objective chance that the chosen coin will land heads on the next toss is either .75 or .25, but the second-order objective chance of heads is .5. Mushers in the first category might allow for precise credences where only higher-order objective chances are known. This seems to be the position of Joyce (2010).
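    To make the arithmetic explicit (a worked version of the example, not in the original): letting \(Ch_1\) be first-order and \(Ch_2\) second-order chance, each bias hypothesis gets probability 1/2, so

    $$Ch_2(\text{heads}) = \tfrac{1}{2} \cdot \tfrac{3}{4} + \tfrac{1}{2} \cdot \tfrac{1}{4} = \tfrac{1}{2}.$$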

  16. Ellsberg (1961).

  17. Schoenfield (2012) and Konek (forthcoming) seem to fall into this category, given their choice of motivating examples.

  18. Walley more or less concedes this point (104–105). He distinguishes “incomplete” versus “exhaustive” interpretations of imprecise credences, similar to the “imprecise” versus “indeterminate” interpretations discussed in Levi (1985). On the incomplete interpretation, which he generally uses, the degree of imprecision can be partly determined by the incomplete elicitation of an agent’s belief state. On the exhaustive interpretation, by contrast, imprecision is determined solely by indeterminacy in the agent’s belief state. The latter interpretation, Walley acknowledges, requires “the same kind of precise discriminations as the Bayesian theory” (105). The musher position, as I’ve defined it, is concerned with the exhaustive interpretation: there are no epistemic obligations to be such that someone else has incomplete information about one’s belief state. Walley offers no theory of (complete) imprecise credences that is not susceptible to arbitrariness worries.

  19. Of course, the distribution of weights need not be symmetric or smooth.

  20. More accurately, it should represent the non-vague grounding base for the vague version of what set membership represents.

  21. \(\Gamma\)-Maximin—the rule according to which one should choose the option that has the greatest minimum expected value—is not susceptible to Elga’s objection. But that decision rule is unattractive for other reasons, including the fact that it sometimes requires turning down cost-free information. (See Seidenfeld 2004; thanks to Seidenfeld for discussion.) Still other rules, such as Weatherson’s (2008) rule “Caprice” and Williams’s (2014) rule “Randomize,” are committed to the controversial claim that what’s rational for an agent to do depends not just on her credences and values, but also her past actions.
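    Schematically (my formulation of the rule, not Seidenfeld’s notation): where \(\Gamma\) is the agent’s set of probability functions and \(E_c[u(a)]\) is the expected utility of option \(a\) relative to \(c \in \Gamma\), \(\Gamma\)-Maximin selects

    $$a^* = \arg\max_{a} \, \min_{c \in \Gamma} E_c[u(a)].$$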

  22. For this to work out as an expectation, we’ll need to normalize the weighting such that the total weights sum to 1. Assuming the weighted averages are probabilistic—a plausible constraint on the weighting—the resulting recommended actions will be rational (or anyway not Dutch-bookable).
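    Spelled out (a sketch in my notation, not the original’s): given candidate credence functions \(c_1, \ldots, c_n\) and weights \(w_i \ge 0\) with \(\sum_i w_i = 1\), the weighted average

    $$c(A) = \sum_{i=1}^{n} w_i \, c_i(A)$$

    is itself a probability function whenever each \(c_i\) is: it is nonnegative, assigns 1 to the necessary proposition (since \(\sum_i w_i \cdot 1 = 1\)), and inherits additivity from the \(c_i\). Acting on \(c\) is then ordinary expected utility maximization, hence not Dutch-bookable.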

  23. The idea of reassessing imprecise evidence with higher-order probabilities is addressed in Savage (1972), 81 and Walley (1991), 258–261.

  24. In economics, it’s common to distinguish “risk” and “uncertainty,” in the sense introduced in Knight (1921). Knightian “risk” involves known (or knowable) objective probabilities, while Knightian “uncertainty” involves unknown (or unknowable) objective probabilities. This is not the ordinary sense of “uncertainty”—i.e. the state of not being certain—that I use throughout this paper.

  25. A brief argument: introspection may be a form of perception (inner sense). Our perceptual faculties sometimes lead us astray. Whether introspection is a form of perception is arguably empirical. Rationality doesn’t require certainty about empirical psychology. So it’s possible that ideal rationality doesn’t require perfect introspection. And it’s possible that ideal rationality does require perfect introspection but doesn’t require ideal agents to know that they can introspect perfectly.

  26. Note: this is not an interpretation of epistemic probability that presupposes objective Bayesianism.

  27. Modesty is further discussed in Christensen (2010), Williamson (2007), Elga (2013), Lasonen-Aarnio (2014), and Sliwa and Horowitz (2015).

  28. There are also some cases, too complex to discuss here, where an ideally rational agent might simply not be in a position to know what her evidence is, and therefore be uncertain about epistemic probabilities. See Williamson (2007), Christensen (2010), and Elga (2013).

  29. A caveat: it’s compatible with the view I’m defending that there are no such bodies of evidence. It might be that every body of evidence not only supports precise credences, but supports certainty in the rationality of just those precise credences.

  30. It might even be that the Mystery Coin example is not really an example of a case where it’s not clear what credence to have. Credence .5 is the obvious candidate, even without the principle of indifference to bolster it. If you had to bet on heads in the Mystery Coin case, I suspect you’d bet as though you had credence .5.

  31. I’ll mention some possible constraints that uncertainty about rationality places on our other credences. What’s been said so far has been neutral about whether there are level-bridging norms: norms that link one’s beliefs about what’s rational with what is in fact rational. But a level-bridging response, involving something like Christensen’s (2010) principle of Rational Reflection, is a live possibility. (See Elga (2013) for a refinement of the principle.) According to this principle, our rational first-order credences should be a weighted average of the credences we think might be rational (on Elga’s version, conditional on their own rationality), weighted by our credence in each that it is rational; I give a schematic rendering at the end of this note. This principle determines what precise probabilities an agent should have when she is rationally uncertain about what’s rational.

    Note, however, that a principle like this won’t provide a recipe to check whether your credences are rational: whatever second-order credences you have, you may also be uncertain about whether your second-order credences are the rational ones to have, and so on. And so this kind of coherence constraint doesn’t provide any direct guidance about how to respond to imprecise evidence.
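    The promised schematic rendering (my reconstruction of the principle described above, not a quotation): where \(R_{c'}\) is the proposition that \(c'\) is the rational credence function to have, the weighted-average idea is

    $$c(A) = \sum_{c'} c(R_{c'}) \cdot c'(A \mid R_{c'}),$$

    with Elga’s refinement supplying the conditionalization on \(R_{c'}\); Christensen’s original formulation uses \(c'(A)\) in its place.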

  32. And except in the rare case where, e.g., you’re betting on propositions about what’s rational.

  33. Thanks to Julia Staffel for pressing me on this point.

  34. Again, see Elga (2010).

  35. Hájek and Smithson (2012) argue that interpretivism directly favors modeling even ideal agents with imprecise credences. After all, a finite agent’s dispositions won’t determine a unique probability function/utility function pair that can characterize her behavioral dispositions. And this just means that all of the probability/utility pairs that characterize the agent are equally accurate. So, doesn’t interpretivism entail at least the permissibility of imprecise credences? I find this argument compelling. But it doesn’t tell us anything about epistemic norms. It doesn’t suggest that evidence ever makes it rationally required to have imprecise credences. And so this argument doesn’t support the musher view under discussion.

  36. Konek is more circumspect about the kind of probability at issue. My objections to this view apply equally well if we substitute some other form of probability for chance.

  37. This is a relative of what White (2009) calls the Chance Grounding Thesis, which he attributes to a certain kind of musher: “Only on the basis of known chances can one legitimately have sharp credences. Otherwise one’s spread of credence should cover the range of chance hypotheses left open by your evidence” (174).

  38. See Skyrms (1977).

  39. See also White (2009), 162–164.

  40. Of course, the rational agent may ascribe herself precise or imprecise credences and so occupy the theorist’s position. But in doing so, the comparative informativeness of her ascription of precise credences is still informativeness about her own psychological states, not about how coin-tosses might turn out.

  41. Thanks to Chris Meacham for discussion and to Graham Oddie for this example.

  42. Cf. Yalcin (2012).

  43. Thanks to Wolfgang Schwarz, R.A. Briggs, and Alan Hájek for suggesting this formulation of the motivation.

  44. Roger White suggested an analogous example to me in personal communication.

  45. This point extends to another argument that has been given for imprecise credences. According to Hájek and Smithson (2012), there could be indeterminate chances, so that some event E’s chance might be indeterminate—not merely unknown—over some interval like [.2, .5]. This might be the case if the relative frequency of some event-type is at some times .27, at others .49, etc.—changing in unpredictable ways, forever, such that there is no precise limiting relative frequency. Hájek & Smithson argue that the possibility of indeterminate objective chances, combined with a generalization of Lewis’s Principal Principle, yields the result that it is rationally required to have imprecise credences. Hájek & Smithson suggest that the following generalization of the Principal Principle captures how we should respond to indeterminate chances:

    PP*:

    Rational credences are such that \(C(A \mid Ch(A) = [n,m]) = [n,m]\) (if there’s no inadmissible evidence).

    But there are other possible generalizations of the Principal Principle that are equally natural, e.g. PP\(\dagger\):

    PP\(\dagger\):

    Rational credences are such that \(C(A \mid Ch(A) = [n,m]) \in [n,m]\) (if there’s no inadmissible evidence).

    The original Principal Principle is a special case of both. (Note that PP\(\dagger\) only states a necessary condition on rational credences and not a sufficient one. So it isn’t necessarily a permissive principle.) Hájek & Smithson don’t address this alternative, but it seems to me perfectly adequate for the sharper to use for constraining credences in the face of indeterminate chances. So it’s not obvious that indeterminate chances require us to have indeterminate credences.
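    To make the “special case” claim explicit (a trivial check, not in the original): when the chance interval collapses to a point, \(n = m\), both PP* and PP\(\dagger\) reduce to the original Principal Principle, since \(C(A \mid Ch(A) = [n,n]) = [n,n]\) and \(C(A \mid Ch(A) = [n,n]) \in [n,n]\) alike require \(C(A \mid Ch(A) = n) = n\).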

References

  • Bertrand, J. (1889). Calcul des probabilités. Paris: Gauthier-Villars.

  • Christensen, D. (2010). Rational reflection. Philosophical Perspectives, 24(1), 121–140.

  • Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.

  • Elga, A. (2010). Subjective probabilities should be sharp. Philosophers’ Imprint, 10(5), 1–11.

  • Elga, A. (2013). The puzzle of the unmarked clock and the new rational reflection principle. Philosophical Studies, 164(1), 127–139.

  • Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics, 75(4), 643–669.

  • Fine, T. L. (1973). Theories of probability. New York: Academic Press.

  • Good, I. J. (1962). Subjective probability as the measure of a non-measurable set. In E. Nagel, P. Suppes, & A. Tarski (Eds.), Logic, methodology and philosophy of science (pp. 319–329). Stanford: Stanford University Press.

  • Hájek, A., & Smithson, M. (2012). Rationality and indeterminate probabilities. Synthese, 187(1), 33–48.

  • Jeffrey, R. C. (1983). Bayesianism with a human face. In J. Earman (Ed.), Testing scientific theories (pp. 133–156). Minneapolis: University of Minnesota Press.

  • Joyce, J. M. (2005). How probabilities reflect evidence. Philosophical Perspectives, 19(1), 153–178.

  • Joyce, J. M. (2010). A defense of imprecise credences in inference and decision making. Philosophical Perspectives, 24(1), 281–323.

  • Knight, F. (1921). Risk, uncertainty, and profit. Boston: Houghton Mifflin.

  • Konek, J. (forthcoming). Epistemic conservativity and imprecise credence. Philosophy and Phenomenological Research.

  • Kyburg, H. (1983). Epistemology and inference. Minneapolis: University of Minnesota Press.

  • Lasonen-Aarnio, M. (2014). Higher-order evidence and the limits of defeat. Philosophy and Phenomenological Research, 88(2), 314–345.

  • Levi, I. (1974). On indeterminate probabilities. Journal of Philosophy, 71(13), 391–418.

  • Levi, I. (1980). The enterprise of knowledge: An essay on knowledge, credal probability, and chance. Cambridge: MIT Press.

  • Levi, I. (1985). Imprecision and indeterminacy in probability judgment. Philosophy of Science, 52(3), 390–409.

  • Moss, S. (2014). Credal dilemmas. Noûs, 49(4), 665–683.

  • Pedersen, A. P., & Wheeler, G. (2014). Demystifying dilation. Erkenntnis, 79(6), 1305–1342.

  • Savage, L. J. (1972). The foundations of statistics (2nd rev. ed.; first published 1954). New York: Dover.

  • Schoenfield, M. (2012). Chilling out on epistemic rationality: A defense of imprecise credences (and other imprecise doxastic attitudes). Philosophical Studies, 158, 197–219.

  • Seidenfeld, T. (1978). Direct inference and inverse inference. Journal of Philosophy, 75(12), 709–730.

  • Seidenfeld, T. (2004). A contrast between two decision rules for use with (convex) sets of probabilities: \(\Gamma\)-maximin versus E-admissibility. Synthese, 140, 69–88.

  • Skyrms, B. (1977). Resiliency, propensities, and causal necessity. Journal of Philosophy, 74, 704–713.

  • Sliwa, P., & Horowitz, S. (2015). Respecting all the evidence. Philosophical Studies, 172(11), 2835–2858.

  • Sturgeon, S. (2008). Reason and the grain of belief. Noûs, 42(1), 139–165.

  • van Fraassen, B. (1989). Laws and symmetry. Oxford: Oxford University Press.

  • van Fraassen, B. (1990). Figures in a probability landscape. In J. M. Dunn & A. Gupta (Eds.), Truth or consequences (pp. 345–356). Dordrecht: Kluwer.

  • Walley, P. (1991). Statistical reasoning with imprecise probabilities. Boca Raton: Chapman & Hall.

  • Weatherson, B. (2008). Decision making with imprecise probabilities. Unpublished manuscript.

  • White, R. (2009). Evidential symmetry and mushy credence. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 3, pp. 161–186). Oxford: Oxford University Press.

  • Williams, J. R. G. (2014). Decision-making under indeterminacy. Philosophers’ Imprint, 14(4), 1–34.

  • Williamson, J. (2014). How uncertain do we need to be? Erkenntnis, 79, 1249–1271.

  • Williamson, T. (2007). Improbable knowing. Unpublished notes.

  • Yalcin, S. (2012). Bayesian expressivism. Proceedings of the Aristotelian Society, 112(2), 123–160.


Acknowledgements

Thanks to R.A. Briggs, Ryan Doody, Alan Hájek, Richard Holton, Sophie Horowitz, Wolfgang Schwarz, Teddy Seidenfeld, Julia Staffel, Roger White, and audiences at the Australian National University, UCSD, and SLACRR for invaluable feedback.

Author information


Correspondence to Jennifer Rose Carr.



Cite this article

Carr, J.R. Imprecise evidence without imprecise credences. Philos Stud 177, 2735–2758 (2020). https://doi.org/10.1007/s11098-019-01336-7
