Abstract
It is natural to think of precise probabilities as being special cases of imprecise probabilities, the special case being when one’s lower and upper probabilities are equal. I argue, however, that it is better to think of the two models as representing two different aspects of our credences, which are often (if not always) vague to some degree. I show that by combining the two models into one model, and understanding that model as a model of vague credence, a natural interpretation arises that suggests a hypothesis concerning how we can improve the accuracy of aggregate credences. I present empirical results in support of this hypothesis. I also discuss how this modeling interpretation of imprecise probabilities bears upon a philosophical objection that has been raised against them, the so-called inductive learning problem.
Notes
There has been some debate over whether the sets in question should be convex or not (see e.g., Kyburg and Pittarelli 1996). For the sake of simplicity, I shall be assuming convexity, but nothing hinges upon this.
This is somewhat analogous to epistemicism about vagueness (e.g., Williamson 1994).
For convenience, I’m speaking as though objects themselves can be vague, but we need not be committed to ontological vagueness. We could instead talk about vagueness at the level of semantics: e.g., ‘the cloud’ may not determinately refer to a unique set of molecules in the sky.
One might claim that the evidence requires that your credence be an integer multiple of 1/100, since the number of blue balls is an integer between 0 and 70 and the total number of balls is 100. However, this is not true, because various chance-mixtures of hypotheses are compatible with the evidence. For example, the urn’s constitution may have been determined by a fair coin toss that decided whether there would be 30 or 31 blue balls. Conditional on that hypothesis, one’s credence that the ball drawn is blue should be 0.305.
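The chance-mixture computation behind this figure can be made explicit (on the 0–1 scale, using the coin-toss hypothesis above):

```latex
\[
\Pr(\text{blue}) \;=\; \tfrac{1}{2}\cdot\tfrac{30}{100} \;+\; \tfrac{1}{2}\cdot\tfrac{31}{100}
\;=\; \tfrac{30.5}{100} \;=\; 0.305.
\]
```

Since 0.305 is not an integer multiple of 1/100, the mixture shows that the evidence does not constrain credences to that grid.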
There are also several other arguments for why it is rational for one to have indeterminate credences. See Bradley (2014) for an overview of them.
Levi (2009) has argued that we also need to specify what the agent fully believes. This detail will not matter for what I have to say in this paper.
This assumption has long been debated (see e.g., Kyburg and Pittarelli 1996). Strictly speaking, I do not need it in what follows, but it does greatly simplify the discussion, so I make it only for that reason.
There is an issue here concerning how we can conditionalise on events that are assigned a credence of 0. I’m assuming that these difficulties can be avoided in one way or another—e.g., by taking conditional probabilities as primitive (see e.g., Hájek 2003).
This can be measured in various ways—e.g., by the supremum of the distances between points in the interval and 1.
Note that the notion of specificity is distinct from Keynes’ notion of weight, which always increases as one’s evidence increases.
An anonymous referee has pointed out that van Fraassen uses the term ‘vague’ to apply to sharply defined intervals. Although I think this is an infelicitous use of the term ‘vague’, van Fraassen still moves from the point that our credences are often not precise to representing them with credal sets.
Gärdenfors and Sahlin (1982) have a very similar model, but they understand the function as a “degree of epistemic reliability”. They explain that more reliable credence functions are those that are backed up by more information than other distributions (ibid., p. 366). This could be a mere terminological difference, but when the information in question is vague, it seems to me that two credence functions can differ in “reliability” even though they are “backed up” by the same amount of information. More important, though, is that Gärdenfors and Sahlin attribute no special importance to those credence functions that have the highest degree of “reliability” (p. 377); they only use their reliability measure (with help from an additional risk-aversion attitude) to determine a set of probabilities that are used in their decision procedure (p. 370). As I shall argue, this neglects an important aspect of our credences.
Note that a fuzzy credal set is not a fuzzy probability, which is a probability function defined over an algebra of fuzzy sets (see e.g., Zadeh 1968; Gudder 2000). Note also that this precise notion of a fuzzy credence is different from Sturgeon’s (2008) notion of fuzzy confidence, which has more to do with the notion of vague credence from the previous section.
Since the support of m is [0, 1], there are no other arbitrary cut-off points.
It is very important to note that this really is only a formal equivalence: the intended interpretation of the membership function of a fuzzy credal set is not that it is some kind of higher-order probability. For example, the degree to which a credence function is a member of an agent’s fuzzy credal set is not the agent’s uncertainty as to whether that credence function perfectly represents, or ought to perfectly represent, the agent’s doxastic state. This is, in no way, higher-order probability theory.
Thanks to Alan Hájek for suggesting this idea.
The sizes of the demographic breakdowns do not sum to 350, the total sample size, because some participants did not respond to all of the demographic questions, which were optional.
Some additional assumptions need to be made—in particular, that the credal set contains no anti-inductive priors.
I have replaced Rinard’s M-Green hypothesis with her \(H_1\) hypothesis from earlier. This does not affect the philosophical issues.
Recall that this point applies to ideally rational agents—one’s credence might be vague precisely because one is responding appropriately to one’s vague evidence.
References
Armstrong, S., & Collopy, F. (1992). Error measures for generalizing about forecasting methods: Empirical comparisons. International Journal of Forecasting, 8(1), 69–80.
Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Evaluating online labor markets for experimental research: Amazon.com’s mechanical turk. Political Analysis, 20(3), 351–368.
Bradley, S. (2014). Imprecise probabilities. Stanford Encyclopedia of Philosophy. Stanford: Stanford University.
Bradley, S., & Steele, K. (2012). Uncertainty, learning, and the “problem” of dilation. Erkenntnis, 79(6), 1287–1303.
Chandler, J. (2014). Subjective probabilities need not be sharp. Erkenntnis, 79(6), 1273–1286.
Christensen, D. (2004). Putting logic in its place. Oxford: Oxford University Press.
Dallmann, J. M. (2014). A normatively adequate credal reductivism. Synthese, 191(10), 2301–2313.
Dardashti, R., Glynn, L., Thébault, K., & Frisch, M. (2014). Unsharp Humean chances in statistical physics: A reply to Beisbart. In M. C. Galavotti et al. (Eds.), New directions in the philosophy of science (pp. 531–542). Dordrecht: Springer.
Elga, A. (2010). Subjective probabilities should be sharp. Philosophers’ Imprint, 10(5), 1–11.
Eriksson, L., & Hájek, A. (2007). What are degrees of belief? Studia Logica, 86(2), 183–213.
Gärdenfors, P., & Sahlin, N.-E. (1982). Unreliable probabilities, risk taking, and decision making. Synthese, 53(3), 361–386.
Gudder, S. (2000). What is fuzzy probability theory? Foundations of Physics, 30(10), 1663–1678.
Hájek, A. (2000). Objecting vaguely to Pascal’s Wager. Philosophical Studies, 98(1), 1–14.
Hájek, A. (2003). What conditional probability could not be. Synthese, 137(3), 273–323.
Hájek, A., & Smithson, M. (2012). Rationality and indeterminate probabilities. Synthese, 187(1), 33–48.
Jeffrey, R. (1983). Bayesianism with a human face. Testing Scientific Theories, Minnesota Studies in the Philosophy of Science, 10, 133–156.
Jeffrey, R. (1987). Indefinite probability judgment: A reply to Levi. Philosophy of Science, 54, 586–591.
Joyce, J. M. (2005). How probabilities reflect evidence. Philosophical Perspectives, 19(1), 153–178.
Joyce, J. M. (2010). A defense of imprecise credences in inference and decision making. Philosophical Perspectives, 24(1), 281–323.
Kaplan, M. (1983). Decision theory as philosophy. Philosophy of Science, 50(4), 549–577.
Kaplan, M. (1996). Decision theory as philosophy. Cambridge: Cambridge University Press.
Keynes, J. M. (1921). A treatise on probability. London: Macmillan.
Koopman, B. O. (1940). The bases of probability. Bulletin of the American Mathematical Society, 46(10), 763–774.
Kyburg, H. (1983). Epistemology and Inference. Minneapolis: University of Minnesota Press.
Kyburg, H. E, Jr. (1961). Probability and the logic of rational belief. Middletown: Wesleyan University Press.
Kyburg, H. E., Jr., & Pittarelli, M. (1996). Set-based Bayesianism. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 26(3), 324–339.
Levi, I. (1974). On indeterminate probabilities. Journal of Philosophy, 71(13), 391–418.
Levi, I. (1980). The enterprise of knowledge: an essay on knowledge, credal probability, and chance. Cambridge, MA: MIT Press.
Levi, I. (1985). Imprecision and indeterminacy in probability judgment. Philosophy of Science, 52(3), 390–409.
Levi, I. (2000). Imprecise and indeterminate probabilities. Risk, Decision and Policy, 5(2), 111–122.
Levi, I. (2009). Why indeterminate probability is rational. Journal of Applied Logic, 7(4), 364–376.
Maher, P. (2006). Book review: David Christensen. Putting logic in its place: Formal constraints on rational belief. Notre Dame Journal of Formal Logic, 47(1), 133–149.
Milgram, E. (2009). Hard truths. Malden: Wiley.
Moss, S. (2014). Credal dilemmas. Noûs. doi:10.1111/nous.12073.
Paolacci, G., Chandler, J., & Ipeirotis, P. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411–419.
Pedersen, P., & Wheeler, G. (2014). Demystifying dilation. Erkenntnis, 79(6), 1305–1342.
Rinard, S. (2013). Against radical credal imprecision. Thought: A Journal of Philosophy, 2(2), 157–165.
Seidenfeld, T., Schervish, M. J., & Kadane, J. B. (2012). Forecasting with imprecise probabilities. International Journal of Approximate Reasoning, 53(8), 1248–1261.
Seidenfeld, T., & Wasserman, L. (1993). Dilation for sets of probabilities. The Annals of Statistics, 21(3), 1139–1154.
Singer, D. J. (2014). Sleeping beauty should be imprecise. Synthese, 191(14), 3159–3172.
Sturgeon, S. (2008). Reason and the grain of belief. Noûs, 42(1), 139–165.
Van Fraassen, B. C. (1990). Figures in a probability landscape. In Truth or consequences (Chap. 21, pp. 345–356). Dordrecht: Kluwer.
Van Fraassen, B. C. (2006). Vague expectation value loss. Philosophical Studies, 127(3), 483–491.
Walley, P. (1991). Statistical reasoning with imprecise probabilities. London: Chapman & Hall.
Wheeler, G. (2014). Character matching and the Locke pocket of belief. In Epistemology, context, and formalism (pp. 187–195). Cham: Springer.
White, R. (2010). Evidential symmetry and mushy credence. In J. Hawthorne (Ed.), Oxford studies in epistemology. Oxford: Oxford University Press.
Williamson, T. (1994). Vagueness. London: Routledge.
Zadeh, L. A. (1968). Probability measures of fuzzy events. Journal of Mathematical Analysis and Applications, 23(2), 421–427.
Acknowledgments
The author would like to thank Seamus Bradley, Mark Burgman, Rachael Briggs, Alan Hájek, Michael Morreau, Daniel Nolan, and Reuben Stern for helpful discussion and feedback. This work was financially supported by the Alexander von Humboldt Foundation.
Lyon, A. Vague Credence. Synthese 194, 3931–3954 (2017). https://doi.org/10.1007/s11229-015-0782-5