Chilling out on epistemic rationality

A defense of imprecise credences (and other imprecise doxastic attitudes)

Published in Philosophical Studies.

Notes

  1. See, for example, Sober (2002).

  2. See, for example, Earman (1992), Howson and Urbach (1993) and Strevens (2005).

  3. Levi (1985, p. 392).

  4. See, for example, Levi (1974), Jeffrey (1983), Kaplan (1996), Joyce (2005, 2010).

  5. I should also note that one could be insensitive to evidential sweetening with respect to two propositions p and q where q is not the negation of p, and much of what I say here will apply to such cases as well. However, for simplicity, I am going to restrict my discussion of insensitivity to evidential sweetening to cases in which the two propositions in question are mutually exclusive.

  6. For discussions of insensitivity to sweetening in the context of practical rationality and ethics see, for example, Chang (1997), Hare (2010), and Schoenfield (ms)a.

  7. For early discussions of this model see Jeffrey (1983) and Levi (1985).

  8. In fact, there are a number of compelling arguments for precision (see Elga 2010; White 2010) and I won’t be able to address all of them here. The response I give to the argument that I will be presenting is also responsive to White’s argument for precision, and I think it may have some bearing on Elga’s argument as well, but I will leave that for another time.

  9. A more precise version of the principle would say that if you are fully confident that your future doxastic attitude will be A, you should now adopt A, and if you are less than fully confident that your future attitude will be A, your attitude should be an average of the possible attitudes you might have, weighted by the probability of you having those attitudes. For our purposes, however, this rough version is good enough.
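The weighted averaging described here can be put as a one-line computation. A minimal sketch, with made-up numbers:

```python
# Sketch of the weighted-average reflection principle in note 9:
# your current credence is the expectation of your possible future
# credences, weighted by the probability of ending up with each.
def reflected_credence(future_credences):
    """future_credences: (credence, probability) pairs; probabilities sum to 1."""
    return sum(c * p for c, p in future_credences)

# e.g. a 0.5 chance of ending up at 0.9 and a 0.5 chance of ending up
# at 0.3 recommends a current credence of 0.6
assert abs(reflected_credence([(0.9, 0.5), (0.3, 0.5)]) - 0.6) < 1e-9
```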

  10. This argument was inspired by a similar argument in decision theory described in Hare (2010).

  11. To see why, recall that who was put in which cell was determined by the flip of a fair coin. If the coin landed heads (H), Smith is in Cell #1 and if the coin landed tails (T), Jones is in Cell #1.

    (1) Cell 1 ↔ [(S & H) or (J & T)] (in other words, the person in Cell 1 is guilty if and only if either Smith is guilty and the coin landed heads, or Jones is guilty and the coin landed tails).

    (2) Pr(Cell 1) = Pr[(S & H) or (J & T)]

    (3) Pr(Cell 1) = 0.5 Pr(S) + 0.5 Pr(J) (since the coin toss is independent of guilt, and the two disjuncts are mutually exclusive)

    (4) Pr(J) = 1 − Pr(S) (since either Smith or Jones is guilty)

    (5) Pr(Cell 1) = 0.5 Pr(S) + 0.5 [1 − Pr(S)] = 0.5

    (6) Pr(Cell 2) = 1 − Pr(Cell 1) = 0.5
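The derivation in this note holds for any credence in S. A short check, with arbitrary illustrative values:

```python
# Check of steps (3)-(6) in note 11: whatever credence p you assign to
# Smith's guilt (S), the probability that the person in Cell 1 is guilty
# comes out to 0.5, since the fair coin is independent of guilt.
def pr_cell1_guilty(p_smith):
    p_jones = 1 - p_smith                  # step (4): either Smith or Jones is guilty
    return 0.5 * p_smith + 0.5 * p_jones   # step (3)

for p in [0.0, 0.3, 0.7, 1.0]:
    assert abs(pr_cell1_guilty(p) - 0.5) < 1e-9
```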

  12. There is an argument in White (2010, pp. 175–181), which, in this case, could be applied to defend the claim that you should match your credence in Cell 1 to your credence in H, rather than the other way around.

  13. This way of motivating the claim that, in the opaque case, you should be sensitive to sweetening, was inspired by a case discussed in White (2010).

  14. Some proponents of imprecise credences might think that the correct version of the reflection principle will only tell you to defer to your future doxastic states if you know what your entire representor will be at the later time. This condition is not satisfied in this case. However, I think it would be a mistake to restrict reflection principles in this way. We don’t want the principles that tell us how to defer to experts (whether they are other people, or just future time slices of ourselves) to be applicable only in cases when we know what the expert’s entire representor is, since we rarely have such information.

  15. More precisely, it is a distinction between what attitudes we should have, and what attitudes agents with perfect cognitive capacities, and who are unreflective, would have, where by this I mean that these agents don't worry about the possibility of their own error. The addition of the "unreflectivity" requirement is important for reasons discussed by Christensen (2008), and is necessary for agent neutrality, which will be discussed shortly. (This kind of perfect rationality is related to the notion Hartry Field (2000) describes as "ideal credibility.") For convenience, in what follows, I will use the term "agents with perfect cognitive capacities" to refer to unreflective agents with perfect cognitive capacities. Since I do not think that any agents should be unreflective, I hesitate to use this terminology. I use it anyway, since I think it conveys something of the idea I'm trying to develop. A less agent-centered (and perhaps entirely uninformative) way of thinking about the degree of confidence in a proposition that the evidence supports is as its evidential probability.

  16. Schoenfield (ms)b. Also, see Aarnio (2010) and Sepielli (ms) for discussions of distinctions along these general lines.

  17. I do not mean to suggest that all evidence is propositional, but only that, for those propositions that are part of our evidence, if they entail p, then our evidence supports a high degree of confidence in p.

  18. Two notes here: First, the agent neutrality condition applies to de dicto propositions only. Second, if you are a permissivist, and think that what S's evidence supports depends on S's priors, or standards of reasoning, we can let the evidential support relation be a three-place relation between the evidence, the agent's priors, and doxastic attitudes. It will still be true that what the evidence supports does not depend on which particular agent is evaluating it (though what the evidence supports will depend on the agent's priors, or standards of reasoning).

  19. If you are a permissivist, we will add to this the qualification that your friend has the same standards of reasoning, or prior probability function as you do.

  20. This phenomenon was first discussed (as far as I know) in Christensen (2010).

  21. There are different ways of measuring accuracy, but the general idea is that an accurate agent will have high credences in truths and low credences in falsehoods.
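One standard precisification of this idea (an illustration only; the note does not commit to any particular measure) is the Brier score:

```python
# Brier score: mean squared distance between each credence and the truth
# value (1 for a truth, 0 for a falsehood); lower scores mean more accurate.
def brier_score(credences, truths):
    return sum((c - t) ** 2 for c, t in zip(credences, truths)) / len(credences)

# an agent confident in a truth and doubtful of a falsehood is more
# accurate than one with middling credences in both
assert brier_score([0.9, 0.1], [1, 0]) < brier_score([0.5, 0.5], [1, 0])
```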

  22. Note that deliberating with a principle does not require successfully following it. If the principles of reasonability were the ones such that successfully following them would help us achieve our epistemic aims, the only principle we would need would be one which told us to be fully confident in all and only the truths. Since you can be reasonable without being fully confident in all and only the truths, it is important that the test for a principle of reasonability be concerned with the result of trying to follow the principle rather than the result of successfully following it.

  23. At least according to Bayesian conditionalization, once a function assigns 1 or 0 to a proposition, it will always assign 1 or 0, no matter how much new evidence one conditionalizes on.
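The point can be checked with a toy model of conditionalization (the worlds and weights below are made up for illustration):

```python
# Toy check of note 23: conditionalization cannot move a credence off 1.
# A credence function is modeled as a weighted set of worlds; Pr(p | e) is
# the weight of worlds where p and e hold, divided by the weight where e holds.
def conditionalize(worlds, evidence):
    total = sum(w for w, facts in worlds if facts[evidence])
    return sum(w for w, facts in worlds if facts[evidence] and facts["p"]) / total

# Pr(p) = 1: p holds in every world, with the evidence varying across worlds
worlds = [(0.6, {"p": True, "e": True}), (0.4, {"p": True, "e": False})]
assert conditionalize(worlds, "e") == 1.0   # still 1 after conditionalizing on e
```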

  24. Some people (Joyce 2010 for example) might object by saying that the structure does not need to show up in the intervals that represent the agent’s attitudes towards individual propositions, so long as the structure is found in the representor as a whole. However, I think it is important that we be able to represent an agent’s attitude towards a single proposition without building in information about the agent’s entire representor. (One reason for this is described in footnote 13).
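The interval representation at issue can be made concrete with a toy representor (all values invented for illustration):

```python
# A representor is a set of precise credence functions; the imprecise
# attitude toward a single proposition is the interval of values those
# functions assign to it, discarding cross-propositional structure.
representor = [
    {"p": 0.3, "q": 0.7},
    {"p": 0.5, "q": 0.5},
    {"p": 0.6, "q": 0.4},
]

def interval(proposition):
    values = [credence[proposition] for credence in representor]
    return (min(values), max(values))

assert interval("p") == (0.3, 0.6)   # the interval alone omits the p/q correlation
```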

  25. I argue against the uniqueness assumption in Schoenfield (ms)c.

  26. Briggs (2009) has come up with a principle that is supposed to take these kinds of considerations into account. She has a reflection principle which she calls "Distorted Reflection." This principle tells you that if you know that your later credence in p will be r, and you would not lose any information between now and then, your credence in p now should be r − Dr, where Dr is a factor that expresses your expected departure from rationality. If we can formalize our expected departure from rationality (she has a suggestion as to how to do this as well), this may be exactly the kind of principle we need.
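As described, the principle is a simple correction of the expected later credence; a schematic sketch (the distortion value below is a placeholder, since the note leaves its computation open):

```python
# Schematic form of Briggs's "Distorted Reflection" as glossed in note 26:
# defer to your expected later credence r, minus a term D_r expressing
# your expected departure from rationality. How to compute D_r is left open.
def distorted_reflection(r, expected_distortion):
    return r - expected_distortion

# with an expected later credence of 0.8 and a distortion estimate of 0.05,
# adopt roughly 0.75 now (the numbers are illustrative)
assert abs(distorted_reflection(0.8, 0.05) - 0.75) < 1e-9
```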

References

  • Aarnio, M. L. (2010). Unreasonable knowledge. Philosophical Perspectives, 24, 1–21.

  • Briggs, R. (2009). Distorted reflection. Philosophical Review, 118(1), 59–85.

  • Chang, R. (1997). Introduction to incommensurability, incomparability and practical reason. Cambridge, MA: Harvard University Press.

  • Christensen, D. (2008). Does murphy’s law apply in epistemology? Self-doubt and rational ideals. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 2). Oxford: Oxford University Press.

  • Christensen, D. (2010). Higher order evidence. Philosophy and Phenomenological Research, 81(1), 185–215.

  • Earman, J. (1992). Bayes or bust. Cambridge, MA: MIT Press.

  • Elga, A. (2010). Subjective probabilities should be sharp. Philosophers’ Imprint, 10(5), 1–11.

  • Field, H. (2000). Apriority as an evaluative notion. In P. Boghossian & C. Peacocke (Eds.), New essays on the a priori. New York: Oxford.

  • Hare, C. (2010). Take the Sugar. Analysis, 70(2), 237–247.

  • Howson, C., & Urbach, P. (1993). Scientific reasoning: The Bayesian approach (2nd ed.). Chicago: Open Court Publishing.

  • Jeffrey, R. (1983). Bayesianism with a human face. In J. Earman (Ed.), Testing scientific theories. Minneapolis, MN: University of Minnesota Press.

  • Joyce, J. M. (2005). How probabilities reflect evidence. Philosophical Perspectives, 19, 153–178.

  • Joyce, J. M. (2010). A defense of imprecise credences in inference and decision making. Philosophical Perspectives, 24, 281–323.

  • Kaplan, M. (1996). Decision theory as philosophy. Cambridge: Cambridge University Press.

  • Levi, I. (1974). On indeterminate probabilities. Journal of Philosophy, 71, 391–418.

  • Levi, I. (1985). Imprecision and indeterminacy in probability judgment. Philosophy of Science, 52, 390–409.

  • Schoenfield, M. (ms)a. Why acquaintance matters.

  • Schoenfield, M. (ms)b. Expecting too much of epistemic rationality: Why we need two notions instead of one.

  • Schoenfield, M. (ms)c. Permission to believe.

  • Sepielli, A. (ms). Evidence, reasonableness and disagreement.

  • Sober, E. (2002). Bayesianism—its scope and limits. In R. Swinburne (Ed.), Bayes' theorem (Vol. 113). Oxford: Oxford University Press.

  • Strevens, M. (2005). The Bayesian approach in the philosophy of science. In D. M. Borchert (Ed.), Encyclopedia of philosophy (2nd ed.). Detroit: Macmillan Reference.

  • van Fraassen, B. (1984). Belief and the will. Journal of Philosophy, 81, 235–256.

  • White, R. (2010). Evidential symmetry and mushy credence. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 3). Oxford: Oxford University Press.

Acknowledgements

In writing this paper, I have benefited greatly from conversations with David Christensen, Adam Elga, Daniel Greco, Caspar Hare, Eli Hirsch, Carrie Ichikawa Jenkins, Julia Markovits, Rebecca Millsop, Agustin Rayo, Susanna Rinard, Robert Stalnaker, Stephen Yablo and Roger White. I also received extremely helpful feedback from the audience at the Bellingham Summer Philosophy Conference, 2011, and members of the 2011–2012 MIT Job Market Seminar.

Correspondence to Miriam Schoenfield.

Cite this article

Schoenfield, M. Chilling out on epistemic rationality. Philos Stud 158, 197–219 (2012). https://doi.org/10.1007/s11098-012-9886-7
