
Respecting all the evidence


Abstract

Plausibly, you should believe what your total evidence supports. But cases of misleading higher-order evidence—evidence about what your evidence supports—present a challenge to this thought. In such cases, taking both first-order and higher-order evidence at face value leads to a seemingly irrational incoherence between one’s first-order and higher-order attitudes: you will believe P, but also believe that your evidence doesn’t support P. To avoid sanctioning tension between epistemic levels, some authors have abandoned the thought that both first-order and higher-order evidence have rational bearing. This sacrifice is both costly and unnecessary. We propose a principle, Evidential Calibration, which requires rational agents to accommodate first-order evidence correctly, while allowing rational uncertainty about what to believe. At the same time, it rules out irrational tensions between epistemic levels. We show that while there are serious problems for some views on which we can rationally believe, “P, but my evidence doesn’t support P”, Evidential Calibration avoids these problems. An important upshot of our discussion is a new way to think about the relationship between epistemic levels: why first-order and higher-order attitudes should generally be aligned, and why it is sometimes—though not always—problematic when they diverge.


Notes

  1. See Smithies (2012) for further discussion of this point.

  2. Kelly (2005), pp. 4–5. It is not entirely clear whether Kelly’s position here is better described as giving up Desideratum 2, or as giving up Desideratum 3 like the “Level-Splitting” views discussed below. Nevertheless, many of the considerations Kelly raises could naturally be used to motivate giving up Desideratum 2, so it is helpful to look at his view in this context. Also see Titelbaum (ms).

  3. White (2009). White calls this the “Calibration Rule”. White is not explicit about what he means by “draw[ing] the conclusion”; sometimes he writes as if it amounts to forming a belief, but the thought behind the rule is general enough to also apply to cases where we should suspend judgment. We will interpret it as a view about educated guesses, so as to stay neutral about this question. (We will say more about how we understand “educated guesses” in Sect. 3, when we spell out our view, Evidential Calibration.)

  4. White (2009), p. 234.

  5. Ibid.

  6. This is one of the main lines of criticism that has been raised in the disagreement and higher-order evidence literature against views on which higher-order evidence mandates reducing confidence in one’s conclusions. See, e.g., Kelly (2010) for an example of this objection in the context of peer disagreement; see Christensen (2007b) and (2011) for responses. See also Weatherson (ms) for more extensive criticism of views on which accommodating higher-order evidence requires us to “screen off” first-order evidence completely. Christensen [(2011), Section 1] denies that his “Conciliatory” view is committed to rejecting Desideratum 1. According to Christensen, “Conciliationism tells us what the proper response is to one particular kind of evidence. … If one starts out by botching things epistemically, and then takes correct account of one bit of evidence, it’s unlikely that one will end up with fully rational beliefs.” (p. 4, highlighting ours) Our project can be seen as complementary: our aim is to articulate how one should respond to one’s total evidence.

  7. Here we are thinking of the cleanest case: someone who has no reason to doubt her reliability. This may be distinct from someone who has good reason to think she’s reliable. We will return to this issue in Sect. 5.3.

  8. See Coates (2012), Hazlett (2012), Weatherson (ms), Lasonen-Aarnio (forthcoming), and Wedgwood (2011) for views roughly along these lines. Williamson [(notes a) and (notes b)] defends the view that one can know that P, but that it can be highly improbable on one’s evidence that one knows that P. Coates and Weatherson focus on the rationality of believing conjunctions like, “P, but my evidence doesn’t support P”; others accept verdicts along these lines as a consequence of their other commitments. We will not go through the details of these authors’ views, so what we say should not be taken as a definitive argument against their particular positions. Rather, we focus on the implications of one natural way of spelling out a Level-Splitting view, and (eventually) argue that we should not go this way.

  9. Weatherson (ms), p. 15. He continues: “And, assuming a natural connection between evidence and reason, that in turn isn’t very different from it being the case that she should believe p, and she should believe that her evidence does not support p.”

  10. See, e.g., Smithies (2012) and Titelbaum (ms). Feldman (2005) defends a view broadly in line with Desideratum 3, and writes that “[o]ne wonders what circumstances could make [level-tension] reasonable” (pp. 108–109).

  11. See Coates (2012) and Weatherson (ms) for this point, and for analogies to ethics.

  12. See Williamson (2000), especially Chapter 4.

  13. This scenario parallels Christensen’s “Reasonable Prudence” [Christensen (2010a, b), p. 12].

  14. In Anton’s case, propensity to guess right “in a given situation” means something like this: looking at the particular type of test results, while under the influence of Sam’s magic mushrooms. The relevant situation here must be understood in a way that is “independent of” or “prior to” his particular evidence or reasoning about which dose to give his patient (if we included Anton’s first-order evidence and allowed it to have its usual epistemic role, it would seem that Anton should remain confident in his guess on the basis of that evidence). The intuitive thought here is clear enough, although spelling out the appropriate “independence principle” is a delicate job. We will assume here that the job can be done. See Christensen (2007a, b, 2011), Elga (2007), and Vavova (ms), among others, for further discussion of these issues. If no good independence principle can be formulated, this will raise challenges for any account (including Guess Calibration) on which higher-order evidence rationally affects first-order beliefs.

  15. An outstanding issue: what if your evidence supports .5 confidence in each of P and ~P? Then it does not favor either option. EC is silent on how you should respond to higher-order evidence in this kind of situation.

    Thinking more about this kind of case, however, it’s unclear how we could have HOE that would rationally change our credences. If your evidence supports .5 in each of P and ~P, you would be justified in guessing either way; you would guess arbitrarily. Guessing arbitrarily, you would expect your reliability to be equal to chance. But suppose you have HOE that suggests that your reliability is worse than chance. What should you think? It looks like in this case you can’t win: no matter which option you choose, you should expect to be wrong [see Egan and Elga (2005) for more on this topic. They argue that you cannot rationally expect yourself to be anti-reliable]. On the other hand, what if your HOE suggests that you’re better than chance? Then you have a different kind of puzzle. What explains your high reliability? A guardian angel guiding you to the truth? A knack for extra-sensory perception? If it is rational to change your credence in these situations, perhaps it’s because you have some additional first-order evidence (about the guardian angel, for example). More likely, we think, it will simply not be rational for you to change your credence if your evidence is no stronger than chance.
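
    To see the instability concretely (a schematic reconstruction, not Egan and Elga’s own formulation): suppose your expected reliability is some \( r < .5 \). If you guess P, calibrating to \( r \) gives \( \mathrm{Cr}(P) = r < .5 \), hence \( \mathrm{Cr}({\sim}P) = 1 - r > .5 \), and ~P is the better guess; but if you instead guess ~P, the same calibration gives \( \mathrm{Cr}({\sim}P) = r < .5 \), and P looks better again. There is no stable combination of guess and credence for an agent who expects herself to be anti-reliable.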

  16. See, especially, Elga (2007) on how one should accommodate evidence of disagreement:

    Equal weight view: Upon finding out that an advisor disagrees, your probability that you are right should equal your prior conditional probability that you would be right. Prior to what? Prior to your thinking through the disputed issue, and finding out what the advisor thinks of it. Conditional on what? On whatever you have learned about the circumstances of the disagreement.

    Evidential Calibration accommodates Elga’s principle, but it goes further: it is formulated more generally, and it gives an account of what to believe on one’s total evidence, making the contribution of one’s first-order evidence clear. (A schematic gloss of Elga’s principle follows below.)
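
    Stated schematically (this gloss is ours, not Elga’s own notation): let \( \mathrm{Cr}_0 \) be your credence function prior to thinking through the disputed issue and learning the advisor’s verdict, and let C be whatever you have learned about the circumstances of the disagreement. The view then requires \( \mathrm{Cr}(\text{I am right}) = \mathrm{Cr}_0(\text{I am right} \mid C) \).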

  17. The exact relationship between your expected reliability and these other factors may not be straightforward or simple, and we cannot address the question in full here. But we can say some general things about this relationship. First, if you are rationally confident that you are perfectly reliable at rationally assessing your evidence, your expected reliability should just equal the strength of that evidence. Could you rationally regard yourself as anti-reliable with respect to some question, i.e., could your expected reliability be significantly lower than chance? According to Egan and Elga (2005), the answer is “no”. On pain of incoherence, an agent with decent access to her own beliefs must assign low probability to the claim that she is an anti-expert about the subject matter. Thus, there are, arguably, independent constraints on the lower bound of your expected reliability.
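
    In expectation terms (a gloss on our part, assuming for simplicity that reliability takes finitely many values \( r_i \)): your expected reliability is \( \hat{r} = \sum_i r_i \cdot \mathrm{Cr}(R = r_i) \), where R is your propensity to guess correctly in situations of the relevant type. The Egan–Elga constraint then functions as a lower bound: coherence prevents \( \hat{r} \) from falling significantly below .5.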

  18. Though many authors assume that this combination of attitudes is irrational, direct arguments for this claim are surprisingly hard to come by. See Christensen (2007a, b) for an argument that one should not be certain of some logical truth, while being less than certain of one’s own rationality in deducing the logical truth; see also White (2009) and (ms); and Horowitz (2014). The arguments in this section are expanded upon in Horowitz (2014). See Elga (2013) for an independent, but complementary, proposal regarding when level-tension is and is not rational.

  19. See Horowitz (2014), White (2009) and (ms), as well as Christensen (2007a, b), for discussion of similar points. See Vogel (2000) and Cohen (2002), e.g., for more general discussion of bootstrapping.

  20. See also White (2009) for discussion of how Guess Calibration prevents rational bootstrapping.

  21. See, e.g., Vogel (2000) and Cohen (2002).

  22. See Goldman (1986). See Smithies (2012) for further (skeptical) discussion of Goldman’s no-defeaters clause.

  23. Some authors would argue that in cases like this, we should have imprecise or “mushy” credences. We won’t discuss this possibility here, as it will not affect our arguments.

  24. Christensen (2010a, b) gives a formal definition of this principle:

    Rational Reflection: \( \mathrm{Cr}(\mathrm{A} \mid \Pr(\mathrm{A}) = n) = n \), where Cr is your credence in A, and Pr is the function describing the ideally rational credence for you to have in A. Strictly speaking, our version of Rational Reflection is entailed by Christensen’s formal account, but does not entail it; however, this distinction won’t make a difference for our purposes. See Christensen (2010a, b) and Elga (2013) for further discussion of this principle.

  25. For the record, Rational Reflection is also incompatible with Level-Splitting. Since Level-Splitting licenses high confidence in “P, but most likely my evidence doesn’t support P”, e.g., it allows one’s rational credence in P to be much higher than one’s expected rational credence in P.
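
    For a toy illustration (the numbers are illustrative, not from the text): suppose a Level-Splitter holds \( \mathrm{Cr}(P) = .9 \) while assigning \( \mathrm{Cr}(\Pr(P) = .3) = .8 \) and \( \mathrm{Cr}(\Pr(P) = .9) = .2 \). If Rational Reflection held, the law of total probability would force \( \mathrm{Cr}(P) = .8 \times .3 + .2 \times .9 = .42 \); so a credence of .9 in P violates the principle.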

  26. Our discussion here follows Christensen (2010a, b) and Elga (2013). Our Calculation case is meant to parallel the case of the unmarked clock discussed in those papers, and in Williamson (notes a) and (notes b), with at least one salient difference: in the clock case, we are asked to imagine someone who is uncertain about what her evidence is, but knows what is supported in various evidential situations. In Calculation, Anton knows what his evidence is and is unsure what it supports.

  27. One kind of uncertainty it permits arises in cases where, for example, one is unsure whether one’s credence is too high or too low, but takes either possibility to be equally likely. See Christensen (2010a, b) for further discussion.

  28. Elga (2013) draws a similar conclusion regarding Rational Reflection, but approaches the issue from a different angle.

  29. See, for instance, Roush (2009) for skepticism about this. There is also a worry about whether it’s coherent to think that an agent could comprehend such global doubts about her reasoning. See Wright (2004), who develops this worry into an anti-skeptical response.

  30. Lasonen-Aarnio (forthcoming) discusses this kind of response.

  31. For an example of how this issue plays out in the recent epistemology of perception: Pryor (2000) has defended a view on which we can trust perception while remaining neutral on whether our perceptual faculties are reliable. White (2006) argues that this leads to bootstrapping.

  32. White (2006) suggests the route we choose here as one option to avoid both bootstrapping and skepticism: “Suppose that we abandon dogmatism, and insist that in order to gain perceptual justification for believing that P, we must have independent justification for believing that we are not victims of a visual illusion that P. We could nevertheless insist that we have a kind of default justification for assuming the general reliability of our perceptual faculties. We are entitled to believe that our faculties tend to deliver the truth unless we have some positive reason to doubt this.” (p. 553).

References

  • Christensen, D. (2007a). Epistemology of disagreement: The good news. The Philosophical Review, 116(2), 187–217.

  • Christensen, D. (2007b). Does Murphy’s law apply in epistemology? Self-doubt and rational ideals. Oxford Studies in Epistemology, 2, 3–31.

  • Christensen, D. (2010a). Higher-order evidence. Philosophy and Phenomenological Research, 81(1), 185–215.

  • Christensen, D. (2010b). Rational reflection. Philosophical Perspectives, 24 (Epistemology), 121–140.

  • Christensen, D. (2011). Disagreement, question-begging and epistemic self-criticism. Philosophers’ Imprint, 11(6).

  • Coates, A. (2012). Rational epistemic akrasia. American Philosophical Quarterly, 49(2), 113–124.

  • Cohen, S. (2002). Basic knowledge and the problem of easy knowledge. Philosophy and Phenomenological Research, 65(2), 309–339.

  • Egan, A., & Elga, A. (2005). I can’t believe I’m stupid. Philosophical Perspectives, 19(1), 77–93.

  • Elga, A. (ms). Lucky to be Rational.

  • Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.

  • Elga, A. (2013). The puzzle of the unmarked clock and the new rational reflection principle. Philosophical Studies, 164(1), 127–139.

  • Feldman, R. (2005). Respecting the evidence. Philosophical Perspectives, 19 (Epistemology), 95–119.

  • Goldman, A. (1986). Epistemology and cognition. Cambridge: Harvard University Press.

  • Hazlett, A. (2012). Higher-order epistemic attitudes and intellectual humility. Episteme, 9(3), 205–223.

  • Horowitz, S. (2014). Epistemic akrasia. Noûs, 48(4), 718–744.

  • Kelly, T. (2005). The epistemic significance of disagreement. In J. Hawthorne & T. Gendler (Eds.), Oxford studies in epistemology (Vol. 1). Oxford: Oxford University Press.

  • Kelly, T. (2010). Peer disagreement and higher-order evidence. In R. Feldman & T. Warfield (Eds.), Disagreement (pp. 111–174). Oxford: Oxford University Press.

  • Lasonen-Aarnio, M. (forthcoming). Higher-order evidence and the limits of defeat. Philosophy and Phenomenological Research.

  • Pryor, J. (2000). The skeptic and the dogmatist. Noûs, 34(4), 517–549.

  • Roush, S. (2009). Second-guessing: A self-help manual. Episteme, 6(3), 251–268.

  • Smithies, D. (2012). Moore’s paradox and the accessibility of justification. Philosophy and Phenomenological Research, 85(2), 273–300.

  • Titelbaum, M. (ms). In Defense of Right Reason.

  • Vavova, K. (ms). Irrelevant Influences.

  • Vavova, K. (2013). Confidence, evidence, and disagreement. Erkenntnis, 79(1), 173–183.

  • Vogel, J. (2000). Reliabilism leveled. Journal of Philosophy, 97(11), 602–623.

  • Weatherson, B. (ms). Do judgments screen evidence?

  • Wedgwood, R. (2011). Justified inference. Synthese, 189(2), 1–23.

  • White, R. (2006). Problems for dogmatism. Philosophical Studies, 131(3), 525–557.

  • White, R. (2009). On treating oneself and others as thermometers. Episteme, 6(3), 233–250.

  • White, R. (ms). Disrespecting the evidence.

  • Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.

  • Williamson, T. (notes a). Improbable knowing.

  • Williamson, T. (notes b). Very improbable knowing.

  • Wright, C. (2004). Wittgensteinian certainties. In D. McManus (Ed.), Wittgenstein and scepticism. London: Routledge.

Acknowledgments

This paper has benefitted greatly from helpful feedback and discussion at many earlier stages. We would especially like to thank Roger White, Miriam Schoenfield, and David Christensen. Thanks also to Adam Elga, Josh Schechter, Tom Dougherty, Alex Byrne, Dan Greco, Jennifer Carr, Alan Hazlett, Brian Hedden, Brendan Dill, Jack Marley-Payne, Bernhard Salow, Katia Vavova, Kenny Walden, and Rebecca Millsop, as well as audiences at the MIT Epistemology Group, the 2012 MITing of the Minds conference, the University of Kent, the University of Leeds, the University of Edinburgh, and the University of Cambridge. Special thanks to Stew Cohen for very helpful and extensive written comments on earlier drafts.

Author information

Correspondence to Sophie Horowitz.

Cite this article

Sliwa, P., Horowitz, S. Respecting all the evidence. Philos Stud 172, 2835–2858 (2015). https://doi.org/10.1007/s11098-015-0446-9
