
Epistemic dilemmas and rational indeterminacy

Philosophical Studies

Abstract

This paper is about epistemic dilemmas, i.e., cases in which one is doomed to have a doxastic attitude that is rationally impermissible no matter what. My aim is to develop and defend a position according to which there can be genuine rational indeterminacy; that is, it can be indeterminate which principles of rationality one should satisfy and thus indeterminate which doxastic attitudes one is permitted or required to have. I am going to argue that this view can resolve epistemic dilemmas in a systematic way while also enjoying some important advantages over its rivals.


Notes

  1. I am going to use "doxastic attitude" to refer to both credences and beliefs. This is so that I can make general claims about importantly different kinds of epistemic dilemmas. I will also use "doxastic attitude" in a loose way such that suspending and withholding judgement count as having a doxastic attitude (as opposed to lacking one). This is just for the sake of convenience; nothing substantive hangs on this.

  2. One example of this is the Paradox of Global Defeat (e.g., Sliwa and Horowitz 2015, p. 2853). Here an agent receives testimony that all of her perceptual faculties are malfunctioning, including the auditory faculties that she used to consume the testimony in question.

  3. For instance, in the Epistemic Liar Paradox (e.g., Caie 2012), there are various ways of showing that there is no permissible attitude for an agent to take towards p, where p is “I do not believe that p is true.”

  4. See Worsnip (2015).

  5. See, e.g., Conee (1982), Kroon (1983), Sorensen (1987) and Richter (1990) on the Anti-Expertise Paradox. Here one should believe that one will believe p just in case p is false. Consequently, any attitude that one has about p will be impermissible. See Caie (2013) and Egan and Elga (2005) for a credal version of this case.

  6. Versions of this case can be found in Christensen (2007b, 2010).

  7. By “higher-order evidence” I mean evidence that bears on the likelihood that one has evaluated one’s first-order evidence correctly, e.g., Christensen (2010).

  8. But what is a tautology? Following Titelbaum (2015), I will stick to a very weak interpretation according to which a tautology is a logical truth that ordinary agents can easily understand. This interpretation is so weak that all Bayesians accept it, and it avoids many of the problems involved with logical omniscience, i.e., it avoids requiring agents to be certain of tautologies that are beyond their ability to comprehend.

  9. E.g., Ramsey (1931), de Finetti (1980), Skyrms (1975) and Christensen (1996).

  10. E.g., Joyce (1998, 2009) and Pettigrew (2016).

  11. E.g., White (2009), Sliwa and Horowitz (2015) and Christensen (forthcoming). Given the setup of LOGIC PROBLEM, all calibrationists agree about the credence that Anna should have in (L)—they just do so for different reasons.

  12. Thus, expected reliability is a normative notion, i.e., it is not how reliable one actually expects to be, but how reliable one should expect to be given the higher-order evidence in question.

  13. It is worth noting that even if one rejects Calibrationism in favor of an alternative view (i.e., a view that is neither steadfast nor level-splitting), one will still be saddled with the same bootstrapping problem discussed in fn 14. For this general problem will arise for any non-Calibrationist view whatsoever. Thanks to David Christensen for pointing this out.

  14. Here is one way of putting the bootstrapping argument. Suppose that Calibrationism is false and that Anna should be certain that (L) is true. Suppose also that Anna answers 99 other logic questions like this one. Finally, suppose that while she is answering these problems, she justifiably believes that she is under the influence of Chad’s logic drug.

    If Calibrationism is false, then Anna should be certain that all 100 of these tautologies are true. But (and here is the key question) what should Anna think is going on here? From Anna’s own perspective, what explains why she got all of the answers correct even though she knows that if she is affected by the drug, she will likely get 80 questions wrong?

    From her own perspective, it seems like the best explanation for her success is that she was actually immune from the effects of the drugs. Thus, if one denies Calibrationism, one should maintain that Anna can bootstrap her way to the conclusion that the drugs did not affect her. But this is an illegitimate way for Anna to form a belief about her own physiology. Thus, Calibrationism is true. See Horowitz (2014) and Sliwa and Horowitz (2015) for a more thorough presentation of this argument.

  15. Perhaps one could maintain that this conflict could be easily avoided by weakening Probabilism such that Normality only applies to those tautologies for which one lacks higher-order evidence to the effect that one has botched one’s assessment of the proposition in question. But see Titelbaum (2015) for some serious problems with going this route.

  16. Caie (2012), Turri (2012), Lasonen-Aarnio (2014) and Schoenfield (2015) also defend versions of this principle.

  17. Of course, Probabilism and Calibrationism are not totally uncontroversial. Caie (2013) argues against Probabilism, Schoenfield (2014) and Titelbaum (2015) push back against Calibrationism in general, and Smithies (2015) argues against the verdict that Calibrationism delivers in LOGIC PROBLEM in particular. If one is sympathetic with these arguments, then one can interpret the dialectic thus far as attempting to establish the following conditional claim: if these widely accepted principles are true, then the argument from (1) to (5) is sound.

    It is worth re-emphasizing here, though, that this is just our stalking horse and that there are many other epistemic dilemmas that pose the same general worry.

  18. The term “weighted conception” of practical reasons comes from Horty (2003).

  19. While Christensen (2007b, 2010) himself may or may not endorse PRIORITY, some of the arguments he gives suggest that this view should be taken seriously. For instance, he suggests that a view like PRIORITY has something going for it insofar as we are “quite familiar with other ideals that operate as values to be maximized, yet whose maximizations must in certain cases be balanced against, or otherwise constrained by, other values… In ethics, promoting well-being and respecting rights may illustrate a different sort of way in which one ideal constrains another.” (2007b, p. 28).

  20. Horty (2003) makes these two points about a similar view of practical reasons.

  21. There are, of course, other ways of capturing the spirit of PRIORITY, e.g., one could develop an epistemic analog of Ross’s (2010) view of prima facie and ultima facie obligations. For instance, one could say that the principles of rationality only place prima facie constraints on the attitudes that we are permitted and required to have. Thus, when conflicts occur, one principle will take priority over the other such that there are never any genuine dilemmas.

    But insofar as this view also relies on the idea that the principles of rationality are ranked or weighted in some way, the Problem of Weight Assignment applies here too. Thanks to Sandy Goldberg and Stephen White for a useful discussion here.

  22. See Worsnip (2015, pp. 34–35).

  23. See Worsnip (2015, pp. 34–35).

  24. See Worsnip (2015, p. 35).

  25. For instance, suppose your doctor has loads of first-order evidence that supports the claim that you have a virus (= p). And suppose that on the basis of this first-order evidence your doctor becomes confident that p is true. Finally, suppose that your doctor also acquires some higher-order evidence that strongly suggests that she evaluated the first-order evidence correctly, e.g., she was given a stimulant that makes her 98% reliable at drawing the right conclusions in cases like this.

    Now, if Calibrationism is taken as a wide-scope principle, then your doctor could satisfy it by simply giving up her initial assessment of the first-order evidence, e.g., by saying, “Despite my initial assessment of the evidence, and despite the stimulants, I am no longer judging that you have this virus.” But since calibrationists should maintain that your doctor can only satisfy Calibrationism by becoming highly confident that p is true, it is best to read this principle as having a narrow scope.

  26. Priest (2002), Ross (2010), Brouwer (2014) and Hughes (2017) endorse this view. On one interpretation, Christensen (2007b, 2010) endorses this view too.

  27. For more on OMP and related principles see Greco (2012).

  28. It is worth noting that Brouwer (2014) rejects OMP and all other ought-implies-can principles. According to Brouwer, we should instead accept “ought-implies-blamelessness.” Brouwer argues that by rejecting OMP and embracing ought-implies-blamelessness, proponents of CONFLICT can avoid certain logical objections to the possibility of dilemmas. While it is beyond the scope of this paper to adequately address Brouwer’s proposal, in Leonard (ms. a) I argue that ought-implies-blamelessness is false. Here is a very brief summary of that argument: There are dilemmas in which you ought to do A, and you ought to do B, but if you do B you are blameworthy for doing so. For instance, in Horty’s (2003, p. 100) “double promise” case, suppose that you promise S that you will do A. And suppose that because you forgot that you made this promise, you also promise H that you will do B (where you cannot do both A and B). Now, even if you have a promissory obligation to do B, it still seems appropriate for S to blame you for doing B instead of fulfilling your promise to do A. Thus, you can be blameworthy for doing something that you ought to do. Thus, denying OMP in favor of ought-implies-blamelessness will not help proponents of CONFLICT avoid the objection I am raising here.

  29. See Brink (1994) and Horty (2003) for a related objection to CONFLICT that crucially involves the following principle:

    Agglomeration: If you ought to do A, and if you ought to do B, then you ought to do (A and B).

    The objection is that if CONFLICT is true, then there can be dilemmas in which one ought to do A and one also ought to do B. Thus, because Agglomeration is true, one ought to do (A and B). But ought-implies-can and, ex hypothesi, one cannot do (A and B). Thus, insofar as Agglomeration and ought-implies-can are true, CONFLICT is false.

    Because Brink and Horty have compelling reasons for giving up on Agglomeration, they conclude that this objection does not undermine the possibility of dilemmas. I am sympathetic with their arguments, but it is important to note that the objection I have given does not rely on Agglomeration and is thus not susceptible to the same worries.

    See Leonard (ms. a) for more on the objection given here and how it differs in important ways from previous attempts to show that dilemmas cannot arise for reasons pertaining to deontic logic.

  30. While I am going to put things in terms of classical supervaluationism (e.g., Barnes 2010; Barnes and Williams 2011; Cariani and Santorio 2017), with a bit of tweaking things could be easily reformulated in terms of standard supervaluationism too (e.g., Fine 1975; Keefe 2000).

  31. It is worth pausing here to note that any complete theory of indeterminacy must have something to say about the source of the indeterminacy in question, e.g., is it linguistic, or ontic, or epistemic? INDETERMINACY sheds some light on this question, since it is only compatible with the source of rational indeterminacy being linguistic or ontic; if INDETERMINACY is true, then the source of rational indeterminacy cannot be epistemic. More specifically, it is hard to see how an epistemicist theory of rational indeterminacy could work in cases like LOGIC PROBLEM, since it is implausible to think that the indeterminacy could be adequately characterized in terms of its being in principle unknowable which credences one should have when subject to conflicting requirements. That is, since the dilemma posed by LOGIC PROBLEM does not rest on any kind of sorites series, it is hard to see why one could not in principle come to know which principle overrides the other, e.g., why would it necessarily be the case that one’s belief that Calibrationism overrides Probabilism is unsafe and thus cannot amount to knowledge?

    And while this is only a small step in the right direction, it is the most that can be said without taking a stand on the much larger meta-epistemological project of determining where our epistemological principles come from; that is, just as meta-ethicists are interested in the source of our moral principles, fully answering this question would require one to determine whether we should be realists or anti-realists or whatever about the source of Probabilism, Calibrationism, etc. Thus, the most that should be said here is that if INDETERMINACY is true, then the source of the indeterminacy is either linguistic or ontic. Because settling this issue will be so thorny, the fact that proponents of INDETERMINACY can remain neutral on this question is something that I take to be a feature of the view.

    Thanks to an anonymous referee for some helpful comments here.

  32. See Williams (2014a, b, 2016, 2017) for other cases in which it can be indeterminate what we should do.

  33. It is worth noting that, as things currently stand, proponents of PRIORITY might argue that while the Problem of Weight Assignment needs to be addressed, their view does enjoy the following advantage over INDETERMINACY: All else being equal, it is better for a theory of rationality to make (determinate) recommendations about which attitudes an agent ought to have. Thus, because PRIORITY can always deliver such recommendations in cases of apparent epistemic dilemmas, and because INDETERMINACY cannot, there is a good reason to prefer the former to the latter.

    In Leonard (ms. b) I argue against this line of thought; that is, I offer a positive reason for thinking that our theories of rationality should not always make determinate recommendations about which attitude an agent ought to have in an epistemic dilemma. While fully defending this argument would take us too far afield, in a nutshell the idea is this: In some epistemic dilemmas (i.e., ones in which the truth of the proposition in question crucially depends on the attitude that the agent has) it is fitting for agents to be rationally perplexed about what they should think (see Caie 2012 for an example of such a case). I argue that because it is important to account for the phenomenology here, and because INDETERMINACY can do so whereas PRIORITY cannot, considerations about what it is like to be an agent in the grip of these dilemmas provide a positive reason for accepting INDETERMINACY and rejecting PRIORITY.

    Thanks to an anonymous referee for prompting me to address this point.

  34. This paper has greatly benefited from the help of Amy Flowerree, Sandy Goldberg, Rebecca Mason, Lisa Miracchi, Baron Reed, Blake Roeber, Declan Smithies, Stephen White, and participants at the 2017 Formal Epistemology Workshop and the 2017 meeting of the Society for Exact Philosophy. I am especially grateful to Fabrizio Cariani, David Christensen, and Jennifer Lackey for many insightful comments and fun discussions.

References

  • Baier, K. (1958). The moral point of view: A rational basis of ethics. Philosophical Review, 69(4), 548–553.

  • Barnes, E. (2010). Ontic vagueness: A guide for the perplexed. Noûs, 44, 607–627.

  • Barnes, E., & Williams, R. (2011). A theory of metaphysical indeterminacy. Oxford Studies in Metaphysics, 6, 103–148.

  • Brink, D. (1994). Moral conflict and its structure. The Philosophical Review, 103, 215–247.

  • Broome, J. (1998). Is incommensurability vagueness? In R. Chang (Ed.), Incommensurability, incomparability and practical reason (pp. 67–89). Cambridge: Harvard University Press.

  • Broome, J. (2004). Reasons. In R. J. Wallace, P. Pettit, S. Scheffler, & M. Smith (Eds.), Reason and value: Themes from the moral philosophy of Joseph Raz (pp. 28–55). Oxford: Oxford University Press.

  • Brouwer, T. (2014). A paradox of rejection. Synthese, 191, 4451–4464.

  • Caie, M. (2012). Belief and indeterminacy. Philosophical Review, 121, 1–54.

  • Caie, M. (2013). Rational probabilistic incoherence. Philosophical Review, 122, 527–575.

  • Cariani, F., & Santorio, P. (2017). Will done better. Mind, 127, 129–165.

  • Christensen, D. (1996). Dutch-book arguments depragmatized: Epistemic consistency for partial believers. Journal of Philosophy, 93, 450–479.

  • Christensen, D. (2007a). Epistemology of disagreement: The good news. The Philosophical Review, 116, 187–217.

  • Christensen, D. (2007b). Does Murphy’s law apply in epistemology? Self-doubt and rational ideals. Oxford Studies in Epistemology, 2, 3–31.

  • Christensen, D. (2010). Higher-order evidence. Philosophy and Phenomenological Research, 81, 185–215.

  • Christensen, D. (Forthcoming). Disagreement, drugs, etc.: From accuracy to akrasia. Episteme.

  • Conee, E. (1982). Against moral dilemmas. Philosophical Review, 91, 87–97.

  • De Finetti, B. (1980). Foresight: Its logical laws, its subjective sources. In H. E. Kyburg Jr. & H. E. Smokler (Eds.), Studies in subjective probability (pp. 94–158). New York: Wiley.

  • Egan, A., & Elga, A. (2005). I can’t believe I’m stupid. Philosophical Perspectives, 19, 79–93.

  • Fine, K. (1975). Vagueness, truth and logic. Synthese, 30, 265–300. Reprinted with corrections in R. Keefe & P. Smith (Eds.), Vagueness: A reader (pp. 119–150). Cambridge, MA: MIT Press, 1997.

  • Greco, D. (2012). The impossibility of skepticism. The Philosophical Review, 121, 317–358.

  • Horowitz, S. (2014). Epistemic akrasia. Noûs, 48, 718–744.

  • Horty, J. (2003). Reasoning with moral conflicts. Noûs, 37, 557–605.

  • Hughes, N. (2017). Dilemmic epistemology. Synthese. https://doi.org/10.1007/s11229-017-1639-x.

  • Joyce, J. (1998). A non-pragmatic vindication of probabilism. Philosophy of Science, 65, 575–603.

  • Joyce, J. (2009). Accuracy and coherence: Prospects for an alethic epistemology of partial belief. In F. Huber & C. Schmidt-Petri (Eds.), Degrees of belief (pp. 263–297). Berlin: Springer.

  • Keefe, R. (2000). Theories of vagueness. Cambridge: Cambridge University Press.

  • Kroon, F. (1983). Rationality and paradox. Analysis, 43, 455–461.

  • Lasonen-Aarnio, M. (2014). Higher-order evidence and the limits of defeat. Philosophy and Phenomenological Research, 88, 314–345.

  • Leonard, N. (ms. a). No dilemmas.

  • Leonard, N. (ms. b). Belief and rational indeterminacy.

  • Nagel, T. (1970). The possibility of altruism. Princeton: Princeton University Press.

  • Pettigrew, R. (2016). Accuracy and the laws of credence. Oxford: Oxford University Press.

  • Priest, G. (2002). Rational dilemmas. Analysis, 62, 11–16.

  • Ramsey, F. (1931). Truth and probability. In R. B. Braithwaite (Ed.), The foundations of mathematics and other logical essays (pp. 156–198). New York: Harcourt, Brace and Company.

  • Richter, R. (1990). Ideal rationality and hand-waving. Australasian Journal of Philosophy, 68, 147–156.

  • Ross, J. (2010). Sleeping beauty, countable additivity, and rational dilemmas. Philosophical Review, 119, 411–447.

  • Schoenfield, M. (2014). Permission to believe: Why permissivism is true and what it tells us about irrelevant influences on belief. Noûs, 48, 193–218.

  • Schoenfield, M. (2015). A dilemma for calibrationism. Philosophy and Phenomenological Research, 91, 425–455.

  • Skyrms, B. (1975). Choice and chance (2nd ed.). Encino, CA: Dickenson.

  • Sliwa, P., & Horowitz, S. (2015). Respecting all the evidence. Philosophical Studies, 172(11), 2835–2858.

  • Smithies, D. (2015). Ideal rationality and logical omniscience. Synthese, 192, 2769–2793.

  • Sorensen, R. (1987). Anti-expertise, instability, and rational choice. Australasian Journal of Philosophy, 65, 301–315.

  • Titelbaum, M. (2015). Rationality’s fixed point (or: In defense of right reason). In T. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 5, pp. 253–294). Oxford: Oxford University Press.

  • Turri, J. (2012). A puzzle about withholding. The Philosophical Quarterly, 62, 355–364.

  • White, R. (2009). On treating oneself and others as thermometers. Episteme, 6(3), 233–250.

  • Williams, R. (2014a). Decision making under indeterminacy. Philosophers’ Imprint, 14, 1–34.

  • Williams, R. (2014b). Nonclassical minds and indeterminate survival. Philosophical Review, 123, 379–428.

  • Williams, R. (2016). Angst, indeterminacy, and conflicting value. Ratio, 29, 412–433.

  • Williams, R. (2017). Indeterminate oughts. Ethics, 127(3).

  • Worsnip, A. (2015). The conflict of evidence and coherence. Philosophy and Phenomenological Research, 96, 3–44.


Author information


Correspondence to Nick Leonard.


Cite this article

Leonard, N. Epistemic dilemmas and rational indeterminacy. Philos Stud 177, 573–596 (2020). https://doi.org/10.1007/s11098-018-1195-3
