Abstract
This paper is about epistemic dilemmas, i.e., cases in which one is doomed to have a doxastic attitude that is rationally impermissible no matter what. My aim is to develop and defend a position according to which there can be genuine rational indeterminacy; that is, it can be indeterminate which principles of rationality one should satisfy and thus indeterminate which doxastic attitudes one is permitted or required to have. I am going to argue that this view can resolve epistemic dilemmas in a systematic way while also enjoying some important advantages over its rivals.
Notes
I am going to use "doxastic attitude" to refer to both credences and beliefs. This is so that I can make general claims about importantly different kinds of epistemic dilemmas. I will also use "doxastic attitude" in a loose way such that suspending and withholding judgement count as having a doxastic attitude (as opposed to lacking one). This is just for the sake of convenience; nothing substantive hangs on this.
One example of this is the Paradox of Global Defeat (e.g., Sliwa and Horowitz 2015, p. 2853). Here an agent receives testimony that all of her perceptual faculties are malfunctioning, including the auditory faculties that she used to consume the testimony in question.
For instance, in the Epistemic Liar Paradox (e.g., Caie 2012), there are various ways of showing that there is no permissible attitude for an agent to take towards p, where p is “I do not believe that p is true.”
See Worsnip (2015).
See, e.g., Conee (1982), Kroon (1983), Sorensen (1987) and Richter (1990) on the Anti-Expertise Paradox. Here one should believe that one will believe p just in case p is false. Consequently, any attitude that one has about p will be impermissible. See Caie (2013) and Egan and Elga (2005) for a credal version of this case.
By “higher-order evidence” I mean evidence that bears on the likelihood that one has evaluated one’s first-order evidence correctly, e.g., Christensen (2010).
But what is a tautology? Following Titelbaum (2015), I will stick to a very weak interpretation according to which a tautology is a logical truth that ordinary agents can easily understand. This interpretation is so weak that all Bayesians accept it, and it avoids many of the problems involved with logical omniscience, i.e., it avoids requiring agents to be certain of tautologies that are beyond their ability to comprehend.
Thus, expected reliability is a normative notion, i.e., it is not how reliable one actually expects to be, but how reliable one should expect to be given the higher-order evidence in question.
It is worth noting that even if one rejects Calibrationism in favor of an alternative view (i.e., a view that is neither steadfast nor level-splitting), one will still be saddled with the same bootstrapping problem discussed in fn 14. For this general problem will arise for any non-Calibrationist view whatsoever. Thanks to David Christensen for pointing this out.
Here is one way of putting the bootstrapping argument. Suppose that Calibrationism is false and that Anna should be certain that (L) is true. Suppose also that Anna answers 99 other logic questions like this one. Finally, suppose that while she is answering these problems, she justifiably believes that she is under the influence of Chad’s logic drug.
If Calibrationism is false, then Anna should be certain that all 100 of these tautologies are true. But (and here is the key question) what should Anna think is going on here? From Anna’s own perspective, what explains why she got all of the answers correct even though she knows that, if she is affected by the drug, she will likely get 80 questions wrong?
From her own perspective, it seems like the best explanation for her success is that she was actually immune from the effects of the drug. Thus, if one denies Calibrationism, one should maintain that Anna can bootstrap her way to the conclusion that the drug did not affect her. But this is an illegitimate way for Anna to form a belief about her own physiology. Thus, Calibrationism is true. See Horowitz (2014) and Sliwa and Horowitz (2015) for a more thorough presentation of this argument.
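The bootstrapping worry can be made vivid with a quick Bayesian calculation. The specific numbers below are illustrative assumptions of mine (a 0.2 per-question reliability if the drug works, near-perfect reliability if Anna is immune, and a 0.9 prior that she was drugged); they are a sketch of the case's structure, not figures from Sliwa and Horowitz's own presentation.

```python
# Two hypotheses about Anna: the drug affected her, or she is immune.
# Assumed numbers (mine, for illustration): if drugged, she answers each
# logic question correctly with probability 0.2 (she would likely get
# about 80 of 100 wrong); if immune, with probability 0.999.
p_correct_drugged = 0.2
p_correct_immune = 0.999
n = 100  # number of logic questions answered, all correctly

# She justifiably believes she took the drug, so give that a high prior.
prior_drugged = 0.9

# Likelihood of a perfect run of 100 answers under each hypothesis.
likelihood_drugged = p_correct_drugged ** n
likelihood_immune = p_correct_immune ** n

# Bayes' theorem: posterior probability that the drug affected her,
# conditional on her getting all 100 answers right.
posterior_drugged = (likelihood_drugged * prior_drugged) / (
    likelihood_drugged * prior_drugged
    + likelihood_immune * (1 - prior_drugged)
)
print(posterior_drugged)  # vanishingly small
```

Even with a high prior that the drug was administered, a run of 100 correct answers makes the "drugged" hypothesis astronomically improbable from Anna's own perspective. That is exactly the bootstrapping inference about her own physiology that the Calibrationist deems illegitimate.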
Perhaps one could maintain that this conflict could be easily avoided by weakening Probabilism such that Normality only applies to those tautologies for which one lacks higher-order evidence to the effect that one has botched one’s assessment of the proposition in question. But see Titelbaum (2015) for some serious problems with going this route.
Of course, Probabilism and Calibrationism are not totally uncontroversial. Caie (2013) argues against Probabilism, Schoenfield (2014) and Titelbaum (2015) push back against Calibrationism in general, and Smithies (2015) argues against the verdict that Calibrationism delivers in LOGIC PROBLEM in particular. If one is sympathetic with these arguments, then one can interpret the dialectic thus far as attempting to establish the following conditional claim: if these widely accepted principles are true, then the argument from (1) to (5) is sound.
It is worth re-emphasizing here, though, that this is just our stalking horse and that there are many other epistemic dilemmas that pose the same general worry.
The term “weighted conception” of practical reasons comes from Horty (2003).
While Christensen (2007b, 2010) himself may or may not endorse PRIORITY, some of the arguments he gives suggest that this view should be taken seriously. For instance, he suggests that a view like PRIORITY has something going for it insofar as we are “quite familiar with other ideals that operate as values to be maximized, yet whose maximizations must in certain cases be balanced against, or otherwise constrained by, other values… In ethics, promoting well-being and respecting rights may illustrate a different sort of way in which one ideal constrains another.” (2007b, p. 28).
Horty (2003) makes these two points about a similar view of practical reasons.
There are, of course, other ways of capturing the spirit of PRIORITY, e.g., one could develop an epistemic analog of Ross’s (2010) view of prima facie and ultima facie obligations. For instance, one could say that the principles of rationality only place prima facie constraints on the attitudes that we are permitted and required to have. Thus, when conflicts occur, one principle will take priority over the other such that there are never any genuine dilemmas.
But insofar as this view also relies on the idea that the principles of rationality are ranked or weighted in some way, the Problem of Weight Assignment applies here too. Thanks to Sandy Goldberg and Stephen White for a useful discussion here.
See Worsnip (2015, pp. 34–35).
See Worsnip (2015, pp. 34–35).
See Worsnip (2015, p. 35).
For instance, suppose your doctor has loads of first-order evidence that supports the claim that you have a virus (= p). And suppose that on the basis of this first-order evidence your doctor becomes confident that p is true. Finally, suppose that your doctor also acquires some higher-order evidence that strongly suggests that she evaluated the first-order evidence correctly, e.g., she was given a stimulant that makes her 98% reliable at drawing the right conclusions in cases like this.
Now, if Calibrationism is taken as a wide-scope principle, then your doctor could satisfy it by simply giving up her initial assessment of the first-order evidence, e.g., by saying, “Despite my initial assessment of the evidence, and despite the stimulants, I am no longer judging that you have this virus.” But since calibrationists should maintain that your doctor can only satisfy Calibrationism by becoming highly confident that p is true, it is best to read this principle as having narrow scope.
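The scope distinction at work here can be put schematically. The notation below is mine (with “HOE” abbreviating the agent’s higher-order evidence and Cr her credence function), and it is offered only as a rough rendering of the two readings:

```latex
% Wide-scope reading: the `ought' governs the whole conditional, so one
% can comply either by adopting the recommended credence or by revising
% one's initial assessment of the first-order evidence.
O\big(\,\text{HOE indicates reliability } r \;\rightarrow\; \mathrm{Cr}(p) = r\,\big)

% Narrow-scope reading: given the higher-order evidence, the `ought'
% attaches to the credence itself, so only adopting the recommended
% credence counts as compliance.
\text{HOE indicates reliability } r \;\rightarrow\; O\big(\mathrm{Cr}(p) = r\big)
```

On the narrow-scope reading, the doctor cannot escape the requirement by retracting her initial judgement; given her higher-order evidence, she must become highly confident that p.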
For more on OMP and related principles see Greco (2012).
It is worth noting that Brouwer (2014) rejects OMP and all other ought-implies-can principles. According to Brouwer, we should instead accept “ought-implies-blamelessness.” Brouwer argues that by rejecting OMP and embracing ought-implies-blamelessness, proponents of CONFLICT can avoid certain logical objections to the possibility of dilemmas. While it is beyond the scope of this paper to adequately address Brouwer’s proposal, in Leonard (ms. a) I argue that ought-implies-blamelessness is false. Here is a very brief summary of that argument: There are dilemmas in which you ought to do A, and you ought to do B, but if you do B you are blameworthy for doing so. For instance, in Horty’s (2003, p. 100) “double promise” case, suppose that you promise S that you will do A. And suppose that because you forgot that you made this promise, you also promise H that you will do B (where you cannot do both A and B). Now, even if you have a promissory obligation to do B, it still seems appropriate for S to blame you for doing B instead of fulfilling your promise to do A. Thus, you can be blameworthy for doing something that you ought to do. Thus, denying OMP in favor of ought-implies-blamelessness will not help proponents of CONFLICT avoid the objection I am raising here.
See Brink (1994) and Horty (2003) for a related objection to CONFLICT that crucially involves the following principle:
Agglomeration: If you ought to do A, and if you ought to do B, then you ought to do (A and B).
The objection is that if CONFLICT is true, then there can be dilemmas in which one ought to do A and one also ought to do B. Thus, because Agglomeration is true, one ought to do (A and B). But ought implies can and, ex hypothesi, one cannot do (A and B). Thus, insofar as Agglomeration and ought-implies-can are true, CONFLICT is false.
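The structure of this objection can be laid out as a short derivation; this is just a regimentation of the reasoning above, using O for “ought” and the diamond for “can”:

```latex
\begin{align*}
&1.\ O(A) && \text{first horn of the putative dilemma}\\
&2.\ O(B) && \text{second horn}\\
&3.\ O(A \wedge B) && \text{from 1, 2 by Agglomeration}\\
&4.\ \Diamond(A \wedge B) && \text{from 3 by ought-implies-can}\\
&5.\ \neg\Diamond(A \wedge B) && \text{ex hypothesi: $A$ and $B$ cannot both be done}
\end{align*}
```

Since lines 4 and 5 are contradictory, one of CONFLICT, Agglomeration, or ought-implies-can must be given up.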
Because Brink and Horty have compelling reasons for giving up on Agglomeration, they conclude that this objection does not undermine the possibility of dilemmas. I am sympathetic with their arguments, but it is important to note that the objection I have given does not rely on Agglomeration and is thus not susceptible to the same worries.
See Leonard (ms. a) for more on the objection given here and how it differs in important ways from previous attempts to show that dilemmas cannot arise for reasons pertaining to deontic logic.
It is worth pausing here to note that any complete theory of indeterminacy must have something to say about the source of the indeterminacy in question, e.g., is it linguistic, or ontic, or epistemic? INDETERMINACY sheds some light on this question, since it is only compatible with the source of rational indeterminacy being linguistic or ontic; if INDETERMINACY is true, then the source of rational indeterminacy cannot be epistemic. More specifically, it is hard to see how an epistemicist theory of rational indeterminacy could work in cases like LOGIC PROBLEM, since it is implausible to think that the indeterminacy could be adequately characterized in terms of it being in principle unknowable as to which credences one should have when subject to conflicting requirements. That is, since the dilemma posed by LOGIC PROBLEM does not rest on any kind of sorites series, it is hard to see why one could not in principle come to know which principle overrides the other, e.g., why would it necessarily be the case that one’s belief that Calibrationism overrides Probabilism is unsafe and thus cannot amount to knowledge?
And while this is only a small step in the right direction, it is the most that can be said without taking a stand on the much larger meta-epistemological project of determining where our epistemological principles come from; that is, just as meta-ethicists are interested in the source of our moral principles, fully answering this question would require one to determine whether we should be realists or anti-realists or whatever about the source of Probabilism, Calibrationism, etc. Thus, the most that should be said here is that if INDETERMINACY is true, then the source of the indeterminacy is either linguistic or ontic. Because settling this issue will be so thorny, the fact that proponents of INDETERMINACY can remain neutral on this question is something that I take to be a feature of the view.
Thanks to an anonymous referee for some helpful comments here.
It is worth noting that, as things currently stand, proponents of PRIORITY might argue that while the Problem of Weight Assignment needs to be addressed, their view does enjoy the following advantage over INDETERMINACY: All else being equal, it is better for a theory of rationality to make (determinate) recommendations about which attitudes an agent ought to have. Thus, because PRIORITY can always deliver such recommendations in cases of apparent epistemic dilemmas, and because INDETERMINACY cannot, there is a good reason to prefer the former to the latter.
In Leonard (ms. b) I argue against this line of thought; that is, I offer a positive reason for thinking that our theories of rationality should not always make determinate recommendations about which attitude an agent ought to have in an epistemic dilemma. While fully defending this argument would take us too far afield, in a nutshell the idea is this: In some epistemic dilemmas (i.e., ones in which the truth of the proposition in question crucially depends on the attitude that the agent has), it is fitting for agents to be rationally perplexed about what they should think (see Caie 2012 for an example of such a case). I argue that because it is important to account for the phenomenology here, and because INDETERMINACY can do so whereas PRIORITY cannot, considerations about what it is like to be an agent in the grip of these dilemmas provide a positive reason for accepting INDETERMINACY and rejecting PRIORITY.
Thanks to an anonymous referee for prompting me to address this point.
This paper has greatly benefited from the help of Amy Flowerree, Sandy Goldberg, Rebecca Mason, Lisa Miracchi, Baron Reed, Blake Roeber, Declan Smithies, Stephen White, and participants at the 2017 Formal Epistemology Workshop and the 2017 meeting of the Society for Exact Philosophy. I am especially grateful to Fabrizio Cariani, David Christensen, and Jennifer Lackey for many insightful comments and fun discussions.
References
Baier, K. (1958). The moral point of view: A rational basis of ethics. Ithaca: Cornell University Press.
Barnes, E. (2010). Ontic vagueness: A guide for the perplexed. Nous, 44, 607–627.
Barnes, E., & Williams, R. (2011). A theory of metaphysical indeterminacy. Oxford Studies in Metaphysics, 6, 103–148.
Brink, D. (1994). Moral conflict and its structure. The Philosophical Review, 103, 215–247.
Broome, J. (1998). Is incommensurability vagueness? In R. Chang (Ed.), Incommensurability, incomparability and practical reason (pp. 67–89). Cambridge, MA: Harvard University Press.
Broome, J. (2004). Reasons. In R. J. Wallace, P. Pettit, S. Scheffler, & M. Smith (Eds.), Reason and value: Essays on the moral philosophy of Joseph Raz (pp. 28–55). Oxford: Oxford University Press.
Brouwer, T. (2014). A paradox of rejection. Synthese, 191, 4451–4464.
Caie, M. (2012). Belief and indeterminacy. Philosophical Review, 121, 1–54.
Caie, M. (2013). Rational probabilistic incoherence. Philosophical Review, 122, 527–575.
Cariani, F., & Santorio, P. (2017). Will done better. Mind, 127, 129–165.
Christensen, D. (1996). Dutch-book arguments depragmatized: Epistemic consistency for partial believers. Journal of Philosophy, 93, 450–479.
Christensen, D. (2007a). Epistemology of disagreement: The good news. The Philosophical Review, 116, 187–217.
Christensen, D. (2007b). Does Murphy’s law apply in epistemology? Self-doubt and rational ideals. Oxford Studies in Epistemology, 2, 3–31.
Christensen, D. (2010). Higher-order evidence. Philosophy and Phenomenological Research, 81, 185–215.
Christensen, D. (Forthcoming). Disagreement, drugs, etc.: From accuracy to akrasia. Episteme.
Conee, E. (1982). Against moral dilemmas. Philosophical Review, 91, 87–97.
De Finetti, B. (1980). Foresight: Its logical laws, its subjective sources. In H. E. Kyburg Jr. & H. E. Smokler (Eds.), Studies in subjective probability (pp. 94–158). New York: Wiley.
Egan, A., & Elga, A. (2005). I can’t believe I’m stupid. Philosophical Perspectives, 19, 79–93.
Fine, K. (1975). Vagueness, truth and logic. Synthese, 30, 265–300. Reprinted with corrections in R. Keefe & P. Smith (Eds.), Vagueness: A reader (pp. 119–150). Cambridge, MA: MIT Press, 1997.
Greco, D. (2012). The impossibility of skepticism. The Philosophical Review, 121, 317–358.
Horowitz, S. (2014). Epistemic akrasia. Nous, 48, 718–744.
Horty, J. (2003). Reasoning with moral conflicts. Nous, 37, 557–605.
Hughes, N. (2017). Dilemmic epistemology. Synthese. https://doi.org/10.1007/s11229-017-1639-x.
Joyce, J. (1998). A non-pragmatic vindication of probabilism. Philosophy of Science, 65, 575–603.
Joyce, J. (2009). Accuracy and coherence: Prospects for an alethic epistemology of partial belief. In F. Huber & C. Schmidt-Petri (Eds.), Degrees of belief (pp. 263–297). Berlin: Springer.
Keefe, R. (2000). Theories of vagueness. Cambridge: Cambridge University Press.
Kroon, F. (1983). Rationality and paradox. Analysis, 43, 455–461.
Lasonen-Aarnio, M. (2014). Higher-order evidence and the limits of defeat. Philosophy and Phenomenological Research, 88, 314–345.
Leonard, N. (ms. a). No dilemmas.
Leonard, N. (ms. b). Belief and rational indeterminacy.
Nagel, T. (1970). The possibility of altruism. Princeton: Princeton University Press.
Pettigrew, R. (2016). Accuracy and the laws of credence. Oxford: Oxford University Press.
Priest, G. (2002). Rational dilemmas. Analysis, 62, 11–16.
Ramsey, F. (1931). Truth and probability. In R. B. Braithwaite (Ed.), The foundations of mathematics and other logical essays (pp. 156–198). New York: Harcourt, Brace and Company.
Richter, R. (1990). Ideal rationality and hand-waving. Australasian Journal of Philosophy, 68, 147–156.
Ross, J. (2010). Sleeping beauty, countable additivity, and rational dilemmas. Philosophical Review, 119, 411–447.
Schoenfield, M. (2014). Permission to believe: Why permissivism is true and what it tells us about irrelevant influences on belief. Nous, 48, 193–218.
Schoenfield, M. (2015). A dilemma for calibrationism. Philosophy and Phenomenological Research, 91, 425–455.
Skyrms, B. (1975). Choice and chance (2nd ed.). Encino, CA: Dickenson.
Sliwa, P., & Horowitz, S. (2015). Respecting all the evidence. Philosophical Studies, 172(11), 2835–2858.
Smithies, D. (2015). Ideal rationality and logical omniscience. Synthese, 192, 2769–2793.
Sorensen, R. (1987). Anti-expertise, instability, and rational choice. Australasian Journal of Philosophy, 65, 301–315.
Titelbaum, M. (2015). Rationality’s fixed point (or: In defense of right reason). In T. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 5, pp. 253–294). Oxford: Oxford University Press.
Turri, J. (2012). A puzzle about withholding. The Philosophical Quarterly, 62, 355–364.
White, R. (2009). On treating oneself and others as thermometers. Episteme, 6(3), 233–250.
Williams, R. (2014a). Decision making under indeterminacy. Philosopher’s Imprint, 14, 1–34.
Williams, R. (2014b). Nonclassical minds and indeterminate survival. Philosophical Review, 123, 379–428.
Williams, R. (2016). Angst, indeterminacy, and conflicting value. Ratio, 29, 412–433.
Williams, R. (2017). Indeterminate oughts. Ethics, 127(3).
Worsnip, A. (2015). The conflict of evidence and coherence. Philosophy and Phenomenological Research, 96, 3–44.
Leonard, N. Epistemic dilemmas and rational indeterminacy. Philos Stud 177, 573–596 (2020). https://doi.org/10.1007/s11098-018-1195-3