
Rational self-doubt and the failure of closure


Abstract

Closure for justification is the claim that thinkers are justified in believing the logical consequences of their justified beliefs, at least when those consequences are competently deduced. Many have found this principle to be very plausible. Even more attractive is the special case of Closure known as Single-Premise Closure. In this paper, I present a challenge to Single-Premise Closure. The challenge is based on the phenomenon of rational self-doubt—it can be rational to be less than fully confident in one’s beliefs and patterns of reasoning. In rough outline, the argument is as follows: Consider a thinker who deduces a conclusion from a justified initial premise via an incredibly long sequence of simple competent deductions. Surely, such a thinker should suspect that he has made a mistake somewhere. And surely, given this, he should not believe the conclusion of the deduction even though he has a justified belief in the initial premise.
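
To make the aggregation of risk concrete, here is an illustrative calculation (the numbers are mine, chosen purely for exposition; the paper argues the point qualitatively). Suppose the thinker rationally assigns each step of the deduction a small independent chance $\varepsilon$ of having been botched. Then his rational confidence that the whole $n$-step chain is error-free is roughly

$$(1 - \varepsilon)^n, \qquad \text{e.g.}\ (1 - 0.001)^{1000} \approx 0.37.$$

Even a per-step error risk of one in a thousand, compounded over a thousand steps, leaves the thinker rationally more confident than not that a mistake has occurred somewhere.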


Notes

  1. See Field (2009b).

  2. See Harman (1986, 1995).

  3. I believe that the term “rational self-doubt” is due to Christensen. See, for example, Christensen (2008).

  4. See Lasonen-Aarnio (2008) for a related objection to closure principles for knowledge.

  5. See Kyburg (1970) and Makinson (1965), respectively. There are also the familiar objections to closure for knowledge put forward by Dretske (1970) and Nozick (1981) in their discussions of skepticism. So far as I can tell, the issues raised in this paper have no direct connection with the familiar skeptical challenges.

  6. See, for example, Harman (1986).

  7. Most cognitive psychologists seem to agree that there is a distinctively deductive kind of reasoning. The main debate concerns exactly how it is to be characterized. See Evans et al. (1993) for discussion of the major views. Some psychologists, including Cheng and Holyoak (1985) and Cosmides (1989), suggest that we do not employ topic-neutral rules of inference, but only domain-specific reasoning mechanisms. This view faces several difficulties. But even if it is correct, there remains the question of whether justification is closed under any domain-specific patterns of inference.

  8. Employing a rule of inference should not be taken to require reflective appreciation of the rule. There are familiar obscurities in the notion of following a rule. See Kripke (1982) for the classic discussion and Boghossian (1989) for an overview of the resulting debate. See Boghossian (2008) for discussion of the particular difficulties facing a rule-based picture of reasoning. However, I am not aware of any attractive alternative picture of reasoning.

  9. For my purposes here, all that I really need is that there is a competence/performance distinction for deductive reasoning. Appealing to rules helps to explicate this distinction—thinkers may employ incorrect rules or they may misapply correct ones.

  10. Williamson (2000, p. 117). There are alternative motivations for closure principles. For instance, one could argue that what one’s total evidence supports is always closed under (single-premise) logical entailment and then argue for a tight connection between justification and evidential support. This is a less intuitive and more theory-driven motivation than Williamson’s.

  11. Notice that coherence principles must be stated for justification rather than knowledge. It is trivial that thinkers cannot know every one of a logically inconsistent set of propositions. This provides evidence that the more basic principles linking deductive inference with rationality concern justification rather than knowledge.

  12. Some clarifications about the notion of epistemic responsibility may be helpful here. First, having a responsible belief does not require that the relevant inquiry was carried out in a fully responsible manner. A thinker may have been irresponsible in, for example, not sufficiently gathering evidence and nevertheless count as responsible in forming a belief given the evidence at hand. Second, responsibility should not be identified with blamelessness. In a strict sense of “blame”, we do not typically blame thinkers—that is, have Strawsonian reactive attitudes—for their beliefs. While there may be an extended sense of blame on which we do blame thinkers for their beliefs, so far as I can tell we do not do so in any systematic way. Moreover, on this extended sense, thinkers can count as epistemically irresponsible but blameless if they have an appropriate excuse.

  13. See Hawthorne (2004, p. 33).

  14. For instance, the “retaining justified beliefs” clause is present because thinkers can lose justification for believing the premises of a deduction once they notice that an implausible conclusion follows from them.

  15. Closure should be distinguished from transmission, as introduced in Wright (1985). It is compatible with Closure that in certain cases a thinker cannot acquire additional justification for a belief on the basis of competently deducing it from justified premises. For example, having justified beliefs in the premises may require antecedently possessing justification for believing the conclusion.

  16. Strictly speaking, there is a distinction between the proposition that all the tickets will lose and the conjunction of the 1,000 conjuncts. But since I may be (nearly) certain of their equivalence, this cannot be used to avoid the counterexample.
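
     To spell out the arithmetic behind the counterexample (an illustrative gloss of mine, not part of the original note): in a fair 1,000-ticket lottery with exactly one winner, the probability that any given ticket loses is $999/1000 = 0.999$, plausibly high enough to justify believing of each ticket that it will lose. But the conjunction of all 1,000 such propositions says that every ticket loses, and its probability is $0$, since some ticket must win. Probabilistic coherence itself thus drives a wedge between believing each conjunct and believing the conjunction.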

  17. For some examples, see Pollock (1983), Evnine (1999), and Nelkin (2000).

  18. See Vogel (1990) for helpful discussion.

  19. As I’ve stated the lottery and preface paradoxes, they involve subjects with a bit more cognitive resources—more computational power, better short-term memories, etc.—than we actually possess. This small amount of idealizing does not undermine the use of the counterexamples. Our intuitions about such subjects are as strong as our intuitions about ordinary reasoners. Moreover, there are related cases that don’t require even this small amount of idealization. See Christensen (2004, chap. 3).

  20. Some philosophers have defended the view that I am justified in believing the conjunction despite also having a justified belief that the conjunction is likely false. This view strikes me as deeply unintuitive.

  21. This principle should be generalized to accommodate zero-premise inferences. I’ll leave this generalization implicit in what follows.

  22. I do not know who first stated these objections. They appear to be part of philosophical folklore.

  23. It might be suggested that the no-defeaters clause should be built into the definition of competent deduction. However, it is cleaner to keep it distinct. Whether a thinker has made a competent deduction shouldn’t depend on her meta-beliefs about her reasoning.

  24. Adding a no-defeaters clause may be incompatible with strict forms of Bayesianism. Insofar as it is, this is more of a difficulty with strict forms of Bayesianism than with the no-defeaters clause.

  25. Hume, A Treatise of Human Nature, I.IV.I. The main point of Hume’s discussion in this section is to provide a (fallacious) argument that the belief in the conclusion of an inference isn’t rationally supported by its premises. But the considerations put forward in the excerpt above do not depend on the details of this argument.

  26. Locke, An Essay Concerning Human Understanding, IV.II.6.

  27. There is a puzzle concerning Locke’s view. In Locke, “dimness” is not the absence of light, but the absence of clarity. This is ultimately to be understood in terms of a notion of resemblance with the world. The puzzle is this: If a thinker performs a competent deduction from known premises, the conclusion—no matter how long the inference—should resemble the world just as well as the premises (collectively) do. So why is there any additional dimness in the conclusion? What this suggests is that the real problem with long sequences of deductions fits Hume’s diagnosis. It concerns our awareness of our own fallibility.

  28. One might worry that there are not enough interesting single-premise deductions to cause difficulties for SPC. This is not a serious worry. Simple single-premise deductive rules include conjunction elimination, disjunction introduction, double-negation introduction and elimination, as well as the following conditional rules: A/if B then A; both A and if A then B/B; and if A then B/if it is not the case that A then it is not the case that B. There are also rules that allow us to work within embeddings. For instance, the inference from (A and B and (if B then C) and D) to (A and C and D) plausibly counts as a simple single-premise deductive inference. These are more than sufficient to allow non-trivial sequences of simple single-premise deductions, as the sketch below illustrates. Moreover, if we require the long sequence of deductions to have a single (perhaps conjunctive) initial premise, but we allow multi-premise deductions later in the sequence (using the earlier members of the sequence as premises), we will still have a counterexample to Bayesian views.
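
     As a toy illustration of how such rules chain together (my own example, in the spirit of the note): from the single premise $A \wedge B \wedge (B \rightarrow C) \wedge (C \rightarrow D)$, one embedded modus ponens step yields $A \wedge C \wedge (C \rightarrow D)$, a second yields $A \wedge D$, and conjunction elimination yields $D$. Each step is a simple single-premise deduction, and nesting further conditionals into the initial conjunction extends the chain to any desired length.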

  29. And even if I don’t have any positive evidence for the claim that I’m prone to errors in my reasoning, I should presumably take into account the possibility that I’ve made a mistake.

  30. Mutatis mutandis, this also provides a counterexample to single-premise closure for knowledge.

  31. One might worry that the long chain of deductions is a sorites argument and therefore my conclusion that SPC is false is a hostage to the correct treatment of vagueness. In response, I’d like to make two points. First, the long chain of inferences does not resemble a classical sorites argument in that the major premise, SPC, is not primarily motivated by considerations having to do with vagueness or indeterminacy. It is not motivated by some kind of tolerance in the concept of justification. Rather, it is motivated by the thought that deduction is fully epistemically secure. Second, even were the long chain of deductions a sorites argument, the major contemporary solutions to the sorites paradox—supervaluationism, epistemicism, degree theories, and so on—all agree that the major premise in a classical sorites argument is false. Where they disagree is in what they say next. Thanks to Stew Cohen for pressing me on this issue.

  32. Of course, there is a defeater for one of the steps of the deduction in the thin sense that the premise of the deduction is justified and the conclusion is unjustified. However, modifying SPC by adding a clause to rule out this kind of defeater would trivialize the principle. Moreover, this is not a good way to characterize the intuitive notion of a defeater. Roughly put, a step of an argument is defeated only if that step is to blame for the lack of justification for the conclusion. In the long sequence of deductions, none of the individual steps need be defeated in this thicker sense. Thanks to Stew Cohen for helpful discussion of this issue.

  33. Of course, there may be deductive steps at which the thinker’s rational degree of belief increases—perhaps, for instance, the inference from A to either A or B.

  34. Lasonen-Aarnio (2008) uses related considerations to argue that multi-premise and single-premise closure for knowledge stand or fall together. Her arguments primarily focus on a safety-based conception of knowledge. But one of her central ideas is similar. Given that (i) knowledge is incompatible with a high objective chance of falsity and (ii) the objective chance that I’ve made a mistake can aggregate over long chains of inference, knowledge is not closed under competent deduction. A major difference between her argument and the one presented here is that in the case of justification, the appropriate construal of risk concerns rational degree of confidence rather than objective chance.
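
     To see how risk aggregates over a chain (illustrative numbers of my own): if each step carries an independent chance $\varepsilon$ of error, the chance that an $n$-step chain contains at least one error is $1 - (1 - \varepsilon)^n$. With $\varepsilon = 0.001$, this exceeds $1/2$ once $n$ surpasses roughly $\ln 2/\varepsilon \approx 693$ steps. On Lasonen-Aarnio's construal, the relevant quantity is the objective chance of error; on the construal appropriate to justification, it is the thinker's rational degree of confidence that some step has gone wrong.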

  35. Plausibly, having justification to believe that one’s deductive reasoning is not fully reliable (whether or not one believes it) suffices. So does merely having the relevant belief (whether or not it is justified).

  36. See Alston (1980).

  37. Williamson (forthcoming). Also see Williamson (2009) for relevant discussion.

  38. There is an important contrast between worries about the reliability of the inputs to our reasoning—for instance, from vision—and worries about the reliability of our reasoning itself. But I don’t see how the line of response on offer could be sensitive to this contrast.

  39. See Pollock (1986). My characterization of the distinction between rebutting and undercutting defeat differs from his.

  40. This example is originally due to Pollock.

  41. Christensen (2010) uses the term “higher-order evidence” in discussing this kind of defeat. One way to get a grip on the contrast between undercutting defeat and higher-order defeat is in terms of conditional probabilities. The probability that a wall that looks red is red is presumably greater than the probability that a wall that looks red is red given that the wall is illuminated by red lights. In contrast, suppose that some premise entails some conclusion but that seeing this entailment relies on a complex bit of reasoning. The probability that the conclusion is true given that the premise is true is no greater than the probability that the conclusion is true given that the premise is true and given that I’m unreliable in the relevant kind of reasoning.
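
     A toy probabilistic illustration of the contrast (the numbers are mine): undercutting defeat registers in the conditional probabilities, say $P(\text{red} \mid \text{looks red}) = 0.95$ but $P(\text{red} \mid \text{looks red} \wedge \text{lit by red lights}) = 0.2$, the mere base rate of red walls, since under red lighting every wall looks red. Higher-order defeat does not register this way: if the premise $A$ really does entail the conclusion $C$, then $P(C \mid A) = 1$ and equally $P(C \mid A \wedge \text{I am unreliable at this reasoning}) = 1$. The unreliability information leaves the conditional probabilities untouched, which is why higher-order defeat cannot be assimilated to ordinary undercutting.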

  42. See Elga (unpublished).

  43. See Christensen (2008).

  44. This diagnosis is especially natural for those who endorse a justification or a knowledge norm on action.

  45. See Williamson (2000, pp. 257–258) for an analogous response put forward in defense of a knowledge norm on assertion.

  46. This response faces a second problem. There are apparent cases of higher-order defeat in which I have double-checked my calculations as much as is possible for me. In such cases, there is no prospect of claiming that I am exhibiting some kind of failing in not further checking my reasoning.

  47. See Lewis (1971). Also see Field (2000) and Elga (2010) for closely related arguments.

  48. There is a complication here due to the fact that there are different measures of reliability. For simplicity, I’ll assume that there is a single relevant measure.

  49. Two rules are competitors if they provide incompatible pronouncements on what to believe on some matter given the very same inputs.

  50. The restriction to rules that the thinker could employ is intended to exclude from consideration such rules as “believe all and only the truths”.

  51. I suspect that the second version of the argument is more fundamental than the first.

  52. See Field (2000, 2009a) for responses to this worry.

  53. I owe this way of putting the point to Adam Elga.

  54. Indeed, I should also think that every thinker is justified in having the corresponding belief. Everyone should believe that they are epistemically special. But unlike everyone else, I really am epistemically special. Or so I should think. I owe this point to Phil Galligan.

  55. See Christensen (2008) for discussion of whether rational ideals can be jointly incoherent.

  56. This line of thought is reminiscent of the excerpt from Hume. It is also reminiscent of the regress argument in Carroll (1895), albeit in a much more general setting.

  57. This is essentially the argument that Hume goes on to make in the section that the excerpt is taken from. Hume’s argument is fallacious. Among other problems, we needn’t think we are more likely to have credences that are too high than credences that are too low. So there is no reason to think that our credences will drain away.
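
     An illustrative way to see why symmetry blocks the drain (my own gloss): suppose my credence in $p$ is $0.8$ and I judge that, conditional on my having erred, the correct credence is as likely to be $0.9$ as $0.7$. Correcting for my fallibility then leaves an expected correct credence of $0.8$, and iterating the correction goes nowhere. Credences drain away only on the asymmetric assumption that errors systematically inflate, rather than deflate, confidence.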

  58. Moreover, the thought that deductive rules are exempt from rational policing because they cannot lead us astray seems to depend on an overly reliabilist conception of epistemic responsibility.

  59. Policing as much as is reasonable should not be identified with policing as much as is possible (given the thinker’s cognitive powers). Indeed, it may be problematic to police one’s reasoning as much as is possible. Too much policing may introduce more errors than it corrects. Thanks to Gideon Rosen for pressing me on this issue.

  60. This is one application of what might be called “the Spiderman principle” in epistemology: With greater cognitive power comes greater epistemic responsibility. This is a plausible principle. For instance, it helps to explain why small children have fewer epistemic obligations to check their reasoning than we do.

  61. There are three ways of weakening the closure principle for justification to try to address the long sequence argument while still maintaining that deductive inference is, in some sense and in some circumstances, fully epistemically secure: (i) Closure applies to propositional rather than doxastic justification; (ii) Closure only applies to ideal epistemic agents; (iii) Closure is a rational ideal. I think each of these proposals is untenable, but I do not have the space to argue for this here.

References

  • Alston, W. (1980). Level confusions in epistemology. Midwest Studies in Philosophy, 5, 135–150.

  • Boghossian, P. (1989). The rule-following considerations. Mind, 98, 507–549.

  • Boghossian, P. (2008). Epistemic rules. In Content and justification (pp. 109–134). Oxford: Oxford University Press.

  • Carroll, L. (1895). What the tortoise said to Achilles. Mind, 4, 278–280.

  • Cheng, P., & Holyoak, K. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391–416.

  • Christensen, D. (2004). Putting logic in its place. Oxford: Oxford University Press.

  • Christensen, D. (2008). Does Murphy’s law apply in epistemology? Self-doubt and rational ideals. Oxford Studies in Epistemology, 2, 3–31.

  • Christensen, D. (2010). Higher-order evidence. Philosophy and Phenomenological Research, 81, 185–215.

  • Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31, 187–276.

  • Dretske, F. (1970). Epistemic operators. Journal of Philosophy, 67, 1007–1023.

  • Elga, A. (2010). How to disagree about how to disagree. In R. Feldman & T. Warfield (Eds.), Disagreement. Oxford: Oxford University Press.

  • Elga, A. (unpublished). Lucky to be rational. http://www.princeton.edu/~adame/papers/bellingham-lucky.pdf.

  • Evans, J., Newstead, S., & Byrne, R. (1993). Human reasoning: The psychology of deduction. Hillsdale, NJ: Lawrence Erlbaum.

  • Evnine, S. (1999). Believing conjunctions. Synthese, 118, 201–227.

  • Field, H. (2000). A prioricity as an evaluative notion. In P. Boghossian & C. Peacocke (Eds.), New essays on the a priori (pp. 117–149). Oxford: Oxford University Press.

  • Field, H. (2009a). Epistemology without metaphysics. Philosophical Studies, 143, 249–290.

  • Field, H. (2009b). What is the normative role of logic? Proceedings of the Aristotelian Society, Supplementary Volume, 83, 251–268.

  • Harman, G. (1986). Change in view: Principles of reasoning. Cambridge, MA: MIT Press.

  • Harman, G. (1995). Rationality. In E. Smith & D. Osherson (Eds.), Thinking: An invitation to cognitive science (Vol. 3, pp. 175–211). Cambridge, MA: MIT Press.

  • Hawthorne, J. (2004). Knowledge and lotteries. Oxford: Oxford University Press.

  • Kripke, S. (1982). Wittgenstein on rules and private language. Cambridge, MA: Harvard University Press.

  • Kyburg, H. (1970). Conjunctivitis. In M. Swain (Ed.), Induction, acceptance, and rational belief (pp. 55–82). New York: Humanities Press.

  • Lasonen-Aarnio, M. (2008). Single premise deduction and risk. Philosophical Studies, 141, 157–173.

  • Lewis, D. (1971). Immodest inductive methods. Philosophy of Science, 38, 54–63.

  • Makinson, D. (1965). The paradox of the preface. Analysis, 25, 205–207.

  • Nelkin, D. (2000). The lottery paradox, knowledge, and rationality. The Philosophical Review, 109, 373–409.

  • Nozick, R. (1981). Philosophical explanations. Cambridge, MA: Harvard University Press.

  • Pollock, J. (1983). Epistemology and probability. Synthese, 55, 231–252.

  • Pollock, J. (1986). Contemporary theories of knowledge (1st ed.). Totowa, NJ: Rowman and Littlefield.

  • Vogel, J. (1990). Are there counterexamples to the closure principle? In M. Roth & G. Ross (Eds.), Doubting: Contemporary perspectives on skepticism (pp. 13–27). Dordrecht: Kluwer.

  • Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.

  • Williamson, T. (2009). Reply to Hawthorne and Lasonen-Aarnio. In P. Greenough & D. Pritchard (Eds.), Williamson on knowledge (pp. 313–329). Oxford: Oxford University Press.

  • Williamson, T. (forthcoming). Very improbable knowing. In T. Dougherty (Ed.), Evidentialism and its discontents. Oxford: Oxford University Press.

  • Wright, C. (1985). Facts and certainty. Proceedings of the British Academy, 71, 429–472.


Acknowledgments

Earlier versions of this paper were presented at the Basic Knowledge III workshop at the University of St. Andrews, a workshop on epistemology at the University of Geneva, the Theoretical Philosophy forum at Eötvös University, departmental colloquia at the University of Connecticut and at Princeton University, and the Epistemology Reading Group at MIT. I would like to thank the audiences at these events for their questions and comments. I would also like to thank Maria Lasonen-Aarnio, Paul Boghossian, David Christensen, Stew Cohen, Dylan Dodd, Adam Elga, David Estlund, Hartry Field, Phil Galligan, Michael Huemer, Chris Hill, Ram Neta, Stephen Read, Gideon Rosen, Nico Silins, Paul Silva, Ralph Wedgwood, Tim Williamson, and Zsófia Zvolenszky for helpful discussions at various stages of this project.

Author information


Correspondence to Joshua Schechter.


About this article

Cite this article

Schechter, J. Rational self-doubt and the failure of closure. Philos Stud 163, 429–452 (2013). https://doi.org/10.1007/s11098-011-9823-1
