Stop Making Sense? On a Puzzle about Rationality

Clayton Littlejohn
cmlittlejohn@gmail.com

Forthcoming in Philosophy and Phenomenological Research

1. Introduction

On a natural way of drawing the line between the internal and the external, knowledge is an externalist notion.1 Knowledge requires a proper fit between appearance and reality, so the stuff knowledge is made of isn't just in the head. It might seem clear to you that p, you might have strong evidence for p, and you might reason as carefully as anyone can in concluding that p, but p still might be false. If, however, it seems clear to you that p, you have incredibly strong evidence for p, and you reason carefully in concluding that p, isn't there something good about believing what you do? If all the available evidence supports p, it might be unreasonable for you not to believe p. If that's right, maybe it's just the stuff in the head that matters to rationality. The gap between appearance and reality is a potential threat to knowledge, but it doesn't normally seem to be a direct threat to rationality. Consider the new evil demon case.2 Your non-factive mental duplicate is deceived by a Cartesian demon. Everything you see and remember, they seem to see and remember. Everything that strikes you as plausible strikes them that way, too. You reason in just the same ways. You draw all and only the same conclusions. In spite of this, there are vast differences in what you know. In spite of this, there doesn't seem to be any difference in how rational your beliefs are. This suggests that neither the presence of the appearance-reality gap nor the things on the far side of it have any direct bearing on what's rational to believe. Perhaps this is because the absence or presence of such things doesn't have any direct bearing on what's intelligible from your point of view. A natural explanation as to why rationality supervenes upon the mental is an evidentialist explanation.
The reason that facts about your mental states wholly determine whether it's rational for you to believe a proposition is that facts about your mental states determine what evidence you have and evidential support relations determine what's rational for you to believe. If your evidence provides sufficiently strong support for your beliefs, they're rational. If they're rational, it's because they're supported to a sufficient degree by the evidence. This evidentialist explanation is not uncontroversial, but neither is it unpopular. There has been a debate about whether your evidence supervenes upon your non-factive mental states, but I'd like to bracket this issue. We haven't paid enough attention to the second part of the evidentialist explanation.3 Should we say that rational beliefs are rational because the evidence provides sufficiently strong support for them? Even if we grant that rationality supervenes upon the evidence, this is a stronger claim, a grounding thesis. It's not clear whether we should think that it's true. I don't think it is true. I don't think epistemic rationality is merely a matter of strong evidential support.

1 I would like to thank Maria Alvarez, Mary Carman, Adam Carter, Charles Cote-Bouchard, Earl Conee, Christina Dietz, Paul Doody, Trent Dougherty, Claire Field, Branden Fitelson, John Gibbons, John Hawthorne, Frank Hoffman, Nick Hughes,

2 See Cohen (1984) and Wedgwood (2002). While the intuitions that underwrite the new evil demon point tell us something important about rationality, I don't think they tell us anything about justification. In Littlejohn (forthcoming), I argue that rationality and justification have to be distinguished on the grounds that the former is required for certain kinds of excuses.
My target is an evidentialist view with three core commitments:

Dependence: If you rationally believe p, you have evidence for p that provides sufficiently strong support for p.4

Priority: The possession of evidence for p is independent from and prior to the rational status of your belief concerning p.5

Structural Sufficiency: If you have evidence for p that provides sufficiently strong support for p, it's rational to believe p.6

Structural sufficiency says that there's a reason why the evidence plays the rational role that it does. By providing a level of support that crosses some sort of threshold, the evidence makes it rational to believe what's rational to believe. We should reject structural sufficiency. It's possible for two propositions to receive the same level of evidential support where it's rational to believe only one of them. Thus, strong evidential support isn't the stuff that rationality is made of. The argument against evidentialism will be indirect. I will present a puzzle about rationality, discuss three potential solutions, and show that we need to reject evidentialism to solve the puzzle. Even if rationality supervenes upon the mental, evidentialism doesn't explain why this is.

3 While Conee and Feldman (2004) defend the view that evidence supervenes upon a subject's non-factive mental states, Alvarez (2010), Hyman (1999), Littlejohn (2012), Mantel (2013), Mitova (forthcoming), McDowell (1998), Pritchard (2012), and Williamson (2000) argue against this supervenience thesis.

4 Anscombe (1962) argues that the knowledge you have of the position of your limbs constitutes a perfectly good counterexample to Dependence. Littlejohn (forthcoming b) and McGinn (2012) argue that perceptual beliefs constitute knowledge without being held for reasons.

5 On the more plausible accounts of having reasons (e.g., views on which having p as a reason involves knowing or justifiably believing p), the possession of reasons involves a normative dimension that's incompatible with the idea that possessed reasons ground positive epistemic status. For challenges to Priority, see Beddor (forthcoming), Littlejohn (2012, forthcoming b), and Sosa and Sylvan (forthcoming).

6 See Conee and Feldman (2004).

2. The Puzzle

There's been considerable debate recently about (putative) rational requirements such as these:

L: If t is a ticket for a fair lottery with more than 100,000 tickets that hasn't yet been drawn, you're rationally required to believe that t is a losing ticket.7

D: If you acknowledge that your peer disagrees with you about whether p, you're rationally required to refrain from believing p.8

The arguments can pull us in different directions. I've changed my mind about L. When this happened, novel considerations seemed to give me good reason to change my mind. The optimist in me thinks that they could have made it rational for me to believe L. The pluralist in me thinks that my opponents could have had strong evidence for their views about L, mistaken though they were. If we suppose that it's possible to have evidence that supports L, it should be possible to have evidence that provides a sufficient level of support for L:

1. You have sufficient evidence for you to believe L.9

If (1) is correct and you believe L on the basis of this evidence, structural sufficiency tells us:

2. You rationally believe L.

The move from (1) to (2) seems plausible. Just as we know that there are features of your perspective that can make it rational to believe you have hands by making it seem that you have them, don't we know that there are features of your perspective that can make it rational to believe L by making it seem as if L is true? Bearing in mind what L says, suppose I give you a ticket for a lottery. Let p be the proposition that your ticket is a loser. Since you rationally believe L, this seems to follow from (2):

3.
You rationally believe that rationality requires you to believe p.

With this belief in place and with its blessing from rationality, it's hard to see how rationality could then require you to refrain from believing that the ticket I just gave you is a loser. If it doesn't, we have this:

4. You rationally believe p.

Here's the turn. All that I've told you about L is that you have sufficiently strong evidence for it. I never said whether it was true. Can't we suppose that L is false? If so, we get this:

5. L is false. You're actually rationally required to refrain from believing p.

It's hard to see how rationality could require you to believe and refrain from believing the very same proposition, so we have to give something up. This is the puzzle.

7 See Hawthorne (2004) and Nelkin (2000).

8 See Feldman (2006).

9 If a belief has the kind of 'sufficient' support at issue, it might be irrational but it won't be irrational for want of evidential support. It doesn't seem terribly plausible to deny (1) because there are things that we rationally believe about rationality. If this fact isn't itself trouble for the evidentialist, then we often have sufficient evidence to believe things about rationality. If we can have strong evidence to believe things about rationality, it seems that somebody could have strong evidence for L.

3. Three Responses

Let's consider three responses to our puzzle. The first starts from the idea that features of your perspective make it reasonable to believe things generally. By making it rational to believe L, they thereby have an effect on whether it's rational for you to believe lottery propositions. According to the perspectivist, rationality requires a mesh between your beliefs and your beliefs about rationality:

Enkratic Requirement: Rationality requires that you don't both believe that you're rationally required to believe p and refrain from believing p.
If features of your perspective make your beliefs about rationality rational, they'll help to determine whether it's rational for you to, say, believe lottery propositions. Perspectivists accept (1)-(4) and reject (5). The Enkratic Requirement implies that it's not possible for certain kinds of mistaken beliefs about rationality to be rational:

Fixed-Point Thesis: If you believe that rationality requires believing p, this belief is either true or rationally prohibited.11

This is a surprising consequence of the Enkratic Requirement. Some people don't like surprises. You might think that there can be rational mistakes about just about anything. The best evidence might be misleading. If it's good enough evidence, it might make mistakes reasonable. If you think strong but misleading evidence can make it rational to form mistaken beliefs about rationality, you're an incoherentist.12 The incoherentist thinks that there can be rationally acceptable 'mismatches' where rationality permits refraining from believing p even if you rationally believe belief is required. Incoherentists think it's fine to stipulate that (5) is true. They reject (4) and try to show that (1)-(3) don't support it.

Objectivists agree with the perspectivist that rationality requires you to be enkratic.13 They disagree with the perspectivist about the bearing that beliefs about rationality have on the rational status of our first-order attitudes. Objectivists think that we should think of the requirements of rationality as independent targets our attitudes aim to hit when we're thinking about rationality. When our beliefs about rationality miss their targets, they're irrational. This means that beliefs about rationality are different from beliefs about the weather, but this is a difference we have to live with. The facts on the bottom place constraints on what's rational to believe about rationality. Objectivists reject (2).

4. Perspectivism

Perspectivists try to solve the puzzle by denying (5) on the grounds that it conflicts with (1)-(4).
They deny that we can specify the rational requirements that apply to you without taking any account of your perspective. The internal connections between your perspective and your beliefs determine whether they're rational and determine whether something like L applies to you. Let's look at two arguments for perspectivism.

10 Feldman (2005), Foley (2001) and Gibbons (2014) are the writers who seem to be the most sympathetic to perspectivism. Broome (2013), Greco (2014), Horowitz (2014), Ichikawa and Jarvis (2013), and Smithies (2012) defend the enkratic requirement and/or fixed-point thesis for justification or rationality, but it's not clear whether they should be classified as perspectivists or objectivists.

11 Titelbaum (2015, forthcoming) shows that the fixed-point thesis can be derived from the enkratic requirement. He takes the enkratic requirement to be intuitive, as do I, but offers little positive argument for it. I hope to supplement his arguments and offer an explanation of the requirement below.

12 See Coates (2012) and Lasonen-Aarnio (MS) for defenses of incoherentism.

13 Titelbaum (2015, forthcoming) seems to be an objectivist. Littlejohn (2012), Sutton (2007), Steglich-Peterson (2013), and Williamson (forthcoming) defend similar views concerning justification, but not for rationality.

The intelligibility argument starts with the observation that a rational response must be intelligible from the subject's perspective. Suppose a subject's options always contain at least one rationally permitted option. Rationality couldn't reasonably require you to refrain from believing, refrain from disbelieving, and refrain from refraining. Suppose that your evidence strongly supports the belief that you're rationally required to believe p. If so, you might think that if the subject believes that she's rationally required to believe p, this belief is rationally permitted. Suppose that's right and that's what she believes.
Which of the following options would be rationally permitted?

1. Believing ~p whilst rationally believing that she's rationally required to believe p.
2. Believing neither p nor ~p whilst rationally believing that she's rationally required to believe p.
3. Believing p whilst rationally believing that she's rationally required to believe p.

It isn't intelligible to suspend or disbelieve in light of the belief that believing is rationally required, so (1) and (2) fail the intelligibility test. Thus, if one option is rationally permitted, it's (3). Perspectivism is vindicated.14

The second argument for perspectivism is the evidentialist argument. Evidentialists say we should respect all the evidence, including higher-order evidence. Suppose, if only for reductio, there's a counterexample to perspectivism. The counterexample would have to be a case in which (a) you rationally believe that you're rationally required to believe p and (b) believing p isn't rationally permitted. Feldman (2005) thinks this is impossible. If (a) holds, there's sufficient evidence for believing that you're rationally required to believe p. If (b) holds, there's not sufficient support for believing p. The trouble with this description of the case, Feldman says, is that the evidence that supports the epistemic belief and ensures that (a) is met is evidence that supports the first-order belief p, in which case (b) isn't met.

4.1 A Response

Let's start with the evidentialist argument. One problem with it is that Feldman overlooks the possibility of having sufficiently strong evidence for an anti-evidentialist view of rationality, such as a view on which you can be rationally required to believe p in the absence of evidence when such a belief is desirable. While such a view strikes us as implausible, there might have been subjects that believed it on strong evidence. We don't know what happened in William James' basement.
Maybe he locked students away in cages and fed them on a diet of gruel and arguments for the pragmatist view of rationality just described. If their evidence supported this pragmatist view, the evidentialist should recognize that these views were rationally held. If so, evidentialism says that these subjects would rationally believe that they were rationally required to believe p even when they knew that p wasn't supported by evidence. The level of evidential support for p wouldn't be sufficient for rationality, not even if the belief that p is rationally required was supported to a sufficiently high degree. Evidentialism predicts counterexamples to the enkratic requirement, so there's no good evidentialist argument for perspectivism.

14 See Gibbons (2014). Foley (2001) might also be sympathetic to this line of argument. Fantl and McGrath (2009) offer a similar argument for internalism about justification.

We can see the tension between evidentialism and perspectivism if we consider someone reasoning as follows:

P1. There's sufficient evidential support for p.
P2. If there's sufficient evidential support for p, I'm rationally required to believe p.
C. I'm rationally required to believe p.

Suppose that, in keeping with evidentialism, you know (P2). Suppose, however, that you have strong but misleading evidence for (P1). Evidentialists and perspectivists agree that you rationally believe both premises. They should agree that you could rationally accept (C). Since (P1) is false, however, there isn't sufficient evidence for p. The evidentialists should say, then, that the argument's conclusion is false even if both premises are rationally believed. Perspectivists have to accept the argument's conclusion and reject the evidentialist's dependence thesis. Might we accept perspectivism and reject evidentialism?
Doesn't the intelligibility argument show that it's possible to rationally believe things without evidence (e.g., when we believe on misleading evidence that we have sufficient evidence for believing p)? I think not. While I'm not a fan of evidentialism, I worry that the perspectivist objection overgeneralizes. It threatens to lead to a kind of epistemic anarchism on which there aren't any principles that specify the conditions that determine what's rationally required of all rational subjects. The perspectivist objection to evidentialism applies to any view that says that there's at least one rational requirement with these two properties:

1. The requirement states, 'If C obtains, you're rationally permitted to believe p', where the fact that C obtains isn't a fact about rationality.
2. C's obtaining is a necessary condition for rational permission (i.e., if C doesn't obtain, you're rationally prohibited from believing p).

Since it seems that any plausible account of epistemic rationality will posit at least some norm that has these two properties, we should reject perspectivism. To see this, notice that a subject's evidence might provide arbitrarily strong support for (a) false propositions about the rational significance of C or (b) false propositions about whether C obtains.15 Under these conditions, perspectivists would say that what's rationally required isn't what the (putative) principle states. The perspectivist would say that the principle is spurious. It seems to be a pretty weak requirement on a theory of rational belief that it recognizes at least one rational requirement that meets both conditions, so I think there's something wrong with the perspectivist idea that it makes sense from our perspective to conform to the requirements of rationality. The price we pay for categorical rational requirements, requirements that are binding on us even when we're not properly sensitive to their demands, is the rejection of the perspectivist view.
15 For further discussion of this point and its significance, see Littlejohn (2014, forthcoming) and Srinivasan (forthcoming).

There's a truth in the neighborhood of the evidentialist's priority thesis that we mustn't lose sight of. Thoughts about rationality don't make things rational, not even when backed by evidence. Consider the relationship between a subject's attitudes about fitting emotional responses and fitting emotional responses. We don't think that part of what determines whether anger or joy is fitting is a subject's attitude towards whether it's fitting. Having strong evidence for your theory of fitting emotional response wouldn't make it rational for you to be angered by the sight of animals happily basking in the sun, not even if that followed from your theory. If rational belief is anything like a fitting response to accessible features of your situation, we should likewise be skeptical of the suggestion that the fittingness of such doxastic responses is determined by beliefs about rational responses.16 If rational support doesn't flow down from evidence for beliefs about rationality to the beliefs you take to be rational, either rationality doesn't care about whether your higher-order and lower-order attitudes mesh (as the incoherentists believe) or the constraints that apply to the lower-order attitudes apply all the way up (as the objectivists believe).

5. Incoherentism

Incoherentism is a natural choice for evidentialists. If strong evidential support for beliefs about rationality doesn't invariably trickle down to provide evidential support for beliefs we think we're required to have, shouldn't we reject the enkratic requirement? The incoherentists think so. They think that structural sufficiency shows us that there are counterexamples to the fixed-point thesis, cases in which there's sufficient evidential support for believing false propositions about the requirements of rationality.
Since arguments against the fixed-point thesis are, inter alia, arguments against the enkratic requirement, the incoherentist thinks that we can solve the puzzle by rejecting this requirement. By doing so, we can retain (5) and retain (1)-(3).17

16 Once we see why Feldman's argument for perspectivism isn't a goer, perspectivists shouldn't be tempted to think that some formal or structural relationship is in place so that the evidence that supports higher-order beliefs provides sufficiently strong support for lower-order attitudes. So, while they might not describe their view as a view on which attitudes about rationality make lower-order beliefs rational, they really can't say that such higher-order beliefs merely ensure that there's sufficient support for lower-order attitudes.

17 See Coates (2012) and Lasonen-Aarnio (MS).

Unfortunately, incoherentism is hard on our intuitions. Consider a dramatization of an exchange between you and your epistemic conscience:

EC: Let's start with the bad news. These are the results of your periodic epistemic evaluation. We've flagged a lot of your first-order doxastic responses for irrationality. Do you want to start with omissions or commissions?
You: Omissions.
EC: Fine. You don't believe p.
You: That's right.
EC: Right, I know you know that. It's irrational. You're rationally required to believe p.
You: That seems right to me.
EC: I thought you'd say that. You don't seem to remember, but I told you the same thing on the last three visits. And yet, here we are. Look, if you don't agree with my assessments, just tell me. I'm starting to worry that you don't take this seriously.
You: On the contrary! I take this very seriously. I agreed earlier and I agree with you now.
EC: So, what gives? If you agree that it's irrational for you not to believe p, why are you just sitting there? Why don't you get up and change your mind?
You: I'm not sure that that's called for. I agree that it's not rational for me to refrain from believing p.
I believe that. Really, that seems obvious to me. I just don't know what change is called for.
EC: Is that because you're waiting for the good news? We've run the tests and your beliefs about rationality are all fine.
You: Oh, I expected as much. I'm certain that my higher-order beliefs are all rational.
EC: I've lost the thread. You agree that it's irrational for you not to believe p. You agree that it's rational for you to agree on this point. You acknowledge that you don't believe p. You just don't yet see that this calls for any sort of change.
You: Right.
EC: Should we continue with these evaluations?
You: Yes, of course we should, they're very important.

When you discover a mismatch, the discovery should be the beginning of epistemic self-assessment and revision, not the conclusion of it. If, however, the incoherentist is right, your akratic state might be just the thing that's keeping you in line with the requirements of rationality. In the exchange with your epistemic conscience, you don't seem very reasonable, so it's hard to see how maintaining your akratic position could be preferable from the point of view of rationality to alternatives in which you conform to the enkratic requirement.

There's a further reason to be uneasy about this idea of rational epistemic akrasia. Suppose someone believes evidentialism. Suppose she has sufficient evidence to believe that she's rationally required to believe p but she doesn't believe p. She violates the enkratic requirement. Incoherentists should think that it doesn't matter to the rationality of her relevant attitudes whether she knows that she doesn't believe p or not, so let's say that she knows that she doesn't believe p.18 If she's aware that she doesn't believe p, it seems to her that she cannot settle the question whether p. While she takes the question to be open, she thinks that her evidence doesn't merely support p; it requires her to settle the question whether p in a particular way.
It's hard to understand how she could (a) rationally take the question to be one that she cannot now settle if (b) she also thinks that her evidence rationally compels her to settle it in a particular way. If you judge that your evidence rationally compels you to believe that the correct answer to the question whether p is p, wouldn't any reasonable person take that question to be closed?19 The mindset of this person is opaque. It's hard to see how rationality could sanction such a mindset. If rationality requires you not to knowingly violate the enkratic requirement, it should require you not to violate it at all.

18 The incoherentist shouldn't think that it matters whether the subject knows that she doesn't believe p. The evidentialist view seems to predict that there will be counterexamples to the enkratic requirement even when the subject knows that she doesn't believe p.

6. Objectivism

Objectivism is the best of a bad bunch. Because objectivists recognize the enkratic requirement, they avoid objections to incoherentism. The argument from perspectivism to epistemic anarchism assumed that the internal connections between features of a subject's perspective and her attitudes about rationality wholly determined whether those attitudes were rational. The objectivist doesn't think that such internal connections are sufficient on their own to make the relevant attitudes rational because they don't guarantee that they'd hit an independent target. The argument for epistemic anarchism is blocked from the outset. We solve the puzzle by denying that (1) establishes (2). Doesn't this point to obvious problems with objectivism? The only defenses of the enkratic requirement and fixed-point thesis appeal to contested intuitions or arguments that support perspectivism. One might reasonably worry that objectivist responses to the puzzle are ad hoc.
What's worse is that objectivism seems to conflict with some platitudinous claims about the way that the features of our perspective make our beliefs rational. Doesn't the intelligibility argument rule this view out? Objectivists have to respond to these worries. Let's start with the intelligibility argument. It rests on two assumptions:

Intelligibility Thesis: If φ-ing is a rational response to the situation, φ-ing is an intelligible response to the situation (i.e., one that makes sense from the subject's point of view).20

Availability Thesis: In any situation there's at least one rationally permitted response to that situation.

These imply that in any given situation there's at least one response that's rationally intelligible.21 Without the availability thesis, the intelligibility argument won't go through. The success of the argument depends upon whether we can run an argument by elimination to show that once you rationally believe p to be rationally required, believing p is rationally permitted on the grounds that alternatives aren't intelligible. One thing the objectivist could argue is that the theses don't pair together terribly well. Think about the possibility of muddles, situations in which none of the available options is intelligible to someone. If you're guilty of some gross rational failing, can't you arrange things so that none of the available options is intelligible? If so, the intelligibility thesis is at odds with the availability thesis. We can revise the availability thesis to avoid this:

Modest Availability Thesis: If you find yourself in a situation and this isn't the result of some rational failure on your part, there's at least one rationally permitted response to that situation.

The weakened thesis doesn't support the argument for perspectivism. Suppose you and a peer disagree about L in that you think that we're rationally required to believe lottery propositions and they think that we're prohibited from believing them. If you both judge, in keeping with your views, that the lottery proposition is one that you're rationally required to believe or prohibited from believing, objectivism says that one of you will find yourself in a situation in which there are no rationally permissible options when it comes to the lottery proposition. If you cannot intelligibly suspend on whether p when you believe belief to be rationally required, suspension and disbelief would be ruled out. If you are on the wrong side of the debate about L, however, belief would also be ruled out. This doesn't threaten the Modest Availability Thesis, however, because if you're the one who's wrong about what rationality requires, objectivism says that you're in the bad situation as a result of a rational failure on your part. Objectivists might be fine with the idea of perplexity secundum quid even if they reject the idea of perplexity simpliciter (i.e., they might accept the idea that there are no permissible options when you're guilty of some sort of wrong while insisting that when you do no wrong there must always be at least one permissible option).

But, you might ask, where's the rational failure? You've followed the evidence and the evidence suggested that L is true. How can this be a case of rational failure? The objectivist says that this is a case of perplexity secundum quid because mistakes about the requirements of rationality are rational failures. This is only satisfying if we have a defense of the fixed-point thesis. Titelbaum suggests that the thesis might be correct because we all happen to have propositional justification to believe the truth about what rationality requires of us. As he puts it, the reason that the 'justificatory map' is arranged in such a way that we don't have justification for believing falsehoods about the requirements of rationality is that "every agent possesses apriori, propositional justification for true beliefs about the requirements of rationality in her current situation" (forthcoming: 21).22

Is this convincing? If justification is a matter of strong evidential support, the suggestion is that the reason it's irrational to form false beliefs about what rationality requires is that we all have strong (undefeated?) evidence for the right views of rationality. Is this plausible? The possession of evidence for any particular view depends upon contingent facts about a subject's psychology. Changing a subject's mental states by presenting new arguments that she finds convincing can change a subject's evidence. Haven't some of us had evidence for L and later had evidence that weighs strongly against L? Titelbaum's explanation assumes we have assets we don't have.

19 This point is similar to points defended by Adler (2002).

20 Space doesn't permit an extended discussion of the intelligibility thesis, but I have worries about it, too. If the intelligibility of responding to a rational requirement requires registering that there's something in the situation that merits the response, the intelligibility thesis implies that those who don't have the proper sensitivity or understanding won't be bound by the (putative) rational requirement because they lack what's needed to register its significance. This implies, in turn, that the requirement isn't categorical, for it applies only to those who can appreciate its rational force. If our conception of rational requirements is, however, the conception of requirements that have rational authority for all rational creatures, the intelligibility thesis needs to be seriously modified. Once modified, I doubt that the modified thesis would support the argument for perspectivism.

21 One potential source of difficulty for the intelligibility thesis would involve cases in which following the evidence and argument leads you to endorse views on which there are true contradictions, rationally required belief in true contradictions, or truth-value gaps. The moves from the evidence (consisting of 'seemings' or testimony, say) to the belief in a view that posits gaps, gluts, or rationally required beliefs in true contradictions might each be intelligible, but what about the end point belief? Can't one be under the illusion that their view about gaps or gluts is intelligible? It seems to me that a perfectly respectable view to take here is that it's not intelligible to believe that some proposition is neither true nor false or to believe that some proposition is both true and false while conceding that the steps that take you to such conclusions are intelligible. If this is right, it's not clear that a stable and coherent set of intuitions supports Intelligibility. Thanks to John Hawthorne for pressing the point about whether there can be rational belief in contradictions. For a discussion of the intelligibility of beliefs in gaps, Williamson (1992) is a good place to start. For a defense of the rationality of belief in contradictions, see Priest (2006).

My explanation of the fixed-point thesis focuses on liabilities, not assets. Consider an example. Suppose your accountant watches while you fill in your forms for the IRS and he tells you that you ought to take certain deductions and report certain kinds of income in specific ways. The result is that you lose money you could have saved and you break a few laws. Meanwhile, a neighbor does their taxes in just the same way you've done working on their own. Your neighbor isn't competent at handling this kind of situation. Their actions manifest this incompetence. What about your accountant?
He manifests the same kind of incompetence and equally shows himself to be incapable of managing the situation, even though his incompetence is manifested in his beliefs about what you should do rather than in actions of the kind that manifested the neighbor's incompetence. Both have shown themselves to be insensitive or unresponsive to the relevant features of the situation in spite of their awareness of them. A similar point applies when it comes to handling reasons and evidence. Rationality requires an understanding of what's required of you when reasons apply to you. If your first-order attitudes violate rational requirements (e.g., because you believe on the basis of the wrong kind of grounds or on the basis of insufficient evidence), you'll manifest the kind of incompetence at handling reasons that merits the charge of irrationality. If instead you judge that you should form beliefs that happen to violate these requirements, this judgment reflects the same incompetence, the same failure to discern what a situation requires of you, that the first-order irrational belief did. Since this failure is what makes for the irrationality of the first-order attitude, it makes the belief about rationality irrational. This is why mistaken beliefs about what rationality requires of you are themselves irrational beliefs. The fixed-point thesis isn't true because we all happen to have evidence for the right list of rational requirements; rather, it's true because the ground for saying that someone's attitudes are irrational is that those attitudes reveal a kind of incompetence with respect to handling reasons and their demands. As it happens, mistaken beliefs about what rationality requires will manifest that kind of incompetence.

22 Ichikawa and Jarvis (2013) and Smithies (2012) say something similar in support of the idea that we'll always have propositional justification to believe truths about the requirements of rationality.

7. Objectivism and Evidentialism
There's a quick argument from objectivism to the denial of structural sufficiency. A source (e.g., testimony, apparent rational insight, reasoning) might provide evidence that R1 and R2 are both genuine requirements of rationality. Suppose only R1 is. If the support is sufficiently strong in both cases, structural sufficiency tells us that it's rational to believe both to be rational requirements. The fixed-point thesis says, however, that it could only be rational to believe one to be a rational requirement. Thus, according to the fixed-point thesis, rationality isn't simply a matter of having sufficiently strong evidential support.23

If rationality isn't simply a matter of strong evidential support, what is it? The principles that capture the requirements of rationality have application conditions that pick out conditions that matter to epistemology, much in the way that, say, a law's application condition is connected to some value that the law aims to protect. If you're aware of the relevant condition but aren't moved in the way the principle states you're required to be, this manifests a kind of unresponsiveness to the relevant value, de re unresponsiveness.24 The objectivist sees this kind of responsiveness as essential to rational belief formation. If irrational beliefs are irrational because they're de re unresponsive, we have an explanation of the enkratic requirement and the fixed-point thesis. Just as the first-order belief that, say, some ticket lost might count as irrational because it's not properly responsive to epistemically relevant features that call for certain responses, the belief that you're rationally compelled to believe this ticket to be a loser is in its own way de re unresponsive, as it manifests the same commitment to go against the things that our epistemic standards care about.

Here's one lesson to take from this. If we start by helping ourselves to evidence and its possession and then try to construct a theory of rational belief on which the stuff that makes rational belief rational is some formal relation between the rational belief and the elements that support it, we face a difficulty. It seems that evidence that we have a genuine insight, proper understanding, or proper sensibility might be misleading. Thus, we have to either accept a view on which such failures don't matter to rationality or maintain that the evidence cannot be misleading because the evidence for thinking something is insightful or shows proper sensitivity just makes that thing insightful or properly sensitive. Neither option is palatable. The worry can be put like this. Take a view on which actual rational insight or understanding is a necessary precondition for having rational beliefs about what rationality requires of you. To rationally believe, say, that you shouldn't violate the enkratic requirement or shouldn't believe lottery propositions, a merely apparent rational insight won't do. We need a genuine insight and genuine understanding. Some will object to such a view on the grounds that a merely apparent rational insight should have some rational force that's comparable to the rational force of a genuine one, much in the way that, say, hallucination should have comparable rational force to perception. We'll be invited to think of genuine and merely apparent rational insight as having some sort of common character and to grant that it cannot be more rational to respond to the genuine rational insights than the apparent ones.25 This has to be a mistake, but where does the mistake lie?

23 Arguments against Structural Sufficiency appear to be arguments against Foley's (2009) Lockean thesis, but space doesn't permit discussion here of the significance of the arguments against Structural Sufficiency for debates about the relationship between belief and degrees of belief.

24 See Arpaly (2002).
It's not in the idea of a mock insight. It's in the idea that there's no rational difference between mock insight and genuine insight. Notice, however, that we're not going to make much headway in understanding where the difference lies until we see how limited assets-based explanations are in epistemology. We'd be forced to fight this fight in terms familiar from debates about the rational role of experience and the significance of error cases. We'd have to find something that's an aspect of genuine insight that's lacking from a mock insight. This is the wrong way to approach the issue. The difference between sensory error and errors about rationality's requirements is clearer when we think about the role of value. In the case of sensory error, your beliefs don't manifest any sort of de re unresponsiveness. They don't show that you're bad at understanding what reasons require of you, only that you sometimes make mistakes about which reasons there are that place demands on you. In the case of mock insight, you're committing yourself to something perverse, something bad, something untoward, and revealing that your values are out of line with the things that epistemology cares about. It's perverse to care about things that epistemology takes to be worthless, or to fail to respect the things that epistemology values, and then to insist that you care about epistemology's approval. We'll see that the evidentialist's formal approach to rationality is bankrupt if we think about things like the rational relations between beliefs, actions, and emotions. Foley (2001) once defended the view that epistemic rationality plays a foundational role in the overall theory of rationality because, he said, if you rationally believe that rationality requires you to feel, think, or do something, it follows that rationality will permit so feeling, thinking, or doing. If you rationally judge that rationality requires being angry or going to the left, the features of your perspective that make the belief rational ensure that the emotion or action is rational, too. It's clear now that if we think of a subject's perspective as a collection of mental states that make things seem to her to be a certain way, this model isn't very good, not if you think that the rationality of being angry about something depends upon whether it's anger directed at a fitting object.26 Your evidence could provide arbitrarily strong support for a theory of fitting objects of anger according to which it's appropriate to be angry about things like the happiness of children or the equitable distribution of resources, but it's not fitting to be angry about such things. If fittingness is connected to rationality, the rationality of an emotional response can't be wholly determined by the stuff that Foley thinks makes for rational belief. Either the rationality of emotion has nothing to do with whether the emotional response is appropriate or he has to admit that he got things the wrong way around. If indeed there's a nexus and the rationality of a belief is connected to the rational standing of the beliefs, actions, and emotions that beliefs rationalize, he has to see that the determinants of epistemic rationality aren't just features of your perspective but also include the features of things that determine what response is fitting. We should see that something similar holds for belief.

25 See Huemer (2006, 2007). The point I'm challenging is one he takes to be an important internalist insight, albeit one that differs in subtle but important ways from the intuitions that underwrite Cohen's (1984) argument for internalism.

26 One needn't take the notion of fittingness to play the fundamental role that McHugh (2014) assigns it to appreciate the point that what's fitting isn't itself determined by beliefs based on strong evidence about what's fitting.
Certain beliefs are appropriate responses to epistemic situations, situations that we characterize (in part) in terms of a subject's perspective on the world. Just as certain beliefs won't be fitting in certain epistemic situations, certain beliefs about what rationality requires of you will be constrained by features of the epistemic situation, not your take on it.

References

Adler, Jonathan. 2002. Belief's Own Ethics. MIT Press.
Alvarez, Maria. 2010. Kinds of Reasons. Oxford University Press.
Anscombe, G.E.M. 1962. On Sensations of Position. Analysis 22: 55-58.
Arpaly, Nomy. 2002. Unprincipled Virtue. Oxford University Press.
Beddor, Bob. Forthcoming. Evidentialism, Circularity, and Grounding. Philosophical Studies.
Broome, John. 2013. Rationality Through Reasoning. Wiley-Blackwell.
Coates, Allen. 2012. Rational Epistemic Akrasia. American Philosophical Quarterly 49: 113-24.
Cohen, Stewart. 1984. Justification and Truth. Philosophical Studies 46: 279-95.
Conee, Earl and Richard Feldman. 2004. Evidentialism. Oxford University Press.
Fantl, Jeremy and Matt McGrath. 2009. Knowledge in an Uncertain World. Oxford University Press.
Feldman, Richard. 2005. Respecting the Evidence. Philosophical Perspectives 19: 95-119.
Feldman, Richard. 2006. Epistemological Puzzles about Disagreement. In S. Hetherington (ed.), Epistemology Futures. Oxford University Press, pp. 216-37.
Foley, Richard. 2001. The Foundational Role of Epistemology in a General Theory of Rationality. In A. Fairweather and L. Zagzebski (eds.), Virtue Epistemology: Essays on Epistemic Virtue and Responsibility. Oxford University Press, pp. 214-31.
Foley, Richard. 2009. Beliefs, Degrees of Belief, and the Lockean Thesis. In F. Huber and C. Schmidt-Petri (eds.), Degrees of Belief. Springer.
Gibbons, John. 2014. The Norm of Belief. Oxford University Press.
Greco, Dan. 2014. A Puzzle about Epistemic Akrasia. Philosophical Studies 167: 201-19.
Hawthorne, John. 2004. Knowledge and Lotteries. Oxford University Press.
Horowitz, Sophie. 2014. Epistemic Akrasia. Noûs 48: 718-44.
Huemer, Michael. 2006. Phenomenal Conservatism and the Internalist Intuition. American Philosophical Quarterly 43: 147-58.
Huemer, Michael. 2007. Compassionate Phenomenal Conservatism. Philosophy and Phenomenological Research 74: 30-55.
Hyman, John. 1999. How Knowledge Works. The Philosophical Quarterly 50: 433-51.
Ichikawa, Jonathan and Ben Jarvis. 2013. The Rules of Thought. Oxford University Press.
Lasonen-Aarnio, Maria. MS. Enkrasia or Evidentialism: Learning to Love Mismatch.
Littlejohn, Clayton. 2012. Justification and the Truth-Connection. Cambridge University Press.
Littlejohn, Clayton. 2014. The Unity of Reason. In J. Turri and C. Littlejohn (eds.), Epistemic Norms: New Essays on Action, Belief, and Assertion. Oxford University Press, pp. 235-55.
Littlejohn, Clayton. Forthcoming. A Plea for Epistemic Excuses. In F. Dorsch and J. Dutant (eds.), The New Evil Demon. Oxford University Press.
Littlejohn, Clayton. Forthcoming b. How and Why Knowledge is First. In A. Carter, E. Gordon, and B. Jarvis (eds.), Knowledge First? Oxford University Press.
Mantel, Susanne. 2013. Acting for Reasons, Apt Action, and Knowledge. Synthese 190: 3865-88.
McDowell, John. 1998. Criteria, Defeasibility, and Knowledge. In Meaning, Knowledge, and Reality. Harvard University Press.
McGinn, Marie. 2012. Non-Inferential Knowledge. Proceedings of the Aristotelian Society 112: 1-28.
McHugh, Conor. 2014. Fitting Belief. Proceedings of the Aristotelian Society 114: 167-87.
Mitova, Veli. Forthcoming. Truthy Psychologism about Evidence. Philosophical Studies.
Nelkin, Dana. 2000. The Lottery Paradox, Knowledge, and Rationality. Philosophical Review 109: 373-409.
Priest, Graham. 2006. Doubt Truth to be a Liar. Oxford University Press.
Pritchard, Duncan. 2012. Epistemological Disjunctivism. Oxford University Press.
Smithies, Declan. 2012. Moore's Paradox and the Accessibility of Justification. Philosophy and Phenomenological Research 85: 273-300.
Sosa, Ernest and Kurt Sylvan. Forthcoming. The Place of Reasons in Epistemology. In D. Star (ed.), The Oxford Handbook of Reasons and Normativity.
Srinivasan, Amia. Forthcoming. Normativity without Cartesian Privilege. Philosophical Issues.
Steglich-Petersen, Asbjørn. 2013. Truth as the Aim of Justification. In T. Chan (ed.), The Aim of Belief. Oxford University Press.
Sutton, Jonathan. 2007. Without Justification. MIT Press.
Titelbaum, Michael. Forthcoming. Rationality's Fixed Point (Or: In Defense of Right Reason). In J. Hawthorne and T. Gendler (eds.), Oxford Studies in Epistemology. Oxford University Press.
Titelbaum, Michael. 2015. How to Derive a Narrow-Scope Requirement from Wide-Scope Requirements. Philosophical Studies 172: 535-42.
Wedgwood, Ralph. 2002. Internalism Explained. Philosophy and Phenomenological Research 64: 349-69.
Whiting, Daniel. 2014. Keeping Things in Perspective: Reasons, Rationality, and the A Priori. Journal of Ethics and Social Philosophy.
Williamson, Timothy. 1992. Vagueness and Ignorance. Proceedings of the Aristotelian Society, Supplementary Volume 66: 145-62.
Williamson, Timothy. 2000. Knowledge and its Limits. Oxford University Press.
Williamson, Timothy. Forthcoming. Justifications, Excuses, and Skeptical Scenarios. In F. Dorsch and J. Dutant (eds.), The New Evil Demon. Oxford University Press.