1 Introduction

Consider the following case:

car insurance: Your car insurance company offers you a deal. The offer is to insure your car for the next five years against theft for the nominal price of ten dollars. You do not live in circumstances where car theft is particularly prevalent. It is rational for you to believe that your car will not be stolen during the next five years. But, of course, car theft has been known to happen on rare occasion nearby. And, you do not consider yourself to be especially different from the owners of those stolen cars.

Cases like car insurance appear to diminish the importance of rational belief, and this is because it seems you are rational to act in opposition to what you rationally believe is the all-things-considered best thing to do. After all, you believe that your car will not be stolen and also that, if your car will not be stolen, the insurance won’t be needed; and further, you believe that it would be best not to spend money on insurance if it isn’t needed. Accordingly, you believe that it would be best not to spend money on car insurance. Nevertheless, you rationally purchase the insurance.Footnote 1 This kind of case appears to show that the link between rational belief and rational action is broken; one cannot straightforwardly decide what to do on the basis of what one (even rationally) believes to be best.Footnote 2

In contrast, rational credence appears to do much better here. Of course, you may rationally believe that your car will not be stolen—and thus believe that if you purchase the insurance, the end result will simply be less money for other things. However, if you are rational, you have at least some minimal credence in the proposition that your car will be stolen. The rationality of this minimal credence (along with the rationality of your preferences) can explain why it’s rational to purchase the insurance.Footnote 3
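The explanation can be made explicit with a standard expected-value calculation. The particular figures below (a credence of \(0.01\) that the car will be stolen, a loss of \(\$5{,}000\) in the event of theft, and a policy that fully covers that loss) are purely illustrative assumptions on our part, not part of the case as described:

\[
\begin{aligned}
EV(\text{purchase}) &= -\$10,\\
EV(\text{decline}) &= 0.01 \times (-\$5{,}000) + 0.99 \times \$0 = -\$50.
\end{aligned}
\]

Since \(-\$10 > -\$50\), purchasing maximizes expected value even though the credence that the insurance will be needed is minimal.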

Cases such as car insurance might seem to suggest that what is fundamental in epistemology is credence (alternatively known as “degrees of belief”) rather than belief. Perhaps the epistemology of belief might be important for reasons that aren’t directly related to action, but a divorce between rational belief and rational action at least greatly diminishes the importance of having the capacity to believe rationally. The project of this paper is to explain why this threat to traditional epistemology—epistemology that focuses primarily on belief—may not be as severe as it first appears. The advantage that a credence framework has in accounting for rational action might well be secured by dispensing with credence in favor of other doxastic attitudes that are more similar to belief by being representational.Footnote 4

Moreover, we suggest that this alternative framework can still allocate a special place to belief. Ultimately, we will characterize belief in terms of acceptance. Acceptance is the mental state of taking some propositional content \(\langle p\rangle \) for granted—whether consciously or not—in practical reasoning and rational decision-making; acceptance ordinarily causes acting as if \(\langle p\rangle \) is true.Footnote 5 Acceptance of \(\langle p\rangle \) involves using \(\langle p\rangle \) as a working hypothesis. Acceptance of a propositional content is sometimes rational and sometimes not, depending on the strength of one’s epistemic position and the practical stakes. (As an important aside, we note that “epistemic position” is intended throughout to be a neutral way of talking about something like the subject’s evidential condition as determined by her total body of evidence without raising any question about what evidence is. Thus, in particular, “epistemic position” is not a way of referring to the subject’s doxastic states either taken collectively or individually.) Our proposal will be that belief is the weakest doxastic attitude that normally suffices for rational acceptance. So, it can make sense for a cognizer to traffic largely in beliefs, considering whether to take up stronger or weaker doxastic attitudes only when practical stakes are sufficiently high or low.Footnote 6 Moreover, we will suggest that whether to take up stronger or weaker doxastic attitudes can, in a wide range of cases, be determined by what one believes. So, agents can regularly make decisions that are rational across a wide range of circumstances on the basis of rationally held beliefs alone.

Our project is significant because one of the deepest ideological divides within epistemology concerns the relative importance of belief versus credence. The moral of this paper is that one may have to look beyond decision-making and action to settle that debate. For instance, it may be more productive to consider Harman’s (1986) contention that understanding human reasoning requires a belief-based framework rather than a credence-based one.Footnote 7 Alternatively, one might consider whether or not the basic doxastic elements should be representations in the sense highlighted in Sect. .

Before continuing, however, it’s worth clarifying what isn’t at issue in this paper. The credence-based framework is closely associated with probabilism, the view that, insofar as a subject’s assignment of credence is rational, it behaves like an assignment of probability. However, something very much like probabilism is also accepted among some traditional epistemologists: it is sometimes accepted that beliefs can be more or less rational, and moreover, that levels of rationality function like levels of likelihood, which should be understood as a kind of probability.Footnote 8 Clearly, then, it cannot be a requirement that defenders of belief-based epistemology do without anything like epistemic probability. We will discuss this theoretical possibility in Sect. , but strictly speaking, accepting it is not part of the belief-based picture. What’s primarily at issue is not the viability of epistemic probability, but the extent to which epistemic statuses of credence are more fundamental than epistemic statuses of belief.

2 §1

Both belief and credal states qualify as doxastic attitudes, broadly construed. But, belief is an “on-or-off” attitude; believing a proposition stands in opposition to withholding belief with respect to that proposition. By contrast, credence comes in degrees; arguably, each of a continuum of opposing credal states is possible in principle (even if not in practice for limited human beings). Perhaps more importantly, the correctness of beliefs—their accuracy—is an all or nothing affair. A belief is correct if it is true, and incorrect if false. In contrast, to the extent that accuracy makes sense for credal states, it is a matter of degree.Footnote 9 In taking up belief towards a proposition, the aim is to possess the truth (by believing it),Footnote 10 yet all but the most extreme credal states do not take a definitive position per se on what the truth of a given matter is. For any proposition \(\langle p\rangle \), credences seem to split the difference (with a particular weighting) between opposing positions on whether \(\langle p\rangle \). For instance, if one holds .3 credence towards the proposition that it will rain today, one’s credal state is not incorrect in any obvious sense if the proposition turns out to be false, i.e. it does not, in fact, rain. However, there is no obvious reason why this would change if one’s credence is, instead, .9 (unless .9 credence is taken to additionally constitute belief or some similar “on-or-off” attitude)Footnote 11 since there is no principled difference between .3 and .9, only a difference of degree.

It is fairly obvious that there must be certain relationships between the respective epistemologies of beliefs and credal states.Footnote 12 For instance, it is clear that it can’t simultaneously be rational to believe \(\langle p\rangle \) while it is also rational to hold the minimum credence in \(\langle p\rangle \). Nevertheless, the relationship between these epistemologies is not altogether clear. Echoing Weatherson (2005),Footnote 13 we might ask:

Do we really have two subject matters here (epistemology of degrees of belief and epistemology of belief tout court) or two descriptions of the one subject matter? If just one subject matter, what relationship is there between the two modes of description of this subject matter?

These questions concern the extent to which the epistemic statuses of belief settle the epistemic statuses of credence—and vice-versa. The concern is whether the complete account of the epistemic statuses of beliefs leaves a remainder for the complete account of the epistemic statuses of credences—and vice-versa. There are, of course, metaphysical and epistemic interpretations of settling and remainder—corresponding, we think, to each of the two questions. The epistemic reading (the second question) might be stated (roughly) in the terms of Chalmers (2012): to what extent are the epistemic statuses of credence scrutable from those of belief—and vice-versa? The metaphysical reading (the first question) would be: to what extent do the epistemic statuses of credence metaphysically supervene on those of belief—and vice-versa? (The orthodox position is that while scrutability entails metaphysical supervenience, the converse may not be true, so these questions are genuinely distinct.)Footnote 14

In addition, however, there is a further question about whether there is any explanatory order to the epistemic statuses of belief and credence, respectively. Could it be that when it isn’t rational to believe \(\langle p\rangle \), but rational to hold the minimum credence in \(\langle p\rangle \), the former explains the latter? There are three available positions on the fundamentality of belief-based versus credence-based epistemology:

Belief Fundamentalist Epistemology (bfe): The epistemic statuses of belief are more fundamental.

Credence Fundamentalist Epistemology (cfe): The epistemic statuses of credence are more fundamental.

Anti-Fundamentalist Epistemology (afe): Neither kind of epistemic status is more fundamental.

afe implies that, when it comes to the epistemic statuses of belief and credence, there is no (asymmetrical) constitutive dependence of one on the other. Consider, for instance, the status of (epistemic) rationality. The idea is that it is not the case that a belief is rational because a certain kind of credal state is rational, nor is a credal state rational because some kind of belief is rational. Matters of rationality for belief and credence might be interrelated, but one does not participate in (metaphysically) grounding the other. The degree of credence one is rationally committed to having in some proposition \(\langle p\rangle \) is not, for instance, constitutively a matter of how easy it would be to come to have a rational belief with propositional content \(\langle p\rangle \) even if the former has consequences for the latter and vice-versa.Footnote 15

bfe and cfe both deny afe. Consider first bfe. Strong bfe insists that the epistemic statuses of credence are wholly determined and constitutively explained by the epistemic statuses of belief (so there is no remainder). A non-trivial example might be the Williamsonian suggestion that rational credence is probability conditional on (the certainty of) those beliefs that qualify as knowledge.Footnote 16 However, bfe itself insists only that the epistemic statuses of credence are at least partly grounded in those of belief while admitting that there may be other factors as well. (Of course, this “partial grounding” could not be reciprocal, but rather must be asymmetrical.)

Analogously, strong cfe insists that the epistemic statuses of belief are wholly determined and constitutively explained by the epistemic statuses of credence (so, again, there is no remainder). Consider this bi-conditional: a belief in \(\langle p\rangle \) is rational if and only if credence in \(\langle p\rangle \) above some given threshold is rational.Footnote 17 If the left-hand side is understood as the analysandum and the right-hand side is understood as the analysans, then what we have is an example of strong cfe. However, as with bfe, cfe need not be strong. A non-strong version of cfe might suggest that rational credence above a given threshold is a necessary precondition for rational belief, but that other factors unrelated to credence contribute to constituting the rationality of belief as well.

All three positions—bfe, cfe, and afe—have an air of plausibility to them. Wedgwood (2012) notes that a spectrum of positions has been taken on the relationship between belief and credence.Footnote 18 Presumably, the spectrum looks very similar when it comes to the relationship between the epistemologies of belief and credence. Our interest in this paper is this relationship between the epistemologies (which isn’t necessarily settled by the relationship between the psychologies of belief and credence). Nevertheless, we want to be very clear that our intention in this paper is not to rule out any of bfe, cfe, and afe. Rather, we wish to show that a certain prominent consideration in favor of cfe—the one we will explore in the next section—can be resisted. This will open the door to epistemological frameworks that emphasize the importance of doxastic representations, i.e. bfe and afe. But, it is well beyond the scope of this paper to give a positive argument for any such framework.

3 §2

As was noted in the introduction, cfe appears to gain support from cases such as car insurance. Because in that case it is rational for you to believe that your car will not be stolen during the next five years, it will ordinarily be rational for you to believe that you will be financially worse off by accepting the offer to purchase insurance. Stipulate that car insurance is an ordinary case. Stipulate also that you will gain no emotional comfort from having the insurance. A theft of the car would be bad, but it would not spell financial demise for you. Indeed, as far as this decision is concerned, the only considerations that are relevant for you are financial. Still, it could easily be rational for you to accept the offer to purchase this car insurance.

A credence-based epistemology can easily accommodate this result. It may be rational for you to believe that you will be financially worse off by accepting the offer. But, that need not imply that it would be rational to hold the maximum credence in this proposition. Indeed, if the former did imply the latter, then there would be bigger problems. Either it becomes more difficult to envision how rational belief is even possible, so that skepticism threatens, or else the relationship between rational action and credence becomes more obscure.Footnote 19 If fairly ordinary beliefs are rational and rational belief entails rational maximum credence, then it could not plausibly be that rational maximum credence rationalizes staking one’s life on the truth of the belief (because the rationality of ordinary beliefs certainly doesn’t). In other words, we’ve moved quite far from even the spirit of Ramsey’s suggestion that credences are closely associated with betting patterns.Footnote 20 Putting aside the theoretical option of an entailment between rational belief and rational maximum credence, it is plausible that although it is rational for you to believe that you would be financially worse off, it also could easily be rational to hold some non-minimal credence in the proposition that you would be financially better off by accepting the offer.Footnote 21 And, this non-minimal credence could be above the threshold required to make it rational to accept the offer to purchase the car theft insurance for ten dollars.Footnote 22

Without the resources of a credence-based epistemology, it becomes more difficult to understand the rationality of accepting the offer. Ex hypothesi, what you believe is that you would be financially worse off by taking the offer, and no other non-financial considerations are relevant. So, how could it be rational to accept the offer?

The question is not merely how it could be rational for you to do what is, by your own beliefs, worse. Perhaps beliefs are simply not the kind of cognitive state to rely on in this situation. Using the terminology from the introduction, we might say, pace Fantl and McGrath (2009) and others, that the rationality of belief need not imply the rationality of acceptance.Footnote 23 Perhaps this is counterintuitive, but we can bite that bullet. Even so, a problem remains. How could it be rational for you to do what is, by your own beliefs, worse unless it is rational for you to have some further doxastic state that puts you in a position to appreciate that this action would be, in fact, rational? Presumably, it couldn’t. So what are these further doxastic states that put you in a position to appreciate that this action would be, in fact, rational? Call this “the Further Doxastic State Question.”

One option for circumventing the Further Doxastic State Question is to deny that the description of car insurance leaves much room for it to be rational to accept the offer. The description stipulates that it is rational for you to believe that your car will not be stolen during the next five years. But then, one might say, it could not but be rational for you to reject the offer to insure for car theft given that financial considerations are all that matters. So, if it is ordinarily rational to accept this kind of offer (i.e. not in car insurance, but in less unusual cases with a similar description), then it is only because, in these same ordinary circumstances, it is not rational to believe that your car will not be stolen during the next five years. This might be because, as a general rule, it is very difficult to be in a strong enough epistemic position to rationally have this belief. But then we’re trending towards skepticism again—an unpalatable result.Footnote 24

Alternatively, an option would be to say the following: even though it wouldn’t ordinarily be difficult to be in a strong enough epistemic position rationally to have this belief, being in a “practical environment”Footnote 25 where this offer is available makes it difficult. The latter approach involves thinking that rationality for belief is subject to pragmatic encroachment: pragmatic factors concerning which courses of action are available play a role in determining the rationality of belief.Footnote 26

However, pragmatic encroachment creates problems for the rational governance of belief.Footnote 27 Rational governance of belief implies that the agent believes, by and large, because it is rational to believe. Pragmatic encroachment implies that whether it is rational to believe depends on the strength of one’s epistemic position and the relevant pragmatic factors. So, to believe, by and large, when it is rational and not, by and large, otherwise, one will have to be able to track (fallibly) the strength of one’s epistemic position and the relevant pragmatic factors. How does one track (even fallibly) the strength of one’s epistemic position and the relevant pragmatic factors? Call this “the Tracking Question.”

Again, credence-based epistemology provides a straightforward answer. One can track the strength of one’s epistemic position with regard to any proposition \(\langle p\rangle \) with more fine-grained credal states. And, one can track pragmatic factors using one’s credal states and preferences. The relative practical merits of acceptance versus non-acceptance of a proposition \(\langle p\rangle \) can be measured by something like their relative expected values (as determined by credences and preferences). If belief rationally requires acceptance, then we can measure the practical merits of belief versus non-belief in the same kind of way.Footnote 28
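Schematically, with \(c\) the credence in \(\langle p\rangle \) and \(u\) the agent’s utility function, the comparison just described might be rendered as follows (this rendering is our gloss on the familiar expected-value idea, not a formula drawn from any particular author):

\[
EV(\text{accept } \langle p\rangle) = c \cdot u(\text{acting on } \langle p\rangle \mid p) + (1-c) \cdot u(\text{acting on } \langle p\rangle \mid \neg p),
\]

with the analogous formula for non-acceptance; accepting \(\langle p\rangle \) has the greater practical merit just in case the first quantity exceeds the second.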

The emerging picture is one where, in one way or another, credence looks to be more theoretically interesting than belief. First consider the purely intellectualist approach where pragmatic factors don’t matter for epistemic evaluation. On this approach, we run into the Further Doxastic State Question (so long, at least, as we steer clear of skepticism). We could easily answer this question by pointing to credence as the further doxastic state, but only by acknowledging a more fundamental relationship between rational credence and rational action than there is between rational belief and rational action. This may well obviate the need for belief in the cognitive system—at least in principle. At least as far as rational action is concerned, belief would turn out to be essential only if it is simply constituted by credence in some sort of way, e.g. belief is a matter of having credence above a certain threshold. When it comes to rational action, only a modest role for belief is apparently available, if any. In practice, it may be useful to have beliefs, because dealing with credence is cognitively taxing and we are rationally limited.Footnote 29 But, ideally, it would be better to make decisions using credences. And, one has to move from belief-based practical reasoning to credence-based practical reasoning whenever one perceives sufficient risk.

On a pragmatic encroachment approach, we run into the Tracking Question (so long as we continue to steer clear of skepticism). We could easily answer this question by pointing to credence as the tracker, but then one is rationally governing one’s believing by having credal states. But why bother to manage one’s beliefs by having credal states?Footnote 30 Why not just manage one’s actions directly instead? Again, credence seems to obviate the need for belief in the cognitive system in principle (unless credence simply constitutes belief); and, it may obviate the need for belief in practice as well.

On both approaches, rational credence gives us a way of answering a pressing question. But, if rational credence is the way that we must answer these pressing questions, then bfe looks dubious. If we have to turn to rational credence to solve problems that arise from casting our epistemology solely in terms of belief, then it just looks like a credence-based epistemology has more resources than a belief-based epistemology in the sense that a belief-based epistemology simply cannot account for the same range of phenomena—in particular, either rational action (if we keep the relationship between rational belief and rational action loose)Footnote 31 or rational belief governance (if we keep it tight).Footnote 32 This would be a good reason (even if not an especially conclusive reason) to think that bfe is false. Moreover, cfe (but not necessarily strong cfe!) would be hard to avoid unless there is some other range of phenomena that a belief-based epistemology can account for but a credence-based epistemology cannot. (Remember: cfe merely entails the explanatory priority of credence-based epistemology; only strong cfe entails that credence-based epistemology settles belief-based epistemology without remainder.)

Of course, it is not our contention that either the Further Doxastic State Question or the Tracking Question have to be answered by pointing to credence; our aim is to pursue a way for resisting this idea. Because we are particularly concerned by the Tracking Question,Footnote 33 we prefer to take an intellectualist approach (rather than opt for pragmatic encroachment). So, our interest here will be in resisting cfe by finding an answer to the Further Doxastic State Question. It is not our intention, however, to take a stand here about other ways that an advocate for the theoretical prominence of belief might respond to this dilemma. For all we say, there may be other promising avenues to pursue.

By way of reminder, the Further Doxastic State Question is “What are those further doxastic states that put you in a position to appreciate that some possible action would be, in fact, rational in cases like car insurance in which, by your own beliefs, this action is worse?” One possible answer to this question is these further doxastic states are, in fact, beliefs.Footnote 34 So, while by your own beliefs, this action is worse, still by some other route, your beliefs let you appreciate that this action would be, in fact, rational. If it could be made to work, this kind of answer would definitely help the advocate of bfe.

This is not the route that we intend to pursue most immediately. In Sect. , we do propose that, in practice, rational action may be possible, by and large, for an agent trafficking only in beliefs precisely because an agent’s whole network of beliefs may register a lot about how secure any particular belief is in light of the agent’s experiences. Nevertheless, we think that a straightforward “further beliefs” answer to the Further Doxastic State Question confronts a sort of wrong content problem. In car insurance, for instance, the relevant question seems to be whether your car will be stolen in the near future—something you believe to be false. Presumably, you have beliefs that are inferentially connected to this belief, e.g. that you live in a safe neighborhood. You may also have beliefs about how strong or weak your epistemic position is with respect to the proposition that your car will be stolen, e.g. the epistemic probability of theft. But, whether you will receive a benefit or encounter a loss from the course of action in question—purchasing insurance—doesn’t precisely turn on the truth or falsehood of most of these further beliefs. It turns precisely on whether or not your car will be stolen. (Assuming there is no emotional security gained from the purchase, purchasing insurance will benefit you if your car is stolen and otherwise be a loss.) So, many of these further beliefs aren’t directly related enough to the potential reason-giving facts for or against purchasing insurance. We conclude that many of these further beliefs cannot ultimately be the rationalizers for action in this case; they have the wrong content to do so. (Of course, the belief that your car will not be stolen has the right content, but the rational belief isn’t “strong enough” to rationalize forgoing insurance; in this instance, the problem appears to be not with the content, but the attitude.) 
In particular, beliefs about one’s epistemic position have the wrong content because although one’s epistemic position is relevant to whether one acts rationally, the aim of acting is not to act rationally per se, but to act to secure benefits, diminish losses, right wrongs, uphold justice, etc.—that is to say, to make appropriate changes in the world. Beliefs with the right content to rationalize action are precisely those directed outwardly at the sources of benefit, loss, right, and wrong in the world, not directed in a self-fascinated way at factors relevant to the subject’s own rationality or evidence. To rationalize action, the beliefs must be rational and supported by evidence, not be about rationality or evidential support.

In addition, we suspect that while one’s network of beliefs may register a lot about how secure any particular belief is in light of the agent’s experiences, it doesn’t register everything. Accounting for maximally rational action really does cause problems for a strong version of bfe that insists that the epistemic statuses of credence are wholly determined and constitutively explained by those of belief precisely because even an entire network of beliefs won’t generally reflect all aspects of one’s evidence that are relevant to selecting a course of action that is maximally rational. To adequately account for how one might, in principle, act maximally rationally in all possible cases, one needs to move away from a purely belief-based epistemology to an epistemology of a more general kind. This will be our course.

4 §3

Let’s begin looking for an answer to the Further Doxastic State Question by thinking carefully about why a credence-based epistemology seems to handle car insurance with such ease. Why doesn’t the Further Doxastic State Question seem to arise for credences as it does for beliefs? For instance, why aren’t there obvious cases in which questions about whether it is rational to have some credence are settled, but these answers conflict with answers to questions about which potential course of action is rational? We contend that the difference lies in the fact that there is a continuum of credal states, whereas with belief there seem to be only two options: belief and withholding belief.

It is not hard to imagine variants of car insurance in which your epistemic position vis-à-vis the proposition that your car will not be stolen is much stronger. For instance, you might know that the area that you live and work in is renowned for its exceedingly low crime rate and very effective law enforcement. And, we can imagine variants in which your epistemic position is much weaker, e.g. in contrast to the original case, you might know that car theft is prevalent in your area, particularly theft of the brand of car you drive. But, these differences in epistemic strength can’t be captured by a (first-order)Footnote 35 doxastic attitude towards the proposition if the only attitudes to choose from are believing and withholding. There are two attitudes, but at least three different epistemic positions. So, one attitude will have to double as the response to two different epistemic positions. Information about which of the two epistemic positions you are in will not be registered by this attitude. Merely adding another doxastic attitude won’t help. Consider any two variants of our original case, car insurance \(^{\textsc {x}}\) and car insurance \(^{\textsc {y}}\) that are strictly ordered by epistemic position with respect to the proposition in question. Mightn’t we always find some third case, car insurance \(^{\textsc {z}}\), that falls strictly between them in this regard? If so, this shows that the set of doxastic attitudes must be (at least) densely ordered, akin to the set of rational numbers, in order to register all these different possible epistemic positions.

The reason that registering epistemic positions matters for rational action is that many differences in epistemic position with respect to some relevant proposition \(\langle p\rangle \) can make for corresponding differences in whether a particular action is (maximally) rational. To improve one’s rationality with respect to action, it pays to be able to register more differences in one’s epistemic position with respect to a proposition. That way, no matter what the threshold for rational action turns out to be vis-à-vis strength of epistemic position, one will be able to determine whether it is met. This discussion suggests a remedy for the advocate of belief-based epistemology who wants to answer the Further Doxastic State Question. Just introduce further doxastic attitudes to go alongside belief—one for every possible epistemic position one could have with respect to a proposition.

5 §4

A cfe advocate might complain that this strategy concedes too much. Isn’t acknowledging an infinite number of possible doxastic attitudes towards a proposition just letting credences in through the back door? Here, it is important to pay close attention to which aspects of credence-based epistemology do and which don’t help with the determinations of rational action. What helps is the large number of credal states. But there are other features that don’t seem to matter.

Consider a disparity between belief and credence noted earlier: the correctness of beliefs—their accuracy—is an all or nothing affair; in contrast, to the extent that accuracy makes sense for credal states, it is a matter of degree. This disparity seems to account for the fact that although both beliefs and credal states have propositional content—and are, in some sense, about the world—only beliefs are a genuine kind of mental representation, i.e. mental states that definitively render the world as being a particular way.

This feature of credences played no role in our discussion in the last section. The explanation given there was that because the number of credal states is infinite, an infinite number of different epistemic positions can be registered with respect to some proposition. And, this turns out to be significant since any difference of epistemic position could be a relevant difference for rational action; having credences puts one in a position to select the rational action across these differences in epistemic position. Nothing in this reasoning turns on whether credences are a genuine kind of mental representation or not. The existence of an injective mapping from epistemic positions to credal states is all that matters.
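The injectivity point can be made concrete with a toy assignment. The position labels and numbers below are ours; all that matters is that distinct epistemic positions are registered by distinct states.

```python
# Toy illustration (ours): the credal framework helps only insofar as it
# supplies an injective map from epistemic positions to registering states.
positions = ["very weak", "weak", "middling", "strong", "very strong"]
register = {p: i / (len(positions) - 1) for i, p in enumerate(positions)}

# Injectivity: no two distinct positions share a registering state, so no
# information about which position one occupies is lost.
assert len(set(register.values())) == len(positions)
```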

Here is another feature of credence that doesn’t seem to matter: each credal state rationally precludes other credal states.Footnote 36 Holding a higher degree of credence in a proposition rationally precludes holding a lower. This is not to say that it would be irrational to hold a kind of mushy credenceFootnote 37 that is indeterminate between the two. It is merely to say that given that one has committed oneself by holding a more determinate credal state, one is rationally precluded from simultaneously holding some distinct determinate credal state.

If the two features just discussed don’t matter when it comes to explaining why credences appear to do better than belief vis-à-vis rational action, then we should be able to omit them without losing the virtues of the credence framework vis-à-vis rational action. In other words, the door is open to the following view: there is a hierarchy of basic doxastic attitudes with a very large cardinality.Footnote 38 For each of these doxastic attitudes, D, there is also a corresponding basic attitude of withholding D. Belief is among this hierarchy. But, there are stronger attitudes like being-sure or being-absolutely-certain. They are stronger in the sense that the rationality of belief doesn’t entail but is compatible with the rationality of taking these attitudes. And, there are weaker attitudes like (at least) suspecting. They are weaker in the sense that the rationality of taking up these attitudes doesn’t entail but is compatible with the rationality of belief.

Moreover, all of these attitudes are genuine mental representations that are entirely beholden to the world for their correctness (simpliciter), depending only on whether they are true. Indeed, in taking up any of these doxastic attitudes, the aim is to possess the truth by having that attitude. As an anonymous referee astutely noted, this latter “representational” difference seems to bring the first difference of rational compatibility of stronger and weaker doxastic attitudes with it. One way to appreciate this is to note that, were credences genuine representations, they would be very bizarre ones—since a .65 credence towards the proposition that it will rain today would, while becoming increasingly likely to be accurate as one’s epistemic position strengthened, simultaneously become increasingly less rational. To fix this problem, genuine representations must not rationally preclude strictly stronger doxastic attitudes.

One might wonder how all these distinct doxastic attitudes could have the same aim of possessing the truth. Isn’t belief individuated from other attitudes by having this aim?Footnote 39 Actually, no. In other work, we suggest that having the aim of possessing the truth is equivalent to being committed to implement good methods for possessing the truth on the basis of one’s experiences.Footnote 40 This characterization doesn’t say how good the methods must be; using better or worse methods corresponds to aiming more or less stringently at possessing the truth. Our proposal now is that every way of specifying how good the methods must be corresponds to a distinct doxastic attitude. When it comes to believing, one is committed to using methods that are fairly effective at landing true representations rather than false representations. However, in considering whether to take up or retain the stronger attitude of being sure that, one is committed to using methods that are strictly better in this regard.

At this point, it may be helpful to consider the familiar Jamesian point that the aim of possessing the truth is best understood as a mixture of two competing aims: truly representing and not misrepresenting.Footnote 41 This dual-characterization has been espoused in different waysFootnote 42 by Descartes (xx),Footnote 43 Alston (1985),Footnote 44 Foley (1987),Footnote 45 David (2001), Fallis (2006)Footnote 46 and many others. Importantly, truly representing and not misrepresenting are competing aims because one can only fulfill the first aim by putting oneself at risk of not fulfilling the second. Weighting the second aim more would lead one to be more cautious in order to avoid possible misrepresentation. Weighting the first aim more would lead one to be bolder in order to possess more truths.

Our proposal develops this Jamesian thought in a novel direction: different ways of mixing these two aims correspond to doxastic attitudes at different places in the hierarchy.Footnote 47 There are attitudes like being-sure, where the second aim is appropriately weighted more (so that misrepresentation is worse and failing to truly represent isn’t so bad), and there are attitudes like suspecting, where the first aim is appropriately weighted more (so that failing to truly represent is worse and misrepresentation isn’t so bad). But, the aim of all of these doxastic attitudes is to possess the truth by having the attitude. It’s just the mixture of the twin aims—how important they should be in different situations—that varies.

We end this section by noting that “different ways of mixing” these two aims may but need not be construed as assigning each aim a scalar corresponding to its relative importance. (This scalar mixing approach appears to line up each distinct doxastic representation in the hierarchy with, what is in effect, some kind of minimal credal threshold.) Another way to mix the aims is to decide which sorts of error possibilities are tolerable in order to take a chance on possessing the truth. For instance, misrepresenting because one is deceived by an evil Cartesian demon might be tolerable for belief, but not for absolute certainty. By tolerating all error possibilities, one is giving all the weight to truly representing. By tolerating no error possibilities, one is giving all the weight to not misrepresenting. But, of course, there are many ways of tolerating some error possibilities, but not others. Each of these corresponds to a different way of mixing the aims of truly representing and not misrepresenting.Footnote 48
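On the scalar reading, the parenthetical link to credal thresholds can be spelled out in a standard expected-value way. This is our gloss, not the paper's: give truly representing a value a and misrepresenting a disvalue b; taking up the attitude then beats withholding exactly when the probability of the proposition exceeds b / (a + b).

```python
# Hedged sketch (our gloss) of how a scalar mixing of the two Jamesian aims
# yields a minimal credal threshold: value a for truly representing, disvalue
# b for misrepresenting. Representing <p> has positive expected value exactly
# when the probability of <p> exceeds b / (a + b).
def threshold(a, b):
    """Minimal probability at which taking up the attitude beats withholding."""
    return b / (a + b)

# Weighting "do not misrepresent" more gives a cautious, belief-like attitude;
# weighting "truly represent" more gives a bolder, suspicion-like attitude.
assert threshold(1, 4) == 0.8   # caution: attitude rational only above .8
assert threshold(4, 1) == 0.2   # boldness: attitude rational above .2
```

Each way of setting the weights a and b thus corresponds to a distinct place in the hierarchy, just as the text suggests.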

Our proposal in Sect. develops this picture in more detail. In Sect. , we suggest that each doxastic attitude in the hierarchy might be characterized by a division of the space of possibilities into those that are more or less abnormal. To a first approximation, abnormality is the same as tolerability in the sense that abnormal error possibilities are usually tolerable. More exactly, possibilities are more abnormal for a doxastic attitude, D, to the extent that one has to have special evidence that they obtain before they become intolerable error possibilities for D. On the view in question, an agent rationally holds doxastic attitude, D, towards \(\langle p\rangle \) if and only if \(\langle p\rangle \) is true in, what are for D, the least abnormal possibilities that are strictly compatible with the agent’s experiences.

6 §5

The picture we have developed thus far may not satisfy those who contend that belief is theoretically significant. What we have is an alternative to credence-based epistemology, but it is representation-based rather than belief-based, per se.

In order to show that we have not abandoned belief, we must demonstrate how belief occupies a special place in the hierarchy of doxastic attitudes. Our suggestion is (roughly) that belief is the weakest doxastic attitude such that rationally holding that attitude towards a proposition normally suffices for rational acceptance of that proposition.Footnote 49 (Recall: acceptance is the mental state of taking some propositional content \(\langle p\rangle \) for granted—whether consciously or not—in practical reasoning and rational decision-making; acceptance causes acting as if \(\langle p\rangle \) is true.) Belief’s being the ‘weakest’ doxastic attitude with feature F means that there is no other doxastic attitude, D, with feature F such that rationally believing entails rationally holding D, but not vice-versa. ‘Normally suffices’ means that special circumstances—and in particular, risky payoff structures—have to become apparent in order to break the link between rational belief and rational acceptance. Risky payoff structures may be apparent if, for example, the subject has reason to believe or even suspect that standing causal regularities allow for significant changes in utility that would be attributable to having taken one course of action rather than another.Footnote 50 (Notice that credence is not obviously necessary for tracking risky payoff structures.) When the link between rational belief and rational acceptance is broken, it must be possible in principle for the subject to explain the link away by citing the special circumstances in question. But, no explanation is owed for why the link holds in canonical cases. ‘Normally suffices’ is plausibly vague; it is not clear which are the normal possibilities in which the link to rational acceptance has to hold for a doxastic attitude to count as belief. But, we think that this fuzziness corresponds to a genuine indeterminacy in which among the doxastic attitudes in the hierarchy is belief.

Our theory of belief allows us to make belief the starting point of rational decision-making. When one is deliberating about what to do, it makes sense to start by considering whether one’s beliefs support the conclusion that one of the possible actions—say \(\uppsi \)-ing—would be best.Footnote 51 If so, \(\uppsi \)-ing is ordinarily rational. If not, it may not be immediately clear what to do. One may have to consider what other doxastic attitudes one has as well as the potential costs and benefits of both correctly and erroneously acting as if some particular action would be best.

It is beyond the scope of this paper to delve further into the mechanics of (bounded or unbounded) rational decision-making.Footnote 52 It suffices for our purposes to emphasize that the hierarchy of doxastic attitudes introduced previously is compatible with putting belief at the center of the rational decision-making process. The suggestion is that our first resort when considering what to do is simply to consider whether there is any possible action that we believe to be best vis-à-vis our ends. In other words, belief has the first word on what means to our ends we should take, even if it doesn’t have the last. That is a significant enough role for belief to have in determining action.

7 §6

At the end of Sect. , we suggested that a representation-based epistemology can imitate credence-based epistemology in a respect that’s important for accounting for rational action.Footnote 53 Let us pause to defend this suggestion briefly. Just as there are many credal states, so there can be many doxastic representational states. Moreover, just as credal states are assigned real numbers from the unit interval, so we could assign doxastic representational states real numbers from the unit interval in accordance with their strength. As a consequence, there would be enough doxastic representational states to register all the epistemic positions registered by credences. And, in principle, that should allow for equally fine-grained determinations of action by both. This should be obvious; in principle, one could assign credence level r (where \(0 \le r \le 1\)) to a proposition \(\langle p\rangle \) if and only if level r corresponds with the strongest doxastic representational state it would be rational to hold towards \(\langle p\rangle \), and calculate expected utilities in the usual way.
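The imitation strategy just described can be given a numerical sketch. The figures below are hypothetical stand-ins for the car insurance case, not values the argument dictates.

```python
# Hedged sketch of the imitation strategy applied to the car insurance case:
# assign a credence matching the strongest rational representational state
# towards <my car will be stolen>, then calculate expected utilities in the
# usual way. All numeric values here are hypothetical.
p_theft = 0.01          # credence mirroring a weak attitude like suspecting
premium = 10            # the nominal price of the insurance from the case
loss_if_stolen = 20000  # hypothetical uninsured loss if the car is stolen

eu_buy = -premium                       # you are covered whatever happens
eu_decline = -p_theft * loss_if_stolen  # expected uninsured loss

assert eu_buy > eu_decline  # purchasing maximizes expected utility
```

Even a minimal credence in theft, multiplied by a large enough loss, outweighs the small premium, which is just the credal explanation of the case rehearsed in the introduction.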

Imitation is an available theoretical option that puts representation-based epistemology on a par with credence-based epistemology. However, in the remainder of this section, we consider a potential reason not to favor imitation and consider what an alternative to imitation might look like.

Consider san diego.

san diego: John and Jane are wondering whether there are any mules in the San Diego Wild Animal Park that are cleverly disguised as zebras. John knows that students from UC Sunnydale and Caltech had planned to put a mule cleverly disguised as a zebra in the Park, but that the Caltech students cancelled their plans to participate. John doesn’t know anything about how difficult this prank would be to pull off. Jane has heard a rumor that somebody might be interested in the prank, but doesn’t know who is interested or what their plans might be. Jane works for the San Diego Wild Animal Park and knows about the security at the Park. She knows it would be very difficult for UC Sunnydale students to get past the security, but, because of differences in the engineering curriculum, the Caltech students could figure out how.

How should we compare the strength of the epistemic positions that John and Jane occupy with regard to the proposition \(\langle \) There is no mule in the San Diego Wild Animal Park that is cleverly disguised as a zebra \(\rangle \)? John is in a better position to rule out possibilities in which cleverly disguised mules are placed in the Park by Caltech students, but Jane is in a better position to rule out possibilities in which cleverly disguised mules are placed in the Park by UC Sunnydale students. Which matters more? It is a theoretical possibility that this question doesn’t have an answer.

We can put this point more abstractly. In the story, John’s experiential condition is strictly incompatible with some of the possibilities in which the proposition highlighted last paragraph is false.Footnote 54 Many of these are possibilities that are not strictly incompatible with Jane’s experiential condition. Let the set of these possibilities be S. Jane’s experiential condition is also strictly incompatible with some of the possibilities in which this proposition is false. Many of these are possibilities that are not strictly incompatible with John’s experiential condition. Let the set of those possibilities be \(S^{*}\). Whether John’s or Jane’s epistemic position is stronger rests at least partly on whether, by having experiences strictly incompatible with S and \(S^{*}\) respectively, John’s experiential condition is strictly incompatible with “more” or “less” possibilities than Jane’s is. But, the cardinality of these two sets may be the same. And, neither is a subset of the other. Maybe the possibilities of one of these sets are, in some suitable epistemic sense, collectively more probable than the possibilities of the other are. However, it strikes us that comparisons of epistemic probability of this sort need not make sense. After all, it is hardly clear that \(\langle \) There is no mule in the San Diego Wild Animal Park that is cleverly disguised as a zebra \(\rangle \) should have a definitive epistemic probability for anyone. As a result, the epistemic positions of John and Jane may be incommensurable. John’s epistemic position may be better in one respect. Jane’s epistemic position may be better in another. But, there may be no fact of the matter about which is better simpliciter.

In effect, we are considering the theoretical possibility that epistemic positions need not have the structure of the real numbers between zero and one inclusive.Footnote 55 Epistemic positions may not be totally ordered by strength; the ordering may only be partial. For any two epistemic positions x and y, it may be that neither is x stronger than y, nor is y stronger than x, nor are they equal in strength. (The suggestion is that some epistemic positions might be incommensurable, not that all are.) If so, then degrees of belief—credences—are exactly the wrong kind of doxastic attitude to register epistemic positions because degrees of belief are totally ordered.Footnote 56 Persons in epistemic positions of incommensurable strength with respect to some proposition would have to assign a lower, higher, or equal credence relative to one another even though their respective epistemic positions are neither stronger, weaker, nor exactly the same.

One might wonder how there could be epistemic positions of incommensurable strength. Couldn’t we discover the exact strength of an epistemic position for some proposition \(\langle p\rangle \) by seeing what kinds of odds a rational agent in that epistemic position would require in order to bet on the truth of \(\langle p\rangle \)—ignoring, of course, practical or moral considerations that might distort betting patterns? However, the theoretical possibility under consideration is precisely one in which the epistemic position itself does not warrant any particular betting pattern. Of course, if offered a series of bets with improving odds, a rational agent will be forced into adopting a betting pattern of some sort. However, in the theoretical possibility under consideration, the choice of betting pattern will not be adequately constrained by the agent’s epistemic position. She will be forced into a betting pattern that is, to some degree, arbitrary at least as far as her epistemic position with respect to \(\langle p\rangle \) is concerned.

Moreover, we think that the betting pattern of this rational agent need not be directly revelatory of her total doxastic state either. By way of analogy, consider Buridan’s ass. Even if the ass opts for the stack of hay on the left rather than right, this need not indicate a preference on the part of the ass for the stack of hay on the left rather than right. Ex hypothesi, the ass has no reason to prefer the stack of hay on the left, so, arguably, no such preference is rational. Of course, the ass should implement some plan or other in order to eat hay, but this plan might well be settled upon arbitrarily rather than wholly on the basis of an ungrounded preference for left. Similarly, we think, a particular betting pattern with respect to \(\langle p\rangle \) need not reveal some doxastic attitude of definitive strength towards \(\langle p\rangle \), particularly if that doxastic attitude would not be warranted by a current epistemic position that is not only fallible, but genuinely equivocal (i.e. not of definitive strength) with respect to \(\langle p\rangle \). Instead, the betting pattern may reveal only a strategic coping plan of the agent—e.g. a pragmatic credal function rather than one constituting a genuine opinion with regards to \(\langle p\rangle \). The rational agent may choose this strategic coping plan somewhat arbitrarily albeit partly by reference to a total doxastic state that is also equivocal (i.e. not of definitive strength) with regards to \(\langle p\rangle \) so as to precisely mirror her equivocal epistemic position.

The hierarchy of doxastic representations introduced in Sect. is well equipped to handle the theoretical possibility under consideration. The hierarchy may, but need not, be totally ordered by strength; indeed, it may be only partially ordered. So, it might do better at registering sometimes equivocal and therefore only partially ordered epistemic positions than the credal scale does. Of course, a hierarchy of doxastic attitudes wouldn’t have to be a hierarchy of doxastic representations in order not to be totally ordered by strength. Still, we can see that representation-based epistemology may have some flexibility that credence-based epistemology does not obviously have—at least insofar as credence is construed as a kind of subjective probability.

How much this flexibility matters depends partly on whether we can find a case where difference of epistemic position matters for whether an action is rational even though this difference is between epistemic positions that are of incommensurable strength. We won’t pursue further whether there are such cases. We think it may be interesting enough that, in principle, a representation-based psychology might be better equipped to register differences in the strength of epistemic positions than a credence-based psychology is regardless of whether this makes any difference for rational action.

If the hierarchy of doxastic attitudes isn’t a total ordering by strength, what kind of structure might it have? As alluded to at the end of Sect. , perhaps, the strength of doxastic attitudes is ordered by what sorts of possibilities are “relevant” in considering whether to hold the attitude or, equivalently, by which sorts of possibilities are “irrelevant” so that error in them is tolerable.Footnote 57 In considering whether to believe a proposition—e.g. \(\langle \) There is a goldfinch in the garden \(\rangle \)—certain skeptical possibilities in which the proposition is false—e.g. possibilities involving fake goldfinches—might be irrelevant. They might be irrelevant in the sense that the strict compatibility of these possibilities with one’s experiential condition need not count against belief. But, in considering whether to be absolutely certain that the proposition is true, these same possibilities might be relevant; they must be strictly incompatible with one’s experiential condition in order for taking up this attitude of absolute certainty to be rational. The idea would be that for every set of possibilities, there is a unique doxastic attitude for which this is the set of relevant possibilities. Suppose we have doxastic attitudes D and \(D^{*}\) with sets X and \(X^{*}\) as their sets of relevant possibilities, respectively. Then, D is stronger than \(D^{*}\) if and only if X is a proper superset of \(X^{*}\). If X is not a proper superset of \(X^{*}\), then it will turn out that D is not stronger than \(D^{*}\). And, if \(X^{*}\) is also not a proper superset of X, then it will turn out that \(D^{*}\) is not stronger than D. Then, D and \(D^{*}\) turn out to be incommensurate with respect to strength.
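The proper-superset ordering can be sketched directly. The possibility labels below are hypothetical placeholders.

```python
# Sketch of the proper-superset ordering on attitudes via their sets of
# relevant possibilities. The labels "w1"..."w4" are hypothetical possibilities.
X_D     = frozenset({"w1", "w2", "w3"})  # possibilities relevant for D
X_Dstar = frozenset({"w1", "w4"})        # possibilities relevant for D*

def stronger(X, Y):
    """D is stronger than D* iff X is a proper superset of X*."""
    return X > Y  # Python's > on sets is exactly proper-superset

# Neither set properly includes the other, so neither attitude is stronger:
# D and D* come out incommensurable in strength.
assert not stronger(X_D, X_Dstar)
assert not stronger(X_Dstar, X_D)
```

This is why the resulting hierarchy is only partially ordered: set inclusion, unlike the ordering of the reals, leaves many pairs incomparable.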

As attractive as this suggestion might be, it can’t be quite right. The problem is that while certain possibilities might not be relevant now, they might become relevant in more bizarre circumstances. Imagine that you know that there is a prankster in the neighborhood planting fake goldfinches in gardens. These bizarre circumstances might make possibilities involving fake goldfinches relevant even if they weren’t relevant before. It might be that it wouldn’t be rational to believe \(\langle \) There is a goldfinch in the garden \(\rangle \) unless one’s experiences are strictly incompatible with these possibilities. This suggests that we move instead to a framework where possibilities are more or less abnormal, where more abnormal possibilities become relevant to whether to believe on the condition that one discovers oneself to be in more bizarre circumstances. Stronger doxastic attitudes will be ones for which “crazier” possibilities are taken to be more normal, and thus more easily relevant to whether to hold those attitudes. Weaker doxastic attitudes will be ones for which only slightly surprising possibilities are taken to be more abnormal, and thus less easily relevant to whether to hold those attitudes.

In fact, the kind of framework that we have in mind is explored formally by those interested in non-monotonic consequence relations.Footnote 58 With classical monotonic consequence, if \(\langle p\rangle \) is a consequence of \(\langle q\rangle \), then it is also a consequence of \(\langle q\rangle \) and \(\langle r\rangle \). This is no longer valid for non-monotonic consequence relations. Non-monotonic consequence relations are, of course, an attempt to formalize genuinely “ampliative” reasoning where the conclusion may “go beyond” what is contained collectively in the premises. Because conclusions “go beyond” the premises, one may have to backtrack on the conclusion upon learning more even without giving up any of the previous premises. For this reason, the same conclusions don’t necessarily follow from strengthened premise sets. As it turns out, non-monotonic consequence relations can be characterized or “represented” by truth-preservation in the set of “least abnormal” possible worlds.Footnote 59, Footnote 60 (Good ampliative reasoning is reasoning to a conclusion that can be false even when one’s premises are true; more abnormal possible worlds will be—by definition—those where this happens.) Different non-monotonic consequence relations can thus be characterized or “represented” by different orderings of possible worlds in terms of normality. When you impose a normality ranking on possibilities, you are, in effect, imposing a non-monotonic consequence relation.
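A minimal executable sketch of this characterization, in the style of preferential semantics, with a hypothetical two-world model. The bird/penguin case is the stock illustration of non-monotonicity, not the paper's own example, and the sketch assumes the premise is satisfiable in the model.

```python
# Minimal preferential-semantics sketch (our toy encoding). Worlds are dicts
# of atomic facts; rank gives abnormality (lower = more normal).
worlds = [
    {"bird": True, "flies": True,  "penguin": False},  # rank 0: normal world
    {"bird": True, "flies": False, "penguin": True},   # rank 1: abnormal world
]
rank = [0, 1]

def nm_entails(premise, conclusion):
    """p |~ q iff q holds in the least abnormal worlds satisfying p.
    Assumes the premise is satisfiable in the model."""
    satisfying = [i for i, w in enumerate(worlds) if premise(w)]
    best = min(rank[i] for i in satisfying)
    return all(conclusion(worlds[i]) for i in satisfying if rank[i] == best)

# "bird |~ flies" holds, but strengthening the premise defeats it:
assert nm_entails(lambda w: w["bird"], lambda w: w["flies"])
assert not nm_entails(lambda w: w["bird"] and w["penguin"], lambda w: w["flies"])
```

Adding a premise retracts a conclusion without any earlier premise being given up, which is exactly the failure of monotonicity the text describes; changing the rank vector changes which consequence relation is imposed.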

Our (toy) suggestion, then, appears to amount to this: doxastic attitudes within our hierarchy are individuated by non-monotonic consequence relations—or more intuitively, by the differences in what counts as valid ampliative reasoning for that attitude—that generate, at a particular time, the doxastic attitudes from (the same full set of) premises offered up by a subject’s experiences. By way of concrete example, reasoning (in the absence of any particular background information) from the starting point of a perceptual experience as of something with a barn façade to \(\langle \) That thing is a barn \(\rangle \) may be valid non-monotonic reasoning for belief, but not for a much stronger doxastic attitude. The idea is that this difference may participate in individuating belief from its stronger counterpart (in roughly the same way having a non-negative predecessor helps to individuate the number one from zero).

More generally, weaker doxastic attitudes have very strong non-monotonic consequence relations—you can get a lot more out (in terms of “deductive” strength of the conclusion) for what you put in (in terms of the collective “deductive” strength of the premises). At the limit would be an absurd doxastic attitude that it would be rational to take towards any proposition. This corresponds to a limiting case of a “non-monotonic” consequence relation that allows you to get everything on the conclusion side out of anything on the premise side. (It is a limiting case because this “non-monotonic” consequence relation is monotonic; monotonic consequence relations are a special case of “non-monotonic” consequence relations on our understanding.) Stronger doxastic attitudes have very weak non-monotonic consequence relations. At the limit would be the doxastic attitude of absolute certainty that it would only be rational to take towards propositions that strictly follow from one’s experiential condition.
This corresponds to another limiting case of a “non-monotonic” consequence relation that allows you to only get out on the conclusion side what you put in on the premise side. (Again, it is a limiting case because this “non-monotonic” consequence relation is monotonic.)

Notice that non-monotonic consequence relations are not totally ordered by strength. Of course, some non-monotonic consequence relations are stronger than others, in the sense that the consequences of the former (for any premise set) include the consequences of the latter (for that same premise set) and more besides. So, the doxastic attitude individuated by a stronger non-monotonic consequence relation will be (strictly) weaker than the doxastic attitude individuated by the weaker non-monotonic consequence relation. But, in many cases, we don’t have “consequence inclusion” of this sort. So, correspondingly, we will have two doxastic attitudes that are potentiallyFootnote 61 incommensurable in strength.

By way of illustration, consider again John and Jane from san diego. Ex hypothesi, John’s experiential condition is strictly incompatible with possibilities of S; so, in principle, he doesn’t need anything from a non-monotonic consequence relation to rule them out. But, he would need something from a non-monotonic consequence relation to rule out the possibilities of \(S^{*}\); those are strictly compatible with his experiential condition. Jane is in the reverse position. For John, whether it is rational for him to take up a doxastic attitude, D, towards \(\langle \) There is no mule in the San Diego Wild Animal Park that is cleverly disguised as a zebra \(\rangle \) depends on whether the non-monotonic consequence relation individuating D lets him rule out the possibilities of \(S^{*}\) given his experiential condition. For Jane it depends on whether the non-monotonic consequence relation individuating D lets her rule out the possibilities of S given her experiential condition. Suppose that the latter is true, but the former isn’t. In such a case, Jane is rational in taking up D, but John isn’t. Still, there should be some other non-monotonic consequence relation that lets John rule out the possibilities of \(S^{*}\) given his experiential condition, but doesn’t let Jane rule out the possibilities of S given hers. And, this non-monotonic consequence relation should individuate a further doxastic attitude, \(D^{*}\), that it will be rational for John to take up but not Jane. So, D and \(D^{*}\) are of incommensurable strength; the rationality of taking up one doesn’t entail the rationality of taking up the other.Footnote 62

Some of the doxastic attitudes individuated by non-monotonic consequence relations are, of course, completely ridiculous.Footnote 63 There is a doxastic attitude within this hierarchy that it is rational to take up towards \(\langle \) I am a recently disembodied spirit deceived by a Cartesian demon \(\rangle \) in the ordinary sort of case where one has a perceptual experience as of one’s hand—at least assuming the latter experience doesn’t provide conclusive reason for rejecting this proposition. This is simply because there is a way of ranking possibilities by normality so that bizarre possibilities including Cartesian demons get ranked the most normal. Belief is obviously not an attitude of this sort. The non-monotonic consequence relation individuating belief would have to be characterized by a normality ranking on possibilities that is fairly intuitive; the possible worlds ranked as more normal by the ranking function for belief really are more normal. For that reason, rational belief provides a pro tanto reason for acceptance: because rational beliefs are true in possible worlds that really are more normal, it is ordinarily reasonable to rely on them in deliberating about what to do.

It is not clear whether in the envisioned hierarchy there will be a unique weakest doxastic attitude such that rationally holding that attitude towards a proposition normally suffices for rational acceptance of that proposition. This is not just because ‘normally suffices’ is vague. For example, a number of doxastic attitudes incommensurate in strength may normally suffice for rational acceptance even though no strictly weaker attitudes would. We think that this would simply indicate some further indeterminacy in which among the hierarchy of doxastic attitudes is belief.

Before we move on, it may be helpful to say something more about the relation between the envisioned hierarchy of doxastic attitudes and Bayesianism. Although the envisioned hierarchy is only partially ordered by strength, an ideally rational agent should be able to superimpose a kind of pragmatic credence function for purposes of decision-making partly on the basis of her doxastic attitudes.Footnote 64 This may be the best way to deal with a kind of situation raised earlier: a well-ordered series of proposed bets on the truth of \(\langle p\rangle \) with improving odds. Certainly, the model of doxastic attitudes under consideration doesn’t prohibit an agent from assigning pragmatic credences and doing so in accordance with the axioms of probability. Indeed, the model might naturally be supplemented so as to give some positive guidance as to how to assign pragmatic credences. For instance, perhaps relatively high pragmatic credence should be assigned to at least certain propositions that it is rational to believe.Footnote 65 Presumably, the model might put other constraints on the assignment of pragmatic credence as well (or instead). Once pragmatic credence is assigned in conformance with these constraints, rational decision-making could proceed by calculating expected utilities in the usual way.

Of course, as alluded to earlier, a pragmatic credence function generally won’t be uniquely determined from the doxastic representations that it is rational to hold, but we shouldn’t expect it to be. Some arbitrariness in assigning (any kind of) credence to a proposition is to be expected once it is conceded that the space of epistemic positions (as ordered by strength) doesn’t have the structure of the unit interval. If we can’t even make sense of the idea that some stronger epistemic position, e, is (literally) twice as strong as another epistemic position, \(e^{*}\), then the choice of whether or not to register the difference in strength with a credence that’s twice as large seems arbitrary. Unless strengths of epistemic position line up with numbers from the unit interval, they likely won’t dictate a particular assignment of credence. The arbitrariness of the assignment of credence will obviously carry over to rational decision-making. This is an apparent cost of acknowledging that epistemic positions can be of incommensurable strength (and hence aren’t like real numbers).Footnote 66

Nevertheless, the assignment of pragmatic credences to propositions on the basis of doxastic attitudes from the entire hierarchy isn’t nearly as arbitrary as an assignment merely on the basis of beliefs. The doxastic hierarchy is built to be able to register all aspects of the strength of one’s epistemic position towards a proposition rather than merely whether the epistemic position is strong enough for belief to be rational. In effect, an ideally rational agent will “mirror” the epistemic position she stands in with respect to a proposition by registering all the doxastic attitudes and withholdings of doxastic attitudes from the hierarchy that are rational. If this total “mirroring” state doesn’t uniquely constrain pragmatic credence, it’s simply because the agent’s epistemic position doesn’t uniquely constrain pragmatic credence. The total “mirroring” state will constrain pragmatic credence insofar as the agent’s epistemic position does. Thus, any pragmatic credence assignment properly based on this total mirroring state is as good as it gets for decision-making, even if the assignment isn’t uniquely proper.

In any case, the principal point we want to make is that it is possible to embed the machinery for rational decision-making from the Bayesian framework within the model of doxastic attitudes that we have been considering. Consequently, reasons to favor calculating expected utilities as the uniquely ideal way of making rational decisions are not obviously reasons to reject this model. At the same time, reasons to favor other methods are not necessarily reasons to reject the model either, since other methods of decision-making could be appended to the model instead.Footnote 67

Furthermore, on this model, epistemic rationality has nothing to do with probability. Rather, for a doxastic attitude to be epistemically rational, it must be true in all of the least abnormal possible worlds (according to its individuating normality ranking) that are compatible with the subject’s experiential condition.Footnote 68 This is a safety conception of epistemic rationality rather than a probabilistic one.Footnote 69 Given this significant departure from anything like probabilism, it cannot be said that the model of doxastic attitudes under consideration is parasitic on the Bayesian framework. Together, the two italicized statements in this and the preceding paragraph suggest that the model under consideration is neither obviously implausible (at least as an idealization), nor a mere imitation of Bayesianism.

Of course, it doesn’t follow that we should immediately accept the model either. The model suggests that, at any point of time, an agent is delivered a set of premises—presumably by his or her experiences up to that point in time—that determine the rationality of various kinds of doxastic representations by way of their individuating non-monotonic consequence relations. But, of course, the deliverance of a premise set by one’s experiences may well be a point of concern for some.Footnote 70 And, there are probably other problems as well. Fortunately, we need not defend this model. We merely raise it as a theoretical alternative to emphasize that representation-based epistemology need not imitate credence-based epistemology, which may turn out to be an advantage rather than a deficit.

8 A problem for our proposal

In this last section, we will consider a problem for our proposal. The suggestion in Sect. was that belief has the first word when it comes to rational decision-making. The implication is that cognizers can largely get along in their rational decision-making by trafficking in beliefs. But, belief does not have the last word. There are certain cases—as illustrated by car insurance—where a rational agent needs to move beyond beliefs to consider other doxastic attitudes. However, this raises the question of how a rational agent is to keep track of whether it is rational to hold stronger or weaker doxastic attitudes in order to act rationally in these cases. Must a cognizer simultaneously be trafficking in these other doxastic attitudes in order to act rationally when these cases arise? If so, then little advantage is gained by pointing out that a rational agent need not consider these attitudes ordinarily. If a rational agent has to be managing these attitudes anyway alongside belief, then cognition is already very taxing.

We think that this problem is not fatal for our proposal. Depending on a rational agent’s cognitive resources, it may make sense to keep track of some other doxastic attitudes—e.g. being-sure or suspecting—alongside beliefs. However, we also think that a great deal of information about whether to hold other doxastic attitudes towards a proposition is captured by our system of beliefs. Perhaps this is easiest to see when the doxastic attitude in question is weaker than belief. It doesn’t seem far-fetched to suppose that we can evaluate the plausibility of more speculative theories on the basis of our beliefs, and this can help us to appreciate whether weaker doxastic attitudes towards these speculative theories are rational. Notice, though, that this should also help us to appreciate whether stronger doxastic attitudes are rational: if it is rational to suspect \(\langle p\rangle \), then this limits how strong a doxastic attitude it would be rational to take towards \(\langle \) not p \(\rangle \).

Our system of beliefs may also be able to tell us quite a bit about whether to hold stronger doxastic attitudes because we have extensive knowledge about how our belief-management system works. For instance, we know that beliefs about features in our environment that are currently visible are of higher quality because visual perception is fairly dependable. So, stronger doxastic attitudes towards these contents are typically rational. On the other hand, we know that beliefs that are more removed from immediate perceptual experience tend to be of lower quality, as are beliefs that are partly maintained by memory. So, stronger doxastic attitudes towards these contents may not be rational. Of course, even beliefs removed from immediate experience can be of fairly high quality if they have support from multiple sources. Often (even if not always) we know something about how beliefs are supported, i.e. we have beliefs about their credentials. This may allow us to see that stronger doxastic attitudes are, in fact, rational. Indeed, even if we can’t remember the exact credentials of some particular belief, we can usually tell quite a bit from the belief’s content about which faculties have been involved in managing it. These discriminatory abilities again tell us something about whether a stronger doxastic attitude might be rational. Obviously, some information about whether to hold stronger doxastic attitudes towards propositions that we believe is lost, but a surprising amount seems to be stored implicitly by believers who are—in roughly the sense of Sosa (2011)—reflective. By remembering and otherwise forming rational beliefs about the sources of belief and the general quality of those sources, a bounded cognizer can do a fairly good job of determining whether to hold doxastic attitudes that are stronger than belief.

The upshot is, as we alluded to at the end of Sect. , that there is a certain sense in which the answer to the Further Doxastic State Question might very well be “further beliefs.” By way of reminder (again), the Further Doxastic State Question is “What are those further doxastic states that put you in a position to appreciate that some possible action would be, in fact, rational in cases like car insurance in which, by your own beliefs, this action is worse?” The answer implicit in Sects. – is “the withholding of a doxastic attitude that is stronger than belief (but necessary for rationally acting as if the belief is true given the special circumstances of the case).” However, what we are now suggesting is that whether to withhold this stronger doxastic attitude might be roughly determined on the basis of one’s system of beliefs. Consequently, in a certain sense (that avoids the wrong content problem mentioned in Sect. ), the further doxastic states might turn out to be beliefs after all. That obviously would work in favor of those who emphasize the theoretical importance of belief.

9 Conclusion

The principal project of this paper is to defuse a certain problem for those who maintain the theoretical prominence of belief. The problem comes to light when attempting to account for rational decision-making without making reference to rational credence (as a way of tracking either rational action or rational belief), particularly in cases with risky payoff structures (as illustrated by car insurance).

However, at its core, the problem is really that a psychology with only belief and withholding belief lacks the capacity to register all the nuances of the subject’s epistemic position with respect to any given proposition. We suggest the problem can be solved by adding to the psychology. Further representational doxastic attitudes can be used to reflect more about one’s epistemic position. Because they are representational in the sense that their truth conditions are conditions for accuracy-entailing correctness, the epistemology of these doxastic attitudes should be very similar to the epistemology of belief (and unlike the epistemology of credence).

We proposed that ideal rational decision-making might draw on these other doxastic representations. However, whether to hold these different doxastic representations can often be determined on the basis of one’s beliefs. So, in practice, belief without credence might well be enough to account for rational decision-making across a wide range of cases. Either way, the resulting alternative picture retains the central features of traditional epistemology—most saliently, an emphasis on truth as a kind of objective accuracy—while adequately accounting for rational action.