1 Introduction

What does logic tell us about how we ought to reason? If P entails Q, and you believe P, should you believe Q? There seem to be cases where you should not: for example, if you have evidence against Q, or if the inference is not worth making. So we need a theory telling us when an inference ought to be made, and when not. I will argue that we should embed the issue in an independently motivated contextualist semantics for ‘ought’. With the contextualist machinery in hand, we can give a theory of when inferences should be made and when not.

Section 2 explains the background and the main problems connecting logic with norms of reasoning. Section 3 explains the two parameters we need for contextualism about ‘ought’—a set of live possibilities and a standard. Section 4 discusses the objection from belief revision (this and the other problems will be explained in Sect. 2) and argues that it can be solved by using the set of live possibilities, as can the preface paradox (Sect. 5) and the problem of excessive demands (Sect. 6). Section 7 discusses the problem of clutter avoidance and argues that it can be solved by using the relevant standard. Section 8 discusses the implications for blame and guidance. Section 9 concludes.

2 Background

What is the relation between logic and reasoning? For example, suppose an agent believes that P. Suppose also that Q is a logical consequence of P, but leave open whether the agent believes that Q is a logical consequence of P.Footnote 1 Should the agent infer that Q? (To fill out the example, P might be ‘it’s raining and if it’s raining then it’s wet’ and Q might be ‘it’s wet’.)

A useful starting point is that logic ‘prescribe[s] universally how one ought to think’ (Frege 1893/1903/2009, p. 15). This suggests that the agent ought to believe Q. We might try to capture the idea with the following norm:Footnote 2

Strong Normativity Thesis

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, then S ought to believe Q

(The reason for calling it ‘strong’ will emerge just below.)Footnote 3

Various purported counter-examples have been given (Harman 1986, Field 2009, 2015, MacFarlane ms) based around four main problems:Footnote 4

Belief Revision

The agent might have strong evidence against Q. If so, they should surely revise their belief that P, rather than believe Q.Footnote 5

The Preface Paradox

Suppose S rationally believes each of the assertions in his book, P1, P2…Pn. Let Q stand for the conjunction P1 & P2 & … & Pn. Q is entailed by the author’s beliefs. But surely, since the author regards himself as fallible, he should not believe the conjunction of all his assertions.Footnote 6

Excessive Demands

Some consequences of an agent’s beliefs are too complicated for them to work out. For example, Fermat’s Last Theorem follows from the rules of arithmetic. But surely most humans who know the rules of arithmetic have no obligation to believe Fermat’s Last Theorem.

Clutter Avoidance

Some consequences of an agent’s beliefs are too uninteresting to be worth working out. For example, an agent might be able to infer that either grass is green or Elvis lives on the moon, using disjunction introduction. But surely they have no obligation to make such an inference, and might even be irrational for doing so.

One reaction to these problems is to weaken the Strong Normativity Thesis. To see how this might be done, it is helpful to review MacFarlane’s taxonomy of three choice-points:

1. Type of deontic operator. Do facts about logical validity give rise to strict obligations, permissions, or reasons for belief? Does the agent have
   (i) a requirement/obligation to believe Q,
   (ii) permission to believe Q, or
   (iii) a reason to believe Q?

2. Polarity. Are these obligations/permissions/reasons to
   (i) believe, or
   (ii) not to disbelieve?

3. Scope of deontic operator. Does the deontic operator govern:
   (i) the consequent of the conditional (if S believes P then S ought to believe Q),
   (ii) both the antecedent and the consequent (if S ought to believe P then S ought to believe Q), or
   (iii) the whole conditional (S ought to believe that if P then Q)?

For each choice-point, the options described move roughlyFootnote 7 from more demanding to less demanding. For example, a norm that says agents are required to believe Q is more demanding than a norm that says agents merely have permission to believe Q. The Strong Normativity Thesis takes the first, and most demanding, option at all three choice-points. It says that agents have a requirement rather than a permission or reason, that the requirement is to believe rather than merely not disbelieve (which includes suspension of belief), and that the requirement attaches only to the consequent.Footnote 8 The existing literature largely considers weakening the Strong Normativity Thesis by moving to the less demanding options at these choice-points.Footnote 9 But I don’t think these choice-points are the right places to weaken the link between logic and reasoning.

I think we need a different way to weaken the Strong Normativity Thesis—the key is that ‘ought’ is context-sensitive, and the Strong Normativity Thesis is true only with a particular sense of ‘ought’. I will explain this in the next section, then show how the counter-examples are avoided.

Eight quick clarifications (impatient readers can skip to the next section): First, I take reasoning to be a process of transitioning between beliefs. Beyond that I remain neutral on what reasoning is.Footnote 10

Second, I remain neutral on the existence and nature of other epistemic norms in the area e.g. norms of belief (Fassio 2019), norms that agents should collect more evidence (Friedman forthcoming) etc.

Third, I remain neutral on whether norms of reasoning are fundamental or derived from more fundamental synchronic norms (Hedden 2015).

Fourth, I will focus on deductive inferences rather than inductive inferences. I think my account can be extended to inductive inferences, but do not do so here.

Fifth, I take ‘reasoning’ to mean the same as ‘inferring’. The latter is useful for talking about individual inferences, which is a more natural locution than ‘an individual act of reasoning’.

Sixth, I will use ‘correct’, ‘bad’ and ‘good’ only as normative terms, and use ‘valid’ for logical relations.

Seventh, I will assume that there can be epistemic reason to believe, and to make (or not to make) an inference to a belief.Footnote 11 I will assume that an inference is a type of action, so there can also be practical reason to make (or not to make) an inference. I will remain neutral on whether there can be practical reasons to believe and on whether epistemic reasons are fundamental or are ultimately grounded in practical reason.Footnote 12

Finally, I take there to be a close connection between ‘ought’, ‘should’, ‘good’ and ‘reasons’. Specifically, I assume that ‘what one should do’ is synonymous with ‘what one ought to do’, ‘what one has most reason to do’ and ‘what is good’ (Shafer-Landau 2005; Broome 2013; Berker 2018). I will focus on contextualism about ‘ought’, but I take this to have straightforward implications for contextualism about other normative terms (Finlay 2014).Footnote 13 I remain neutral on which, if any, is fundamental.

3 Two parameters

Suppose Napoleon, an eighteenth-century general, and Heimson, a twentieth-century schizophrenic, utter the same sentence: ‘I am Napoleon’. There is a sense in which ‘I’ means the same thing in both utterances. This type of meaning can be thought of as a rule picking out whoever is speaking; this is the character (Perry 1979; Kaplan 1989). And there is a sense in which ‘I’ means different things in each utterance, Napoleon and Heimson respectively; this is the content. So the content of any utterance of ‘I’ depends on a parameter: the speaker. We can make the parameter explicit by adding to the text who ‘I’ is relative to, e.g. ‘I-Napoleon’ or ‘I-Heimson’.

An analogous view regarding ‘ought’ has become increasingly popular.Footnote 14 In fact it is plausible that there are at least twoFootnote 15 parameters needed to fix the content of a sentence including ‘ought’—a standard and a set of live possibilities. In this section I will explain the view, and also separate the core commitments from stronger positions we need not be committed to.

3.1 Propositions/possible worlds

The first parameter is a modal base which determines a proposition or set of live possible worlds.Footnote 16 The live worlds are those compatible with the modal base. If the modal base is empty then all worlds are live. As the modal base grows, the set of live worlds is restricted. On the standard theory of modals (Kratzer 1981), ‘it must be that p’ means, roughly, that in all the live worlds, p.

This parameter is often called the ‘information set’, but using information here is too restrictive, for two reasons. The first reason is that information is naturally taken to imply truth. However, it will be important that agents can make good inferences from false beliefs.Footnote 17 We can allow that the modal base consists of the beliefs of the subject of the sentence, or the speaker, or some third party, or the collective beliefs of a group, or the propositions known by any of the former, or any of these plus a number of fixed propositions, and endless further options.

The second reason is novel. I think information is too restrictive because the parameter needs to vary with the agent’s abilities, not just their information. What one ought to do depends on what possibilities one can bring about in the future.Footnote 18 To motivate this, suppose you are on the beach and see someone struggling in the water. Whether you ought to dive in depends on whether you can swim. ‘You ought to dive in given that you can swim’ is true, while ‘you ought to dive in given that you cannot swim’ is false. In a context in which you can swim, performing the rescue yourself is a live possibility; in a context in which you cannot swim, this possibility is ruled out. So the live possibilities can be determined in part by a set of actions. With the assumption that making an inference is a type of action,Footnote 19 the value of the live possibilities parameter can depend in part on which inferences the agent can make (Fig. 1).

Fig. 1: Live possibilities vary with abilities

We will also make the standard assumption that when ‘ought’ occurs in the consequent of a conditional, the antecedent of the conditional is added to the modal base. Consider ‘if S believes P then S ought to believe Q’. The ‘ought’ has a modal base which includes ‘S believes P’. We can make this explicit e.g. ‘if S believes P then S ought-given-S-believes-P to believe Q’.
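To see how the first parameter works, here is a minimal sketch in Python (my own illustration, not the paper’s formalism; representing a world as the set of propositions true at it, and the names live_worlds and must, are assumptions made for the example):

```python
# A minimal sketch of a modal base and live worlds (illustrative assumptions only).

def live_worlds(worlds, modal_base):
    """Worlds compatible with every proposition in the modal base."""
    return [w for w in worlds if modal_base <= w]

def must(p, worlds, modal_base):
    """'It must be that p' is true iff p holds in every live world."""
    return all(p in w for w in live_worlds(worlds, modal_base))

worlds = [frozenset({"rain", "wet"}), frozenset({"sun", "dry"})]

# Empty modal base: every world is live, so 'must wet' is false.
print(must("wet", worlds, modal_base=set()))      # False
# Restricting the modal base (e.g. by the antecedent of a conditional, 'it's raining')
# shrinks the live worlds and can flip the verdict.
print(must("wet", worlds, modal_base={"rain"}))   # True
```

The second print mirrors the conditional case just described: adding the antecedent to the modal base restricts the live worlds against which the modal in the consequent is evaluated.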

3.2 Standard

The second parameter is a standard or goal which determines an ordering of the live possible worlds. Plausibly, ‘S ought to A’ is true iff S A’s in every live world at the top of the ranking.Footnote 20
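As a rough companion sketch (again my own toy model: the scoring function stands in for the standard, and the truth condition is the one just stated, that S A’s in every top-ranked live world):

```python
# Sketch: a standard modeled as a scoring function that orders the live worlds (illustrative only).

def best_worlds(live, score):
    """The live worlds ranked highest by the standard."""
    top = max(score(w) for w in live)
    return [w for w in live if score(w) == top]

def ought(action, live, score):
    """'S ought to A' is true iff S A's in every top-ranked live world."""
    return all(action in w for w in best_worlds(live, score))

# Worlds record what S does; a toy epistemic standard rewards drawing the valid inference.
live = [frozenset({"infer_Q"}), frozenset()]
epistemic_score = lambda w: 1 if "infer_Q" in w else 0
print(ought("infer_Q", live, epistemic_score))   # True: in the best live world, S infers Q
```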

The standard need not be one that the subject cares about.Footnote 21 If I say ‘you ought to start with the cutlery on the outer edge’, the standard might be the rules of etiquette. The more explicit sentence is ‘by standards of etiquette, you ought to start with the cutlery on the outer edge’. This sentence can remain true even if you don’t care about etiquette. This allows us to say to the psychopath ‘you shouldn’t kill people’; the full sentence is ‘by standards of morality, you shouldn’t kill people’, and this is true even if the psychopath doesn’t care about morality.Footnote 22

For our purposes we only need to distinguish two standards: those corresponding to the epistemic ought and the practical ought.Footnote 23 We can get a grip on the epistemic ought by thinking about contexts where the conversation concerns some epistemic standard such as having true beliefs. Typical sentences might be ‘you ought to be uncertain’ or ‘we ought to expect defeat’.

Again, the standard need not be one that the subject cares about, so we need not assume that agents care about any epistemic goals. For example, someone who is told how a film ends ought (in the epistemic sense) to believe what they are told, even if they do not care how it ends, and even if they don’t want to know how it ends.Footnote 24 The full sentence might be ‘by epistemic standards, you should believe that this is how the film ends’.

There is disagreement about what the epistemic standard is. Leading contenders include having beliefs that are (a) true, (b) justified, or (c) knowledge.Footnote 25Footnote 26 The differences between these positions won’t matter here, so I will remain neutral. And we can remain neutral on whether the standard (e.g. truth) is constitutive of belief or whether something can be a belief without having such a standard.Footnote 27

This brings us to the practical ought.Footnote 28 We can get a grip on the practical ought by thinking about normal contexts where the conversation concerns what is best to do. Typical sentences might be ‘you ought to stay in school’ or ‘should I boil or steam the vegetables?’. Call the standard associated with the practical ought the practical standard.

There is disagreement about what the practical standard is. Humeans hold that the practical standard is a function of one’s desires e.g. the standard might be to maximize a weighted set of desires. Non-Humeans might hold that the practical standard is to maximize value. There are further debates about whether the practical standard is to maximize actual value or expected value, and whether expected value is determined by beliefs or evidence.Footnote 29 We can remain neutral on these issues.Footnote 30

We can also remain neutral on whether there are further parameters which determine the content beyond standards and propositions. For example, Carr (2015) argues that ‘ought’ must be relativized to a decision rule. This may be so, but it will not play a role below.

Now that we have this machinery on the table, I will argue that the problems regarding the norms of reasoning can be resolved. There are numerous precisifications of the Strong Normativity Thesis, some of which are true and some of which are false.

4 Objection from belief revision

Suppose S believes P, believes P entails Q, but S has strong evidence against Q. It seems that S should not come to believe Q. But this is difficult to accommodate if our principle has a claim in the consequent about what the agent should believe e.g.

Strong Normativity Thesis

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, then S ought to believe Q

This is the problem of belief revision.Footnote 31

To solve this problem (and the next) we need to distinguish the normative status of a belief from the normative status of an inference. Crucially, an agent can make a good inference from a bad (e.g. unjustified) belief.Footnote 32 Our question concerns reasoning, so we want to bracket the question of whether the initial belief was justified and focus on the question of whether the inference was good. So the solution to the problem of belief revision is to say that the inference to Q is good but the belief that Q is not.

What role does contextualism play here? It helps specify a modal base relative to which the inference is good. So we modify the Strong Normativity Thesis in two ways—we replace ‘believe’ with ‘infer’ and we make explicit the modal base:

Modified Strong Normativity Thesis

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, then S ought-given-P-entails-Q-and-S-believes-P to infer Q

This allows us to judge that the inference to Q is good qua inference, while remaining neutral on the epistemic status of the initial belief that P, and consequently remaining neutral on the epistemic status of a belief that Q.
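One possible regimentation of the modified thesis (the notation is mine, not the paper’s: $B_S$ for S’s believing, $I_S$ for S’s inferring, and $O_\Gamma$ for an ‘ought’ whose modal base is $\Gamma$) is:

$$\forall S\,\forall P\,\forall Q\ \big[(P \vDash Q \wedge B_S P) \rightarrow O_{\{P \vDash Q,\ B_S P\}}\, I_S Q\big]$$

The subscript makes explicit that the deontic verdict is relative to a modal base containing just the entailment fact and the belief fact, and nothing else about the agent’s epistemic situation.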

Someone might object that it is overall justification for the belief that Q that we are really interested in. If so, an account that brackets the rest of an agent’s epistemic states is unhelpful.

The first response is to flat-footedly reply that our question is about reasoning, not belief. But even if our concern were about overall justification of belief, our framework would be helpful—we would just have to identify some initial epistemic state (e.g. basic beliefs, or evidenceFootnote 33), then plug in the agent’s total initial epistemic state for P:

Modified Strong Total Normativity Thesis

For all agents S, and propositions P and Q:

If S’s initial-epistemic-state entails Q, then S ought-given-S’s-initial-epistemic-state to believe Q

So this reasoning framework can be placed into a bigger story about rational belief. But rational belief raises numerous tricky issues such as internalism, defeaters and inductive reasoning which go beyond the scope of this paper.

Terminology: In future, rather than repeating the whole antecedent appended to ‘ought’, I will just write ‘oughtA’.

5 The preface paradox

In this section I will argue that the same response, that of relativizing ‘ought’ to a possibilities parameter, solves the preface paradox:

Preface Paradox

Suppose S rationally believes each of the assertions in his book, P1, P2…Pn. Let Q stand for the conjunction P1 & P2 & … & Pn. Since the author regards himself as fallible, he should not believe the conjunction of all his assertions (Q).

Thus S believes P1, P2…Pn and that they entail Q, but S should not believe Q. The problem is usually taken to be that of explaining why the author should not make the inference to Q.

But there is a sense in which the author should make the inference. If we move from talk of belief to talk of inferences, and set the live possibilities parameter to the proposition that S rationally believes P1, P2…Pn, then we can hold that the inference to Q is correct after all:

Modified Strong Normativity Thesis

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, then S oughtA to infer Q

So there is a sense in which S ought to infer Q.

Someone might object that this misses the point, arguing that the Preface Paradox shows that the agent should not make the inference. But why not? The standard answer begins with the observation that each of P1, P2…Pn has partial justification, and then concludes that justification for Q might fall below some threshold.Footnote 34

But the level of justification of each of P1, P2…Pn is not at issue. We are assessing the deductive inference from P1, P2…Pn to Q (not the belief that Q) and thereby setting aside the justificatory status of P1, P2…Pn. It is irrelevant whether each of P1, P2…Pn has only partial justification, or is even completely unjustified. We are asking whether it is correct to infer Q from the set of beliefs that P1, P2…Pn, and indeed it is.

It might be useful to draw an analogy with Lewis’s (1980) Principal Principle. Roughly, it says that an agent should have credence of x in P given that they are rationally certain that the chance of P is x. If the agent is not rationally certain what the chance is, then the Principal Principle says nothing directly about what the agent should believe. Similarly, if the agent does not believe each of P1, P2…Pn then the Modified Strong Normativity Thesis says nothing directly about what the agent should believe. In fact the Modified Strong Normativity Thesis never says anything directly about what an agent should believe. It just says that given the set of beliefs P1, P2…Pn they should infer Q.
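For reference, Lewis’s principle can be stated roughly as follows (the formulation is my gloss): where $Cr$ is the agent’s rational initial credence function, $X$ is the proposition that the chance of $P$ is $x$, and $E$ is any admissible evidence,

$$Cr(P \mid X \wedge E) = x.$$

Like the Modified Strong Normativity Thesis, the principle is simply silent when its condition is not met.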

6 Excessive demands

The problem of excessive demands is that we sometimes cannot work out the consequences of our beliefs. All the theorems of mathematics follow from our beliefs about arithmetic, but surely we are not required to infer them.

This conflict between logic and the norms of reasoning can be resolved by again invoking the live possibilities parameter. Above we used the belief part of the live possibilities; here we use the actions part of the live possibilities, invoking the assumption that an inference is a type of action.

From any belief there are an infinite number of valid inferences that could be made, of which some are simple and some are complicated. Let’s first focus on the infinite set of valid inferences. There is a sense of ‘ought’ which includes all valid inferences in the possibilities parameter. (Bayesians will be familiar with this, as it is what ‘rational’ usually means in the Bayesian literature.Footnote 35) We can make this ideally rational ‘ought’ explicit by using ‘ought-rationally’. And we can make explicit the sense of ‘ought’ which is limited to inferences some particular agent is able to make with ‘ought-actually’. We get a false principle if we combine ought-actually with all the valid inferences:

False Requirement (FR)

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, then S oughtA-actually to infer Q

This implies that S ought-actually to believe all theorems of mathematics. This is the root of the problem of excessive demands.

But the objection is side-stepped if we use ought-rationally:

Rational requirement (RR)

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, then S oughtA-rationally to infer QFootnote 36

The objection is also side-stepped if we use ought-actually and add to the antecedent that the inferences are those S is able to make, which we can call ‘S-available inferences’.

Non-rational requirement (NR)

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, and P supports Q via-S-available-inferences, then S oughtA-actually to infer Q

Thus the live possibilities parameter solves the problem of excessive demands by providing a reading of the Strong Normativity Thesis on which one is not required to infer all the theorems of mathematics (NR), and it also explains the intuition that there is a sense in which one should infer all the theorems of mathematics (RR).

To fill this out, we can imagine three different sentences which can be inferred from P.

Q1: An obvious inference that any reasoner can make

Q2: A difficult inference that a logic student can make

Q3: A superhuman inference that no human can make

Let w3 be the world where S infers Q3, Q2 and Q1; let w2 be the world where S infers Q2 and Q1; and let w1 be the world where S infers only Q1. Worlds are ordered (vertically) by how well they achieve the epistemic goal (Fig. 2).

Fig. 2: Three worlds in which increasingly complex inferences are made

Agents ought to make true the best live world. Relative to ought-rationally, S ought to make w3 true. But if w3 is not live, then S ought-actually to make only w2 true. And if S is unable to do difficult logical reasoning, then only w1 is live.
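A toy version of this contrast (my own sketch; the assumption that the epistemic ranking tracks how many valid inferences are drawn is made only for illustration):

```python
# Sketch of 'ought-rationally' vs 'ought-actually' via the live-worlds parameter (illustrative).
w1 = frozenset({"Q1"})                # only the obvious inference
w2 = frozenset({"Q1", "Q2"})          # also the difficult inference
w3 = frozenset({"Q1", "Q2", "Q3"})    # also the superhuman inference

def ought(inference, live):
    """True iff S makes the inference in the epistemically best live world."""
    best = max(live, key=len)         # toy ranking: more valid inferences is better
    return inference in best

print(ought("Q3", live=[w1, w2, w3]))   # True: ought-rationally, all three worlds are live
print(ought("Q3", live=[w1, w2]))       # False: ought-actually, w3 is not live
print(ought("Q2", live=[w1, w2]))       # True: S ought-actually to infer Q2
```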

One complication concerns which inferences the agent is able to make, as there is some flexibility about what is held constant. Suppose the agent is tired, and this restricts the inferences they do make. What should we hold fixed when assessing what inferences they are able to make? If we hold fixed that the agent is tired, we will get one set of available inferences. If we allow them a nap we will get a bigger set of inferences. If we allow them to take a mathematics course, we will get an even bigger set of inferences.

I don’t want to take a stand on this, a topic which has been discussed in the literature on ought-implies-can.Footnote 37 I suspect that ‘able’ is also context-sensitive, which would make ‘available inferences’ context-sensitive. And any vagueness in ‘available’ will be matched by vagueness in ‘ought’. The truth of NR just requires that the demands of ought-actually do not extend beyond the available inferences.Footnote 38

We have explained how excessive demandingness can be avoided by positing the relatively undemanding norm of NR. But we’ll see that NR might still be too demanding.

7 Clutter avoidance

NR (and RR) still face the problem of clutter avoidance. They seem to imply that I am obligated to believe all of the infinitely many trivial logical consequences of my beliefs. This looks implausible. Steinberger (2019, p. 11) writes:

Not only do I not care about, say, the disjunction ‘I am wearing blue socks or Elvis Presley was an alien’ entailed by my true belief that I am wearing blue socks, it would be positively irrational for me to squander my meagre cognitive resources on inferring trivial implications of my beliefs that are of no value to my goals.

Many philosophers have concluded that there must be a no-clutter norm,Footnote 39 but such norms cause serious problems.Footnote 40

I think the problem of clutter avoidance can be solved by invoking the parameter of the standard. I will argue that in normal contexts you ought not to make trivial inferences, yet we can identify contexts in which you should make trivial inferences.

The sense of ‘ought’ in which you ought to infer all the trivial logical consequences of your beliefs is the epistemic sense (e.g. the standard of having all and only true beliefs).Footnote 41 We can make this parameter value explicit with ‘epistemically-ought’:

Non-rational Epistemic-Requirement (NER)

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, and P supports Q via-S-available-inferences, then S epistemically-oughtA-actually to infer Q

(NER uses available inferences and ‘oughtA-actually’. For completeness, note that the ‘rationalized’ version is also true, where we remove the restriction to available inferences:

Rational Epistemic-Requirement (RER)

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, then S epistemically-oughtA-rationally to infer Q)

If NER and RER seem implausible, it might be because for humans there is always some cost in time, energy or computing power to making an inference. But imagine creatures for whom there is no such cost, e.g. angels with infinite computing power. If they are at all interested in truth, knowledge or justification, then they will instantaneously make all the inferences from their beliefs. And we could explain the rationality of their doing so in terms of the epistemic ought. Although humans are not like this, I think it is natural to invoke such ideals.

For further support, Christensen (2004, pp. 165–166) gives the following example:

Efficiency seems to enter into the evaluation of car designs in a fairly simple way: the more efficient a car is, the better. Now suppose someone objected to this characterization as follows: “Your evaluative scheme imposes an unrealistic standard. Are you trying to tell me that the Toyota Prius hybrid, at 49 mpg, is an ‘inefficient’ car? On your view, the very best car would use no energy at all! But this is technologically impossible…the very laws of physics forbid it!”

Christensen points out that this objection fails to undermine our ideal of efficiency, concluding that there is room for unattainable ideals even in the most pragmatic endeavours, and that we can recognize the normative force of ideals whose realization is far beyond human capacities.Footnote 42

Moving on, the sense of ‘ought’ in which it is not the case that you ought to infer all the trivial logical consequences of your beliefs is the practical sense:

False Practical-requirement (FPR)

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, and P supports Q via-S-available-inferences, then S practically-oughtA-actually to infer QFootnote 43

As long as S has limited cognitive capacity and reasons to do things other than believe truths (e.g. to eat, to reproduce), as all known agents do, FPR will have counterexamples. The problems of clutter avoidance involve such counterexamples.Footnote 44Footnote 45

We can see in Fig. 3 that relative to the practical standard, the best world could be w1, where only obvious inferences are made and the agent can spend their resources doing something else.

Fig. 3: Worlds re-ordered by the practical goal
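The reordering can be shown with a small sketch (the numbers and the weight given to non-epistemic goals are arbitrary assumptions, chosen only to illustrate how a practical standard can reverse the epistemic ranking):

```python
# Sketch: the same live worlds ranked by two different standards (all values illustrative).
w1 = {"true_beliefs": 1, "time_for_other_goals": 1}   # only the obvious inference is made
w2 = {"true_beliefs": 2, "time_for_other_goals": 0}   # the trivial cluttering inference is also made

epistemic_score = lambda w: w["true_beliefs"]
practical_score = lambda w: w["true_beliefs"] + 3 * w["time_for_other_goals"]

live = [w1, w2]
print(max(live, key=epistemic_score) is w2)   # True: by the epistemic standard, w2 is best
print(max(live, key=practical_score) is w1)   # True: by the practical standard, w1 is best
```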

For practical ought claims to be true, the agent must have practical goals such that it is worth making the inferences. So the true norm is something like:

Non-rational Practical-requirement (NPR)

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, and P supports Q via-S-available-inferences, and it is worth S making the inferences, then S practically-oughtA-actually to infer Q

(For completeness, note that the ‘rationalized’ version is also true, where we remove the restriction to available inferences:

Rational Practical-requirement (RPR)

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, and it is worth S making the inference then S practically-oughtA-rationally to infer Q)

I suggest that NER, RER, NPR and RPR express the link between logic and reasoning.

8 Ideals, guidance, appraisal

Norms can be used (i) for expressing ideals, (ii) for guidance, and (iii) for making appraisals.Footnote 46 But different norms seem to be required for each role. I want to show that our intuitions about the divergence of norms for ideals, guidance and appraisal can be accounted for by the two parameters.

8.1 Ideals

Let’s start with ideals, which are closely related to standards. Think of the ideal norm as expressing the best way of achieving a given standard, making no allowance for any limitations of an agent or other standards the agent might have. We’ll focus on the epistemic standard e.g. believing all and only truths, so the relevant ought is epistemic-ought. As any limitations of the agent are irrelevant for the ideal norm, we need ought-rationally. Putting this together the ideal norm is:

Rational Epistemic-Requirement (RER)Footnote 47

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, then S epistemically-oughtA-rationally to infer Q

For precedent, compare the utilitarian thesis that an act is right if and only if it maximizes happiness. Faced with the objection that this norm fails to provide guidance, utilitarians can maintain that their principle expresses the ideal norm relative to the moral standard, even if we cannot always follow it.Footnote 48

There is a controversy worth mentioning before we go further. What is ideal reasoning for an agent who falsely believes that the relevant rule is invalid? For example, suppose S has been told by a confused teaching assistant that modus ponens is invalid. This is misleading higher order evidence. Should agents reason in line with their false beliefs? Some say no, that misleading higher order evidence should be ignored in first-order reasoning (level-splitters and right reasons theoristsFootnote 49). Others say yes (conciliationists). There is an analogous debate in ethics. Some hold that those with misleading higher order evidence about the ethical rules should (morally) ignore that misleading higher order evidence.Footnote 50Footnote 51

I have my own views on this controversy (Bradley 2019), but this framework allows us to remain neutral. At the end of Sect. 4 I argued that we can bracket the rest of the agent’s epistemic states, and in particular the question of whether P is justified, and focus on the inference from P to Q. Similarly, we can bracket any of the agent’s epistemic states that might defeat the inference i.e. make the inference from P to Q incorrect. The advice of a confused teacher would thereby be bracketed. Thus, I leave open the question of how, if at all, contextualism interacts with the debate about higher level evidence.

Whatever the ideal is, we can now ask how the norms of guidance and appraisal diverge from it.

8.2 Guidance

We expect that agents can be guided by norms, but ideal norms cannot always serve as norms of guidance. For example, a norm might say ‘if the exam asks for the capital of Portugal, then write “Lisbon”’. This expresses the ideal, but cannot guide an agent who doesn’t know it. (Perhaps better: doesn’t believe it.) In ethics, utilitarians accept that their theory needs to say something about guidance, and they offer norms that can be used to guide, e.g. maximize expected utility. In both cases, the natural solution is to hold that norms which can guide agents are restricted to refer only to beliefs and abilities the agent has.

Let’s again focus on the epistemic standard. S can only be guided by inferences available to S, so I suggest that the guidance norm is:

Non-rational Epistemic-Requirement (NER)

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, and P supports Q via-S-available-inferences, then S epistemically-oughtA-actually to infer Q.

8.3 Appraisal

What is it to be blameworthy for violating an epistemic norm?Footnote 52 Here is a useful principle adapted from Kauppinen (2018):

Epistemic Blameworthiness

S is blameworthy for violating an epistemic norm if and only if it is appropriate, other things being equal, to hold the subject accountable by reducing epistemic trust, insofar as she lacks an excuse.

I’m going to assume that Epistemic Blameworthiness is roughly correct. My aim in this section is to map our intuitions about what counts as an excuse onto the contextualist framework.

Distinguish two types of excuse for failing to make a valid inference.Footnote 53 Agents can be excused by being unable to make the inference, or by having no sufficiently good reason to make the inference. These excuses correspond to the two parameters. Let’s go through them. (I leave open that there might be other types of excuses. I give sufficiency conditions for excuses. Blameworthiness requires no excuses, so I give necessary conditions on blameworthiness.Footnote 54)

First, S might be unableFootnote 55 to make the inference because it is too complicated, and could thereby be excused.Footnote 56 In the contextualist framework, they still infer as they ought-actually to. So if the inference they fail to make is not one they ought-actually to make then they have an excuse.

A complication is that we might reduce our epistemic trust in an agent precisely because they are unable to make the valid inference. For example, it might be an inference that we can make, and which we expect others to make, so S’s inability to make that inference reduces our epistemic trust in S. The effect of this complication is to expand the live possible worlds to an intermediate level. For example, consider an agent who can only infer Q1, producing w1. Although they cannot infer Q2 and thereby produce w2, we expect them to be able to, while we do not expect them to infer Q3 and produce w3. So the best live world is w2, and S ought to produce it. Call this middling sense ‘ought-competently’. S might fail to infer Q3, but if S infers as they ought-competently then they have an excuse (Fig. 4).Footnote 57

Fig. 4: Live worlds for the competent

Second, S might have no sufficiently good reason to make the inference (because S has non-epistemic goals), and would thereby be excused. In the contextualist framework, they still infer as they practically-ought to. Once non-epistemic goals are added, the ordering of worlds can change, and the best world might be one in which the agent does not make the inference e.g. when the inference is trivial. In Fig. 5, S would be excused for failing to arrive at w2, as the best world is w1; in failing to arrive at w2 or w3, S infers as she practically-ought to. So if the inference they fail to make is not one they practically-ought to make then they have an excuse.

Fig. 5: Worlds re-ordered by the practical goal

Putting these together, if the inference they fail to make is either not one they practically-ought to make or not one they ought-competently make, then they have an excuse. Contrapositively, if agents are blameworthy for failing to make a valid inference then they fail to infer as they practically-ought-competently. So the norm of blame for reasoning is:

Non-rational Practical-Requirement+ (NPR+)Footnote 58

For all agents S, and propositions P and Q:

If P entails Q, and S believes P, and P supports Q via-S-competent-inferences, and it is worth S making the inferences,

then S practically-oughtA-competently to infer Q

Let’s try a case:

Melted Ice-cream

Alessandra has gone to pick up her children at their elementary school. It is hot, but she leaves the ice-cream she has brought for her children in the car. Although able to infer that the ice-cream will melt, she does not do so. By the time they return the ice-cream has melted.Footnote 59

Intuitively, Alessandra is epistemically blameworthy. We would reduce our epistemic trust in Alessandra if we learnt that she failed to realize that the ice-cream would melt. Our framework delivers this verdict if the inference to the belief that the ice-cream would melt is one she is both able to make and has sufficient practical reason to make. And indeed both conditions are satisfied. Alessandra has enough inferential competence to work out that the ice-cream would melt, and has sufficient practical interest in the ice-cream not melting.Footnote 60 She practically-ought-competently to have inferred that the ice-cream would melt, but she did not, so she is epistemically blameworthy (Fig. 6).

Fig. 6: A way to be epistemically blameworthy

Alessandra is excused if we make either of two modifications to the story. If we modify the story so that her full attention to something other than the ice-cream is a matter of life and death, then Alessandra is not epistemically blameworthy. For example, suppose she is a doctor and, as she parks, she sees that there has been an accident and only her full attention for several hours will save the life of a child. In such a context, a melting ice-cream is trivial in the same sense that it is trivial to infer that I am wearing blue socks or Elvis is alive. She does not have practical reason to make the inference, so she is not blameworthy.Footnote 61

Alessandra is also excused if the inference to the belief that the ice-cream would melt is not one she is able to make, nor one we would expect her to make. This requires a bit more imagination, but we could imagine that it is a typically cold day in the Arctic Circle where the ice-cream would normally not melt, but the car is parked in a place where heat from concave neighbouring buildings is focussed. Alessandra knows the contingent facts, but does not have the mathematical abilities necessary to work out that the ice-cream would melt. She ought-rationally to make the inference, but we would not expect her to be able to make the inference, so she is not blameworthy.

9 Conclusion

I have argued that many controversies about the norms of reasoning can be resolved by an independently motivated contextualist semantics for ‘ought’. The problems of belief revision and the preface paradox can be solved by relativizing to a set of propositions, the problem of excessive demands can be solved by relativizing to a set of available inferences, and the problem of clutter avoidance can be solved by relativizing to a standard. These parameters can also illuminate questions about which norms are relevant to ideals, guidance, and blame.Footnote 62