Indicative Conditionals Without Iterative Epistemology∗

Ben Holguín

September 2019

Abstract

This paper argues that two widely accepted principles about the indicative conditional jointly presuppose the falsity of one of the most prominent arguments against epistemological iteration principles. The first principle about the indicative conditional, which has close ties both to the Ramsey test and the "or-to-if" inference, says that knowing a material conditional suffices for knowing the corresponding indicative. The second principle says that conditional contradictions cannot be true when their antecedents are epistemically possible. Taken together, these principles entail that it is impossible to be in a certain kind of epistemic state: namely, a state of ignorance about which of two partially overlapping bodies of knowledge corresponds to one's actual one. However, some of the more popular "margin for error" style arguments against epistemological iteration principles suggest that such states are not only possible, but commonplace. I argue that the tension between these views runs deep, arising just as much for non-factive attitudes like belief, presupposition, and certainty. I also argue that this is worse news for those who accept the principles about the indicative conditional than it is for those who reject epistemological iteration principles.

1 Introduction

Yesterday there was a murder. You know either the butler or the gardener did it, but you don't know which of them it was. Are you in a position to know that if the butler didn't do it, then the gardener did? It certainly seems like you are. In fact, it seems plausible that for just about any pair of propositions p and q, if you know that p or q (but don't know whether p), then you're in a position to know that if ¬p, then q. Call this the principle of MATERIAL INDICATION (MI for short).
Here is another thing you seem to be in a position to know: it's not both the case that if the butler didn't do it, then it happened before midnight, and if the butler didn't do it, then it happened after midnight. In fact, it seems plausible that for just about any pair of propositions p and q, if p is compatible with what you know, then it's not both the case that if p, q and that if p, ¬q. Call this the principle of WEAK CONDITIONAL NON-CONTRADICTION (WCNC for short).

∗Forthcoming in Noûs. Thanks to Kyle Blumberg, David Boylan, Sam Carter, Jane Friedman, Simon Goldstein, Nico Kirk-Giannini, Harvey Lederman, Matt Mandelkern, Jake Nebel, and Jim Pryor for comments and discussion on earlier versions of the paper. Special thanks to Cian Dorr, Ginger Schultheis, and Trevor Teitel for extensive feedback over the course of the project. And especially special thanks to an anonymous reviewer for their detailed and extremely helpful comments, and also for providing many of the paper's neologisms.

MI and WCNC are widely accepted among philosophers of language.1 This paper argues that their conjunction has some striking epistemological consequences. In particular, it argues that anyone who accepts both MI and WCNC is committed to rejecting the possibility of being in a certain kind of epistemic state, which (for reasons that will emerge later) we will call a daisy chain. To be in a daisy chain is for there to be a pair of propositions p and q such that: you non-trivially know p ⊃ q (i.e., you know ¬p ∨ q without knowing whether p), but for all you know you non-trivially know p ⊃ ¬q. Those who accept MI and WCNC must deny that it is possible to be in such a state.

Why is it controversial to deny the possibility of daisy chains? Because according to some of the most popular models of inexact knowledge-e.g. those defended by Williamson (2000, 2011, 2014)-daisy chains are not only possible, but quotidian.
Their existence stems from the fact that knowledge requires a margin for error. So: two quite popular views in the philosophy of language are jointly in tension with a popular view in epistemology.

How ought we react to this result? By letting the epistemologists figure out whether daisy chains are possible, and then accepting or rejecting MI accordingly. This is because there is a principle in the vicinity of MI that is capable of accounting for the various intuitive considerations cited in favor of the original, but that does not presuppose that either WCNC is false or that daisy chains are impossible. Put roughly, it is the principle that knowing that one knows that ¬p or q suffices for knowing that if p, then q. The availability of this surrogate principle suggests the dialectical dependence between MI and daisy chains is asymmetric: though it will be easy to find non-question-begging arguments from the possibility of daisy chains to the falsity of MI, it will be difficult to find non-question-begging arguments from the truth of MI to the impossibility of daisy chains. We should thus settle the question of whether daisy chains for knowledge are possible before we settle the question of whether MI is true.

That's what the paper will argue at least. And although our focus will mostly be on the relationship between principles about the indicative conditional and theories of inexact knowledge, this is more for concreteness than anything else. Our central result-that the possibility of daisy chains for knowledge entails the falsity of the conjunction of MI and WCNC-will not employ any assumptions about the logic of knowledge that are implausible as assumptions about the logic of (full) belief, certainty, or presupposition.
Since MI and WCNC seem no less plausible when stated in terms of (full) belief, certainty, or presupposition, and since some of the central motivations for accepting daisy chains for knowledge extend just as well to these other attitudes, the discussion should be of broad significance.2

1 See the discussion and citations in §2.

2 To my knowledge, the only other theorist to argue in print that certain apparent platitudes about the indicative conditional are in tension with an anti-iterative conception of knowledge is Dorst (2019). The principles that are at the center of his argument include neither MI nor WCNC, though the kinds of considerations we cite in favor of MI in §2 are akin to some of those that appear in his discussion. Unfortunately, getting clear on how this paper's result affects the dialectic of his paper would take us too far afield. But I'll note here that the morals we draw from our result near the end of the paper are quite different from the ones he draws from his.

2 Two principles about the indicative conditional

We'll start with some terminological stipulations. 'K ' will abbreviate 'One is in a position to know', while '◊' will abbreviate '¬K¬' (i.e., 'For all one is in a position to know'). '→' will abbreviate the indicative conditional 'If. . . , then. . . '. Throughout we will make sure to interpret 'K ' and '→' relative to a single, uniform context. Our first principle of interest is:

MATERIAL INDICATION (MI) (◊p ∧ K(p ⊃ q)) ⊃ K(p→ q).

Equivalently: if you are in a position to non-trivially know a material conditional, then you are in a position to know the corresponding indicative conditional.

The pattern of reasoning codified by MI is widely attested in the literature on indicative conditionals.3 A few quick examples will show why. You come to know by testimony that either the butler or the gardener did it. Are you in a position to know that if the butler didn't do it, then the gardener did? It certainly seems like it. You get a quick glance at the top of the deck: you see enough to know that the card is red, but not enough to know whether it's diamonds or hearts. Are you in a position to know that if it isn't diamonds, it's hearts? Surely you are. Inferences of this sort abound. MI is a good explanation why.

Further evidence for MI comes from the fact that explicit violations of the principle seem bizarre:

(1) ?? Alice isn't sure whether Bob is in the office or not, though she knows either Bob or Carol has to be. But she doesn't know whether Carol is in the office if Bob isn't.

(2) ?? Although everyone knows that Lexie will either come to the party or stay home, not everyone knows whether she'll stay home if she doesn't come to the party.

There seems to be something incoherent about these sentences. Given MI, the diagnosis is straightforward: they couldn't possibly be true. Without MI, it is less clear why they seem so

3 Actually, there hasn't been that much discussion of MI in its own right, as evidenced by the fact that its name is new. But many theorists defend principles that directly entail MI, or entail it given some weak auxiliary assumptions. For example, Harper (1976) and Gärdenfors (1986) discuss the strictly stronger principle that being in a position to know the material conditional suffices for being in a position to know the indicative, regardless of whether the knowledge is non-trivial. See also Edgington (2014, §§3–4). Relatedly, many theorists try to give the indicative conditional a semantics that validates (or at least explains) the intuitiveness of the so-called "direct argument", which says that the inference from '¬p or q' to 'if p, q' is valid-see, e.g., Stalnaker (1975); Jackson (1979); Edgington (1995); Block (2008). Supposing K is closed under valid arguments, MI is just a special instance of this kind of reasoning.
See also §5's discussion of the connections between MI and the widely attested "Ramsey's test", the principle that the probability of the conditional is the conditional probability of its consequent given its antecedent.

terrible.4 It's also worth noting that if the material analysis of the indicative conditional is correct (i.e., the indicative conditional just is the material conditional), then MI is a tautology. And so we could also argue for MI by arguing for the material analysis. But this will not be our strategy, as the material analysis has some serious (and well known) empirical problems. To give just one example, suppose you know a fair 20-sided die is about to be rolled. How probable should you find the conditional if the die lands 20, it'll land 1–19? Intuitively you should find it not at all probable. But according to the material analysis, its probability is .95. This is good reason to think there is something defective about the analysis.

Another reason to reject the material analysis is that it is in tension with our second principle of interest, which is a restricted version of the principle of conditional non-contradiction:5,6

WEAK CONDITIONAL NON-CONTRADICTION (WCNC) ◊p ⊃ ¬(p→ q ∧ p→¬q).

The principle of conditional non-contradiction (CNC) says that a conditional (p → q) and its contrary (p → ¬q) cannot both be true; WCNC says that unless the conditional's antecedent is known to be false, that conditional and its contrary cannot both be true. CNC entails WCNC, so anyone who accepts the former should accept the latter. And those who think conditionals with contradictory antecedents present counterexamples to CNC can embrace WEAK CNC without reservation. Other than the fact that it seems obviously true, the most straightforward argument for WCNC is that sentences of the form 'If p, q and if p, ¬q' seem terrible, especially when p isn't known to be false.
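The 20-sided die case above can be checked with a few lines of arithmetic. Here is a minimal sketch; the event encodings and the helper `P` are ours, introduced only for illustration:

```python
from fractions import Fraction

# The fair 20-sided die case: p = 'the die lands 20', q = 'the die lands 1-19'.
outcomes = list(range(1, 21))

def P(event):
    """Probability of an event under a single fair roll."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

p = lambda o: o == 20
q = lambda o: 1 <= o <= 19

# The material conditional p ⊃ q, i.e. ¬p ∨ q: true at every outcome but 20.
material = lambda o: (not p(o)) or q(o)

print(P(material))                        # 19/20, i.e. .95
# The conditional probability of q given p, by contrast, is 0:
print(P(lambda o: p(o) and q(o)) / P(p))  # 0
```

The material conditional inherits the high probability of its antecedent's falsity, which is exactly the mismatch with intuition that the text describes.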
This is presumably why, with the exception of the material analysis, there is no mainstream theory of the indicative conditional that invalidates WCNC. We will thus assume it without further argument.

We have our two principles about the indicative: MATERIAL INDICATION and WEAK CONDITIONAL NON-CONTRADICTION. The first tells us that indicative conditionals are sometimes epistemically equivalent to material conditionals, while the second tells us that they are not always materially equivalent. Both feel platitudinous, and both happen to be widely accepted among theorists of the indicative conditional. We now turn to arguing that their conjunction has some striking and heretofore unnoticed epistemological consequences.

4 Though see §§6–7.

5 Theorists whose accounts of the indicative conditional support WCNC include: Stalnaker (1968, 1975); van Fraassen (1976); Kratzer (1986); Gillies (2004, 2010); Rothschild (2013).

6 Why is WCNC in tension with the material analysis? Because given the material analysis, whenever p is false it follows that p→ q and p→ ¬q are both true. So any p that is false but not known to be false will present a counterexample to WCNC.

3 The central argument

Our central argument is that given standard assumptions about knowledge, if we accept MI and WCNC we must also accept:

NO DAISY CHAINS (◊p ∧ K(p ⊃ q)) ⊃ ¬◊(◊p ∧ K(p ⊃ ¬q)).

In words: if one is in a position to non-trivially know the material conditional p ⊃ q-i.e., know p ⊃ q without knowing ¬p-then it's not the case that for all one is in a position to know, one is in a position to non-trivially know the contrary material conditional p ⊃ ¬q. Equivalently: if you non-trivially know a material conditional, then you know that you don't non-trivially know its contrary.7

NO DAISY CHAINS is so-called because the epistemic state it rules out happens to look like a daisy chain. We'll see a picture of one and get a sense of their epistemological significance shortly.
But first, an informal version of the argument. Suppose it were possible to non-trivially know p ⊃ q while not being able to rule out that you non-trivially know p ⊃ ¬q. Given MI, this would entail the possibility of knowing p → q while not being able to rule out that you know p → ¬q. But of course if you can't rule out that you know p→¬q, then you can't rule out p→¬q either. And if you know that p→ q but can't rule out that p→ ¬q, then you can't rule out p→ q ∧ p→ ¬q. But given WCNC, if there's anything you can rule out, it's that p→ q ∧ p→¬q. So it must not be possible to non-trivially know p ⊃ q while not being able to rule out that you non-trivially know p ⊃ ¬q.

More formally: assume that K obeys the normal modal logic KT, and thus that:

CLOSURE K(p ⊃ q) ⊃ (Kp ⊃ Kq).8

FACTIVITY Kp ⊃ p.

Then by adding MI and WCNC to that logic as axioms and closing it under necessitation and modus ponens, we'll get NO DAISY CHAINS as a theorem.9

7 For the sake of readability we will sometimes drop the 'in a position to' qualifier when translating our principles into English (especially when giving informal glosses, as we just did for NO DAISY CHAINS).

8 Those who are worried about the kinds of idealizations involved with CLOSURE should keep in mind that 'Kp' says that the relevant agent is in a position to know p, not that that agent knows p. Moreover, the uses to which these principles will be put will be fairly tame: our arguments will exclusively concern mundane indicative conditionals about the heights of trees and the like-not skeptical scenarios, semantic paradoxes, nested conditionals, or anything of that sort. So the argument could go through with various restrictions on CLOSURE if needed.

9 By adding MI and WCNC to our logic as axioms (and closing it under necessitation and modus ponens), we are assuming not merely that these principles are true, but that they are knowable, knowably knowable, knowably knowably knowable, and so on. Some might find this objectionable.
But strictly speaking the argument only requires that the two principles are knowable, not that their knowability iterates. And on that matter, it is difficult to imagine any cogent position on which MI and WCNC are true but unknowable. For one thing, all the evidence in their favor is open to view, so it's hard to see why we wouldn't be able to know them. For another, MI and WCNC are supposed to be "conceptual" truths about the indicative conditional-that is, the kinds of truths we can discover on the basis of a priori reflection.

Here is the proof. Suppose for reductio that NO DAISY CHAINS is false-i.e., that for some p and q:

(1) (◊p ∧ K(p ⊃ q)) ∧ ◊(◊p ∧ K(p ⊃ ¬q)).

From this MI allows us to derive:

(2) (◊p ∧ K(p→ q)) ∧ ◊(◊p ∧ K(p→¬q)).

And since for any p and q, Kp ∧ ◊q entails ◊(p ∧ q), (2) entails:

(3) ◊(◊p ∧ K(p→¬q) ∧ (p→ q)).

From FACTIVITY and (3) we can derive:

(4) ◊(◊p ∧ (p→¬q) ∧ (p→ q)).

Which is just the claim that:

(5) ¬K(◊p ⊃ ¬((p→¬q) ∧ (p→ q))).

But given WCNC and the necessitation rule, we know that:

(6) K(◊p ⊃ ¬((p→¬q) ∧ (p→ q))).

So, by reductio, there must be no p and q that satisfy (1). Therefore NO DAISY CHAINS.

4 Daisy chains and margin for error principles

We have our result: if you like MI and WCNC, then you better not like daisy chains. By way of warming up to its philosophical significance, it will help to have an intuitive feel for what daisy chains are. So here is a picture of one:

w1: p, q ↔ w2: ¬p, q ↔ w3: ¬p, q ↔ w4: p, ¬q

An epistemic daisy chain

Arrows represent epistemic accessibility-i.e., an arrow from one world to another indicates that, by the lights of what is known at the first world, the second world may be actual. In this model each world sees itself and the worlds immediately adjacent to it, but nothing else. Since w2 sees exactly one p-world-namely w1-and that world happens to be a q-world, at w2 we have ◊p ∧ K(p ⊃ q). And since w3 sees exactly one p-world-namely w4-and that world happens to be a ¬q-world, at w3 we have ◊p ∧ K(p ⊃ ¬q).
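These claims about w2 and w3 can be verified mechanically. The following is a minimal sketch of the four-world chain; the encoding of worlds and the operator names `K` and `poss` are ours:

```python
# The four-world daisy chain: p true at w1 and w4; q true everywhere but w4.
# Each world accesses itself and the worlds immediately adjacent to it.
R = {1: {1, 2}, 2: {1, 2, 3}, 3: {2, 3, 4}, 4: {3, 4}}
p = {1, 4}
q = {1, 2, 3}

def K(prop, w):
    """'In a position to know': prop holds at every world accessible from w."""
    return all(prop(v) for v in R[w])

def poss(prop, w):
    """The diamond (= not-K-not): prop holds at some accessible world."""
    return any(prop(v) for v in R[w])

in_p = lambda w: w in p
p_implies_q = lambda w: (w not in p) or (w in q)
p_implies_not_q = lambda w: (w not in p) or (w not in q)

# At w2: ◊p ∧ K(p ⊃ q) -- w2 non-trivially knows the material conditional.
assert poss(in_p, 2) and K(p_implies_q, 2)
# At w3: ◊p ∧ K(p ⊃ ¬q) -- w3 non-trivially knows the contrary conditional.
assert poss(in_p, 3) and K(p_implies_not_q, 3)
# And since w2 sees w3, at w2 we have ◊(◊p ∧ K(p ⊃ ¬q)): a daisy chain.
assert poss(lambda w: poss(in_p, w) and K(p_implies_not_q, w), 2)
print("NO DAISY CHAINS fails at w2 in this model")
```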
But w2 and w3 both see each other. So whatever is true at w2 must be compatible with what is known at w3, and vice-versa. And so at w2 we have: ◊p ∧ K(p ⊃ q) ∧ ◊(◊p ∧ K(p ⊃ ¬q)) (likewise for w3, but with the negations on the q's flipped). This is exactly what NO DAISY CHAINS says cannot happen.

The significance of this fact will become clearer if we allow ourselves to (superficially) change subjects for a moment. In particular, let us focus on Williamson's (2000, ch. 5) influential critique of KK, the principle that knowledge positively iterates:10

KK Kp ⊃ KKp.

Williamson's argument against KK begins with a case like the following:

Mr. Magoo Mr. Magoo is an adult with normal perceptual capacities judging the heights of trees at a distance. The tree Magoo is currently looking at appears to him to be about 100 feet tall. And in fact it is 100 feet tall. But given the limitations of his powers of discrimination, Magoo is such that whenever a tree appears to him to be x feet tall, it's compatible with what he knows that the tree is anywhere between x − 10 and x + 10 feet tall.

To get from this to the falsity of KK, Williamson invokes a "margin for error" principle for knowledge. Letting 'H ' abbreviate the description 'The height of the tree in feet' and letting x be a natural number, we have:

MARGIN FOR ERROR K((|H − x | ≤ 1) ⊃ ◊(H = x)).

In words: Magoo is in a position to know that for any height x, if the tree's actual height is within one foot of x, then for all Magoo knows the tree is x feet tall.11 The intuitive idea behind

10 The literature on KK is vast. For defenses of KK, see, e.g.: Hintikka (1962); Stalnaker (2009); Almotahari and Glick (2010); McHugh (2010); Cresto (2012); Cohen and Comesaña (2013); Fernández (2013); Greco (2014a,b, 2015b); Das and Salow (2018); Goodman and Salow (2018); Dorst (2019). And for critiques see, e.g.: Williamson (2000, 2011, 2014); Hawthorne and Magidor (2009, 2010); Dorr et al. (2014).
MARGIN FOR ERROR is that knowledge requires safety from error, and thus that if one's belief that the tree is at least x feet high amounts to knowledge, there isn't a nearby possibility in which one has a relevantly similar false belief. Since whenever a tree appears to Magoo to be x feet tall, that appearance could be plus or minus 10 feet off the tree's actual height, and since Magoo only has appearances to work with, Magoo simply isn't in a position to know the height of the tree to the nearest foot. Moreover, given the abundance of evidence concerning the limits of his powers of discrimination, Magoo's inability to identify the height of the tree to the nearest foot is something he is in a position to know about himself.

With MARGIN FOR ERROR in place, the argument against KK then runs as follows.12 Suppose for reductio that KK is true. By stipulation Magoo knows 90 ≤ H ≤ 110. So he knows H < 111. And by KK, he knows that he knows this. MARGIN FOR ERROR plus CLOSURE entails that if Magoo knows that he knows H < 111, then he knows that H isn't 110. So Magoo knows H isn't 110.

11 Why does MARGIN FOR ERROR say K((|H − x | ≤ 1) ⊃ ◊(H = x)) rather than K((|H − x | ≤ 10) ⊃ ◊(H = x))? Because we don't want to assume that Magoo is in a position to know his exact powers of discrimination. That is to say: we want the argument to be consistent with the possibility that for all Magoo knows, the appearances are never more than nine feet off the tree's true height. What we will assume, however, is that Magoo has enough evidence about the limits of his powers of discrimination to know that he can't do better than guess the height of the tree to the nearest foot. (The choice of numbers here is arbitrary; the important idea is just that the number that appears under the K in MARGIN FOR ERROR is smaller than the number determined by Magoo's actual margin for error.)
But this contradicts the stipulations of the case, for the strongest thing Magoo is in a position to know is that the tree is between 90 and 110 feet tall. Worse, the reasoning involved here can be extended indefinitely, allowing Mr. Magoo to deduce the absurd conclusion that, for any pair of natural numbers x and y, the tree is both less than x feet tall and greater than y feet tall. Thus KK must be false. So we have an argument from MARGIN FOR ERROR to the falsity of KK.

What does this have to do with NO DAISY CHAINS? Well, here is a more general way of thinking about the relationship between MARGIN FOR ERROR and KK, one that Williamson (2000, 2011, 2014) explicitly builds into his models of inexact knowledge. The information Magoo gets from his environment is noisy. By looking at a 100 foot tall tree, Magoo comes to know some but not all of the true propositions about how tall it is. Conjoin these propositions and you get the strongest proposition he knows about its height-i.e., the smallest range of heights that for all he knows are its actual one. In normal circumstances, the values in this range are directly correlated with the tree's actual height. If the tree were slightly taller, he'd be able to rule out some of the lower values, but at the cost of adding in some high ones. And if instead it were slightly shorter, he'd be able to rule out some of the higher values, but now at the cost of adding in some low ones. Shift the height of the tree, and the boundaries of what Magoo knows shift accordingly.13

12 Here we will ignore the distinction between knowing and merely being in a position to know. I'll also note that this is a somewhat simplified version of Williamson's argument-interested readers should consult the original.

And therein lies the problem for KK. If Magoo can't tell the difference between the tree's being 100 feet tall and 101 feet tall, and if the range of heights compatible with what he knows given that the tree is 100 feet tall
is different from the range of heights compatible with what he knows given that the tree is 101 feet tall, then how can he possibly know what the smallest range of heights compatible with his knowledge is? The range depends on the exact height of the tree, and he's in no position to know what that exact height is. So he must not be able to know what he knows about the height of the tree either. Or so the argument goes.

This is not the place to defend Williamson's MARGIN FOR ERROR principle, or the conception of inexact knowledge that comes with it. What is of interest to us is that this conception of knowledge predicts not just that daisy chains are possible, but that they are as common as cases like Mr. Magoo. To see this, consider again our model of a daisy chain from above, except with the worlds conspicuously relabeled in terms of heights of the tree:14

w90: p, q ↔ w100: ¬p, q ↔ w101: ¬p, q ↔ w111: p, ¬q

A model of Mr. Magoo

Let p be the proposition that the tree is either 90 or 111 feet tall, and let q be the proposition that the tree isn't 111 feet tall. And let the height of the tree at each world correspond to the number of that world. This gives us a plausible model of Mr. Magoo's epistemic state.

13 To be clear, MARGIN FOR ERROR doesn't itself entail that what Magoo knows about the tree's height shifts in accordance with shifts in the tree's actual height. See Goodman (2013) for models of inexact knowledge that validate MARGIN FOR ERROR without making the strongest proposition Magoo knows about a tree's height exquisitely sensitive to its actual height. I lack the space to engage with such models here, though I admit they complicate the connection I'm about to draw between MARGIN FOR ERROR and daisy chains. For our purposes, the relevant point is just that the shifty picture of inexact knowledge is a natural one to draw in light of MARGIN FOR ERROR, and in fact has been drawn by (e.g.) Williamson to model the knowledge of agents like Magoo.
And it's a daisy chain: at w100-i.e., the actual world-◊p ∧ K(p ⊃ q) ∧ ◊(◊p ∧ K(p ⊃ ¬q)). More explicitly: Magoo knows that the tree is between 90 and 110 feet tall, and thus knows that the tree isn't 111 feet tall. But because the proposition that the tree isn't 111 feet tall is at the margins of what he knows, he doesn't know that he knows it. That is to say: for all Mr. Magoo knows, it's compatible with what he knows that the tree is 111 feet tall. Likewise, Mr. Magoo doesn't know that the tree is greater than 90 feet tall. But because the proposition that he doesn't know that the tree is greater than 90 feet tall is at the margins of what he doesn't know, he doesn't know that he doesn't know it. That is to say: for all Mr. Magoo knows, he knows that the tree is greater than 90 feet tall. And this is why Mr. Magoo non-trivially knows the material conditional (90 ∨ 111) ⊃ ¬111, while also being such that, for all he knows he non-trivially knows the material conditional (90 ∨ 111) ⊃ 111. So Magoo's epistemic state is a daisy chain.

Here is another way of seeing the idea. Suppose like Mr. Magoo you're looking at a tree that happens to be around 100 feet tall. You know you're in normal conditions, but otherwise have no special information about the tree's height. I ask you: "What's the strongest proposition you know about the tree's height?" Given MARGIN FOR ERROR, no amount of reflection will deliver the answer to that question. There is some such proposition, but you're not in a position to know what it is. And that's because you're just not able to tell the difference between, on the one hand, being such that the strongest proposition you know is that the tree is between 90 and 110 feet tall and, on the other, being such that the strongest proposition you know is that it's between 91 and 111 feet tall. The margins are too small.

14 For simplicity's sake we omit all the worlds between w90 and w100, and between w101 and w111.
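The shifty picture just described can be sketched in a few lines. This is a toy rendering under the stipulations of the Magoo case (the 10-foot margin and the function names are ours):

```python
MARGIN = 10  # Magoo's margin for error, as stipulated in the Magoo case

def strongest_known(h):
    # The strongest proposition Magoo knows about the height when the tree
    # is actually h feet tall: the set of heights he cannot rule out.
    return set(range(h - MARGIN, h + MARGIN + 1))

def K(prop, h):
    # 'In a position to know': prop holds at every accessible height.
    return all(prop(v) for v in strongest_known(h))

# At 100 feet the strongest known proposition is 'between 90 and 110';
# at 101 feet it is 'between 91 and 111'.
assert strongest_known(100) == set(range(90, 111))
assert strongest_known(101) == set(range(91, 112))
# Each height is accessible from the other, so Magoo can't tell which
# of the two epistemic states he is in: the margins are too small.
assert 101 in strongest_known(100) and 100 in strongest_known(101)

# Hence the KK failure: at 100 feet Magoo knows H < 111, but he doesn't
# know that he knows it (the accessible height 101 sees the height 111).
below_111 = lambda h: h < 111
assert K(below_111, 100)
assert not K(lambda h: K(below_111, h), 100)
print("what is known shifts with the actual height")
```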
And if the margins make it such that you know the tree is between 90 and 110 feet tall, but for all you know you know the tree is between 91 and 111 feet tall, then your epistemic state is a daisy chain. You in fact non-trivially know that (90 ∨ 111) ⊃ ¬111, but for all you know you non-trivially know that (90 ∨ 111) ⊃ 111.

Thus: Mr. Magoo, one of the standard putative counterexamples to KK, is also a putative counterexample to NO DAISY CHAINS, and thus to the conjunction of MI and WCNC. Two principles supported by mundane generalizations about our ordinary thought and talk involving indicative conditionals turn out to be in tension with a popular (though not uncontroversial) theory of inexact knowledge.

5 Generalizing the argument

Before we give an intuitive diagnosis of the source of the tension, as well as our recommendation for how to react to it, it will be instructive to generalize the dialectic to other propositional attitudes. On that matter, recall that §3 argued that if we think K obeys:

CLOSURE K(p ⊃ q) ⊃ (Kp ⊃ Kq).

FACTIVITY Kp ⊃ p.

and if we accept both MI and WCNC, then we must also accept NO DAISY CHAINS. However, we don't actually need the full strength of FACTIVITY. The proof works all the same if in its place we make the strictly weaker assumption that K is shift-reflexive:15

SHIFT-REFLEXIVITY K(Kp ⊃ p).

And that's because the only thing FACTIVITY was used for was to secure the validity of the inference from ◊Kp to ◊p. Assuming that K has a normal modal logic, this inference goes through just as well with SHIFT-REFLEXIVITY in place of FACTIVITY.16

15 The assumption is strictly weaker because given CLOSURE, FACTIVITY entails but is not entailed by SHIFT-REFLEXIVITY.

16 More formally, the relevant steps in §3's proof are the transitions from (2) to (3) and (3) to (4):

(2) (◊p ∧ K(p→ q)) ∧ ◊(◊p ∧ K(p→¬q)).

Our central result thus extends to propositional attitudes other than knowledge.
Indeed, for any operator Φ, if we think Φ obeys (the relevant analogs of) CLOSURE and SHIFT-REFLEXIVITY, then we can accept MI and WCNC for Φ only if we accept NO DAISY CHAINS for it as well. With this in mind, let us assume that for rational agents the attitudes of (fully) believing, being certain, and presupposing (in the style of (e.g.) Stalnaker 2002) all satisfy their respective versions of CLOSURE and SHIFT-REFLEXIVITY.17 Many theorists accept this assumption, so I won't defend it at length. But I will briefly say the following in its favor.18 With respect to CLOSURE, it is unclear what, if anything, would drive someone to accept the principle when 'K ' is interpreted as knowledge, but not when it is interpreted in terms of one of these non-factive attitudes. And with respect to SHIFT-REFLEXIVITY, the thought is that even if these attitudes do not entail truth in the way knowledge does, they aim at it in a way that makes violations of SHIFT-REFLEXIVITY seem unpalatable. There is something less than fully rational about there being a particular proposition p such that it is compatible with what one believes/is certain of/presupposes that: one believes/is certain of/presupposes that p, but in fact ¬p. Hence SHIFT-REFLEXIVITY.

So: on the assumption that each of full belief, certainty, and presupposition obeys CLOSURE and SHIFT-REFLEXIVITY, it follows that we can only accept MI and WCNC for these attitudes if we accept NO DAISY CHAINS for them as well. This leads to the same dialectic we had when these principles were interpreted in terms of knowledge. In each case reflections on language make a strong case for MI and WCNC, while reflections on epistemology make a strong (or at least principled) case for the falsity of NO DAISY CHAINS.

For the sake of concreteness we will explore the dialectic in terms of certainty. We will thus temporarily reinterpret 'K ' as 'is rationally certain that', and mutatis mutandis for '◊'.
(Though from here on out we will leave the 'rational' part implicit.) Interpreting WEAK CONDITIONAL NON-CONTRADICTION certainty-theoretically, it says that if you aren't certain that ¬p, then it's not both the case that p → q and p → ¬q. This principle continues to seem both platitudinous and empirically well supported. There seem to be no contexts in which, for some p whose falsity isn't certain, it is appropriate to assert both that p→ q and p→¬q, or even to assert that it might be the case that both p→ q and p→¬q.

Likewise for MATERIAL INDICATION. Now the principle says that if you aren't certain that ¬p and are certain that p ⊃ q, then you are certain that p→ q. Once again all the evidence cited in favor of MI on its knowledge-theoretic interpretation extends to its certainty-theoretic one. For instance: you are certain that either the butler or the gardener did it. Should you be certain that if the butler didn't do it, then the gardener did? It certainly seems like you should. Moreover, explicit violations of the principle continue to seem terrible:

(3) ?? Alice has non-zero confidence that Bob is in the office, and is 100% confident that Carol is. But Alice isn't 100% confident that if Bob is in the office, then Carol is.

(4) ?? Although everyone is certain that Lexie will either come to the party or stay home, not everyone is certain whether she'll stay home if she doesn't come to the party.

Given MI, we have a nice explanation of these judgments.

(3) ◊(◊p ∧ K(p→¬q) ∧ (p→ q)).

(4) ◊(◊p ∧ (p→¬q) ∧ (p→ q)).

That K has a normal modal logic gets you from (2)–(3). That it obeys SHIFT-REFLEXIVITY gets you from (3)–(4).

17 Here we focus on rational agents so as to avoid having to use awkward expressions like 'is in a position to be certain that'. For our purposes it's the same degree of idealization as was used in the case of knowledge.

18 For further discussion see, e.g., Stalnaker (2006), Hawthorne and Magidor (2009).
Without it, it's hard to see what would be driving them. It's also worth noting that on its certainty-theoretic interpretation, MI is a straightforward corollary of the principle that the probability of an indicative conditional is the probability of its consequent given its antecedent:19

RAMSEY'S TEST (RT) P(p) > 0 ⊃ P(p→ q) = P(q|p).20

Here 'P' denotes the relevant agent's credence function. Assuming P(p) = 1 iff one is certain that p, MI is just RT in the special case where P(q|p) = 1. So if MI is false on this interpretation, then so is RT. Many theorists of the indicative conditional find RT so intuitive that they design their theories of the conditional to deliver it.21,22 But given that RT entails MI and that the conjunction of MI and WCNC entails NO DAISY CHAINS, we know that anyone who accepts RT and WCNC will also have to accept NO DAISY CHAINS. And although there are theorists who reject WCNC, to my knowledge not one of them accepts RT.23 Consequently, a lesson we should take from this paper's central result is that those who reject NO DAISY CHAINS should reject RT too. Ramsey's "platitudinous" idea turns out to presuppose the substantive epistemological thesis that rational certainty cannot admit of daisy chains. On that matter, we know that there's a good case to be made that knowledge admits of daisy chains.

19 Other names for the principle we are calling "Ramsey's test" include: Adams' thesis, Stalnaker's thesis, the thesis, the test, the equation, Ramsey's thesis, and Stalnaker's equation. The idea was first presented by Ramsey (1931) (hence our choice of name), and was popularized by Adams (1965) and Stalnaker (1968, 1970). See Hájek and Hall (1994); Edgington (1995); Douven and Verbrugge (2013) for helpful surveys of the issues surrounding the test.
20 Conditional probabilities are understood in the standard way: P(q|p) =def P(p∧q)/P(p) if P(p) > 0; otherwise P(q|p) is undefined.
21 Here is Williams (2009, p. 154) on the matter: The link between something like a probability of a simple conditional and the corresponding conditional probability is a centerpiece of many accounts of the indicative conditional; clearly many philosophers have found it compelling enough to build theories around it (or some surrogate). See as well Bennett (2003, §12) and Willer (2010) for similar remarks.
22 An important caveat concerns the various "triviality results" that have been proven for RT. The first of these results is due to Lewis (1976). Since then there have been numerous extensions. See Hájek and Hall (1994), Edgington (1995), and Bennett (2003) for helpful surveys. The existence of these results hasn't undermined support for RT all that much-at least not in the sense relevant to (e.g.) the quotation from Williams in the previous footnote-though it has forced theorists to clarify what exactly their commitment to RT amounts to. Some react to the results by denying that conditionals express propositions. They interpret the 'P' that appears in the statement of RT not as meaning probability of truth, but instead as meaning something along the lines of degree of assertability or degree of believability (see, e.g., Gibbard 1981, Edgington 1995). Some leave the interpretation of RT basically as is, but weaken the logic of the indicative conditional and make it highly context-sensitive (see, e.g., Bacon 2015). And some restrict RT to only those conditionals containing "categorical" antecedents and consequents-i.e., those that don't involve any nested conditionals (see, e.g., McGee 1989, Jeffrey and Edgington 1991, Jeffrey and Stalnaker 1994). With respect to this last category of responses, it's worth noting that our daisy chain-style counterexamples to the conjunction of MI and WCNC involve only categorical indicative conditionals. (Thanks to an anonymous reviewer for drawing attention to this point.)
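The claim that MI is just the P(q|p) = 1 special case of RT is simple probability arithmetic, and it can be spot-checked in a toy probability space. The four worlds and uniform weights below are hypothetical, purely for illustration:

```python
from fractions import Fraction

# A toy four-world probability space. p: the butler didn't do it;
# q: the gardener did. (Hypothetical worlds and weights.)
worlds = {
    "w1": {"p": True,  "q": True},
    "w2": {"p": True,  "q": True},
    "w3": {"p": False, "q": True},
    "w4": {"p": False, "q": False},
}
P = {w: Fraction(1, 4) for w in worlds}

def prob(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(P[w] for w in worlds if pred(worlds[w]))

p_prob = prob(lambda v: v["p"])                    # P(p)
material = prob(lambda v: (not v["p"]) or v["q"])  # P(p ⊃ q)
cond = prob(lambda v: v["p"] and v["q"]) / p_prob  # P(q | p)

# Certainty in the material conditional (P(p ⊃ q) = 1) plus P(p) > 0
# forces P(q | p) = 1; RT then forces certainty in the indicative.
assert p_prob > 0 and material == 1 and cond == 1
```

Since P(p ⊃ q) = 1 means P(p ∧ ¬q) = 0, we get P(q|p) = P(p ∧ q)/P(p) = P(p)/P(p) = 1, which is the general form of the observation the check instantiates.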
Supposing we find those arguments convincing, should we think any differently about the claim that certainty admits of daisy chains? It is hard to see why we would. As far as I can tell, anyone who takes cases like Mr. Magoo to establish the existence of daisy chains for knowledge should take it to do the same for certainty. One's certainties form a daisy chain just in case there is a pair of propositions p and q such that: one isn't certain that ¬p but is certain that p ⊃ q; and it's compatible with what one is certain of that: one isn't certain that ¬p but is certain that p ⊃ ¬q. To convince ourselves that Magoo is in such a state, we just need to convince ourselves of the following: that (i) when Magoo takes a quick look at the 100-foot-tall tree, there comes to be a smallest range of heights such that, for all Magoo is certain of, the tree could be any one of those heights; that (ii) the exact boundaries of this range depend on the tree's actual height; and that (iii) because of the sensitivity of this dependence relation, Magoo can't be certain of what he's certain of concerning the tree's height.24 If we accept (i)–(iii), we should accept that there is some range of heights such that Magoo is certain that the tree falls within that range, but also that there is some other range of heights-one that contains some but not all of the heights in the actual range, and vice-versa-such that it is compatible with what Magoo is certain of that the height of the tree falls within that range. If we accept all that, we must accept that Magoo's certainties form a daisy chain. Of course, some of these assumptions may seem like a lot to take on board. But at this point in the paper's dialectic, the question is not about their intrinsic plausibility; it's about their relative plausibility.
And once we've bought into the Williamsonian way of thinking about inexact knowledge, it is unclear why any of these assumptions about Magoo's certainties should seem suspect. To make the point vivid, try putting yourself in Magoo's shoes. You're looking at a tree that happens to be 100 feet tall. I ask you 'What is the strongest proposition you're certain of concerning its height?' How should you answer? I take it that you know you're certain it's taller than an inch and shorter than a mile-but surely you also know that that's not the strongest thing you're certain of. So you'll have to do better than that. But for any putative improvement on the 'Between an inch and a mile' answer, you'll have to ask yourself whether you've landed on the exact boundaries of your certainties, or whether instead you're off by a foot in one or both directions (and if not a foot, an inch; and if not an inch, a nanometer). And once you start thinking in those terms, it becomes hard to see how you could ever be rationally certain that you've identified the exact boundaries of what you're certain of. By the same token, it becomes easy to see how you could be the kind of person who is in fact certain the tree is between 90 and 110 feet tall, but is uncertain whether what they are certain of is that the tree is between 90 and 110 feet tall, or whether instead it's that it's between 91 and 111 feet tall. This particular pair of ranges might be contrived, but the idea that there is some such pair of ranges is not. And so long as there is some such pair, at least one of MI (and thus RT) or WCNC must be false.

I thus contend that the epistemological picture motivated by cases like Mr. Magoo is no less plausible when stated in terms of certainty than in terms of knowledge. And with respect to the other non-factive attitudes, readers can (hopefully) see for themselves that all of the above arguments go through when 'is certain' is replaced with any of 'fully believes', 'is sure', 'is taking for granted', etc.

23 Those who reject WCNC tend to reject RT because they think the material analysis is true, and the material analysis is in tension with RT. (According to the material analysis, the probability that I lose the lottery if I win it is the probability that I lose the lottery; according to RT it's 0.) In fact, it is because the material analysis is so unsatisfying that many theorists invoke principles like RT to guide the construction of an alternative theory. See, e.g., Stalnaker (1968) for a representative example of this strategy.
24 One might object that there is no range of heights such that Magoo is rationally certain that the tree falls within that range, for the more basic reason that Magoo cannot be rationally certain that the tree isn't a mere hallucination (or what have you). But this objection fails to get to the heart of things. The exact same argument can be run concerning propositions about "purely internal" matters. Suppose, for instance, that Magoo is looking at an image of a bunch of red dots against a white background. In fact the image contains 1,891 red dots, but Magoo hasn't been told this, and, like most humans, isn't in a position to count them on the fly. Magoo is thus having a certain kind of visual experience, one as of there being a large (but not enormous) number of red dots against a white background. Question: what is the strongest proposition Magoo is in a position to be certain of concerning the number of red dots in his experience? We (and Magoo) know the answer lies somewhere between that it's between 1 and 10,000,000 and that it's exactly 1,891: he's definitely certain of the former, and definitely uncertain of the latter. But it is far from clear that we (or Magoo) can figure out the exact boundaries of what he's rationally certain of. From here it's a short step to accepting that Magoo's rational certainties form a daisy chain.
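The structure of the Magoo case can be made concrete in a toy margin-for-error model. The sketch below is illustrative only: the one-foot margin and the particular propositions p and q are hypothetical choices, selected so that a daisy chain (in the sense of the formal NO DAISY CHAINS schema) is easy to exhibit.

```python
# Toy Williamsonian margin-for-error model: worlds are tree heights in
# feet, and at height w one is certain only of what holds throughout a
# one-foot margin around w. (Illustrative sketch, not the paper's
# official model.)
WORLDS = set(range(0, 201))
MARGIN = 1

def R(w):
    """Worlds compatible with what one is certain of at w."""
    return {v for v in WORLDS if abs(v - w) <= MARGIN}

def K(prop, w):
    """One is certain of prop (a set of worlds) at w."""
    return R(w) <= prop

def poss(prop, w):
    """prop is compatible with what one is certain of at w."""
    return bool(R(w) & prop)

# Hypothetical propositions: p = the height is 99 or 102 feet;
# q = the height is 99 feet.
p, q = {99, 102}, {99}
p_implies_q = (WORLDS - p) | q                 # p ⊃ q
p_implies_not_q = (WORLDS - p) | (WORLDS - q)  # p ⊃ ¬q

w = 100  # the tree's actual height
# At w one non-trivially is certain that p ⊃ q (◊p ∧ K(p ⊃ q)) ...
first_order = poss(p, w) and K(p_implies_q, w)
# ... while, for all one is certain of (at the accessible world 101),
# one non-trivially has the contrary certainty p ⊃ ¬q: a daisy chain.
daisy = any(poss(p, v) and K(p_implies_not_q, v) for v in R(w))
assert first_order and daisy
```

The overlap is exactly the one in the text: the certainty ranges at 100 and 101 share some heights but not others, and the two propositions exploit the non-shared ends.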
This suggests that the fact that MATERIAL INDICATION and WEAK CONDITIONAL NON-CONTRADICTION jointly entail NO DAISY CHAINS implies a deep tension between, on the one hand, some of the more popular ways of thinking about the indicative conditional and, on the other, an independently plausible conception of epistemology. Either the kind of reasoning about indicative conditionals codified in one of MI (and thus RT) or WCNC is suspect, or we have a novel argument against a popular class of theories of our epistemic and doxastic attitudes.

6 Replacing Material Indication?

The question for the remainder of the paper is how to proceed in light of this fact. Before we answer it, however, I want to make clear that our aim is not to prove that daisy chains are possible, and thus that one of MI or WCNC is false. Nor is it to prove that the reasoning behind MI and WCNC is impeccable, and thus that daisy chains are impossible. Instead, our aim is simply to figure out what one who is antecedently sympathetic to each of MI, WCNC, and the possibility of daisy chains ought to do in light of the fact that those commitments are jointly inconsistent. And what we will argue in this section and the next is that the commitment to MI is what ought to be given up.

6.1 Setting aside Weak Conditional Non-Contradiction

We'll start by making a case for leaving WCNC alone. What seems to me to be the strongest reason to think that WCNC ought not be given up is that the kind of reasoning implicit in MI is one that proponents of daisy chains should be independently suspicious of. By way of seeing why, observe that given a "strict" analysis of the indicative conditional:

STRICT p→ q ≡ K(p ⊃ q).25

MI directly entails a strictly stronger principle than NO DAISY CHAINS: namely KK, the principle that Kp entails KKp.26 Why? Because if STRICT is correct, p→ q is true if and only if p ⊃ q is known. But if MI is valid, then if p ⊃ q is (non-trivially) known, then p→ q is known too.
But if non-trivially knowing that p ⊃ q suffices for knowing that p→ q, and if p→ q is true if and only if p ⊃ q is known, then non-trivially knowing that p ⊃ q must suffice for knowing that p ⊃ q is known. And since p can be a tautology, this is just KK. Now, many theorists of the indicative conditional reject STRICT. And many epistemologists accept not just NO DAISY CHAINS, but also KK. So I don't take this result to be adding much to what was already established in §3. But it still works as a useful diagnostic. Like most truth-conditional analyses of the indicative conditional that are not the material analysis, STRICT treats the indicative conditional as a propositional attitude of sorts. Saying that p→ q is essentially a way of saying that you know something (namely that either p is false or q is true). Consequently, saying that you know that p→ q is a way of saying that you know that you know something-i.e., that you have the second-order knowledge that you have the first-order knowledge that p ⊃ q. But MI says that (non-trivially) knowing p ⊃ q suffices for knowing p→ q. So given STRICT, this is equivalent to saying that having a certain kind of first-order knowledge suffices for having second-order knowledge that one has that first-order knowledge. Issues concerning WCNC aside, this is exactly the kind of iterative thinking that those convinced of MARGIN FOR ERROR-like principles are supposed to reject.

So what if we replace STRICT with one of its main non-material rivals: say a variably strict analysis of the indicative (as in (e.g.) Stalnaker 1968, 1975), or a random world analysis (as in Bacon 2015)? According to these sorts of views, the truth of p→ q doesn't hinge on whether every p-world compatible with what you know is a q-world; rather, it hinges on whether a certain subset of the p-worlds compatible with what you know are q-worlds.27 So even if, as MI tells us, (non-trivially) knowing the material conditional suffices for knowing the corresponding indicative, knowing the indicative isn't a matter of knowing that one knows the material conditional. This means we don't get the same quick argument from MI to KK that we got with STRICT. But avoiding the argument from MI to KK is one thing, avoiding a commitment to the idea that certain first-order epistemic facts guarantee substantive second-order epistemic facts is another. And so long as one gives the indicative conditional an analysis according to which saying p→ q is a way of saying that a certain kind of first-order epistemic fact holds-which is something both the variably strict and random world analyses are committed to-then there is simply no way one can accept MI without also accepting a conception of epistemology that requires a nontrivial amount of coordination between one's first- and second-order epistemic states. And this is why those sympathetic with the possibility of daisy chains ought to be suspicious of MI from the get go. Consequently, I believe the "real" tension is between a commitment to MI and a commitment to the existence of daisy chains. A principle like WCNC forces us to choose between these commitments, but even without it we could have noticed that they sit uncomfortably with each other.

25 See, e.g., Kratzer (1986), Gillies (2010), Rothschild (2013).
26 Proof: Suppose Kp. Let ⊤ be an arbitrary tautology. It follows from the necessitation rule that K⊤ ∧ Kp, which entails ◊⊤ ∧ K(⊤ ⊃ p). So by MI, K(⊤ → p), which given STRICT is just K(K(⊤ ⊃ p)). Applying CLOSURE twice gets you KK⊤ ⊃ KKp. Since ⊤ is a tautology, we have KK⊤ for free. So by modus ponens we have KKp.
27 We won't go through the exact details of these views, but roughly: on the variably strict analysis, we look at the "closest" epistemically possible p-worlds (perhaps there's a unique one), and see whether they're all q-worlds; on the random world analysis, we pick an epistemically possible p-world at random, and see whether that world is a q-world.
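The diagnosis above can be checked concretely: in a toy margin-for-error model of the sort used in the Magoo discussion, MI read through STRICT fails at a world where its antecedent holds. The margin and the propositions p and q below are hypothetical choices for illustration.

```python
# Margin-for-error model again: heights as worlds, one-foot margin.
# STRICT reads the indicative p → q as K(p ⊃ q).
WORLDS = set(range(0, 201))

def R(w):
    """Worlds epistemically accessible from w."""
    return {v for v in WORLDS if abs(v - w) <= 1}

def K(prop, w):
    """prop (a set of worlds) is known at w."""
    return R(w) <= prop

# Hypothetical propositions: p = {99, 102}, q = {99}.
p, q = {99, 102}, {99}
p_implies_q = (WORLDS - p) | q  # p ⊃ q

def indicative(w):
    """STRICT: p → q holds at w iff K(p ⊃ q) holds at w."""
    return K(p_implies_q, w)

w = 100
# MI's antecedent holds at w: ◊p ∧ K(p ⊃ q) ...
mi_antecedent = bool(R(w) & p) and K(p_implies_q, w)
# ... but its consequent K(p → q) fails, since p → q (i.e. K(p ⊃ q))
# fails at the accessible world 101. Under STRICT, MI demands exactly
# the first-to-second-order jump that margin-for-error models block.
mi_consequent = all(indicative(v) for v in R(w))
assert mi_antecedent and not mi_consequent
```

This is just the model-theoretic face of the §6.1 point: knowing the material conditional here does not put one in a position to know that one knows it.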
Thus, for those who reject the material analysis of the indicative, the crucial choice is between (on the one hand) taking the intuitive appeal of MI to show that daisy chains are impossible, and (on the other) taking the intuitive appeal of the arguments for the possibility of daisy chains to show that the reasoning behind MI must be invalid.

6.2 From material indication to quasi-material indication

The remainder of the paper will argue in favor of the latter option. In particular, it will argue that the dialectic between MI and the possibility of daisy chains is asymmetric: reasons to think daisy chains are possible are good reasons to think MI is false, but reasons to think MI is true are not good reasons to think daisy chains are impossible. The reason why, roughly, is that just about all of the evidence cited in favor of MI can be explained by a weaker surrogate principle-one that does not imply the impossibility of daisy chains. And to the extent that the weaker principle has explanatory deficits, they are instances of a more general pattern of problems the proponent of daisy chains already faces elsewhere. Thus, if one wants to accept MI, it should be because one already thinks daisy chains are impossible. To reason from MI (and WCNC) to NO DAISY CHAINS is to beg the question.

By way of introducing our surrogate principle, let us start with an observation about the arguments that are typically made in favor of MI (repeated here):

MATERIAL INDICATION (◊p ∧ K(p ⊃ q)) ⊃ K(p→ q).

And that is that they tend to draw on judgments about cases in which the relevant material conditional isn't just non-trivially known, but is in fact known to be non-trivially known. To give just one representative example, consider the case with which we opened the paper. You know either the butler or the gardener did it, but you don't know which of them it was.

Either way there's no requirement that every epistemically possible p-world be a q-world.
Do you know that if the butler didn't do it, the gardener did? It certainly seems like you do. This is what MI is supposed to explain. But here is the crucial point that tends to go unnoticed: on any normal way of filling in the details of the case, you're not just going to know that either the butler or the gardener did it, you're going to know (or at least be in a position to know) that you know this.28 One way of seeing this is to note that it would sound perfectly fine for you to think or assert 'I know that either the butler or the gardener did it'. This is hard to explain on the assumption that you don't know that you know that either the butler or the gardener did it. Another is to notice that it is actually generally easy to know what one knows-at least when the knowledge is of the sort philosophers of language tend to be interested in. This is because for many ordinary propositions p, you're going to be in a position to know (i) that people with certain kinds of evidence (be it from testimony, perception, or memory) are in a position to know that p; and (ii) that you have evidence of that kind. And from there you'll be in a position to deduce (and thus come to know) that you know that p.29 For example: suppose the local newspaper is your (and the public's) source of the information that either the butler or the gardener did it. And suppose I ask you whether Jones knows that either the butler or the gardener did it. Intuitively, you'll be able to settle my question for me just by finding out whether Jones reads the newspaper. And that's because you know that anyone who reads the newspaper knows that either the butler or the gardener did it. But of course you know that you read the newspaper. So you must know that you know that either the butler or the gardener did it too.30 What is the significance of this observation? 
It is that the intuitive thought experiments cited in favor of MI do not discriminate it from the principle that knowing that one non-trivially knows the material conditional suffices for knowing the indicative. So rather than accept MI, we might instead accept:

QUASI-MATERIAL INDICATION (QMI) K(◊p ∧ K(p ⊃ q)) ⊃ K(p→ q).

At the very least this is what the proponent of daisy chains should do, because unlike MI, QMI does not combine with WCNC to entail NO DAISY CHAINS (repeated here):

NO DAISY CHAINS (◊p ∧ K(p ⊃ q)) ⊃ ¬◊(◊p ∧ K(p ⊃ ¬q)).

28 The same can be said about the evidence typically cited in favor of RAMSEY'S TEST. For example, consider Bacon's (2015, p. 132) main example in favor of RT. He points out that when we assess the probability of indicative conditionals like If the playing card is black, it's spades, we do so in exactly the way we would were we to calculate the conditional probability of the conditional's consequent given its antecedent, arriving at the answer 1/2. And I agree that that really does seem to be how we go about doing it. But again: we know that the odds of getting a spade given that we're getting a black card is 1/2. In fact, we plausibly know it to many orders.
29 This argument is due to Kripke (2011, pp. 34–35).
30 Could considerations such as these establish a direct argument for KK? It's not clear why they would. We might follow (e.g.) Greco (2014a) in using them to undermine certain arguments against KK-say, arguments to the effect that knowing that one knows that p requires "independently" verifying that one's evidence for p is accurate-but that's a long way off from taking them to show that KK is true. Those who reject KK on account of MARGIN FOR ERROR-like considerations can posit an abundance of higher-order knowledge.
To see why, notice that any theory of the indicative conditional that validates the inference from K◊p ∧ KK(p ⊃ q) to K(p→ q) is one that makes QMI a simple theorem of the logic of K.31 STRICT is one such theory, but so are the variably strict and random world analyses mentioned above. These theories validate WCNC, so there are (plausible) models of the conjunction of QMI and WCNC that invalidate NO DAISY CHAINS-just as we want. The question, then, is how much of MI's explanatory power can be captured by QMI. If enough of it can-which is what the next section will argue-then we can be confident that the intuitive considerations cited in favor of MI do not constitute independent grounds in favor of the impossibility of daisy chains.

7 Quasi-material indication and its limits

7.1 Contradictions, blindspots, and indefensibles

Consider the following three kinds of propositions (and examples of sentences that plausibly express them):

(S0) p ∧¬p. 'It's raining and it's not raining.'

(S1) p ∧¬Kp. 'It's raining but I don't know it's raining.'

(S2) p ∧¬KKp. 'It's raining but I might not know it's raining.'

There seems to be something wrong with all of (S0)–(S2). Explaining our judgments about (S0) is easy: it's a contradiction. Explaining our judgments about (S1), an instance of Moore's (1942) paradox, is also fairly easy-we'll get to one of the standard accounts in a moment. But as has been argued in the literature on iteration principles for knowledge, whether explaining our judgments about (S2) is easy or not seems to depend on whether KK is valid (and thus whether NO DAISY CHAINS is).32 If KK is valid, it's easy; if not, it's hard. This is not to say it's impossible if KK is invalid, but explaining the badness of (S2) is widely regarded as a non-trivial challenge for those who reject KK.

This section will argue that the relevant theoretical difference between MI and QMI is which of (S0)–(S2) they associate certain kinds of sentences with. In particular, sentences like the following:

(5) ?? I don't know whether p, but I do know that ¬p or q. But I don't know whether if p, q.

(6) ?? I don't know whether p, but I do know that ¬p or q. But I might not know whether if p, q.

We will argue that if MI is valid, then (5) is like (S0) and (6) is like (S1); while if QMI is valid and MI isn't, then (5) is like (S1) while (6) is like (S2). This will in turn situate the dialectic as follows: if there is an argument from the intuitive appeal of MI to NO DAISY CHAINS, it is an argument to the effect that we must treat (5) like (S0) rather than (S1) and (6) like (S1) rather than (S2). However, given the generality of the problem sentences like (S2) raise for those who deny KK, it's unclear whether our intuitive judgments about sentences like (6) could change anyone's mind about whether QMI is an adequate substitute for MI.

That is the dialectic in abstract. Now to run through it in detail.33 We'll start with the explanation of the badness of the Moore-paradoxical (S1) (p ∧¬Kp). What makes (S1) problematic is not that it can't be true-after all, there are plenty of true propositions that are unknown. What makes it problematic is that it is a blindspot: the kind of proposition that is not possibly knowable or assertable (even if possibly true). Why isn't (S1) knowable? Because for no proposition p can you know that: p and that you don't know that p:34

31 Proof: Suppose K(◊p ∧ K(p ⊃ q)). By CLOSURE, this entails K◊p ∧ KK(p ⊃ q), which given our assumption about the interpretation of '→' entails K(p→ q).
32 See, e.g., Sosa (2009); Cohen and Comesaña (2013); Greco (2014a, 2015a); Dorst (2019). For replies, see, e.g., Williamson (2005, 2013); Benton (2013).
33 Throughout the discussion we'll assume the knowledge-theoretic interpretation of 'K'. Most of what we say will straightforwardly generalize to the relevant non-factive attitudes. But when not, we'll be careful to treat the cases separately.
NO BLINDSPOTS ¬K(p ∧¬Kp).

Why isn't (S1) assertable? Because there is a K norm on assertion:35

ASSERTION If p is assertable, then Kp.

And if one cannot know propositions of the form p ∧¬Kp, then by ASSERTION one cannot assert them either. So far so good: (S0) is a contradiction, (S1) is a blindspot. But what about (S2) (p ∧¬KKp)? It's obviously not a contradiction. Whether it is a blindspot depends on whether KK is valid or not. If KK is valid, then it is a blindspot.36 But if KK is invalid, then it is merely an indefensible: a proposition that is not knowably knowable (even if possibly knowable).37 Indefensibles are so-called because those who assert them cannot answer questions of the form 'How do you know that?' while speaking from knowledge. It's not that one who asserts an indefensible isn't in a position to know that what one asserts is true, it's that one isn't in a position to know that one is in a position to know that what one asserts is true. We know why blindspots seem bad, and so know why (S2) seems bad conditional on KK.

34 Given FACTIVITY, the proof of NO BLINDSPOTS is simple. By CLOSURE K(p ∧¬Kp) entails Kp ∧ K¬Kp. And given FACTIVITY, Kp ∧ K¬Kp entails the contradictory Kp ∧¬Kp. Without FACTIVITY the argument is slightly more complicated, though still fairly plausible. So that it's easier to follow, we'll let 'B' stand for our preferred non-factive propositional attitude (e.g., full belief, certainty, presupposition, etc.), while 'K' will continue to denote knowledge. To get NO BLINDSPOTS (i.e., ¬B(p ∧¬Bp)), we just need to assume either: NO CERTAIN UNCERTAINTY B¬Bp ⊃ ¬Bp, in which case NO BLINDSPOTS follows trivially; or we assume the conjunction of: K ENTAILS B Kp ⊃ Bp. NO CERTAIN IGNORANCE B¬Kp ⊃ ¬Bp. In which case the argument runs as follows. Given CLOSURE, B(p ∧¬Bp) entails Bp ∧ B¬Bp. By K ENTAILS B, this gives you Bp ∧ B¬Kp. And from NO CERTAIN IGNORANCE this entails the contradictory Bp ∧¬Bp.
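NO BLINDSPOTS can also be given a brute-force sanity check: in every reflexive (factive) frame on a small set of worlds, the blindspot proposition p ∧ ¬Kp is nowhere known. The check below is an illustrative finite verification, not a substitute for the proof in footnote 34.

```python
from itertools import product

# Brute-force check of NO BLINDSPOTS over every reflexive frame on
# three worlds and every proposition p (represented as a set of worlds).
WORLDS = [0, 1, 2]

def knows(R, prop, w):
    """K prop at w: prop holds at every world accessible from w."""
    return all(v in prop for v in WORLDS if (w, v) in R)

def no_blindspots(R):
    """True iff p ∧ ¬Kp is known at no world, for every proposition p."""
    for bits in product([False, True], repeat=len(WORLDS)):
        p = {w for w, b in zip(WORLDS, bits) if b}
        blindspot = {w for w in WORLDS if w in p and not knows(R, p, w)}
        if any(knows(R, blindspot, w) for w in WORLDS):
            return False
    return True

# Enumerate all reflexive accessibility relations on the three worlds.
pairs = [(w, v) for w in WORLDS for v in WORLDS if w != v]
reflexive = {(w, w) for w in WORLDS}
for bits in product([False, True], repeat=len(pairs)):
    R = reflexive | {pr for pr, b in zip(pairs, bits) if b}
    assert no_blindspots(R)
```

Reflexivity does the work here, just as FACTIVITY does in the proof: if the blindspot were known at w, w itself would have to be a world where p holds yet p is unknown, contradicting what is known at w.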
But we don't have a story about why indefensibles should seem bad, and so don't know why (S2) seems bad conditional on KK's being invalid. We'll get back to the dialectical significance of this point in a moment.

7.2 Material indication versus quasi-material indication

Having identified three different statuses-contradiction, blindspot, indefensible-we are ready to tie things back to MATERIAL INDICATION and QUASI-MATERIAL INDICATION. According to MI, non-trivially knowing a material conditional suffices for knowing the indicative. By contrast, QMI says that only knowing that one non-trivially knows the material conditional suffices for knowing the indicative. This means the kinds of cases that can distinguish the views are those where, for some propositions p and q, K(◊p ∧ K(p ⊃ q)), but ¬KK(◊p ∧ K(p ⊃ q)). For given MI, you'll be such that KK(p→ q); but given QMI, you'll be merely such that K(p→ q). With this point in mind, consider the following two propositions:

(A) ◊p ∧ K(p ⊃ q) ∧¬K(p→ q).

(B) ◊p ∧ K(p ⊃ q) ∧¬KK(p→ q).

35 This is not the place to argue for ASSERTION. It suffices to observe its usefulness in explaining the unassertability of sentences like (S1), which is something both daisy chain-affirmers and daisy chain-deniers need. Interested readers should consult, e.g., Unger (1975); Williamson (2000); DeRose (2002).
36 Proof: Given KK, ¬KKp entails ¬Kp. So given CLOSURE, p ∧¬KKp entails p ∧¬Kp, which is a blindspot.
37 The argument for ¬KK(p ∧¬KKp) is exactly analogous to the argument for NO BLINDSPOTS (just with a few extra applications of CLOSURE). Given CLOSURE, KK(p ∧¬KKp) entails KKp ∧ KK¬KKp. If K is knowledge, this is in violation of FACTIVITY; if K is one of our non-factive attitudes, it's in violation either of NO CERTAIN UNCERTAINTY, or of the conjunction of K ENTAILS B and NO CERTAIN IGNORANCE (see footnote 34).
If MI is valid, then we can guarantee that (A) is a contradiction and that (B) is a blindspot.38 By contrast, if QMI is valid (and MI isn't), then the most we can guarantee is that (A) is a blindspot and that (B) is an indefensible.39 So now consider some sentences that express (A)- and (B)-like propositions:

(A): (7) ?? I don't know whether Jim is in the office or not, though I know Jane is. But it's not the case that I know whether Jane is in the office if Jim is.

(8) ?? I'm sure Lexie's either coming to the party or staying home. But I'm not sure whether she's staying home if she's not coming to the party.

(B): (9) ?? I don't know whether Jim is in the office or not, though I know Jane is. But I might not know whether Jane is in the office if Jim is.

(10) ?? I'm sure Lexie's either coming to the party or staying home. But I might not be sure whether she's staying home if she's not coming to the party.

I take it to be a datum that (7)–(10) all seem quite bad. MI offers a nice explanation of this datum: (7)–(8) express contradictions, while (9)–(10) express blindspots. And we know why both of those would seem bad. But given just QMI, (7)–(8) express blindspots, while (9)–(10) express indefensibles. And though we know why sentences that express blindspots would seem bad, we don't know why sentences that express indefensibles would.40 This, I claim, is the real challenge for the proponent of QMI. It's not that she cannot explain our intuitions about thought experiments involving butlers, playing cards, or what have you-those tend to confirm QMI as much as they confirm MI. Nor is it that the thought behind MI is too platitudinous to be denied-it turns out to presuppose the falsity of an independently plausible set of epistemological principles. It is that without MI, the propositions expressed by sentences like (9)–(10) are indefensibles, and are thus in need of explanation.
This brings into focus the core dialectical question: is the fact that the full strength of MI is needed to categorize sentences like (9)–(10) as blindspots reason to pick MI rather than an epistemology that posits daisy chains? And here I think the answer is a clear 'No'. Those who are antecedently attracted to the possibility of daisy chains already know that they need to account for the infelicity of indefensibles.41 Take (11) and (12), for instance:

(11) ?? For all I know I don't know it's raining, but it's raining.

(12) ?? I might not know Jane is home, but she's home.

Both seem abominable, yet both express propositions of the form p ∧¬KKp. If KK is invalid, then both should be knowable. If we are convinced that they can't be knowable, then we should be convinced of KK.

38 Why is (B) a blindspot given MI? Because then ◊p ∧ K(p ⊃ q) ∧¬KK(p→ q) entails K(p→ q) ∧¬KK(p→ q)-which is a proposition of the form p ∧¬Kp.
39 Why is (A) a blindspot and (B) an indefensible given (just) QMI? Because ¬K(p→ q) entails ¬K(◊p ∧ K(p ⊃ q)). One can thus apply the same reasoning as in the previous footnote to derive the conclusion that (A) is not knowable and that (B) is not knowably knowable.
40 One might worry that embedded instances of sentences like (7)–(8)-say in conditionals or modals-are no better than their unembedded counterparts, and thus that MI's contradiction-theoretic explanation of their badness is needed after all. Two points in response. First, when I try to embed (7) or (8) in the antecedent of a conditional, the sentence I get seems bad because of processing difficulties, not because it sounds like I'm trying to embed a contradiction. Second (and relatedly), (9)–(10) seem just as bad when embedded in these environments, yet not even the proponent of MI should want to classify them as contradictions.
But if we are convinced of KK for reasons concerning (11) and (12), then we are convinced of NO DAISY CHAINS for reasons more general than those concerning the local dispute between MI and QMI. So if we're not already convinced that daisy chains are impossible, we had better think that their proponents have something compelling to say about why sentences like (11) and (12) seem infelicitous despite being knowable.42 Maybe there's a second-order knowledge norm on (full) belief and assertion: don't just know it; know that you know it. Or maybe we tend to prefer to resolve the context-sensitivity of 'know' in such a way that we don't have obvious violations of KK. Who knows. The point is that whatever the exact details of the story, surely it can be applied to (9) and (10) as well. The only "interesting" difference between (9)–(10) and (11)–(12) is that the former happen to contain indicative conditionals. And that is not the kind of difference that could plausibly bear on the adequacy of an account of the infelicity of indefensibles.

There is thus an important sense in which, no matter how we answer the question "Can we explain the badness of (9)–(10) without recourse to MI?", we should regard arguments for NO DAISY CHAINS from MI as dialectically inert. If we don't think we can explain the badness of (11) and (12) without appeal to KK, then we should accept NO DAISY CHAINS for reasons more general than that it follows from MI (given WCNC). But if we think we can explain the badness of (11) and (12) without appeal to KK, then we can get everything we wanted from MI out of the strictly weaker QMI. Either way, given the existence of principles like QMI, our views on the possibility of daisy chains should drive our views on MI, not the other way around.
8 Conclusion

If you reject the material analysis of the indicative conditional and are sympathetic to MARGIN FOR ERROR-style arguments for daisy chains, you should reject MI. You probably should also reject RAMSEY'S TEST. These principles presuppose an iterative conception of epistemology. And although it is obviously possible for considerations from language to shape one's epistemological theorizing, in this case we should be skeptical of the force of the evidence from principles like MI and RT. The question of what the iteration-denier should say about indefensibles is interesting and important. But clarity on these matters will not be found by focusing on intuitive judgments about indicative conditionals.

41 Again: see, e.g., Sosa (2009); Cohen and Comesaña (2013); Greco (2014a, 2015a).

42 Again: see, e.g., Williamson (2005, 2013) and Benton (2013) for some putative explanations.

References

Adams, E. (1965). The logic of conditionals. Inquiry: An Interdisciplinary Journal of Philosophy, 8(1–4):166–197.

Almotahari, M. and Glick, E. (2010). Context, content, and epistemic transparency. Mind, 119(476):1067–1086.

Bacon, A. (2015). Stalnaker's thesis in context. Review of Symbolic Logic, 8(1):131–163.

Bennett, J. (2003). A Philosophical Guide to Conditionals. Oxford University Press.

Benton, M. A. (2013). Dubious objections from iterated conjunctions. Philosophical Studies, 162(2):355–358.

Block, E. (2008). Indicative conditionals in context. Mind, 117(468):783–794.

Cohen, S. and Comesaña, J. (2013). Williamson on Gettier cases and epistemic logic. Inquiry: An Interdisciplinary Journal of Philosophy, 56(1):15–29.

Cresto, E. (2012). A defense of temperate epistemic transparency. Journal of Philosophical Logic, 41(6):923–955.

Das, N. and Salow, B. (2018). Transparency and the KK principle. Noûs, 52(1):3–23.

DeRose, K. (2002). Assertion, knowledge, and context. Philosophical Review, 111(2):167–203.

Dorr, C., Goodman, J., and Hawthorne, J. (2014). Knowing against the odds. Philosophical Studies, 170(2):277–287.

Dorst, K. (2019). Abominable KK failures. Mind, pages 1–29.

Douven, I. and Verbrugge, S. (2013). The probabilities of conditionals revisited. Cognitive Science, 37(4):711–730.

Edgington, D. (1995). On conditionals. Mind, 104(414):235–329.

Edgington, D. (2014). Indicative conditionals. In Zalta, E. N., editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2014 edition.

Fernández, J. (2013). Transparent Minds: A Study of Self-Knowledge. Oxford University Press.

Gärdenfors, P. (1986). Belief revisions and the Ramsey test for conditionals. Philosophical Review, 95(1):81–93.

Gibbard, A. (1981). Two recent theories of conditionals. In Harper, W., Stalnaker, R. C., and Pearce, G., editors, Ifs, pages 211–247. Reidel.

Gillies, A. S. (2004). Epistemic conditionals and conditional epistemics. Noûs, 38(4):585–616.

Gillies, A. S. (2010). Iffiness. Semantics and Pragmatics, 3(4):1–42.

Goodman, J. (2013). Inexact knowledge without improbable knowing. Inquiry, 56(1):30–53.

Goodman, J. and Salow, B. (2018). Taking a chance on KK. Philosophical Studies, 175(1):183–196.

Greco, D. (2014a). Could KK be OK? Journal of Philosophy, 111(4):169–197.

Greco, D. (2014b). Iteration and fragmentation. Philosophy and Phenomenological Research, 88(1):656–673.

Greco, D. (2015a). Iteration principles in epistemology I: Arguments for. Philosophy Compass, 10(11):754–764.

Greco, D. (2015b). Iteration principles in epistemology II: Arguments against. Philosophy Compass, 10(11):765–771.

Hájek, A. and Hall, N. (1994). The hypothesis of the conditional construal of conditional probability. In Eells, E., Skyrms, B., and Adams, E. W., editors, Probability and Conditionals: Belief Revision and Rational Decision, page 75. Cambridge University Press.

Harper, W. L. (1976). Ramsey test conditionals and iterated belief change (a response to Stalnaker), pages 117–135. Springer Netherlands, Dordrecht.

Hawthorne, J. and Magidor, O. (2009). Assertion, context, and epistemic accessibility. Mind, 118(470):377–397.

Hawthorne, J. and Magidor, O. (2010). Assertion and epistemic opacity. Mind, 119(476):1087–1105.

Hintikka, J. (1962). Knowledge and Belief. Cornell University Press, Ithaca.

Jackson, F. (1979). On assertion and indicative conditionals. Philosophical Review, 88(4):565–589.

Jeffrey, R. and Edgington, D. (1991). Matter-of-fact conditionals. Aristotelian Society Supplementary Volume, 65(1):161–209.

Jeffrey, R. and Stalnaker, R. (1994). Conditionals as random variables. In Eells, E., Skyrms, B., and Adams, E. W., editors, Probability and Conditionals: Belief Revision and Rational Decision, page 31. Cambridge University Press.

Kratzer, A. (1986). Conditionals. Chicago Linguistics Society, 22(2):1–15.

Kripke, S. A. (2011). On two paradoxes of knowledge. In Kripke, S. A., editor, Philosophical Troubles: Collected Papers, Vol. I. Oxford University Press.

Lewis, D. (1976). Probabilities of conditionals and conditional probabilities. Philosophical Review, 85(3):297–315.

McGee, V. (1989). Conditional probabilities and compounds of conditionals. Philosophical Review, 98(4):485–541.

McHugh, C. (2010). Self-knowledge and the KK principle. Synthese, 173(3):231–257.

Moore, G. E. (1942). A reply to my critics. In Schilpp, P. A., editor, The Philosophy of G. E. Moore. Open Court.

Ramsey, F. P. (1931). The Foundations of Mathematics and Other Logical Essays. Littlefield, Adams, Paterson, N.J.

Rothschild, D. (2013). Do indicative conditionals express propositions? Noûs, 47(1):49–68.

Sosa, D. (2009). Dubious assertions. Philosophical Studies, 146(2):269–272.

Stalnaker, R. (1968). A theory of conditionals. American Philosophical Quarterly, pages 98–112.

Stalnaker, R. (1970). Probability and conditionals. Philosophy of Science, 37(1):64–80.

Stalnaker, R. (1975). Indicative conditionals. Philosophia, 5(3):269–286.

Stalnaker, R. (2002). Common ground. Linguistics and Philosophy, 25(5–6):701–721.

Stalnaker, R. (2006). On logics of knowledge and belief. Philosophical Studies, 128(1):169–199.

Stalnaker, R. (2009). On Hawthorne and Magidor on assertion, context, and epistemic accessibility. Mind, 118(470):399–409.

Unger, P. K. (1975). Ignorance: A Case for Scepticism. Oxford University Press.

van Fraassen, B. (1976). Probabilities of conditionals. In Harper, W. and Hooker, C., editors, Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, volume 1, pages 261–300. Reidel, Dordrecht.

Willer, M. (2010). New surprises for the Ramsey test. Synthese, 176(2):291–309.

Williams, R. (2009). Vagueness, conditionals and probability. Erkenntnis, 70(2):151–171.

Williamson, T. (2000). Knowledge and its Limits. Oxford University Press.

Williamson, T. (2005). Contextualism, subject-sensitive invariantism and knowledge of knowledge. The Philosophical Quarterly, 55(219):213–235.

Williamson, T. (2011). Improbable knowing. In Dougherty, T., editor, Evidentialism and its Discontents, pages 147–164. Oxford University Press.

Williamson, T. (2013). Response to Cohen, Comesaña, Goodman, Nagel, and Weatherson on Gettier cases in epistemic logic. Inquiry: An Interdisciplinary Journal of Philosophy, 56(1):77–96.

Williamson, T. (2014). Very improbable knowing. Erkenntnis, 79(5):971–999.