Abstract
It is widely believed that the so-called knowledge account of assertion best explains why sentences such as “It’s raining in Paris but I don’t believe it” and “It’s raining in Paris but I don’t know it” appear odd to us. I argue that the rival rational credibility account of assertion explains that fact just as well. I do so by providing a broadly Bayesian analysis of such sentences, which shows that they cannot express rationally held beliefs. As an interesting aside, it will be seen that these sentences also harbor a lesson for Bayesian epistemology itself.
Notes
Famously, Moore (1962) contains the first discussion in the literature of instances of (2) and (3).
See Douven (2006).
Supposing, in the latter case, that knowledge requires belief. See Lewis (1996, p. 227) for a (rare) dissenting opinion.
It might be thought that this is so because Moorean sentences do not merely sound odd but are even heard as contradictory. However, I think what DeRose (1991, p. 597) says in relation to an instance of (3) holds generally for all of us: “I don’t have a special feeling for inconsistencies; I can sense some kind of clash, but cannot distinguish my sensing of an inconsistency from my sensing of whatever it is that’s wrong with the Moorean sentence.” See on this also Douven (2006, 475 f).
See, for instance, Foley (1992) for a compelling defense of this idea.
Some authors even claim to show that Moorean sentences cannot be believed tout court; see for instance Hintikka (1962, p. 67) and Tennant (1997, 251 f). That claim, it would seem, cannot be true as stated, for we may suppose that the madman can believe anything. On closer inspection, however, it appears that these authors typically make certain assumptions about the notion of belief that rule out mad belief, and in fact require some notion of rational belief.
Or they are a bit vague. For instance, Shoemaker’s (1995) analysis relies on a notion of “availability” of beliefs which he himself concedes stands in need of further elucidation (p. 227).
In 1928, Presburger had proved the completeness of arithmetic without multiplication. This made it seem as if a completeness proof for Peano arithmetic was just around the corner.
See Douven (2005, Sect. 4) for discussion.
Hintikka’s (1962) analysis of instances of (2) faces essentially the same objection. As he admits (p. 36), his “results are not directly applicable to what is true or false in the actual world of ours” because of the various idealizations involved in the doxastic logic he develops and deploys. On Williamson’s (2000) and Adler’s (2002) analyses of Moorean sentences, these come out as not being rationally credible as well. However, for them this follows from the fact that these sentences cannot be known, together with the principle that one ought to believe only what one knows (Williamson 2000, 255 f), or that one properly believes something only if one knows it [in Adler’s (2006, p. 284) formulation]. This principle in turn follows from their assumption that, loosely speaking, assertion and belief are two sides of the same coin (Williamson 2000, p. 255; Adler 2002, p. 74), in conjunction with the knowledge account of assertion. I share the former assumption (Douven 2006, Sect. 1), but, of course, not the latter.
If degrees of belief were assumed to be probabilities, then (6) would of course be equivalent to this: RB i (φ) ⇒ Cr i (φ) > 0.5. Absent that assumption, it is at least as far as (6) goes possible for a person rationally to believe something that she believes to a degree of 0.3, say, provided she believes its negation to a lower degree.
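The contrast drawn in this note can be spelled out side by side. The following is a sketch using the paper’s own symbols \(\hbox{RB}_i\) and \(\hbox{Cr}_i\); the particular numerical values are my own illustration, not the author’s:

```latex
% If degrees of belief are probabilities, so that
% Cr_i(\neg\varphi) = 1 - Cr_i(\varphi), then (6) amounts to:
\mathrm{RB}_i(\varphi) \;\Rightarrow\; \mathrm{Cr}_i(\varphi) > 0.5.

% Absent that assumption, (6) requires only that one believe
% \varphi to a higher degree than its negation:
\mathrm{RB}_i(\varphi) \;\Rightarrow\; \mathrm{Cr}_i(\varphi) > \mathrm{Cr}_i(\neg\varphi),
% which is satisfied by, e.g., Cr_i(\varphi) = 0.3 together with
% Cr_i(\neg\varphi) = 0.2, even though 0.3 < 0.5.
```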
Some, like Kyburg (1970) and Foley (1992), think this is rational to believe. They avoid the lottery paradox by rejecting what is sometimes called “the conjunction principle,” according to which rational belief is closed under the rule of conjunction introduction. But at least a wholesale rejection of this principle seems hard to defend; see Douven (2002, Sect. 2) and Douven and Williamson (2006) for more on this. (Incidentally, there are approaches to the lottery paradox that maintain the conjunction principle, and yet permit that it can be rational to believe of one’s ticket that it is a loser—though on these approaches it cannot be rational to believe this of every ticket in the lottery; see, e.g., Harman (1986, p. 71), Johnson (2004, 134 ff), and Douven (2008a).) On the other hand, it is well known that if one endorses (6) and one does not want to exclude that degrees of belief can be probabilities—and, as will become apparent shortly, for us it is essential not to exclude this—then one will need some response to Makinson’s (1965) preface paradox. See Douven and Uffink (2003) and Weatherson (2005) for different solutions to this paradox which both involve placing a relatively weak restriction on the conjunction principle; Douven (2008b) makes the suggestion that the preface paradox may not be telling in the least against the conjunction principle, but may at bottom be a confirmation-theoretic problem stemming from the fact that currently we have less than full clarity about the notion of relevant evidence.
It may be noticed that if Maher is right about what follows from the history of science, then the fact that scientists seem to assert their theories outright at conferences and in publications is not just bad news for (6), but also for the knowledge account of assertion: even if those are right who think that knowledge does not require probability 1 (see, e.g., DeRose 1996, p. 568, 577 f), it is beyond serious dispute that knowledge is incompatible with low probability.
The supposition that scientists’ assertions of theories typically have the said kind of implicature is quite consistent with these assertions themselves (“what is said,” in Grice’s (1989) terminology) being unqualified and categorical. Maher seems to overlook this when he rejects the designated supposition by saying that “Einstein is categorically asserting [the General Theory of Relativity], not merely that the theory is approximately correct” (p. 138).
See Meyer and van der Hoek (1995, Chap. 2). It is worth noting that nothing said so far implies that the principle \(\hbox{RB}_{i}(\varphi)\Rightarrow\hbox{RB}_{i}\bigl(\hbox{RB}_{i}(\varphi)\bigr)\) is indefensible. However, if—as some might want to argue—rational belief requires a margin for error in the sense of Williamson (2000, Chap. 5), then this principle would seem to run afoul of the type of argument Williamson (2000, Chap. 4) deploys to support his claim that knowledge is anti-luminous (but also see Mendola 2007). Note that (7) is not endangered by such an argument, if only because it would be implausible to think that with the notion of finding something more likely than not to be the case, there would be associated a margin for error.
According to operationalism, which still has its defenders in the social sciences (see Gillies 2000, Chap. 9), a person’s degrees of belief can even be identified with the rates at which she would bet under certain conditions (cf. Gillies 2000, p. 200). The exact metaphysics of graded belief need not detain us here, however.
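The betting-rate idea mentioned here can be made explicit. The following is a standard textbook formulation of the betting interpretation, not Gillies’s exact wording:

```latex
% On the betting interpretation, agent i's degree of belief in \varphi
% is the betting quotient p at which i is indifferent between the two
% sides of a bet on \varphi with (positive) stake S:
\mathrm{Cr}_i(\varphi) = p
\quad\text{iff}\quad
i \text{ regards as fair a bet that pays } (1-p)S \text{ if } \varphi
\text{ obtains and loses } pS \text{ if } \neg\varphi \text{ obtains.}
```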
I should note that not all who have written on degrees of belief accept this assumption; see, for instance, Christensen (2004, 113 f). I hope that for the dissenters the first response suffices.
It will be noticed that in the statement of this principle the phrase “her degrees of belief” can only be meant as a nonrigid designator. Otherwise it would follow from any set of premises that the person’s degrees of belief are not probabilities if in fact they are not (since that would then be a necessary truth).
The referee worried that this is a relatively shallow explanation of why we cannot believe instances of (2) and that a more thorough one might well require norms stronger than (5)–(8). First, however, I do not share the referee’s intuition that the present explanation is shallow; it is simple, to be sure, but I do not think of this as being a disadvantage. Second, invoking stronger norms would almost certainly come at the expense of the generality of the present explanation, and I want the explanation to be as general as possible in order not to tie the rational credibility account of assertion to any specific theory of rationality.
In view of such arguments as are to be found in, among others, Pollock (1986, Chap. 5), Wedgwood (1999), and BonJour’s contribution to BonJour and Sosa (2003), the assumption that you can determine of your rationally held beliefs that they are rationally held seems utterly plausible to me. For present purposes, however, it suffices to assume that you can determine of enough of your rational beliefs that they are rationally held to make a case for (15).
In effect, it appears to me that, for any of my rationally held beliefs, I am fully convinced that I know it. So, it may be possible to give an abductive argument even for the principle that rationally believing something entails subjective certainty that one knows the thing (which, of course, we do not need).
There may be a more direct argument for (15), one revealing the exact conceptual interconnections between rational belief, graded belief, and knowledge in virtue of which (15) holds true (supposing it does hold true). But for now we can rest content with pointing to the abductive argument for (15) given here.
It is worth noting in this connection that, as Jonathan Adler pointed out to me (in personal correspondence), to think of φ as being known by oneself does not require one to think of φ under a description of it as satisfying the conditions for knowledge. Indeed, for all (15) requires, one need not have the faintest idea of what those conditions are.
This is false if we adopt the requirement of strict coherence, which impels us to reserve probability 1 for tautologies only (see Kemeny 1955 and Jeffreys 1961). However, most Bayesian epistemologists nowadays think that the requirement of strict coherence is untenable; see Howson (2000) and Hájek (2003) for some strong objections against it.
To derive the same conclusion for sentences of the form of (13), one needs instead of (20) the principle \(\hbox{B}_{i}(\varphi)\Rightarrow\hbox{Pr}_{i}\bigl(\hbox{K}_{i}(\varphi)\bigr) > 0.\) The proof is straightforward and left to the reader. Of course this does not add anything to the general point we are making.
It is certainly true, as the referee remarked, that if I am in excruciating pain, then I cannot rationally have a degree of belief of 1 that I am in no pain at all, even though this is consistent, and that Bayesians must already acknowledge as much. But it is still possible rationally to believe to a degree of 1 that I am in no pain at all (namely, when I am not in pain). In the case of sentences like (9), by contrast, a conceptual or near-to-conceptual impossibility bars us from ever believing them rationally to such a degree—which is the point of the present section.
I am greatly indebted to Jonathan Adler and to an anonymous referee for extremely helpful comments on earlier versions of the paper. Thanks also to Paolo Casalegno and Leon Horsten for useful feedback.
References
Adler, J. (2002). Belief’s own ethics. Cambridge, MA: MIT Press.
Adler, J. (2006). Withdrawal and contextualism. Analysis, 66, 280–285.
BonJour, L., & Sosa, E. (2003). Epistemic justification. Malden, MA: Blackwell Publishing.
Christensen, D. (2004). Putting logic in its place. Oxford: Oxford University Press.
DeRose, K. (1991). Epistemic possibilities. Philosophical Review, 100, 581–605.
DeRose, K. (1996). Knowledge, assertion, and lotteries. Australasian Journal of Philosophy, 74, 568–580.
DeRose, K. (2002). Assertion, knowledge, and context. Philosophical Review, 111, 167–203.
Douven, I. (2002). A new solution to the paradoxes of rational acceptability. British Journal for the Philosophy of Science, 53, 391–410.
Douven, I. (2005). A principled solution to Fitch’s paradox. Erkenntnis, 62, 47–69.
Douven, I. (2006). Assertion, knowledge, and rational credibility. Philosophical Review, 115, 449–485.
Douven, I. (2008a). The lottery paradox and our epistemic goal. Pacific Philosophical Quarterly, in press.
Douven, I. (2008b). Review of Christensen (2004). Philosophical Review, 117, 123–126.
Douven, I., & Uffink, J. (2003). The preface paradox revisited. Erkenntnis, 59, 389–420.
Douven, I., & Williamson, T. (2006). Generalizing the lottery paradox. British Journal for the Philosophy of Science, 57, 755–779.
Foley, R. (1992). The epistemology of belief and the epistemology of degrees of belief. American Philosophical Quarterly, 29, 111–124.
Gillies, D. (2000). Philosophical theories of probability. London: Routledge.
Grice, H. P. (1989). Logic and conversation. In his Studies in the way of words (pp. 22–40). Cambridge, MA: Harvard University Press.
Hájek, A. (2003). What conditional probability could not be. Synthese, 137, 273–323.
Harman, G. (1986). Change in view. Cambridge, MA: MIT Press.
Hintikka, J. (1962). Knowledge and belief. Ithaca, NY: Cornell University Press.
Howson, C. (2000). Hume’s problem: Induction and the justification of belief. Oxford: Clarendon Press.
Hunter, D. (1996). On the relation between categorical and probabilistic belief. Noûs, 30, 75–98.
Jeffreys, H. (1961). Theory of probability (3rd ed.). Oxford: Clarendon Press.
Johnson, D. (2004). Truth without paradox. Lanham, MD: Rowman and Littlefield.
Kemeny, J. (1955). Fair bets and inductive probabilities. Journal of Symbolic Logic, 20, 263–273.
Kitcher, P. (1993). The advancement of science. Oxford: Oxford University Press.
Kyburg, H. (1970). Conjunctivitis. In M. Swain (Ed.), Induction, acceptance and rational belief (pp. 55–82). Dordrecht: Reidel.
Laudan, L. (1981). A confutation of convergent realism. Philosophy of Science, 48, 19–49.
Lewis, D. (1996). Elusive knowledge. Australasian Journal of Philosophy, 74, 549–567. (Reprinted in K. DeRose & T. Warfield (Eds.) (1999), Skepticism (pp. 220–239). Oxford: Oxford University Press; the page reference is to the reprint.)
Maher, P. (1993). Betting on theories. Cambridge: Cambridge University Press.
Makinson, D. (1965). The paradox of the preface. Analysis, 25, 205–207.
Mellor, D. H. (1993). How to believe a conditional. Journal of Philosophy, 90, 233–248.
Mendola, J. (2007). Knowledge and evidence. Journal of Philosophy, 104, 157–160.
Meyer, J.-J., & van der Hoek, W. (1995). Epistemic logic for AI and computer science. Cambridge: Cambridge University Press.
Moore, G. E. (1962). Commonplace book: 1919–1953. London: Allen and Unwin.
Pollock, J. (1986). Contemporary theories of knowledge. Totowa, NJ: Rowman and Littlefield.
Psillos, S. (1999). Scientific realism: How science tracks truth. London: Routledge.
Schiffer, S. (2005). Paradox and the a priori. In T. Szabó Gendler & J. Hawthorne (Eds.), Oxford studies in Epistemology (Vol. I, pp. 273–310). Oxford: Oxford University Press.
Searle, J. (2002). Consciousness and language. Cambridge: Cambridge University Press.
Shoemaker, S. (1995). Moore’s paradox and self-knowledge. Philosophical Studies, 77, 211–228.
Tennant, N. (1997). The taming of the true. Oxford: Oxford University Press.
Unger, P. (1975). Ignorance: A case for scepticism. Oxford: Clarendon Press.
van Fraassen, B. (1995). Fine-grained opinion, probability, and the logic of full belief. Journal of Philosophical Logic, 24, 349–377.
Weatherson, B. (2005). Can we do without pragmatic encroachment? Philosophical Perspectives, 19, 417–443.
Wedgwood, R. (1999). The a priori rules of rationality. Philosophy and Phenomenological Research, 59, 113–131.
Williams, J. (1996). Moorean absurdities and the nature of assertion. Australasian Journal of Philosophy, 74, 133–149.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Douven, I. Assertion, Moore, and Bayes. Philos Stud 144, 361–375 (2009). https://doi.org/10.1007/s11098-008-9214-4