
A new anti-expertise dilemma


Abstract

Instability occurs when the very fact of choosing one particular possible option rather than another affects the expected values of those possible options. In decision theory: An act is stable iff, given that it is actually performed, its expected utility is maximal. When there is no stable choice available, the resulting instability can seem to pose a dilemma of practical rationality. A structurally very similar kind of instability, which occurs in cases of anti-expertise, can likewise seem to create dilemmas of epistemic rationality. One possible line of response to such cases of instability, suggested by both Jeffrey (The logic of decision, University of Chicago Press, Chicago, 1983) and Sorensen (Aust J Philos 65(3):301–315, 1987), is to insist that a rational agent can simply refuse to accept that such instability applies to herself in the first place. According to this line of thought it can be rational for a subject to discount even very strong empirical evidence that the anti-expertise condition obtains. I present a new variety of anti-expertise condition where no particular empirical stage-setting is required, since the subject can deduce a priori that an anti-expertise condition obtains. This kind of anti-expertise case is therefore not amenable to the line of response that Jeffrey and Sorensen recommend.


Notes

  1. “Death works from an appointment book which states time and place; a person dies if and only if the book correctly states in what city he will be at the stated time. The book is made up weeks in advance on the basis of highly reliable predictions. An appointment on the next day has been inscribed for him. Suppose, on this basis, the man would take his being in Damascus the next day as strong evidence that his appointment with Death is in Damascus, and would take his being in Aleppo the next day as strong evidence that his appointment is in Aleppo… If… he decides to go to Aleppo, he then has strong grounds for expecting that Aleppo is where Death already expects him to be, and hence it is rational for him to prefer staying in Damascus. Similarly, deciding to stay in Damascus would give him strong grounds for thinking that he ought to go to Aleppo.” (Gibbard and Harper 1978, p. 373). A worked version of this instability, with purely illustrative numbers, is sketched at the end of these notes.

  2. One might alternatively define stability in terms of conditional expected value: an option is stably preferred iff its expected utility conditional on its actually being chosen is maximal. (One way of displaying both formulations is sketched at the end of these notes.) I am very grateful to an anonymous referee for helpful comments on defining stability.

  3. Notice that strictly speaking, according to Jeffrey’s own official view, there are no such things as outright or full beliefs—we should eliminate such talk and replace it with subjective probabilities.

  4. A different condition that is also sometimes labeled ‘anti-expertise’ is: (Bp → ¬ p) & (B¬p → p). I.e., if you believe it, it is false; if you disbelieve it, it is true. Notice that this condition creates less threat of a dilemma insofar as it says nothing about suspending judgement or simply neither believing nor disbelieving that p. The biconditional AE, in contrast, states that: (Bp → ¬ p) & (¬ Bp → p). (Both conditions are displayed side by side at the end of these notes.)

  5. Reed Richter (1990) likewise argues that it can be rationally permitted to not believe the deductive consequences of your beliefs, even if you know and grasp that they have these consequences.

  6. Sorensen does allow that in light of new evidence one could increase one’s confidence that one is an anti-expert from a low credence to a less low credence, so long as one does not actually (outright) believe that one is an anti-expert. Likewise he allows that one can permissibly believe that one might be an anti-expert. So Sorensen does allow for some limited sensitivity to evidence in favour of one’s being an anti-expert. Many thanks to an anonymous referee for this journal for helpful comments on this point.

  7. This last sentence is, of course, an instance of what we now call (following Wittgenstein 1953, part II, section x) ‘Moore’s paradox’. Moore’s own example was: ‘I went to the pictures last Tuesday, but I don’t believe that I did.’ (Moore 1942, p. 543), which is of the form: p & ¬ Bp. Sentences of the form: p & B ¬ p, or: Bp & ¬ p, are also standardly counted as ‘Moorean’.

  8. Compare, for example, Koons (1990), who presents a pair of ‘Doxastic Paradoxes’ that do not rely on self-reference. But the set-up of these situations does still rely on the subject somehow having extremely strong empirical evidence that a biconditional of the form ‘p ↔ S cannot justifiably Bp’ really does apply to her.

  9. Wittgenstein (1953) very briefly mentions this sort of demonstrative judgement of one’s own height in section §279 of the Investigations, in the course of what is generally considered to be his ‘private language argument’: ‘Imagine someone saying: "But I know how tall I am!" and laying his hand on top of his head to prove it.’ One might naturally read Wittgenstein here as dismissing such a claim and/or demonstration as meaningless. However, in the immediately preceding section §278 he writes: ‘"I know how the colour green looks to me"—surely that makes sense!—Certainly: what use of the proposition are you thinking of?’ Assuming that in the last sentence here Wittgenstein is speaking in propria persona, his point then seems to be not so much that such statements are simply meaningless, but rather just that we should ask what the point or usage of such statements is supposed to be in any given context. But setting aside matters of Wittgensteinian interpretation, one possible objection would be to claim that these kinds of demonstrative judgements are meaningless, hence fail to be true. Let me just state that this strikes me as highly implausible. If someone were to point at the ground beneath their feet and assert “I am located here!” we may wonder what the point of the assertion is, but there seems no basis whatsoever for denying that they have uttered a perfectly well-formed English sentence, expressing a perfectly meaningful claim that is, trivially, true. Many thanks to an anonymous referee for interesting and helpful discussion on this point.

  10. E.g. I can know by reflection that “I exist right now”, whereas I might not be able to know just by reflection that “N exists at time t”. One might dispute whether the former sort of knowledge should really count as a priori since it plausibly relies on the subject’s self-conscious experience of her own mental life, but it seems clear at least that it does not rely on any empirical evidence about the external world and so is something that can be known just ‘by reflection’.

  11. If we understand the demonstrative ‘this’ in the thought ‘I am not in this mental state’ to be referring to one’s actual current total mental state, Mi, then the anti-expertise dilemma evaporates. For when I am in the initial state Mi I don’t believe that: [I am not in Mi], which is just as it should be, since it is false that [I am not in Mi] when I am in Mi. Conversely, if I were to go ahead and believe [I am not in Mi] then that belief would be true, for I would then be in a different total mental state, one which is neither Mi nor M* – for recall, M* is just like Mi except for the addition of the belief [I am not in M*], whereas this new total mental state has added the different new belief [I am not in Mi]. (This reasoning is displayed schematically at the end of these notes.)

  12. Notice also that this sort of instability would not be evaded, but merely shifted, by simultaneously believing both “I am not in this total mental state” and some other unrelated new true proposition, n. Admittedly, forming a belief in the conjunction “n & S is not in M* at t” would then put S not into M* (which is exactly like Mi except for the addition of the belief “S is not in M* at t”), but into some other new total mental state. And so S’s belief that “S is not in M* at t” would here be true. But, again, this just shifts the instability to a different proposition. For consider: if, starting in Mi, one formed (all at once!) the judgement “n & I am not in this mental state”, the demonstrative this would now be picking out a total mental state that is exactly like Mi except for the addition of this conjunctive belief: “n & I am not in this mental state”. Call this latter new total mental state M***. Given that n is also a true proposition, if S remains in the initial total mental state Mi at time t, the proposition “n & S is not in M*** at t” is true but not believed by S. But if, starting from Mi, S were to go ahead at time t and actually believe that n & S is not in M***, then this proposition would be false. So the anti-expertise instability remains. (This, too, is displayed schematically at the end of these notes.)

  13. I am extremely grateful to an anonymous referee for this journal whose helpful comments substantially improved the non-indexical formulations in this section.

  14. See, e.g. Kripke (1975) and Mortensen and Priest (1981).

  15. The example occurs on p. 56 of his Naming and Necessity (1980). Kripke’s other example of the contingent a priori was “If Neptune exists, Neptune is the cause of the perturbations in the orbit of Uranus”—see Kripke (1980, p. 75).

  16. See, e.g. Donnellan (1977), Casullo (1977), Bonjour (1998), Turri (2011) for criticisms which deny that Kripke’s example really is contingent a priori.

  17. In fact I don’t think that the use of the indexicals ‘you’ or ‘I’ is essential for Crimmins’ case—the same issues would arise for ‘S falsely believes that Gonzalez, master of disguise, is an idiot’.

  18. I am very grateful to Peter Brössel for helpful conversations about the comparison with Gödel’s theorem.

  19. I am very grateful to an anonymous referee for this journal for pressing me to discuss this point.

  20. It might be thought that the statement: ‘no statement is immune to revision’ is in danger of leading to paradox if it applies to itself. See, e.g. Katz (1988), Elstein (2007), Ebbs (2016) and Baumann (2017) for discussion.

  21. Notice also, though it is an ad hominem, that Sorensen himself is well-known for championing the epistemicist approach to vagueness on the basis that it allows us to retain classical logic, and for rejecting any rival approaches which would require us to revise or deviate from classical logic.

  22. Earlier versions of this paper were presented at the conference ‘Hard Cases and Rational Choice’ at the University of Bern, at the workshop ‘Rationality: Epistemic and Practical Perspectives’ and at the colloquium on Logic and Epistemology, both at the Ruhr University Bochum. I am very grateful to the audiences on all those occasions for helpful questions and feedback. Many thanks in particular to Peter Brössel, Ruth Chang, Insa Lawler, Jim Pryor, Kevin Reuter, Christian Straßer and Filippo Vindrola for their comments, objections and advice. Finally, I am especially grateful to the anonymous referees for this journal and for another journal, whose reports very substantially improved this paper.
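The following display-form sketches spell out some of the formal points gestured at in the notes above; the notation and numbers are illustrative glosses added here, not formulations taken from the paper itself.

First, the instability in the Death in Damascus case of note 1 can be made explicit with purely illustrative figures: suppose surviving is worth 10, dying is worth 0, and the traveller treats his destination as evidence of probability 0.99 about where Death awaits. Writing $\mathrm{EU}(X \mid Y)$ for the expected utility of going to $X$, evaluated on the supposition that $Y$ is the destination actually chosen:

\[
\mathrm{EU}(\mathrm{Aleppo} \mid \mathrm{Aleppo}) = 0.99 \cdot 0 + 0.01 \cdot 10 = 0.1, \qquad \mathrm{EU}(\mathrm{Damascus} \mid \mathrm{Aleppo}) = 0.99 \cdot 10 + 0.01 \cdot 0 = 9.9,
\]
\[
\mathrm{EU}(\mathrm{Damascus} \mid \mathrm{Damascus}) = 0.1, \qquad \mathrm{EU}(\mathrm{Aleppo} \mid \mathrm{Damascus}) = 9.9.
\]

Conditional on either choice, the other destination has the greater expected utility, so neither act is stable in the sense defined in the abstract.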
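Second, the two glosses on stability from note 2 can be rendered, on one natural reading, as follows (a sketch only; the precise formulation is left open in the note):

\[
\text{Stable act:}\quad \mathrm{EU}(A \mid A) \ \ge\ \mathrm{EU}(B \mid A) \quad \text{for every alternative } B;
\]
\[
\text{Stably preferred option:}\quad \mathrm{EU}(A \mid A) \ \ge\ \mathrm{EU}(B \mid B) \quad \text{for every alternative } B.
\]

The first says that, supposing $A$ is actually performed, no alternative has higher expected utility; the second compares each option's expected utility conditional on that very option's being chosen.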
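Third, the two anti-expertise conditions from note 4, displayed side by side:

\[
\text{Weaker condition:}\quad (Bp \rightarrow \neg p)\ \&\ (B\neg p \rightarrow p);
\]
\[
\text{Biconditional AE:}\quad (Bp \rightarrow \neg p)\ \&\ (\neg Bp \rightarrow p), \quad \text{i.e.}\quad p \leftrightarrow \neg Bp.
\]

A subject who neither believes nor disbelieves $p$ satisfies the weaker condition vacuously, since both antecedents are false; under AE, by contrast, mere absence of belief ($\neg Bp$) already suffices for the truth of $p$, which is why suspending judgement offers no escape.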
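Fourth, the reasoning of note 11, displayed schematically. Let $M(S,t)$ be the subject's actual total mental state at $t$, and read the demonstrative content as $\neg\,(M(S,t) = M_i)$:

\[
\text{If } M(S,t) = M_i:\quad \neg\,(M(S,t) = M_i) \text{ is false and, by construction, is not believed in } M_i;
\]
\[
\text{If } S \text{ believes } \neg\,(M(S,t) = M_i):\quad \text{that belief does not occur in } M_i, \text{ so } M(S,t) \neq M_i \text{ and the belief is true.}
\]

Either way there is neither a false belief nor an unbelieved truth of the relevant form, which is why the dilemma evaporates on this reading.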
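Finally, the shifted instability of note 12, in the same notation, where $M_{***}$ is $M_i$ plus the single added belief in the conjunction $n\ \&\ \neg\,(M(S,t) = M_{***})$:

\[
\text{Remain in } M_i:\quad n\ \&\ \neg\,(M(S,t) = M_{***}) \text{ is true (since } M_i \neq M_{***}\text{) yet unbelieved;}
\]
\[
\text{Believe the conjunction:}\quad M(S,t) = M_{***}, \text{ so the second conjunct, and hence the conjunction, is believed but false.}
\]

So the anti-expertise pattern recurs for the new conjunctive proposition, just as the note says.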

References

  • Baumann, P. (2017). Is everything revisable? Ergo: An Open Access Journal of Philosophy, 4, 349–357.

  • Bommarito, N. (2010). Rationally self-ascribed anti-expertise. Philosophical Studies, 151(3), 413–419.

  • Bonjour, L. (1998). In defense of pure reason. New York: Cambridge University Press.

  • Burge, T. (1978). Buridan and epistemic paradox. Philosophical Studies, 34, 21–35.

  • Caie, M. (2013). Belief and indeterminacy. Philosophical Review, 122(4), 527–575.

  • Casullo, A. (1977). Kripke on the a priori and the necessary. Analysis, 37, 152–159.

  • Christensen, D. (2007). Epistemic self-respect. Proceedings of the Aristotelian Society, 107(1pt3), 319–337.

  • Christensen, D. (2010). Higher-order evidence. Philosophy and Phenomenological Research, 81(1), 185–215.

  • Conee, E. (1982). Utilitarianism and rationality. Analysis, 42(1), 55–59.

  • Crimmins, M. (1992). ‘I falsely believe that p.’ Analysis, 52(3), 191.

  • Donnellan, K. (1977). The contingent a priori and rigid designators. Midwest Studies in Philosophy, 2, 12–27.

  • Ebbs, G. (2016). Reading Quine’s claim that no statement is immune to revision. In F. Janssen-Lauret & G. Kemp (Eds.), Quine and his place in history. London: Palgrave Macmillan.

  • Egan, A., & Elga, A. (2005). I can’t believe I’m stupid. Philosophical Perspectives, 19(1), 77–93.

  • Elstein, D. (2007). A new revisability paradox. Pacific Philosophical Quarterly, 88(3), 308–318.

  • Gibbard, A., & Harper, W. L. (1978). Counterfactuals and two kinds of expected utility. In C. A. Hooker, J. J. Leach, & E. F. McClennen (Eds.), Foundations and applications of decision theory. Dordrecht: D. Reidel.

  • Jeffrey, R. (1983). The logic of decision (2nd ed.). Chicago: University of Chicago Press.

  • Katz, J. (1988). Realistic rationalism. Cambridge, MA: MIT Press.

  • Koons, R. (1990). Doxastic paradoxes without self-reference. Australasian Journal of Philosophy, 68(2), 168–177.

  • Kripke, S. (1975). Outline of a theory of truth. Journal of Philosophy, 72, 690–712.

  • Kripke, S. (1980). Naming and necessity. Cambridge, MA: Harvard University Press.

  • Kyburg, H. (1961). Probability and the logic of rational belief. Middletown, CT: Wesleyan University Press.

  • Makinson, D. (1965). The paradox of the preface. Analysis, 25, 205–207.

  • Moore, G. E. (1942). A reply to my critics. In P. A. Schilpp (Ed.), The philosophy of G. E. Moore. Evanston, IL: Northwestern University Press.

  • Mortensen, C., & Priest, G. (1981). The truth teller paradox. Logique et Analyse, 95–96, 381–388.

  • Quine, W. V. O. (1961). Two dogmas of empiricism. In From a logical point of view (2nd ed.). Cambridge, MA: Harvard University Press.

  • Richter, R. (1990). Ideal rationality and hand waving. Australasian Journal of Philosophy, 68(2), 147–156.

  • Sorensen, R. (1987). Anti-expertise, instability, and rational choice. Australasian Journal of Philosophy, 65(3), 301–315.

  • Sorensen, R. (1988). Blindspots. Oxford: Oxford University Press.

  • Titelbaum, M. (2015). Rationality’s fixed point (or, in defense of right reason). Oxford Studies in Epistemology, 5, 253–294.

  • Turri, J. (2011). Contingent a priori knowledge. Philosophy and Phenomenological Research, 83(2), 327–344.

  • Tymoczko, T. (1984). An unsolved puzzle about knowledge. The Philosophical Quarterly, 34(137), 437–458.

  • Wittgenstein, L. (1953). Philosophical investigations (G. E. M. Anscombe & R. Rhees, Eds.; G. E. M. Anscombe, Trans.). Oxford: Blackwell.


Author information

Correspondence to Thomas Raleigh.



Cite this article

Raleigh, T. A new anti-expertise dilemma. Synthese 199, 5551–5569 (2021). https://doi.org/10.1007/s11229-021-03035-5
