Many views rely on the idea that it can never be rational to have high confidence in something like, “P, but my evidence doesn’t support P.” Call this idea the “Non-Akrasia Constraint”. Just as an akratic agent acts in a way she believes she ought not act, an epistemically akratic agent believes something that she believes is unsupported by her evidence. The Non-Akrasia Constraint says that ideally rational agents will never be epistemically akratic. In a number of recent papers, the Non-Akrasia Constraint has been called into question. The goal of this paper is to defend it... for the most part.
White, Christensen, and Feldman have recently endorsed uniqueness, the thesis that given the same total evidence, two rational subjects cannot hold different views. Kelly, Schoenfield, and Meacham argue that White and others have at best only supported the weaker, merely intrapersonal view that, given the total evidence, there are no two views which a single rational agent could take. Here, we give a new argument for uniqueness, an argument with deliberate focus on the interpersonal element of the thesis. Our argument is that the best explanation of the value of promoting rationality is an explanation that entails uniqueness.
Plausibly, you should believe what your total evidence supports. But cases of misleading higher-order evidence—evidence about what your evidence supports—present a challenge to this thought. In such cases, taking both first-order and higher-order evidence at face value leads to a seemingly irrational incoherence between one’s first-order and higher-order attitudes: you will believe P, but also believe that your evidence doesn’t support P. To avoid sanctioning tension between epistemic levels, some authors have abandoned the thought that both first-order and higher-order evidence have rational bearing. This sacrifice is both costly and unnecessary. We propose a principle, Evidential Calibration, which requires rational agents to accommodate first-order evidence correctly, while allowing rational uncertainty about what to believe. At the same time, it rules out irrational tensions between epistemic levels. We show that while there are serious problems for some views on which we can rationally believe, “P, but my evidence doesn’t support P”, Evidential Calibration avoids these problems. An important upshot of our discussion is a new way to think about the relationship between epistemic levels: why first-order and higher-order attitudes should generally be aligned, and why it is sometimes—though not always—problematic when they diverge.
Believing rationally is epistemically valuable, or so we tend to think. It’s something we strive for in our own beliefs, and we criticize others for falling short of it. We theorize about rationality, in part, because we want to be rational. But why? I argue that how we answer this question depends on how permissive our theory of rationality is. Impermissive and extremely permissive views can give good answers; moderately permissive views cannot.
Epistemologists often assume that rationality bears an important connection to the truth. In this paper I examine the implications of this commitment for permissivism: if rationality is a guide to the truth, can it also allow some leeway in how we should respond to our evidence? I first discuss a particular strategy for connecting permissive rationality and the truth, developed in a recent paper by Miriam Schoenfield. I argue that this limited truth-connection is unsatisfying, and the version of permissivism that supports it faces serious challenges; so, for mainstream permissivism, the truth problem is still unsolved. I then discuss a strategy available to impermissivists, according to which rationality bears a quite strong connection to truth. I argue that this second strategy is successful.
Standard accuracy-based approaches to imprecise credences have the consequence that it is rational to move between precise and imprecise credences arbitrarily, without gaining any new evidence. Building on the Educated Guessing Framework of Horowitz (2019), we develop an alternative accuracy-based approach to imprecise credences that does not have this shortcoming. We argue that it is always irrational to move from a precise state to an imprecise state arbitrarily, but that it can be rational to move from an imprecise state to a precise state arbitrarily.
In an influential paper, L. A. Paul argues that one cannot rationally decide whether to have children. In particular, she argues that such a decision is intractable for standard decision theory. Paul's central argument in this paper rests on the claim that becoming a parent is ``epistemically transformative''---prior to becoming a parent, it is impossible to know what being a parent is like. Paul argues that because parenting is epistemically transformative, one cannot estimate the values of the various outcomes of a decision whether to become a parent. In response, we argue that it is possible to estimate the value of epistemically transformative experiences. Therefore, there is no special difficulty involved in deciding whether to undergo epistemically transformative experiences. Insofar as major life decisions do pose a challenge to decision theory, we suggest that this is because they often involve separate, familiar problems.
Credences, unlike full beliefs, can’t be true or false. So what makes credences more or less accurate? This chapter offers a new answer to this question: credences are accurate insofar as they license true educated guesses, and less accurate insofar as they license false educated guesses. This account is compatible with immodesty: a rational agent will regard her own credences as best for the purposes of making true educated guesses. The guessing account can also be used to justify certain coherence constraints on rational credence, such as probabilism. The chapter concludes by discussing some advantages of the guessing account over rival accounts of accuracy.
William James famously tells us that there are two main goals for rational believers: believing truth and avoiding error. I argue that epistemic consequentialism—in particular, its embodiment in epistemic utility theory—seems to be well positioned to explain how epistemic agents might permissibly weight these goals differently and adopt different credences as a result. After all, practical versions of consequentialism render it permissible for agents with different goals to act differently in the same situation. Nevertheless, I argue that epistemic consequentialism doesn’t allow for this kind of permissivism, and I go on to argue that this reveals a deep disanalogy between decision theory and the formally similar epistemic utility theory. This raises the question of whether epistemic utility theory is a genuinely consequentialist theory at all.
I argue that three arguments for conditionalization -- the Diachronic Dutch Book, the expected-accuracy maximization argument from Greaves and Wallace, and the accuracy-dominance argument from Briggs and Pettigrew -- can all be improved by narrowing their focus. I suggest alternative, targeted arguments which better identify the flaw involved in non-conditionalizing updates.
In Accuracy and the Laws of Credence, Richard Pettigrew gives several decision-theoretic arguments for formal requirements on rational credence. Pettigrew's arguments build on a central notion of epistemic value, but employ different decision rules. These comments explore how our choice of decision rule might matter, and discuss one of Pettigrew's arguments in detail: his argument for the Principle of Indifference, which relies on Maximin.