According to an increasingly popular epistemological view, people need outright beliefs in addition to credences to simplify their reasoning. Outright beliefs simplify reasoning by allowing thinkers to ignore small error probabilities. What is outright believed can change between contexts. It has been claimed that thinkers manage shifts in their outright beliefs and credences across contexts by an updating procedure resembling conditionalization, which I call pseudo-conditionalization (PC). But conditionalization is notoriously complicated. The claim that thinkers manage their beliefs via PC is thus in tension with the view that the function of beliefs is to simplify our reasoning. I propose to resolve this puzzle by rejecting the view that thinkers employ PC. Based on this solution, I furthermore argue for a descriptive and a normative claim. The descriptive claim is that the available strategies for managing beliefs and credences across contexts that are compatible with the simplifying function of outright beliefs can generate synchronic and diachronic incoherence in a thinker’s attitudes. Moreover, I argue that the view of outright belief as a simplifying heuristic is incompatible with the view that there are ideal norms of coherence or consistency governing outright beliefs that are too complicated for human thinkers to comply with.
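For reference, the updating rule that pseudo-conditionalization resembles can be stated in one line; this is the textbook formulation of conditionalization, not notation taken from the paper itself. Where c is the thinker’s credence function and E is the evidence learned (with c(E) > 0):

c_{\text{new}}(H) \;=\; c(H \mid E) \;=\; \frac{c(H \wedge E)}{c(E)}.

The complexity the abstract alludes to lies not in the formula but in applying it: the thinker must keep track of conditional credences for every relevant hypothesis–evidence pair.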
Until recently, it seemed like no theory about the relationship between rational credence and rational outright belief could reconcile three independently plausible assumptions: that our beliefs should be logically consistent, that our degrees of belief should be probabilistic, and that a rational agent believes something just in case she is sufficiently confident in it. Recently a new formal framework has been proposed that can accommodate these three assumptions, which is known as “the stability theory of belief” or “high probability cores.” In this paper, I examine whether the stability theory of belief can meet two further constraints that have been proposed in the literature: that it is irrational to outright believe lottery propositions, and that it is irrational to hold outright beliefs based on purely statistical evidence. I argue that these two further constraints create a dilemma for a proponent of the stability theory: she must either deny that her theory is meant to give an account of the common epistemic notion of outright belief, or supplement the theory with further constraints on rational belief that render the stability theory explanatorily idle. This result sheds light on the general prospects for a purely formal theory of the relationship between rational credence and belief, i.e. a theory that does not take into account belief content. I argue that it is doubtful that any such theory could properly account for these two constraints, and hence play an important role in characterizing our common epistemic notion of outright belief.
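A standard illustration (not drawn from the paper itself) of why the three assumptions are hard to reconcile uses a fair lottery. Suppose rational belief requires credence at or above a threshold t, say t = 0.95, and consider a fair 100-ticket lottery with exactly one winner:

c(\text{ticket } i \text{ loses}) \;=\; \frac{99}{100} \;=\; 0.99 \;\geq\; t \quad \text{for each } i = 1, \dots, 100.

The threshold view thus licenses believing, of each ticket, that it loses. But these hundred beliefs, together with the belief that some ticket wins, are jointly logically inconsistent. The stability theory is designed to block this result without abandoning the threshold idea.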
How should thinkers cope with uncertainty? Julia Staffel breaks new ground in the study of rationality by answering this question and many others. She also explains how it is better to be less irrational, because less irrational degrees of belief are generally more accurate and better at guiding our actions.
In this paper I am concerned with the question of whether degrees of belief can figure in reasoning processes that are executed by humans. It is generally accepted that outright beliefs and intentions can be part of reasoning processes, but the role of degrees of belief remains unclear. The literature on subjective Bayesianism, which seems to be the natural place to look for discussions of the role of degrees of belief in reasoning, does not address the question of whether degrees of belief play a role in real agents’ reasoning processes. On the other hand, the philosophical literature on reasoning, which relies much less heavily on idealizing assumptions about reasoners than Bayesianism, is almost exclusively concerned with outright belief. One possible explanation for why no philosopher has yet developed an account of reasoning with degrees of belief is that reasoning with degrees of belief is not possible for humans. In this paper, I will consider three arguments for this claim. I will show why these arguments are flawed, and conclude that, at least as far as these arguments are concerned, it seems like there is no good reason why the topic of reasoning with degrees of belief has received so little attention.
This paper is about teaching probability to students of philosophy who don’t aim to do primarily formal work in their research. These students are unlikely to seek out classes about probability or formal epistemology for various reasons, for example because they don’t realize that this knowledge would be useful for them or because they are intimidated by the material. However, most areas of philosophy now contain debates that incorporate probability, and basic knowledge of it is essential even for philosophers whose work isn’t primarily formal. In this paper, I explain how to teach probability to students who are not already enthusiastic about formal philosophy, taking into account the common phenomena of math anxiety and the lack of reading skills for formal texts. I address course design, lesson design, and assignment design. Most of my recommendations also apply to teaching formal methods other than probability theory.
In this paper, I highlight an interesting difference between belief on the one hand, and suspended judgment and credence on the other hand. This difference is the following: credences and suspended judgments are suitable to serve as transitional as well as terminal attitudes in our reasoning, whereas beliefs are only appropriate as terminal attitudes. The notion of a transitional attitude is not an established one in the literature, but I argue that introducing it helps us better understand the different roles suspended judgments and credences can play in our reasoning. Transitional and terminal attitudes have interestingly different descriptive and normative properties. I also compare my account of transitional attitudes to other inquiry-guiding attitudes that have recently been characterized in the literature and explain why they are different.
In Accuracy and the Laws of Credence Richard Pettigrew assumes a particular view of belief, which states that people don't have any other doxastic states besides credences. This is in tension with the popular position that people have both credences and outright beliefs. Pettigrew claims that such a dual view of belief is incompatible with the accuracy-first approach. I argue in this paper that it is not. This is good news for Pettigrew, since it broadens the appeal of his framework.
This paper investigates the relationship between two evaluative claims about agents’ degrees of belief: (i) that it is better to have more, rather than less accurate degrees of belief, and (ii) that it is better to have less, rather than more probabilistically incoherent degrees of belief. We show that, for suitable combinations of inaccuracy measures and incoherence measures, both claims are compatible, although not equivalent; moreover, certain ways of becoming less incoherent always guarantee improvements in accuracy. Incompatibilities between particular incoherence and inaccuracy measures can be exploited to argue against particular ways of measuring either inaccuracy or incoherence.
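A minimal worked example, using the Brier score as the inaccuracy measure (one of the many “suitable combinations” the abstract leaves open), shows how reducing incoherence can improve accuracy at every world. Take the incoherent credences c(A) = 0.5 and c(\neg A) = 0.6, and let c' be their Euclidean projection onto the coherent set, c'(A) = 0.45, c'(\neg A) = 0.55. Then:

\text{world where } A: \quad (1-0.5)^2 + (0-0.6)^2 = 0.61 \;>\; (1-0.45)^2 + (0-0.55)^2 = 0.605
\text{world where } \neg A: \quad (0-0.5)^2 + (1-0.6)^2 = 0.41 \;>\; (0-0.45)^2 + (1-0.55)^2 = 0.405

Moving to the nearest coherent credence function lowers Brier inaccuracy whichever way the world turns out.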
I argue that in order to account for normative uncertainty, an expressivist theory of normative language and thought must accomplish two things: Firstly, it needs to find room in its framework for a gradable conative attitude, degrees of which can be interpreted as representing normative uncertainty. Secondly, it needs to defend appropriate rationality constraints pertaining to those graded attitudes. The first task – finding an appropriate graded attitude that can represent uncertainty – is not particularly problematic. I tackle the second task by exploring whether we can devise expressivist versions of the standard arguments used to support rationality constraints on degrees of uncertainty, Dutch book arguments and accuracy-dominance arguments. I show that we can do so, but that the resulting arguments don’t support the same rationality constraints as the original versions of the arguments.
Epistemologists routinely distinguish between two kinds of justification or rationality – the propositional and the doxastic kind – in order to characterize importantly different ways in which an attitude can be justified or rational for a person. I argue that these notions, as they are commonly understood, are well suited to capture rationality judgments about the attitudes that agents reach as conclusions of their reasoning. Yet, these notions are ill-suited to capture rationality judgments about attitudes that agents form while their reasoning is still in progress. In fact, we currently lack any suitable theory of rationality that lets us capture the ways in which we evaluate the rationality of such transitional attitudes, even though they are ubiquitous. I propose to capture these rationality judgments by introducing a new notion of rationality that is orthogonal to the propositional/doxastic distinction, which I call pro tem rationality. This new notion can be integrated with both traditional and formal ways of characterizing rationality or justification. It can be used to enlighten debates about logical and empirical learning, higher-order evidence, and the epistemology of philosophy, among others.
Many philosophers hold that the probability axioms constitute norms of rationality governing degrees of belief. This view, known as subjective Bayesianism, has been widely criticized for being too idealized. It is claimed that the norms on degrees of belief postulated by subjective Bayesianism cannot be followed by human agents, and hence have no normative force for beings like us. This problem is especially pressing since the standard framework of subjective Bayesianism only allows us to distinguish between two kinds of credence functions—coherent ones that obey the probability axioms perfectly, and incoherent ones that don’t. An attractive response to this problem is to extend the framework of subjective Bayesianism in such a way that we can measure differences between incoherent credence functions. This lets us explain how the Bayesian ideals can be approximated by humans. I argue that we should look for a measure that captures what I call the ‘overall degree of incoherence’ of a credence function. I then examine various incoherence measures that have been proposed in the literature, and evaluate whether they are suitable for measuring overall incoherence. The competitors are a qualitative measure that relies on finding coherent subsets of incoherent credence functions, a class of quantitative measures that measure incoherence in terms of normalized Dutch book loss, and a class of distance measures that determine the distance to the closest coherent credence function. I argue that one particular Dutch book measure and a corresponding distance measure are particularly well suited for capturing the overall degree of incoherence of a credence function.
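To make the candidate measures concrete, here is a toy illustration in my own notation, not the paper’s. Take again the incoherent credences c(A) = 0.5 and c(\neg A) = 0.6. A bookie who sells the agent a $1 bet on A for $0.50 and a $1 bet on \neg A for $0.60 collects $1.10 and pays out exactly $1, for a guaranteed profit of $0.10. One natural normalization divides this sure loss by the total stakes:

I(c) \;=\; \frac{\text{guaranteed loss}}{\text{total stakes}} \;=\; \frac{0.10}{2} \;=\; 0.05,

so that incoherence is judged relative to the size of the bets rather than in absolute dollars. A distance-based measure would instead report, for instance, the Euclidean distance from (0.5, 0.6) to the nearest point on the coherent line x + y = 1, namely \sqrt{0.05^2 + 0.05^2} \approx 0.07.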
Bayesians defend norms of ideal rationality such as probabilism, which they claim should be approximated by non-ideal thinkers. Yet, it is not often discussed exactly in what sense it is beneficial for an agent’s credence function to approximate probabilistic coherence. Some existing research indicates that approximating coherence leads to improvements in accuracy, whereas other research suggests that it decreases Dutch book vulnerability. Yet, the existing results don’t settle whether there is a way of approximating coherence that delivers both benefits at once. We show that there is.
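The toy case used above illustrates the shape of such a result (an illustration, not the paper’s theorem): moving the incoherent credences (0.5, 0.6) along a straight line toward their nearest coherent neighbor (0.45, 0.55) simultaneously lowers Brier inaccuracy at both worlds and shrinks the bookie’s guaranteed profit:

(0.5, 0.6) \;\to\; (0.475, 0.575) \;\to\; (0.45, 0.55): \qquad \text{sure loss } 0.10 \to 0.05 \to 0, \qquad \text{Brier inaccuracy at the } A\text{-world } 0.61 \to 0.606 \to 0.605.

Approaching coherence by projection thus delivers both benefits at once in this example.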
Epistemic utility theory seeks to establish epistemic norms by combining principles from decision theory and social choice theory with ways of determining the epistemic utility of agents’ attitudes. Recently, Moss (2011, pp. 1053–69) has applied this strategy to the problem of finding epistemic compromises between disagreeing agents. She shows that the norm “form compromises by maximizing average expected epistemic utility”, when applied to agents who share the same proper epistemic utility function, yields the result that agents must form compromises by splitting the difference between their credence functions. However, this “split the difference” norm is in conflict with conditionalization, since applications of the two norms don’t commute. A common response in the literature seems to be to abandon the procedure of splitting the difference in favor of compromise strategies that avoid non-commutativity. This would also entail abandoning Moss’ norm. I explore whether a different response is feasible. If agents can use epistemic utility-based considerations to agree on an order in which they will apply the two norms, they might be able to avoid diachronic incoherence. I show that this response can’t save Moss’ norm, because the agreements it generates concerning the order of compromising and updating are not stable over time, and hence cannot avoid diachronic incoherence. I also show that a variant of Moss’ norm, which requires that the weights given to each agent’s epistemic utility change in a way that ensures commutativity, cannot be justified on epistemological grounds.
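A small worked example (my own numbers, not the paper’s) shows the non-commutativity driving the problem. Let two agents assign:

P_1(E) = 0.9,\; P_1(H \wedge E) = 0.81 \;\Rightarrow\; P_1(H \mid E) = 0.9; \qquad P_2(E) = 0.1,\; P_2(H \wedge E) = 0.01 \;\Rightarrow\; P_2(H \mid E) = 0.1.

Conditionalizing first and then splitting the difference yields (0.9 + 0.1)/2 = 0.5. Splitting first gives the mixture m = \tfrac{1}{2}P_1 + \tfrac{1}{2}P_2, with m(E) = 0.5 and m(H \wedge E) = 0.41, so conditionalizing the compromise yields m(H \mid E) = 0.82. The two orders disagree because conditionalizing a mixture effectively re-weights the agents by how much credence each gave to the evidence.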
Active reasoning is the kind of reasoning that we do deliberately and consciously. In characterizing the nature of active reasoning and the norms it should obey, the question arises which attitudes we can reason with. Many authors take outright beliefs to be the attitudes we reason with. Others assume that we can reason with both outright beliefs and degrees of belief. Some think that we reason only with degrees of belief. In this paper I approach the question of what kinds of beliefs can participate in reasoning by using the following method: I take the default position to be maximally permissive – that both graded and outright beliefs can participate in reasoning. I then identify some features of active reasoning that appear at first glance to favor a more restrictive position about which types of belief we can reason with. I argue that the arguments based on these features ultimately fail.
Sorensen offers the following definition of a ‘knowledge-lie’: ‘An assertion that p is a knowledge-lie exactly if intended to prevent the addressee from knowing that p is untrue but is not intended to deceive the addressee into believing p.’ According to Sorensen, knowledge-lies are not meant to deceive their addressee, and this fact is supposed to make them less bad than ordinary lies. I will argue that standard cases of knowledge-lies, including almost all the cases Sorensen considers, do in fact involve deception, contrary to what Sorensen claims. And while there are cases of non-deceptive knowledge-lies, such cases are deviant, either because it is only incidental that the knowledge-preventing assertion is a lie, or because it is only incidental that the lie doesn't deceive. Here's an example of a knowledge-lie: Dr Head is considering firing Dr Appendix because of his weak research. But he doesn't want to fire Appendix unless he knows that Appendix's research isn't good, and so he consults Dr Heart. Head knows that if Appendix's research is good, then Heart will tell the truth and say so, whereas if Appendix's work isn't good, then Heart may or may not ….
The aim of this paper is to examine whether it would be advantageous to introduce knowledge norms instead of the currently assumed rational credence norms into the debate about decision making under normative uncertainty. There is reason to think that this could help us better accommodate cases in which agents are rationally highly confident in false moral views. I show how Moss’ view of probabilistic knowledge can be fruitfully employed to develop a decision theory that delivers plausible verdicts in these cases. I also argue that, for this new view to be better than existing alternatives, it must adopt a particular solution to the new evil demon problem, which asks whether agents and their BIV-counterparts are equally justified. In order to get an attractive decision theory for cases of moral uncertainty, we must reject the claim that agents and their BIV-counterparts are equally justified. Moreover, the resulting view must be supplemented with a moral epistemology that explains how it is possible to be rationally morally uncertain. This is especially challenging if we assume that moral truths are knowable a priori.
Ideal agents are role models whose perfection in some normative domain we try to approximate. But which form should this striving take? It is well known that following ideal rules of practical reasoning can have disastrous results for non-ideal agents. Yet, this issue has not been explored with respect to rules of theoretical reasoning. I show how we can extend Bayesian models of ideally rational agents in order to pose and answer the question of whether non-ideal agents should form new degrees of belief in the same way as their ideal counterparts. I demonstrate that the epistemic and the practical case are parallel: following ideal rules does not always lead to optimal outcomes for non-ideal agents.
This paper proposes a novel answer to the question of what attitude agents should adopt when they receive misleading higher-order evidence that avoids the drawbacks of existing views. The answer builds on the independently motivated observation that there is a difference between attitudes that agents form as conclusions of their reasoning, called terminal attitudes, and attitudes that are formed in a transitional manner in the process of reasoning, called transitional attitudes. Terminal and transitional attitudes differ both in their descriptive and in their normative properties. When an agent receives higher-order evidence that they might have reasoned incorrectly to a belief or credence towards p, then their attitude towards p is no longer justified as a terminal attitude towards p, but it can still be justified as a transitional attitude. This view, which I call the unmooring view, allows us to capture the rational impact of misleading higher-order evidence in a way that integrates smoothly with a natural picture of epistemic justification and the dynamics of deliberation.
In this article, I discuss three distinct but related puzzles involving lotteries: Kyburg’s lottery paradox, the statistical evidence problem, and the Harman-Vogel paradox. Kyburg’s lottery paradox is the following well-known problem: if we identify rational outright belief with a rational credence above a threshold, we seem to be forced to admit either that one can have inconsistent rational beliefs, or that one cannot rationally believe anything one is not certain of. The statistical evidence problem arises from the observation that people seem to resist forming outright beliefs whenever the available evidence for the claim under consideration is purely statistical. We need explanations of whether it is in fact irrational to form such beliefs, and of whether a clear distinction can be drawn between statistical and non-statistical evidence. The Harman-Vogel paradox is usually presented as a paradox about knowledge: we tend to assume that we can know so-called ordinary propositions, such as the claim that I will be in Barcelona next spring. Yet, we hesitate to make knowledge claims regarding so-called lottery propositions, such as the claim that I won’t die in a car crash in the next few months, even if these lottery propositions are obviously entailed by the ordinary propositions we claim to know. Depending on one’s view about the relationship between rational belief and knowledge, the Harman-Vogel paradox has ramifications for a theory of rational outright belief. Formal theories of the relationship between rational credence and rational belief, such as Leitgeb’s stability theory, tend to focus mostly on handling Kyburg’s lottery paradox, but not the other two puzzles I mention. My aim in this article is to draw out relationships and differences between the puzzles, and to examine to what extent existing formal solutions to Kyburg’s lottery paradox help with answering the statistical evidence problem and the Harman-Vogel paradox.
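To fix ideas, Kyburg’s paradox can be stated as an inconsistent triad (a standard formulation, not specific to this article). Let W_i be the proposition that ticket i wins a fair n-ticket lottery, and suppose rational belief is credence at or above a threshold t < 1. Then for any n \geq 1/(1-t), the threshold licenses:

B(\neg W_1),\; B(\neg W_2),\; \dots,\; B(\neg W_n), \qquad B(W_1 \vee \dots \vee W_n),

and closing these beliefs under conjunction yields belief in a contradiction. So either rational beliefs can be jointly inconsistent, or rational belief requires certainty, or the simple threshold picture must be revised, which is what stability-type theories attempt.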
In this paper, we ask: how should an agent who has incoherent credences update when they learn new evidence? The standard Bayesian answer for coherent agents is that they should conditionalize; however, this updating rule is not defined for incoherent starting credences. We show how one of the main arguments for conditionalization, the Dutch strategy argument, can be extended to devise a target property for updating plans that can apply to them regardless of whether the agent starts out with coherent or incoherent credences. The main idea behind this extension is that the agent should avoid updating plans that increase the possible sure loss from Dutch strategies. This happens to be equivalent to avoiding updating plans that increase incoherence according to a distance-based incoherence measure.
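To see why the standard rule gives no guidance here, consider what happens when the ratio formula is applied to incoherent starting credences (my own toy case): with c(H \wedge E) = 0.5 and c(E) = 0.4, incoherent because a conjunction receives more credence than one of its conjuncts,

c(H \mid E) \;=\; \frac{c(H \wedge E)}{c(E)} \;=\; \frac{0.5}{0.4} \;=\; 1.25,

which is not a credence at all. Hence the need for a target property for updating plans that is well defined whether or not the starting point is coherent.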
Bayesian epistemology provides a popular and powerful framework for modeling rational norms on credences, including how rational agents should respond to evidence. The framework is built on the assumption that ideally rational agents have credences, or degrees of belief, that are representable by numbers that obey the axioms of probability. From there, further constraints are proposed regarding which credence assignments are rationally permissible, and how rational agents’ credences should change upon learning new evidence. While the details are hotly disputed, all flavors of Bayesianism purport to give us norms of ideal rationality. This raises the question of how exactly these norms apply to you and me, since perfect compliance with those ideal norms is out of reach for human thinkers. A common response is that Bayesian norms are ideals that human reasoners are supposed to approximate – the closer they come to being ideally rational, the better. To make this claim plausible, we need to make it more precise. In what sense is it better to be closer to ideally rational, and what is an appropriate measure of such closeness? This article sketches some possible answers to these questions.
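One natural way to make “closeness to the ideal” precise, offered here as an illustration rather than as the article’s settled answer, is to measure the divergence between a credence function c and the set \mathbb{P} of probability functions on the same algebra:

d(c, \mathbb{P}) \;=\; \min_{p \in \mathbb{P}} d(c, p),

for some divergence d, such as Euclidean distance. The open questions the abstract raises then become: which divergence is appropriate, and why does shrinking d(c, \mathbb{P}) make a thinker better off, whether in accuracy, in immunity to sure loss, or in both?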
This chapter is a philosophical survey of some leading approaches in formal epistemology in the so-called ‘Bayesian’ tradition. According to them, a rational agent’s degrees of belief—credences—at a time are representable with probability functions. We also canvass various further putative ‘synchronic’ rationality norms on credences. We then consider ‘diachronic’ norms that are thought to constrain how credences should respond to evidence. We discuss some of the main lines of recent debate, and conclude with some prospects for future research.
In his new book "The Importance of Being Rational", Errol Lord aims to give a real definition of the property of rationality in terms of normative reasons. If he can do so, his work is an important step towards a defense of ‘reasons fundamentalism’ – the thesis that all complex normative properties can be analyzed in terms of normative reasons. I focus on his analysis of epistemic rationality, which says that your doxastic attitudes are rational just in case they are correct responses to the objective normative reasons you possess. For some fact to be an objective normative reason to do something that you possess, you have to be in a position to know this fact and be able to competently use it as a reason to do that thing. Lord’s view is thus a knowledge-first view about possessing normative reasons. Throughout the book, Lord conceptualizes belief in the traditional tripartite way – if you take any attitude at all towards a proposition, then you either believe it, or disbelieve it, or you suspend judgment about it. Lord doesn’t discuss cases in which we’re uncertain. Yet, those cases are ubiquitous. I explore how his view can be extended to them. I first discuss whether his strategy for vindicating coherence requirements in terms of normative reasons can be applied to credences. I then ask how Lord can conceive of the doxastic attitudes that encode uncertainty.