There are many things—call them ‘experts’—that you should defer to in forming your opinions. The trouble is, many experts are modest: they’re less than certain that they are worthy of deference. When this happens, the standard theories of deference break down: the most popular (“Reflection”-style) principles collapse to inconsistency, while their most popular (“New-Reflection”-style) variants allow you to defer to someone while regarding them as an anti-expert. We propose a middle way: deferring to someone involves preferring to make any decision using their opinions instead of your own. In a slogan, deferring opinions is deferring decisions. Generalizing the proposal of Dorst (2020a), we first formulate a new principle that shows exactly how your opinions must relate to an expert’s for this to be so. We then build off the results of Levinstein (2019) and Campbell-Moore (2020) to show that this principle is also equivalent to the constraint that you must always expect the expert’s estimates to be more accurate than your own. Finally, we characterize the conditions an expert’s opinions must meet to be worthy of deference in this sense, showing how they sit naturally between the too-strong constraints of Reflection and the too-weak constraints of New Reflection.
Considerations of accuracy – the epistemic good of having credences close to truth-values – have led to the justification of a host of epistemic norms. These arguments rely on specific ways of measuring accuracy. In particular, the accuracy measure should be strictly proper. However, the main argument for strict propriety supports only weak propriety. Strict propriety nevertheless follows from weak propriety given strict truth-directedness and additivity, so no further argument is necessary.
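To make the key notion concrete, here is a minimal numerical sketch (our illustration, not drawn from the paper): a scoring rule is strictly proper when every credence uniquely minimizes its own expected inaccuracy. We check this for the Brier score on a single proposition, with hypothetical numbers.

```python
def brier(q, truth):
    """Brier inaccuracy of credence q when the proposition's truth value is truth (0 or 1)."""
    return (q - truth) ** 2

def expected_inaccuracy(p, q):
    """Expected Brier inaccuracy of credence q, computed by the lights of credence p."""
    return p * brier(q, 1) + (1 - p) * brier(q, 0)

p = 0.3
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda q: expected_inaccuracy(p, q))
print(best)  # 0.3 -- p itself minimizes its own expected Brier score
```

A weakly (but not strictly) proper rule would merely allow p to tie for the minimum; strict propriety rules the ties out.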
Permissivism about rationality is the view that there is sometimes more than one rational response to a given body of evidence. In this paper I discuss the relationship between permissivism, deference to rationality, and peer disagreement. I begin by arguing that—contrary to popular opinion—permissivism supports at least a moderate version of conciliationism. I then formulate a worry for permissivism. I show that, given a plausible principle of rational deference, permissive rationality seems to become unstable and to collapse into unique rationality. I conclude with a formulation of a way out of this problem on behalf of the permissivist.
We use a theorem from M. J. Schervish to explore the relationship between accuracy and practical success. If an agent is pragmatically rational, she will quantify the expected loss of her credence with a strictly proper scoring rule. Which scoring rule is right for her will depend on the sorts of decisions she expects to face. We relate this pragmatic conception of inaccuracy to the purely epistemic one popular among epistemic utility theorists.
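The flavor of the Schervish-style connection can be sketched numerically (a toy construction of ours, not the paper's exact statement): averaging an agent's practical losses over a family of simple act-or-refrain decisions recovers a proper scoring rule. At threshold c, acting yields 1 − c if the proposition is true and −c if false, and the agent acts exactly when her credence q exceeds c.

```python
def decision_loss(q, truth, c):
    """Loss of credence q at threshold c, relative to acting on the truth value."""
    acts = q > c
    if truth == 1:
        return 0.0 if acts else (1 - c)   # missed a good act
    return c if acts else 0.0             # took a bad act

def integrated_loss(q, truth, n=100_000):
    """Average decision loss over thresholds spread uniformly across (0, 1)."""
    return sum(decision_loss(q, truth, (i + 0.5) / n) for i in range(n)) / n

q = 0.7
# With uniformly weighted thresholds, the integrated loss is the Brier score halved.
print(round(integrated_loss(q, 1), 4), round((q - 1) ** 2 / 2, 4))
```

A different distribution over thresholds yields a different strictly proper rule, which is how the decisions an agent expects to face select the scoring rule that is right for her.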
Chance both guides our credences and is an objective feature of the world. How and why we should conform our credences to chance depends on the underlying metaphysical account of what chance is. I use considerations of accuracy (how close your credences come to truth-values) to propose a new way of deferring to chance. The principle I endorse, called the Trust Principle, requires chance to be a good guide to the world, permits modest chances, tells us how to listen to chance even when the chances are modest, and entails but is not entailed by the New Principle. As I show, a rational agent will obey this principle if and only if she expects chance to be at least as accurate as she is on every good way of measuring accuracy. Much of the discussion, and the technical results, extend beyond chance to deference to any kind of expert. Indeed, you will trust someone about a particular question just in case you expect that person to be more accurate than you are about that question.
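The expected-accuracy comparison at the heart of this idea can be sketched with toy numbers (our illustration, not the paper's formalism): an agent who treats an expert as calibrated expects the expert's credences to score better than her own fixed credence.

```python
# Worlds are (truth value, expert credence) pairs, with the agent's probability of each.
worlds = [
    (1, 0.8, 0.5 * 0.8), (0, 0.8, 0.5 * 0.2),   # expert says 0.8, treated as calibrated
    (1, 0.3, 0.5 * 0.3), (0, 0.3, 0.5 * 0.7),   # expert says 0.3, treated as calibrated
]

your_credence = sum(pr for v, e, pr in worlds if v == 1)  # 0.55

exp_brier_you = sum(pr * (your_credence - v) ** 2 for v, e, pr in worlds)
exp_brier_expert = sum(pr * (e - v) ** 2 for v, e, pr in worlds)
print(exp_brier_you, exp_brier_expert)  # the expert's expected Brier score is lower
```

Note that the expert here is modest in the relevant sense: neither of her possible credences is certain of the truth, yet the agent still expects her to do better.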
Evidential Decision Theory and Causal Decision Theory are the leading contenders as theories of rational action, but both face counterexamples. We present some new counterexamples, including one in which the optimal action is causally dominated. We also present a novel decision theory, Functional Decision Theory, which simultaneously solves both sets of counterexamples. Instead of considering which physical action of theirs would give rise to the best outcomes, FDT agents consider which output of their decision function would give rise to the best outcome. This theory relies on a notion of subjunctive dependence, where multiple implementations of the same mathematical function are considered to have identical results for logical rather than causal reasons. Taking these subjunctive dependencies into account allows FDT agents to outperform CDT and EDT agents in, for example, the presence of accurate predictors.
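The payoff of facing an accurate predictor can be illustrated with a toy Newcomb model (our construction, with hypothetical numbers): a predictor forecasts the agent's disposition with 99% accuracy, and FDT recommends one-boxing while CDT recommends two-boxing.

```python
ACC_PCT = 99  # hypothetical predictor accuracy, in percent (integer arithmetic for exactness)

def expected_payoff(one_boxer):
    """Average payoff of a fixed disposition, given the predictor forecasts it
    correctly ACC_PCT% of the time; the opaque box holds $1M iff one-boxing is predicted."""
    if one_boxer:
        return ACC_PCT * 1_000_000 // 100
    # A two-boxer takes the transparent $1K plus whatever is in the opaque box.
    return 1_000 + (100 - ACC_PCT) * 1_000_000 // 100

print(expected_payoff(True), expected_payoff(False))  # 990000 11000
```

The two-boxing act causally dominates at the moment of choice, yet the one-boxing disposition does far better on average, which is the kind of case the paper uses to motivate evaluating outputs of the decision function rather than physical acts.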
Leitgeb and Pettigrew argue that (1) agents should minimize the expected inaccuracy of their beliefs and (2) inaccuracy should be measured via the Brier score. They show that in certain diachronic cases, these claims require an alternative to Jeffrey Conditionalization. I claim that this alternative is an irrational updating procedure and that the Brier score, and quadratic scoring rules generally, should be rejected as legitimate measures of inaccuracy.
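For reference, the update rule at issue can be stated in two lines (this is the standard formulation of Jeffrey Conditionalization, not Leitgeb and Pettigrew's alternative): after an experience shifts your credences over a partition {E_i} to new values q_i, set P_new(A) = Σ_i q_i · P_old(A | E_i).

```python
def jeffrey_update(p_A_given_E, new_E_credences):
    """p_A_given_E[i] = P_old(A | E_i); new_E_credences[i] = q_i, summing to 1."""
    return sum(q * p for q, p in zip(new_E_credences, p_A_given_E))

# Hypothetical example: partition {E, not-E}; a glimpse in dim light
# shifts P(E) from 0.5 to 0.7 without making E certain.
p_A_given = [0.9, 0.2]  # P_old(A | E), P_old(A | not-E)
print(jeffrey_update(p_A_given, [0.7, 0.3]))  # P_new(A) is approximately 0.69
```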
In this paper, I develop a new kind of conciliatory answer to the problem of peer disagreement. Instead of trying to guide an agent’s updating behaviour in any particular disagreement, I establish constraints on an agent’s expected behaviour and argue that, in the long run, she should tend to be conciliatory toward her peers. I first claim that this macro-approach affords us new conceptual insight on the problem of peer disagreement and provides an important angle complementary to the standard micro-approaches in the literature. I then detail the import of two novel results based on accuracy-considerations that establish the following: An agent should, on average, give her peers equal weight. However, if the agent takes herself and her advisor to be reliable, she should usually give the party with a stronger opinion more weight. In other words, an agent’s response to peer disagreement should over the course of many disagreements average out to equal weight, but in any particular disagreement, her response should tend to deviate from equal weight in a way that systematically depends on the actual credences she and her advisor report.
Some propositions are more epistemically important than others. Further, how important a proposition is is often a contingent matter—some propositions count more in some worlds than in others. Epistemic Utility Theory cannot accommodate this fact, at least not in any standard way. For EUT to be successful, legitimate measures of epistemic utility must be proper, i.e., every probability function must assign itself maximum expected utility. Once we vary the importance of propositions across worlds, however, normal measures of epistemic utility become improper. I argue there isn’t any good way out for EUT.
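The trouble can be verified numerically with toy numbers of our own choosing: weight a proposition's Brier inaccuracy by a world-dependent importance factor, and some credence other than your own minimizes your expected inaccuracy, so the measure is improper.

```python
def weighted_brier(q, truth, weight):
    """Brier inaccuracy of credence q, scaled by the proposition's importance in that world."""
    return weight * (q - truth) ** 2

# One proposition; suppose it counts three times as much in the world where it is true.
WEIGHTS = {1: 3.0, 0: 1.0}

def expected_inaccuracy(p, q):
    return p * weighted_brier(q, 1, WEIGHTS[1]) + (1 - p) * weighted_brier(q, 0, WEIGHTS[0])

p = 0.5
candidates = [i / 1000 for i in range(1001)]
best = min(candidates, key=lambda q: expected_inaccuracy(p, q))
print(best)  # 0.75, not p: credence 0.5 expects a different credence to do better
```

Analytically, the minimizer here is 3p / (1 + 2p), which equals p only at the endpoints 0 and 1, so the impropriety is not an artifact of these particular numbers.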
In this paper, we illustrate some serious difficulties involved in conveying information about uncertain risks and securing informed consent for risky interventions in a clinical setting. We argue that in order to secure informed consent for a medical intervention, physicians often need to do more than report a bare, numerical probability value. When probabilities are given, securing informed consent generally requires communicating how probability expressions are to be interpreted and communicating something about the quality and quantity of the evidence for the probabilities reported. Patients may also require guidance on how probability claims may or may not be relevant to their decisions, and physicians should be ready to help patients understand these issues.
Our decision-theoretic states are not luminous. We are imperfectly reliable at identifying our own credences, utilities and available acts, and thus can never be more than imperfectly reliable at identifying the prescriptions of decision theory. The lack of luminosity affords decision theory a remarkable opportunity — to issue guidance on the basis of epistemically inaccessible facts. We show how a decision theory can guarantee action in accordance with contingent truths about which an agent is arbitrarily uncertain. It may seem that such advantages would require dubiously adverting to externalist facts that go beyond the internalism of traditional decision theory, but this is not so. Using only the standard repertoire of decision-theoretic tools, we show how to modify existing decision theories to take advantage of this opportunity. These improved decision theories require agents to maximize conditional expected utility — expected utility conditional upon an agent’s actual decision situation. We call such modified decision theories ‘self-confident’. These self-confident decision theories have a distinct advantage over standard decision theories: their prescriptions are better.
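A crude toy (our construction, not the authors' formal framework) conveys the contrast: a standard theory averages utility over the decision situations the agent thinks she might be in, while a self-confident theory maximizes expected utility conditional on the situation she is actually in, even though she cannot identify it.

```python
# Two candidate decision situations and the agent's (hypothetical) credences over them.
CREDENCE = {"s1": 0.2, "s2": 0.8}

# Utility of each available act in each situation.
UTILITY = {
    ("stay", "s1"): 10, ("stay", "s2"): 0,
    ("go", "s1"): 0,    ("go", "s2"): 1,
}

def unconditional_choice():
    """Standard theory: maximize expected utility averaged over situations."""
    return max(["stay", "go"],
               key=lambda a: sum(CREDENCE[s] * UTILITY[(a, s)] for s in CREDENCE))

def self_confident_choice(actual):
    """Self-confident theory: maximize expected utility conditional on the actual situation."""
    return max(["stay", "go"], key=lambda a: UTILITY[(a, actual)])

print(unconditional_choice())       # 'stay': EU 2.0 beats 0.8
print(self_confident_choice("s2"))  # 'go': guided by a fact the agent cannot access
```

In this toy the conditioning is done by brute lookup; the paper's point is that the conditional-expected-utility prescription can be stated with only internalist, decision-theoretic resources.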
Consequentialist theories determine rightness solely based on real or expected consequences. Although such theories are popular, they often have difficulty with generalizing intuitions, which demand concern for questions like “What if everybody did that?” Rule consequentialism attempts to incorporate these intuitions by shifting the locus of evaluation from the consequences of acts to those of rules. However, detailed rule-consequentialist theories seem ad hoc or arbitrary compared to act consequentialist ones. We claim that generalizing can be better incorporated into consequentialism by keeping the locus of evaluation on acts but adjusting the decision theory behind act selection. Specifically, we should adjust which types of dependencies the theory takes to be decision-relevant. Using this strategy, we formulate a new theory, generalized act consequentialism, which we argue is more compelling than rule consequentialism both in modeling the actual reasoning of generalizers and in delivering correct verdicts.