This book defends the view that any adequate account of rational decision making must take a decision maker's beliefs about causal relations into account. The early chapters of the book introduce the non-specialist to the rudiments of expected utility theory. The major technical advance offered by the book is a 'representation theorem' showing that both causal decision theory and its main rival, Richard Jeffrey's logic of decision, are instances of a more general conditional decision theory. The book solves a long-standing problem for Jeffrey's theory by showing for the first time how to obtain a unique utility and probability representation for preferences and judgements of comparative likelihood. The book also contains a major new discussion of what it means to suppose that some event occurs or that some proposition is true. It is the most complete and robust defence of causal decision theory available.
The pragmatic character of the Dutch book argument makes it unsuitable as an "epistemic" justification for the fundamental probabilist dogma that rational partial beliefs must conform to the axioms of probability. To secure an appropriately epistemic justification for this conclusion, one must explain what it means for a system of partial beliefs to accurately represent the state of the world, and then show that partial beliefs that violate the laws of probability are invariably less accurate than they could otherwise be. The first task can be accomplished once we realize that the accuracy of systems of partial beliefs can be measured on a gradational scale that satisfies a small set of formal constraints, each of which has a sound epistemic motivation. When accuracy is measured in this way, it can be shown that any system of degrees of belief that violates the axioms of probability can be replaced by an alternative system that obeys the axioms and yet is more accurate in every possible world. Since epistemically rational agents must strive to hold accurate beliefs, this establishes conformity with the axioms of probability as a norm of epistemic rationality, whatever its prudential merits or defects might be.
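To make the dominance claim concrete, here is a minimal numerical sketch of my own (the incoherent credences, the coherent alternative, and the Brier score are all illustrative choices, not taken from the text): an agent who assigns credence 0.2 to X and 0.2 to not-X violates additivity, and the coherent assignment of 0.5 to each is strictly more accurate, by the Brier measure, in both possible worlds.

```python
# Toy check of accuracy dominance (illustrative example; the argument in the
# text is stated for gradational accuracy measures generally, not just Brier).

def brier(credences, truth_values):
    """Sum of squared gaps between credences and truth-values (1 = true, 0 = false)."""
    return sum((c - t) ** 2 for c, t in zip(credences, truth_values))

incoherent = (0.2, 0.2)  # credences in X and not-X; 0.2 + 0.2 != 1, so not a probability
coherent = (0.5, 0.5)    # obeys the probability axioms

worlds = {"X true": (1, 0), "X false": (0, 1)}
for name, truth in worlds.items():
    print(name, "incoherent:", brier(incoherent, truth), "coherent:", brier(coherent, truth))
# The coherent credences have strictly lower inaccuracy in every world,
# so the incoherent assignment is accuracy-dominated.
```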
Andy Egan has recently produced a set of alleged counterexamples to causal decision theory (CDT) in which agents are forced to decide among causally unratifiable options, thereby making choices they know they will regret. I show that, far from being counterexamples, CDT gets Egan's cases exactly right. Egan thinks otherwise because he has misapplied CDT by requiring agents to make binding choices before they have processed all available information about the causal consequences of their acts. I elucidate CDT in a way that makes it clear where Egan goes wrong, and which explains why his examples pose no threat to the theory. My approach has similarities to a modification of CDT proposed by Frank Arntzenius, but it differs in the significance that it assigns to potential regrets. I maintain, contrary to Arntzenius, that an agent facing Egan's decisions can rationally choose actions that she knows she will later regret. All rationality demands of agents is that they maximize unconditional causal expected utility from an epistemic perspective that accurately reflects all the available evidence about what their acts are likely to cause. This yields correct answers even in outlandish cases in which one is sure to regret whatever one does.
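For reference, the quantity being maximized can be written schematically, in a standard formulation of causal expected utility (not necessarily the paper's own notation), as U(A) = \sum_K P(K) \, V(A \wedge K), where the K are causal dependency hypotheses about what outcomes the act A would promote, P is the agent's unconditional credence over those hypotheses, and V measures the desirability of outcomes. The point stressed above is that P must reflect all the evidence available at the moment of choice about what the acts are likely to cause, even when the agent knows she will later regret the act that maximizes this quantity.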
Confirmation theory is intended to codify the evidential bearing of observations on hypotheses, characterizing relations of inductive “support” and “countersupport” in full generality. The central task is to understand what it means to say that datum E confirms or supports a hypothesis H when E does not logically entail H.
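On the standard Bayesian explication, which is one common way of making this notion of support precise rather than the only one, E incrementally confirms H just in case conditioning on E raises H's probability, i.e. P(H | E) > P(H), and E countersupports H just in case P(H | E) < P(H); the interesting cases are precisely those in which the inequality holds without E logically entailing H.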
Richard Jeffrey long held that decision theory should be formulated without recourse to explicitly causal notions. Newcomb problems stand out as putative counterexamples to this ‘evidential’ decision theory. Jeffrey initially sought to defuse Newcomb problems by appeal to the doctrine of ratificationism, but later came to see this as problematic. We will see that Jeffrey's worries about ratificationism were not compelling, but that valid ratificationist arguments implicitly presuppose causal decision theory. In later work, Jeffrey argued that Newcomb problems are not decisions at all because agents who face them possess so much evidence about correlations between their actions and states of the world that they are unable to regard their deliberate choices as causes of outcomes, and so cannot see themselves as making free choices. Jeffrey's reasoning goes wrong because it fails to recognize that an agent's beliefs about her immediately available acts are so closely tied to the immediate causes of these actions that she can create evidence that outweighs any antecedent correlations between acts and states. Once we recognize that deliberating agents are free to believe what they want about their own actions, it will be clear that Newcomb problems are indeed counterexamples to evidential decision theory.
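To see why Newcomb problems have this reputation, here is the usual arithmetic with illustrative figures of my own (a predictor of 0.99 reliability, $1,000,000 in the opaque box, $1,000 in the transparent one). Evidential decision theory compares V(one-box) = 0.99 · $1,000,000 = $990,000 with V(two-box) = 0.99 · $1,000 + 0.01 · $1,001,000 = $11,000 and so recommends one-boxing, whereas causal decision theory observes that, however the prediction already went, two-boxing causally yields exactly $1,000 more than one-boxing, and so recommends taking both boxes.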
Isaac Levi has long criticized causal decision theory on the grounds that it requires deliberating agents to make predictions about their own actions. A rational agent cannot, he claims, see herself as free to choose an act while simultaneously making a prediction about her likelihood of performing it. Levi is wrong on both points. First, nothing in causal decision theory forces agents to make predictions about their own acts. Second, Levi's arguments for the "deliberation crowds out prediction" thesis rely on a flawed model of the measurement of belief. Moreover, the ability of agents to adopt beliefs about their own acts during deliberation is essential to any plausible account of human agency and freedom. Though these beliefs play no part in the rationalization of actions, they are required to account for the causal genesis of behavior. To explain the causes of actions we must recognize that (a) an agent cannot see herself as entirely free in the matter of A unless she believes her decision to perform A will cause A, and (b) she cannot come to a deliberate decision about A unless she adopts beliefs about her decisions. Following Elizabeth Anscombe and David Velleman, I argue that an agent's beliefs about her own decisions are self-fulfilling, and that this can be used to explain away the seemingly paradoxical features of act probabilities.
Bayesianism claims to provide a unified theory of epistemic and practical rationality based on the principle of mathematical expectation. In its epistemic guise it requires believers to obey the laws of probability. In its practical guise it asks agents to maximize their subjective expected utility. Joyce's primary concern is Bayesian epistemology and its five pillars: people have beliefs and conditional beliefs that come in varying gradations of strength; a person believes a proposition strongly to the extent that she presupposes its truth in her practical and theoretical reasoning; rational graded beliefs must conform to the laws of probability; evidential relationships should be analyzed subjectively in terms of relations among a person's graded beliefs and conditional beliefs; and empirical learning is best modeled as probabilistic conditioning. Joyce explains each of these claims and evaluates some of the justifications that have been offered for them, including "Dutch book," "decision-theoretic," and "non-pragmatic" arguments. He also addresses some common objections to Bayesianism, in particular the "problem of old evidence" and the complaint that the view degenerates into an untenable subjectivism. The essay closes by painting a picture of Bayesianism as an "internalist" theory of reasons for action and belief that can be fruitfully augmented with "externalist" principles of practical and epistemic rationality.
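The last of these pillars, conditioning, has a familiar canonical statement, recorded here only for orientation: upon learning E and nothing more, the agent's new credence in any hypothesis H should be her old credence in H conditional on E, that is, P_new(H) = P_old(H | E) = P_old(E | H) P_old(H) / P_old(E), so that empirical learning amounts to reallocating credence by Bayes' theorem.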
I argue that one central aspect of the epistemology of causation, the use of causes as evidence for their effects, is largely independent of the metaphysics of causation. In particular, I use the formalism of Bayesian causal graphs to factor the incremental evidential impact of a cause for its effect into a direct cause-to-effect component and a backtracking component. While the "backtracking" evidence that causes provide about earlier events often obscures things, once we restrict our attention to the cause-to-effect component it is true to say that promoting (inhibiting) causes raise (lower) the probabilities of their effects. This factoring assumes the same form whether causation is given an interventionist, counterfactual, or probabilistic interpretation. Whether we think about causation in terms of interventions and causal graphs, counterfactuals and imaging functions, or probability raising against the background of causally homogeneous partitions, if we describe the essential features of a situation correctly then the incremental evidence that a cause provides for its effect in virtue of being its cause will be the same.
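As a schematic illustration of the factoring, in the graphical idiom but not necessarily the paper's own notation, suppose C is a cause of E and the two also share a common cause B. The raw incremental evidence P(E | C) − P(E) mixes two ingredients: a cause-to-effect component obtained by holding the background fixed, as in the interventionist quantity P(E | do(C)) = \sum_b P(b) \, P(E | C, b), and a backtracking component that enters because observing C also shifts credence over B, since the observational quantity is P(E | C) = \sum_b P(b | C) \, P(E | C, b) and P(b | C) generally differs from P(b). Setting the backtracking term aside is what licenses the claim that promoting causes raise, and inhibiting causes lower, the probabilities of their effects.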
In The Logic of Decision Richard Jeffrey defends a version of expected utility theory that advises agents to choose acts with an eye to securing evidence for thinking that desirable results will ensue. Proponents of "causal" decision theory have argued that Jeffrey's account is inadequate because it fails to properly discriminate the causal features of acts from their merely evidential properties. Jeffrey's approach has also been criticized on the grounds that it makes it impossible to extract a unique probability/utility representation from a sufficiently rich system of preferences (given a zero and unit for measuring utility). The existence of these problems should not blind us to the fact that Jeffrey's system has advantages that no other decision theory can match: it can be underwritten by a particularly compelling representation theorem proved by Ethan Bolker, and it has a property called partition invariance that every reasonable theory of rational choice must possess. I shall argue that the non-uniqueness problem can be finessed, and that it is impossible to adequately formulate causal decision theory, or any other theory, without using Jeffrey's theory as one's basic analysis of rational desire.
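For orientation, Jeffrey's evidential expected utility, his 'desirability', of an act A is V(A) = \sum_S P(S | A) \, V(A \wedge S), and partition invariance is the fact that this value does not depend on which partition of states {S} is used to compute it; causal formulations, by contrast, are standardly stated relative to a privileged partition of causal hypotheses, which is one way of seeing the claim, argued for here, that causal decision theory must be built on Jeffrey's V as its basic analysis of rational desire.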
Colin Howson has recently argued that accuracy arguments for probabilism fail because they assume a privileged ‘coding’ in which TRUE is assigned the value 1 and FALSE is assigned the value 0. I explain why this is wrong by first showing that Howson's objections are based on a misconception about the way in which degrees of confidence are measured, and then reformulating the accuracy argument in a way that manifestly does not depend on the coding of truth-values. Along the way, I will explain how to formulate the laws of probability and rational expectation in a scale-invariant way, and how to properly understand the values of the credence functions that we use to represent rational degrees of confidence.
Richard Bradley's landmark book Decision Theory with a Human Face makes seminal contributions to nearly every major area of decision theory, as well as most areas of formal epistemology and many areas of semantics. In addition to sketching Bradley's distinctive semantics for conditional beliefs and desires, I will explain his theory of conditional desire, focusing particularly on his claim that we should not desire events, either positively or negatively, under the supposition that they will occur. I shall argue, to the contrary, that permitting non-trivial desirabilities for events whose occurrence is known or assumed is both more intuitively plausible and more theoretically fruitful than Bradley's approach. In the course of the discussion I will contrast Bradley's broadly evidentialist picture of decision theory with my own more orthodox causal approach.
Recently several authors have argued that accuracy-first epistemology ends up licensing problematic epistemic bribes. They charge that it is better, given the accuracy-first approach, to deliberately form one false belief if this will lead to forming many other true beliefs. We argue that this is not a consequence of the accuracy-first view. If one forms one false belief and a number of other true beliefs, then one is committed to many other false propositions, e.g., the conjunction of that false belief with any of the true beliefs. Once we properly account for all the falsehoods that are adopted by the person who takes the bribe, it turns out that the bribe does not increase accuracy.
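A toy version of the commitment point (my own illustration, which tracks only the conjunction example mentioned above, not the paper's full accuracy bookkeeping) can be run mechanically: pair one false 'bribed' belief with several true ones and count the false conjunctions it generates.

```python
# Toy illustration (hypothetical belief set): one false belief paired with true
# beliefs commits the believer to a false conjunction for every pairing.
from itertools import combinations

beliefs = {"F": False, "T1": True, "T2": True, "T3": True, "T4": True}

# Pairwise conjunctions the believer is committed to, with their truth-values.
conjunctions = {f"{p} & {q}": beliefs[p] and beliefs[q]
                for p, q in combinations(beliefs, 2)}

false_commitments = [name for name, truth in conjunctions.items() if not truth]
print("False conjunctions:", false_commitments)  # every conjunction containing F
# The paper's point: once all such commitments are counted, the bribe does not
# increase accuracy.
```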
Philosophers can learn a lot about scientific methodology when great scientists square off to debate the foundations of their discipline. The Leibniz/Newton controversy over the nature of physical space and the Einstein/Bohr exchanges over quantum theory provide paradigm examples of this phenomenon. David Howie's splendid recent book describes another philosophically laden dispute of this sort. Throughout the 1930s, R. A. Fisher and Harold Jeffreys squabbled over the methodology for the nascent discipline of statistics. Their debate has come to symbolize the controversy between the "frequentist" and "Bayesian" schools of statistical thought. Though much has been written about the Fisher/Jeffreys exchange, Howie's book is now the definitive treatment of the subject. Though billed as a piece of history of science, it brims with philosophical insights.