In this important book, Ellery Eells explores and refines philosophical conceptions of probabilistic causality. In a probabilistic theory of causation, causes increase the probability of their effects rather than necessitate their effects in the ways traditional deterministic theories have specified. Philosophical interest in this subject arises from attempts to understand population sciences as well as indeterminism in physics. Taking into account issues involving spurious correlation, probabilistic causal interaction, disjunctive causal factors, and temporal ideas, Professor Eells advances the analysis of what it is for one factor to be a positive causal factor for another. A salient feature of the book is a theory of token level probabilistic causation in which the evolution of the probability of a later event from an earlier event is central. This will be a book of crucial significance to philosophers of science and metaphysicians; it will also prove stimulating to many economists, psychologists, and physicists.
First published in 1982, Ellery Eells' original work on rational decision making had extensive implications for probability theorists, economists, statisticians and psychologists concerned with decision making and the employment of Bayesian principles. His analysis of the philosophical and psychological significance of Bayesian decision theories, causal decision theories and Newcomb's paradox continues to be influential in philosophy of science. His book is now revived for a new generation of readers and presented in a fresh twenty-first-century series livery, including a specially commissioned preface written by Brian Skyrms, illuminating its continuing importance and relevance to philosophical enquiry.
Several forms of symmetry in degrees of evidential support are considered. Some of these symmetries are shown not to hold in general. This has implications for the adequacy of many measures of degree of evidential support that have been proposed and defended in the philosophical literature.
In recent years, the traditional Bayesian theory of rational decision making, based on subjective calculations of expected utility, has faced a powerful attack from philosophers such as David Lewis and Brian Skyrms, who advance an alternative causal decision theory. The test they present for the Bayesian is exemplified in the decision problem known as 'Newcomb's paradox' and in related decision problems, and is held to support the prescriptions of the causal theory. As well as his conclusions, the concepts and methods Professor Eells introduces in the course of his analyses have extensive implications, not solely for probability theorists narrowly conceived, but for economists, statisticians and psychologists concerned with decision making and the employment of Bayesian principles. They and their students will, in addition, find the early chapters of great use as a background and introduction to the subject as a whole.
After clarifying the probabilistic conception of causality suggested by Good (1961-2), Suppes (1970), Cartwright (1979), and Skyrms (1980), we prove a sufficient condition for transitivity of causal chains. The bearing of these considerations on the units of selection problem in evolutionary theory and on the Newcomb paradox in decision theory is then discussed.
This collection of essays is on the relation between probabilities, especially conditional probabilities, and conditionals. It provides negative results which sharply limit the ways conditionals can be related to conditional probabilities. There are also positive ideas and results which will open up areas of research. The collection is intended to honour Ernest W. Adams, whose seminal work is largely responsible for creating this area of inquiry. As well as describing, evaluating, and applying Adams's work, the contributions extend his ideas in directions he may or may not have anticipated, but that he certainly inspired. In addition to a wide range of philosophers of science, the volume should interest computer scientists and linguists.
Human beings are peculiar. In laboratory experiments, they often cooperate in one-shot prisoners' dilemmas, they frequently offer 1/2 and reject low offers in the ultimatum game, and they often bid 1/2 in the game of divide-the-cake. All these behaviors are puzzling from the point of view of game theory. The first two are irrational, if utility is measured in a certain way. The last isn't positively irrational, but it is no more rational than other possible actions, since there are infinitely many other Nash equilibria besides the one in which both players bid 1/2. At the same time, these behaviors seem to indicate that people are sometimes inclined to be cooperative, fair, and just. In his stimulating new book, Brian Skyrms sets himself the task of showing why these inclinations evolved, or how they might have evolved, under the pressure of natural selection. The goal is not to justify our ethical intuitions, but to explain why we have them.
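The point about divide-the-cake can be checked directly. Under the standard payoff rule (each player receives her bid if the two bids are jointly feasible, otherwise nothing), every pair of bids summing to 1 is a Nash equilibrium, not just the fair split. The sketch below is illustrative only; the function names and the discrete grid search over deviations are my own assumptions, not part of the review:

```python
def payoff(my_bid, other_bid):
    # Divide-the-cake: if the bids are jointly feasible, each player
    # gets what she asked for; otherwise both get nothing.
    return my_bid if my_bid + other_bid <= 1 else 0.0

def is_nash(b1, b2, grid=101):
    # Check that neither player can gain by unilaterally deviating
    # to any bid on a discrete grid over [0, 1].
    bids = [i / (grid - 1) for i in range(grid)]
    p1_ok = all(payoff(d, b2) <= payoff(b1, b2) for d in bids)
    p2_ok = all(payoff(d, b1) <= payoff(b2, b1) for d in bids)
    return p1_ok and p2_ok

# The fair split is an equilibrium, but so is every pair (x, 1 - x):
print(is_nash(0.5, 0.5))   # True
print(is_nash(0.3, 0.7))   # True
print(is_nash(0.9, 0.1))   # True
```

Bidding more than one's equilibrium share makes the bids infeasible (payoff 0), while bidding less simply yields less; that is why the unequal splits are just as much equilibria as the fair one.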
Bayesian epistemology suggests various ways of measuring the support that a piece of evidence provides a hypothesis. Such measures are defined in terms of a subjective probability assignment, pr, over propositions entertained by an agent. The most standard measure (where "H" stands for "hypothesis" and "E" stands for "evidence") is the difference measure: d(H,E) = pr(H/E) - pr(H). This may be called a "positive (probabilistic) relevance measure" of confirmation, since, according to it, a piece of evidence E qualitatively confirms a hypothesis H if and only if pr(H/E) > pr(H), where qualitative disconfirmation is characterized by replacing ">" with "<", and qualitative evidential irrelevance by replacing ">" with "=". Other more or less standard positive relevance measures that have been proposed are the log-ratio measure: r(H,E) = log[pr(H/E)/pr(H)], and the log-likelihood-ratio measure: l(H,E) = log[pr(E/H)/pr(E/~H)].
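A small numerical sketch may help fix the three measures. The joint probabilities below are invented for illustration; the point is that d, r, and l agree qualitatively, since each is positive exactly when pr(H/E) > pr(H):

```python
import math

# Hypothetical joint distribution over H (hypothesis) and E (evidence);
# these numbers are illustrative only, not from the text.
p_H_and_E = 0.30
p_H_and_notE = 0.10
p_notH_and_E = 0.20
p_notH_and_notE = 0.40

p_H = p_H_and_E + p_H_and_notE             # marginal: 0.30 + 0.10 = 0.40
p_E = p_H_and_E + p_notH_and_E             # marginal: 0.30 + 0.20 = 0.50
p_H_given_E = p_H_and_E / p_E              # 0.30 / 0.50 = 0.60
p_E_given_H = p_H_and_E / p_H              # 0.30 / 0.40 = 0.75
p_E_given_notH = p_notH_and_E / (1 - p_H)  # 0.20 / 0.60 = 1/3

# Difference measure: d(H,E) = pr(H/E) - pr(H)
d = p_H_given_E - p_H

# Log-ratio measure: r(H,E) = log[pr(H/E)/pr(H)]
r = math.log(p_H_given_E / p_H)

# Log-likelihood-ratio measure: l(H,E) = log[pr(E/H)/pr(E/~H)]
l = math.log(p_E_given_H / p_E_given_notH)

# All three are positive here, since pr(H/E) = 0.6 > 0.4 = pr(H):
# E qualitatively confirms H on every positive relevance measure.
print(d, r, l)
```

Although the three measures always agree on the qualitative verdict (confirms, disconfirms, irrelevant), they can order pairs of evidence-hypothesis cases differently, which is why the choice among them matters in the literature the abstract refers to.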
It is possible for a causal factor to raise the probability of a second factor in some situations while lowering the probability of the second factor in other situations. Must a genuine cause always raise the probability of a genuine effect of it? When it does not always do so, an "interaction" with some third factor may be the reason. I discuss causal interaction from the perspectives of Giere's counterfactual characterization of probabilistic causal connection (1979, 1980) and the "contextual unanimity" model developed by, among others, Cartwright (1979) and Skyrms (1980). I argue that the contextual unanimity theory must exercise care, in a new way that seems to have gone unnoticed, in order to adequately accommodate the phenomenon, and that the counterfactual theory must be substantially revised; although it will still, pending clarification of a second kind of revision, be unable to accommodate a kind of interaction exemplified in cases like those described by Sober (1982).
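The opening observation, that one factor can raise the probability of another in some situations while lowering it in others, is easy to exhibit numerically. All the probabilities below are invented for illustration; they show a factor C that lowers the chance of E in one background context yet raises it in the population as a whole, exactly the kind of case that puts pressure on a simple probability-raising requirement:

```python
# Illustrative (invented) context-specific probabilities:
# C raises E's chance in background context B, lowers it in ~B.
p_E_given_C_B = 0.8        # vs. 0.6 without C: C raises E within B
p_E_given_notC_B = 0.6
p_E_given_C_notB = 0.2     # vs. 0.4 without C: C lowers E within ~B
p_E_given_notC_notB = 0.4

# Suppose C-individuals are found mostly in context B, and
# non-C individuals mostly in ~B (again, invented numbers).
p_B_given_C = 0.9
p_B_given_notC = 0.1

# Population-level probabilities, by the law of total probability:
p_E_given_C = (p_B_given_C * p_E_given_C_B
               + (1 - p_B_given_C) * p_E_given_C_notB)
p_E_given_notC = (p_B_given_notC * p_E_given_notC_B
                  + (1 - p_B_given_notC) * p_E_given_notC_notB)

# C raises E's probability population-wide (0.74 vs. 0.42) even
# though it lowers it within context ~B: "contextual unanimity" fails.
print(p_E_given_C, p_E_given_notC)
```

A contextual-unanimity theory denies that C is a positive causal factor for E here, precisely because C does not raise E's probability in every background context, whatever the population-level figures say.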
John Dupré (1984) has recently criticized the theory of probabilistic causality developed by, among others, Good (1961-62), Suppes (1970), Cartwright (1979), and Skyrms (1980). He argues that there is a tension or incompatibility between one of its central requirements for the presence of a causal connection, on the one hand, and a feature of the theory pointed out by Elliott Sober and me (1983), on the other. He also argues that the requirement just alluded to should be given up. I defend the theory against Dupré's criticisms and conclude with comments on Dupré's appraisal of the bearing of his arguments on the nature of probabilistic causal laws.
I argue that to the extent to which philosophical theories of objective probability have offered theoretically adequate conceptions of objective probability (in connection with such desiderata as causal and explanatory significance, applicability to single cases, etc.), they have failed to satisfy a methodological standard -- roughly, a requirement to the effect that the conception offered be specified with the precision appropriate for a physical interpretation of an abstract formal calculus and be fully explicated in terms of concepts, objects or phenomena understood independently of the idea of physical probability. The significance of this, and of the suggested methodological standard, is then briefly discussed.
After a brief presentation of evidential decision theory, causal decision theory, and Newcomb-type prima facie counterexamples to the evidential theory, three kinds of "metatickle" defenses of the evidential theory are discussed. Each has its weaknesses, but one of them seems stronger than the other two. The weaknesses of the best of the three, and the intricacy of metatickle analysis, do not constitute an advantage of causal decision theory over the evidential theory, however. It is argued, by way of an example, that causal decision theory also stands in need of a metatickle defense.
One of us (Eells 1982) has defended traditional evidential decision theory against prima facie Newcomb counterexamples by assuming that a common cause forms a conjunctive fork with its joint effects. In this paper, the evidential theory is defended without this assumption. The suggested rationale shows that the theory's assumptions are not about the nature of causality, but about the nature of rational deliberation. These presuppositions are weak enough for the argument to count as a strong justification of the evidential theory.
Popper and Miller argued, in a 1983 paper, that there is no such thing as 'probabilistic inductive support' of hypotheses. They show how to divide a hypothesis into two "parts," where evidence only 'probabilistically supports' the "part" that the evidence 'deductively' implies, and 'probabilistically countersupports' the "rest" of the hypothesis. I argue that by distinguishing between 'support that is purely deductive in nature' and 'support of a deductively implied hypothesis', we can see that their argument fails to establish (in any important way of interpreting it) their conclusion that "all probabilistic support is purely deductive." Their argument is 'not' "completely devastating to the inductive interpretation of the calculus of probability," as claimed.
I explore the problem of "probabilistic causal preemption" in the context of a "propensity trajectory" theory of singular probabilistic causation. This involves a particular conception of events and a substantive thesis concerning events so conceived.
I defend evidential decision theory and the theory of deliberation-probability dynamics from a recent criticism advanced by Jordan Howard Sobel. I argue that his alleged counterexample to the theories, called the Popcorn Problem, is not a genuine counterexample.
Along with such criteria as truth, comprehensiveness, explanatory adequacy, and simplicity, philosophers of science usually also mention predictive accuracy as a criterion of theory choice. But while philosophers have devoted attention to the problem of the logical structure of scientific prediction, it seems that little attention has been devoted to the difficult question of what precisely constitutes predictive accuracy, at least ‘predictive accuracy’ in the sense in which I will discuss it here. I will in this paper discuss the role of predictive accuracy in theory choice. But before that, I will address the problem of what constitutes predictive accuracy more generally and independently of its role in theory choice. I will approach the problem of predictive accuracy from a pragmatic point of view, and then try to assess the role of predictive accuracy in theory choice from that perspective.
Richard Otte (1985) has recently criticized the resolution of Simpson's paradox given by Nancy Cartwright (1979). He argues that there are difficulties with the version of the theory of probabilistic causality that Cartwright has developed, and that there is a way in which Simpson's paradox can arise that Cartwright's theory cannot handle. And Otte develops his own theory of probabilistic causality. I defend Cartwright's solution, and I argue that there are difficulties with the theory of probabilistic causality that Otte proposes.
In a recent commendable article, Quentin Smith (1987) exposes fatal flaws in several recent attempts to demonstrate that it is logically impossible for the past to be infinite. However, his analysis of one of these flawed arguments--involving an interesting version of Russell's "Tristram Shandy paradox"--is off the mark, as I show in this paper.
This paper distinguishes between "descriptive" and "normative" conceptions of Bayesian principles of rationality, both in the context of inference and in the context of decision. I emphasize an idea according to which, "You have to work with what you have to work with" - that is, that rationality is a relation among old beliefs, new information, and new beliefs and among beliefs, desires, preferences, and choices. According to this conception of rationality, one's current beliefs and desires are not themselves subject to evaluation as to their rationality. From this perspective, rationality is about how we move from old beliefs to new beliefs when confronted with evidence, and about how our preferences are structured given what we believe and what we want. I present some formal details of this perspective and discuss several criticisms of it.