I explore the problem of "probabilistic causal preemption" in the context of a "propensity trajectory" theory of singular probabilistic causation. This involves a particular conception of events and a substantive thesis concerning events so conceived.
Several forms of symmetry in degrees of evidential support are considered. Some of these symmetries are shown not to hold in general. This has implications for the adequacy of many measures of degree of evidential support that have been proposed and defended in the philosophical literature.
Bayesian epistemology suggests various ways of measuring the support that a piece of evidence provides a hypothesis. Such measures are defined in terms of a subjective probability assignment, pr, over propositions entertained by an agent. The most standard measure (where “H” stands for “hypothesis” and “E” stands for “evidence”) is the difference measure: d(H,E) = pr(H/E) - pr(H). This may be called a “positive (probabilistic) relevance measure” of confirmation, since, according to it, a piece of evidence E qualitatively confirms a hypothesis H if and only if pr(H/E) > pr(H), where qualitative disconfirmation is characterized by replacing “>” with “<”, and confirmational neutrality by replacing “>” with “=”. Other more or less standard positive relevance measures that have been proposed are: the log-ratio measure, r(H,E) = log[pr(H/E)/pr(H)], and the log-likelihood-ratio measure, l(H,E) = log[pr(E/H)/pr(E/~H)].
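As a rough illustration of how these measures behave (the joint probabilities below are hypothetical, not taken from the paper), one can compute d, r, and l from a toy distribution over H and E; on any nondegenerate assignment the three agree qualitatively, each being positive exactly when pr(H/E) > pr(H).

```python
# Illustrative sketch with hypothetical numbers: the three standard measures
# of evidential support computed from a toy joint distribution over H and E.
import math

# hypothetical joint probabilities pr(H&E), pr(H&~E), pr(~H&E), pr(~H&~E)
pr_HE, pr_HnE, pr_nHE, pr_nHnE = 0.30, 0.10, 0.20, 0.40

pr_H = pr_HE + pr_HnE                 # pr(H)
pr_E = pr_HE + pr_nHE                 # pr(E)
pr_H_given_E = pr_HE / pr_E           # pr(H/E)
pr_E_given_H = pr_HE / pr_H           # pr(E/H)
pr_E_given_nH = pr_nHE / (1 - pr_H)   # pr(E/~H)

d = pr_H_given_E - pr_H                      # difference measure
r = math.log(pr_H_given_E / pr_H)            # log-ratio measure
l = math.log(pr_E_given_H / pr_E_given_nH)   # log-likelihood-ratio measure

print(d, r, l)   # all positive here, since E is positively relevant to H
```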
Human beings are peculiar. In laboratory experiments, they often cooperate in one-shot prisoners’ dilemmas, they frequently offer 1/2 and reject low offers in the ultimatum game, and they often bid 1/2 in the game of divide-the-cake. All these behaviors are puzzling from the point of view of game theory. The first two are irrational, if utility is measured in a certain way. The last isn’t positively irrational, but it is no more rational than other possible actions, since there are infinitely many other Nash equilibria besides the one in which both players bid 1/2. At the same time, these behaviors seem to indicate that people are sometimes inclined to be cooperative, fair, and just. In his stimulating new book, Brian Skyrms sets himself the task of showing why these inclinations evolved, or how they might have evolved, under the pressure of natural selection. The goal is not to justify our ethical intuitions, but to explain why we have them.
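For readers unfamiliar with the equilibrium point, the following sketch (a simplified payoff rule of my own, not Skyrms's model) checks that in divide-the-cake any pair of demands summing to one is a Nash equilibrium, not just the equal split.

```python
# Illustrative sketch (assumptions mine): in divide-the-cake each player demands a
# share of the cake; both get their demand if the demands sum to at most 1, and
# both get nothing otherwise. Any split (x, 1-x) is then a Nash equilibrium.
def payoff(my_demand, other_demand):
    return my_demand if my_demand + other_demand <= 1 else 0.0

def is_nash(x, y, grid=101):
    # no unilateral deviation on a grid of demands improves either player's payoff
    demands = [i / (grid - 1) for i in range(grid)]
    best_vs_y = max(payoff(d, y) for d in demands)
    best_vs_x = max(payoff(d, x) for d in demands)
    return payoff(x, y) >= best_vs_y and payoff(y, x) >= best_vs_x

print(is_nash(0.5, 0.5))   # True
print(is_nash(0.7, 0.3))   # True: an unequal split is also an equilibrium
```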
This is a 'state of the art' collection of essays on the relation between probabilities, especially conditional probabilities, and conditionals. It provides new negative results which sharply limit the ways conditionals can be related to conditional probabilities. There are also positive ideas and results which will open up new areas of research. The collection is intended to honour Ernest W. Adams, whose seminal work is largely responsible for creating this area of inquiry. As well as describing, evaluating, and applying Adams' work, the contributions extend his ideas in directions he may or may not have anticipated, but that he certainly inspired. In addition to a wide range of philosophers of science, the volume should interest computer scientists and linguists.
I defend evidential decision theory and the theory of deliberation-probability dynamics from a recent criticism advanced by Jordan Howard Sobel. I argue that his alleged counterexample to the theories, called the Popcorn Problem, is not a genuine counterexample.
Popper and Miller argued, in a 1983 paper, that there is no such thing as 'probabilistic inductive support' of hypotheses. They show how to divide a hypothesis into two "parts," where evidence only 'probabilistically supports' the "part" that the evidence 'deductively' implies, and 'probabilistically countersupports' the "rest" of the hypothesis. I argue that by distinguishing between 'support that is purely deductive in nature' and 'support of a deductively implied hypothesis', we can see that their argument fails to establish (in any important way of interpreting it) their conclusion that "all probabilistic support is purely deductive." Their argument is 'not' "completely devastating to the inductive interpretation of the calculus of probability," as claimed.
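The decomposition at issue can be made concrete. In the sketch below (toy probabilities of my own, using the difference measure of support purely for illustration), H is split into the "part" H∨E, which E deductively implies, and the "rest" H∨~E; the former is never countersupported by E, while the latter is never supported by it.

```python
# Illustrative sketch of the Popper-Miller style decomposition with random toy
# distributions: E always weakly supports H∨E and weakly countersupports H∨~E
# on the difference measure.
import random

def support(pr_part_given_E, pr_part):
    return pr_part_given_E - pr_part   # difference-measure support

random.seed(0)
for _ in range(5):
    # random joint distribution over the four H/E cells
    cells = [random.random() for _ in range(4)]
    total = sum(cells)
    pr_HE, pr_HnE, pr_nHE, pr_nHnE = (c / total for c in cells)

    pr_E = pr_HE + pr_nHE
    pr_H_or_E = pr_HE + pr_HnE + pr_nHE       # H∨E, deductively implied by E
    pr_H_or_nE = pr_HE + pr_HnE + pr_nHnE     # H∨~E, the "rest" of H
    # conditional on E: H∨E is certain, and H∨~E reduces to H
    s_part = support(1.0, pr_H_or_E)          # always >= 0
    s_rest = support(pr_HE / pr_E, pr_H_or_nE)  # always <= 0
    print(round(s_part, 3), round(s_rest, 3))
```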
In a recent commendable article, Quentin Smith (1987) exposes fatal flaws in several recent attempts to demonstrate that it is logically impossible for the past to be infinite. However, his analysis of one of these flawed arguments--involving an interesting version of Russell's "Tristram Shandy paradox"--is off the mark, as I show in this paper.
Richard Otte (1985) has recently criticized the resolution of Simpson's paradox given by Nancy Cartwright (1979). He argues that there are difficulties with the version of the theory of probabilistic causality that Cartwright has developed, and that there is a way in which Simpson's paradox can arise that Cartwright's theory cannot handle. And Otte develops his own theory of probabilistic causality. I defend Cartwright's solution, and I argue that there are difficulties with the theory of probabilistic causality that Otte proposes.
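For readers who have not seen Simpson's paradox in numbers, here is a standard textbook-style illustration (the counts are illustrative and are not drawn from either paper): a factor can raise a probability within every subpopulation yet lower it in the combined population when it is correlated with the subpopulations.

```python
# Illustrative sketch: a treatment that raises the recovery rate within each
# subpopulation can lower it in the aggregate when treatment assignment is
# correlated with the subpopulations (Simpson's paradox).
# Counts per group: (treated_recovered, treated_total, untreated_recovered, untreated_total)
groups = {
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

def rate(recovered, total):
    return recovered / total

for name, (tr, tt, ur, ut) in groups.items():
    print(name, rate(tr, tt) > rate(ur, ut))   # True in each subpopulation

# Aggregate over the subpopulations: the comparison reverses.
tr = sum(g[0] for g in groups.values()); tt = sum(g[1] for g in groups.values())
ur = sum(g[2] for g in groups.values()); ut = sum(g[3] for g in groups.values())
print("combined", rate(tr, tt) > rate(ur, ut))  # False
```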
John Dupré (1984) has recently criticized the theory of probabilistic causality developed by, among others, Good (1961-62), Suppes (1970), Cartwright (1979), and Skyrms (1980). He argues that there is a tension or incompatibility between one of its central requirements for the presence of a causal connection, on the one hand, and a feature of the theory pointed out by Elliott Sober and me (1983), on the other. He also argues that the requirement just alluded to should be given up. I defend the theory against Dupré's criticisms and conclude with comments on Dupré's appraisal of the bearing of his arguments on the nature of probabilistic causal laws.
It is possible for a causal factor to raise the probability of a second factor in some situations while lowering the probability of the second factor in other situations. Must a genuine cause always raise the probability of a genuine effect of it? When it does not always do so, an "interaction" with some third factor may be the reason. I discuss causal interaction from the perspectives of Giere's counterfactual characterization of probabilistic causal connection (1979, 1980) and the "contextual unanimity" model developed by, among others, Cartwright (1979) and Skyrms (1980). I argue that the contextual unanimity theory must exercise care, in a new way that seems to have gone unnoticed, in order to adequately accommodate the phenomenon, and that the counterfactual theory must be substantially revised; although it will still, pending clarification of a second kind of revision, be unable to accommodate a kind of interaction exemplified in cases like those described by Sober (1982).
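A minimal numerical sketch of the phenomenon (hypothetical probabilities, with the background contexts assumed independent of the candidate cause C) shows how C can raise the probability of E in one context, lower it in another, and so fail the contextual unanimity condition even while raising pr(E) on average.

```python
# Illustrative sketch (hypothetical numbers): contextual unanimity requires
# pr(E/C&K) > pr(E/~C&K) for every background context K. Here C raises the
# probability of E in K1 but lowers it in K2, so unanimity fails, even though
# C raises pr(E) overall (contexts assumed independent of C).
contexts = {
    # context: (pr(E | C & K), pr(E | ~C & K), pr(K))
    "K1": (0.9, 0.4, 0.5),
    "K2": (0.2, 0.5, 0.5),
}

unanimous = all(p_c > p_nc for p_c, p_nc, _ in contexts.values())
pr_E_given_C = sum(p_c * w for p_c, _, w in contexts.values())
pr_E_given_nC = sum(p_nc * w for _, p_nc, w in contexts.values())

print(unanimous)                      # False: C is not a cause of E on this theory
print(pr_E_given_C > pr_E_given_nC)   # True: C still raises pr(E) on average
```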
One of us (Eells 1982) has defended traditional evidential decision theory against prima facie Newcomb counterexamples by assuming that a common cause forms a conjunctive fork with its joint effects. In this paper, the evidential theory is defended without this assumption. The suggested rationale shows that the theory's assumptions are not about the nature of causality, but about the nature of rational deliberation. These presuppositions are weak enough for the argument to count as a strong justification of the evidential theory.
After a brief presentation of evidential decision theory, causal decision theory, and Newcomb-type prima facie counterexamples to the evidential theory, three kinds of "metatickle" defenses of the evidential theory are discussed. Each has its weaknesses, but one of them seems stronger than the other two. The weaknesses of the best of the three, and the intricacy of metatickle analysis, do not constitute an advantage of causal decision theory over the evidential theory, however. It is argued, by way of an example, that causal decision theory also stands in need of a metatickle defense.
The traditional or “orthodox” decision rule of maximizing conditional expected utility has recently come under attack by critics who advance alternative “causal” decision theories. The traditional theory has, however, been defended. And these defenses have in turn been criticized. Here, I examine two objections to such defenses and advance a theory about the dynamics of deliberation (a diachronic theory about the process of deliberation) within the framework of which both objections to the defenses of the traditional theory fail.
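To see why the orthodox rule and its "causal" rivals can disagree, here is a minimal Newcomb-style computation (the payoffs and predictor accuracy are hypothetical choices of my own): conditional expected utility favors one-boxing, while an expected utility computed with the prediction probability held fixed, independently of the act, favors two-boxing.

```python
# Illustrative sketch: conditional (evidential) expected utility vs. a simple
# causal-style expected utility that holds the predictor's state fixed,
# for the choice between one-boxing and two-boxing.
MILLION, THOUSAND = 1_000_000, 1_000
accuracy = 0.9   # hypothetical pr(predictor is right / your act)

def payoff(act, predicted_one_box):
    base = MILLION if predicted_one_box else 0
    return base + (THOUSAND if act == "two-box" else 0)

def evidential_eu(act):
    # pr(prediction / act) varies with the act, as conditional expected utility requires
    p_predicted = accuracy if act == "one-box" else 1 - accuracy
    return p_predicted * payoff(act, True) + (1 - p_predicted) * payoff(act, False)

def causal_eu(act, p_predicted=0.5):
    # the prediction probability is held fixed, independently of the act
    return p_predicted * payoff(act, True) + (1 - p_predicted) * payoff(act, False)

print(evidential_eu("one-box"), evidential_eu("two-box"))   # 900000.0 vs 101000.0
print(causal_eu("one-box"), causal_eu("two-box"))           # 500000.0 vs 501000.0
```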
After a brief presentation and discussion of two versions of Newcomb's problem, I examine the analyses proposed by Levi (1975), Horgan (1981), and Kyburg (1980). I argue that the first two are not genuine solutions to the problem, but that the third, if appropriately elaborated and modified, is correct.
I argue that to the extent to which philosophical theories of objective probability have offered theoretically adequate conceptions of objective probability (in connection with such desiderata as causal and explanatory significance, applicability to single cases, etc.), they have failed to satisfy a methodological standard — roughly, a requirement to the effect that the conception offered be specified with the precision appropriate for a physical interpretation of an abstract formal calculus and be fully explicated in terms of concepts, objects or phenomena understood independently of the idea of physical probability. The significance of this, and of the suggested methodological standard, is then briefly discussed.
After clarifying the probabilistic conception of causality suggested by Good (1961-2), Suppes (1970), Cartwright (1979), and Skyrms (1980), we prove a sufficient condition for transitivity of causal chains. The bearing of these considerations on the units of selection problem in evolutionary theory and on the Newcomb paradox in decision theory is then discussed.
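One standard sufficient condition of this general kind (sketched below with hypothetical numbers; it may not be the paper's exact formulation) is a screening-off or Markov assumption: if C raises the probability of B, B raises the probability of E, and B screens C off from E, then C raises the probability of E.

```python
# Illustrative sketch (assumptions mine): probability-raising is transitive along
# a chain C -> B -> E when B screens C off from E, i.e. pr(E/B&C) = pr(E/B) and
# pr(E/~B&C) = pr(E/~B), so E depends on C only through B.
pr_B_given_C, pr_B_given_nC = 0.8, 0.3      # C raises the probability of B
pr_E_given_B, pr_E_given_nB = 0.7, 0.2      # B raises the probability of E

def pr_E_given(pr_B):
    # with screening off, pr(E) given a value of C is a mixture weighted by pr(B)
    return pr_E_given_B * pr_B + pr_E_given_nB * (1 - pr_B)

print(pr_E_given(pr_B_given_C))    # 0.6
print(pr_E_given(pr_B_given_nC))   # 0.35 -> C raises pr(E), so the chain is transitive
```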