As every philosopher knows, “the design argument” concludes that God exists from premisses that cite the adaptive complexity of organisms or the lawfulness and orderliness of the whole universe. Since 1859, it has formed the intellectual heart of creationist opposition to the Darwinian hypothesis that organisms evolved their adaptive features by the mindless process of natural selection. Although the design argument developed as a defense of theism, the logic of the argument in fact encompasses a larger set of issues. William Paley saw clearly that we sometimes have an excellent reason to postulate the existence of an intelligent designer. If we find a watch on the heath, we reasonably infer that it was produced by an intelligent watchmaker. This design argument makes perfect sense. Why is it any different to claim that the eye was produced by an intelligent designer? Both critics and defenders of the design argument need to understand what the ground rules are for inferring that an intelligent designer is the unseen cause of an observed effect.
Sober (1992) has recently evaluated Brandon's (1982, 1990; see also 1985, 1988) use of Salmon's (1971) concept of screening-off in the philosophy of biology. He critiques three particular issues, each of which will be considered in this discussion.
The Irish poet W. B. Yeats once wrote, with great sapience and perception:

Nor dread, nor hope attend
A dying animal;
A man awaits his end
Dreading and hoping all.

That death has ever been a problem to man is attested as far back as we can trace our species in the archaeological record—indeed, it seems to have been a problem even for that immediate precursor of Homo sapiens, the so-called Neanderthal Man; for he buried his dead.
This anthology collects some of the most important papers on what is believed to be the major force in evolution, natural selection. An issue of great consequence in the philosophy of biology concerns the levels at which, and the units upon which, selection acts. In recent years, biologists and philosophers have published a large number of papers bearing on this subject. The papers selected for inclusion in this book are divided into three main sections covering the history of the subject, explaining its conceptual foundations, and focusing on kin and group selection and higher levels of selection. One of the book's interesting features is that it draws together material from the biological and philosophical literatures. The philosophical literature, having thoroughly absorbed the biological material, now offers conceptual tools suitable for the reworking of the biological arguments. Although a full symbiosis has yet to develop, this anthology offers a unique resource for students in both biology and philosophy. Robert N. Brandon is Professor in the Philosophy Department, Duke University. Richard M. Burian is Professor of Philosophy and Department Chairman, Virginia Polytechnic Institute and State University. A Bradford Book.
Hempel first introduced the paradox of confirmation in (Hempel 1937). Since then, a very extensive literature on the paradox has evolved (Vranas 2004). Much of this literature can be seen as responding to Hempel’s subsequent discussions and analyses of the paradox in (Hempel 1945). Recently, it was noted that Hempel’s intuitive (and plausible) resolution of the paradox was inconsistent with his official theory of confirmation (Fitelson & Hawthorne 2006). In this article, we will try to explain how this inconsistency affects the historical dialectic about the paradox and how it illuminates the nature of confirmation. In the end, we will argue that Hempel’s intuitions about the paradox of confirmation were (basically) correct, and that it is his theory that should be rejected, in favor of a (broadly) Bayesian account of confirmation.
Naive deductivist accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H·X, for any X—even if X is completely irrelevant to E and H. Bayesian accounts of confirmation may appear to have the same problem. In a recent article in this journal Fitelson (2002) argued that existing Bayesian attempts to resolve this problem are inadequate in several important respects. Fitelson then proposes a new‐and‐improved Bayesian account that overcomes the problem of irrelevant conjunction, and does so in a more general setting than past attempts. We will show how to simplify and improve upon Fitelson's solution.
This collection of essays by Robert Brandon spans two decades and most of the important problems in the philosophy of biology. Four of his five most important contributions to the philosophy of biology can be found here: the concept of relative adaptedness and its role in the propensity interpretation of fitness; the principle of natural selection; the use of the screening-off relation in defense of organismic selection; and the distinction between units of selection and levels of selection. The fifth major contribution, an analysis of the concept of "environment," mentioned briefly in an essay on the co-evolution of organisms and environment, is given an extended treatment in his 1990 book, Adaptation and Environment.
Robert Brandon is one of the most important and influential of contemporary philosophers of biology. This collection of his recent essays covers all the traditional topics in the philosophy of evolutionary biology and as such could serve as an introduction to the field. There are essays on the nature of fitness, teleology, the structure of the theory of natural selection, and the levels of selection. The book also deals with newer topics that are less frequently discussed but are of growing interest, for example the evolution of human language and the role of experimentation in evolutionary biology. A special feature of the collection is that it avoids jargon and is written in a style that will appeal to working evolutionary biologists as well as philosophers.
In response to a paper by Harris & Fitelson, Slaney states several open questions concerning possible strategies for proving distributivity in a wide class of positive sentential logics. In this note, I provide answers to all of Slaney's open questions. The result is a better understanding of the class of positive logics in which distributivity holds.
Taking Joyce’s (1998; 2009) recent argument(s) for probabilism as our point of departure, we propose a new way of grounding formal, synchronic, epistemic coherence requirements for (opinionated) full belief. Our approach yields principled alternatives to deductive consistency, sheds new light on the preface and lottery paradoxes, and reveals novel conceptual connections between alethic and evidential epistemic norms.
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.
According to Bayesian confirmation theory, evidence E (incrementally) confirms (or supports) a hypothesis H (roughly) just in case E and H are positively probabilistically correlated (under an appropriate probability function Pr). There are many logically equivalent ways of saying that E and H are correlated under Pr. Surprisingly, this leads to a plethora of non-equivalent quantitative measures of the degree to which E confirms H (under Pr). In fact, many non-equivalent Bayesian measures of the degree to which E confirms (or supports) H have been proposed and defended in the literature on inductive logic. I provide a thorough historical survey of the various proposals, and a detailed discussion of the philosophical ramifications of the differences between them. I argue that the set of candidate measures can be narrowed drastically by just a few intuitive and simple desiderata. In the end, I provide some novel and compelling reasons to think that the correct measure of degree of evidential support (within a Bayesian framework) is the (log) likelihood ratio. The central analyses of this research have had some useful and interesting byproducts, including: (i) a new Bayesian account of (confirmationally) independent evidence, which has applications to several important problems in confirmation theory, including the problem of the (confirmational) value of evidential diversity, and (ii) novel resolutions of several problems in Bayesian confirmation theory, motivated by the use of the (log) likelihood ratio measure, including a reply to the Popper-Miller critique of probabilistic induction, and a new analysis and resolution of the problem of irrelevant conjunction (a.k.a., the tacking problem).
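The measure-sensitivity point in the abstract above can be made concrete with a toy illustration (the numbers below are invented, not drawn from the paper): the difference, log-ratio, and log-likelihood-ratio measures can disagree about which of two evidential situations exhibits stronger confirmation.

```python
import math

# Three common Bayesian relevance measures, written in terms of the
# prior Pr(H) and the posterior Pr(H|E).  All inputs are toy values.

def difference(prior, post):
    # d(H, E) = Pr(H|E) - Pr(H)
    return post - prior

def log_ratio(prior, post):
    # r(H, E) = log[ Pr(H|E) / Pr(H) ]
    return math.log(post / prior)

def log_likelihood_ratio(prior, post):
    # l(H, E) = log[ Pr(E|H) / Pr(E|~H) ], rewritten via Bayes'
    # theorem as the log of the posterior-to-prior odds ratio.
    return math.log((post / (1 - post)) * ((1 - prior) / prior))

measures = (difference, log_ratio, log_likelihood_ratio)

# Case 1: a middling hypothesis pushed up substantially (0.5 -> 0.9).
d1, r1, l1 = (f(0.5, 0.9) for f in measures)
# Case 2: an improbable hypothesis whose probability quintuples (0.01 -> 0.05).
d2, r2, l2 = (f(0.01, 0.05) for f in measures)

# The measures disagree about which case is the stronger confirmation:
print(d1 > d2)   # difference measure ranks case 1 higher
print(r1 < r2)   # ratio measure ranks case 2 higher
print(l1 > l2)   # log-likelihood ratio sides with the difference measure here
```

All three comparisons print True, so any argument that turns on "E1 confirms more strongly than E2" is hostage to the choice of measure—which is exactly the enthymeme the abstract describes.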
Let E be a set of n propositions E1, ..., En. We seek a probabilistic measure C(E) of the ‘degree of coherence’ of E. Intuitively, we want C to be a quantitative, probabilistic generalization of the (deductive) logical coherence of E. So, in particular, we require C to satisfy the following..
Several forms of symmetry in degrees of evidential support are considered. Some of these symmetries are shown not to hold in general. This has implications for the adequacy of many measures of degree of evidential support that have been proposed and defended in the philosophical literature.
To the extent that we have reasons to avoid these “bad B -properties”, these arguments provide reasons not to have an incoherent credence function b — and perhaps even reasons to have a coherent one. But, note that these two traditional arguments for probabilism involve what might be called “pragmatic” reasons (not) to be (in)coherent. In the case of the Dutch Book argument, the “bad” property is pragmatically bad (to the extent that one values money). But, it is not clear whether the DBA pinpoints any epistemic defect of incoherent agents. The same can be said for Representation Theorem arguments, since they involve the structure of an agent’s preferences.
In this paper, we compare and contrast two methods for the revision of qualitative beliefs. The first method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent’s posterior credences after conditionalization. The second method is the orthodox AGM approach to belief revision. Our primary aim is to determine when the two methods may disagree in their recommendations and when they must agree. We establish a number of novel results about their relative behavior. Our most notable finding is that the inverse of the golden ratio emerges as a non-arbitrary bound on the Bayesian method’s free-parameter—the Lockean threshold. This “golden threshold” surfaces in two of our results and turns out to be crucial for understanding the relation between the two methods.
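The "golden threshold" in the abstract above can be stated concretely. On a Lockean account, an agent (fully) believes p just in case her credence in p meets a threshold t; the inverse golden ratio is the positive root of t² = 1 − t. A minimal sketch, with invented credences for illustration:

```python
import math

# The inverse of the golden ratio: the positive solution of t**2 = 1 - t.
golden_threshold = (math.sqrt(5) - 1) / 2   # approximately 0.618

# Sanity checks on the algebraic characterization.
assert abs(golden_threshold**2 - (1 - golden_threshold)) < 1e-12
assert 0.5 < golden_threshold < 1   # a Lockean threshold above one-half

# A toy Lockean agent: believe exactly those propositions whose
# credence meets the threshold.  The credences here are invented.
credences = {"p": 0.95, "q": 0.70, "r": 0.55}
beliefs = {prop for prop, c in credences.items() if c >= golden_threshold}
print(sorted(beliefs))   # → ['p', 'q']
```

This only fixes the numerical value and the Lockean belief rule; the paper's actual results about when this threshold bounds agreement with AGM are not reproduced here.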
In this paper, we investigate various possible (Bayesian) precisifications of the (somewhat vague) statements of “the equal weight view” (EWV) that have appeared in the recent literature on disagreement. We will show that the renditions of (EWV) that immediately suggest themselves are untenable from a Bayesian point of view. In the end, we will propose some tenable (but not necessarily desirable) interpretations of (EWV). Our aim here will not be to defend any particular Bayesian precisification of (EWV), but rather to raise awareness about some of the difficulties inherent in formulating such precisifications.
In this note, I consider various precisifications of the slogan ‘evidence of evidence is evidence’. I provide counter-examples to each of these precisifications (assuming an epistemic probabilistic relevance notion of ‘evidential support’).
According to orthodox (Kolmogorovian) probability theory, conditional probabilities are by definition certain ratios of unconditional probabilities. As a result, orthodox conditional probabilities are undefined whenever their antecedents have zero unconditional probability. This has important ramifications for the notion of probabilistic independence. Traditionally, independence is defined in terms of unconditional probabilities (the factorization of the relevant joint unconditional probabilities). Various “equivalent” formulations of independence can be given using conditional probabilities. But these “equivalences” break down if conditional probabilities are permitted to have conditions with zero unconditional probability. We reconsider probabilistic independence in this more general setting. We argue that a less orthodox but more general (Popperian) theory of conditional probability should be used, and that much of the conventional wisdom about probabilistic independence needs to be rethought.
Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a "middle way" between Likelihoodism and Bayesianism. In the process, I will propose some new work for an old probability puzzle: the "Monty Hall" problem.
This paper is divided into three sections. In the first section we offer a retooling of some traditional concepts, namely icons and symbols, which allows us to describe an evolutionary continuum of communication systems. The second section consists of an argument from theoretical biology. In it we explore the advantages and disadvantages of phenotypic plasticity. We argue that a range of the conditions that selectively favor phenotypic plasticity also favor a nongenetic transmission system that would allow for the inheritance of acquired characters. The first two sections are independent; the third depends on both of them. In it we offer an argument that human natural languages have just the features required of an ideal transmission mechanism under the conditions described in section 2.
The conjunction fallacy has been a key topic in debates on the rationality of human reasoning and its limitations. Despite extensive inquiry, however, the attempt to provide a satisfactory account of the phenomenon has proved challenging. Here we elaborate the suggestion (first discussed by Sides, Osherson, Bonini, & Viale, 2002) that in standard conjunction problems the fallacious probability judgements observed experimentally are typically guided by sound assessments of confirmation relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy, which is proven robust (i.e., not depending on various alternative ways of measuring degrees of confirmation). The proposed analysis is shown distinct from contentions that the conjunction effect is in fact not a fallacy, and is compared with major competing explanations of the phenomenon, including earlier references to a confirmation-theoretic account.
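The core of the confirmation-theoretic suggestion above can be illustrated with a toy Linda-style model (all probabilities invented, and only the simple difference measure is used, not the paper's robustness result): evidence can confirm a conjunction while being neutral on one conjunct, even though the conjunction can never be more probable than that conjunct.

```python
# A toy Linda-style model.  h1 = "bank teller", h2 = "feminist";
# E is the personality sketch.  All probabilities are invented, with
# h1 and h2 stipulated independent both before and after learning E.
p_h1, p_h2 = 0.2, 0.3          # priors
p_h1_E, p_h2_E = 0.2, 0.8      # posteriors: E is neutral on h1, supports h2

p_conj = p_h1 * p_h2           # prior of the conjunction h1 & h2
p_conj_E = p_h1_E * p_h2_E     # posterior of the conjunction

def confirmation(prior, post):
    # Difference measure of confirmation: Pr(H|E) - Pr(H).
    return post - prior

# Probability can never favor the conjunction over a conjunct...
print(p_conj_E <= p_h1_E)                        # True
# ...but confirmation can: E confirms h1 & h2 yet leaves h1 untouched.
print(round(confirmation(p_conj, p_conj_E), 2))  # → 0.1
print(round(confirmation(p_h1, p_h1_E), 2))      # → 0.0
```

So a subject tracking confirmation rather than probability would rank the conjunction above the lone conjunct here—the pattern the experimental "fallacy" exhibits.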
Drift is to evolution as inertia is to Newtonian mechanics. Both are the "natural" or default states of the systems to which they apply. Both are governed by zero-force laws. The zero-force law in biology is stated here for the first time.
A Bayesian account of independent evidential support is outlined. This account is partly inspired by the work of C. S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation satisfy some basic desiderata suggested by Peirce for adequate accounts of independent evidence. I argue that, by considering further natural constraints on a probabilistic account of independent evidence, all but a very small class of Bayesian measures of confirmation can be ruled out. In closing, another application of my account to the problem of evidential diversity is also discussed.
The concept of individuality as applied to species, an important advance in the philosophy of evolutionary biology, is nevertheless in need of refinement. Four important subparts of this concept must be recognized: spatial boundaries, temporal boundaries, integration, and cohesion. Not all species necessarily meet all of these. Two very different types of pluralism have been advocated with respect to species, only one of which is satisfactory. An often unrecognized distinction between grouping and ranking components of any species concept is necessary. A phylogenetic species concept is advocated that uses a grouping criterion of monophyly in a cladistic sense, and a ranking criterion based on those causal processes that are most important in producing and maintaining lineages in a particular case. Such causal processes can include actual interbreeding, selective constraints, and developmental canalization. The widespread use of the biological species concept is flawed for two reasons: because of a failure to distinguish grouping from ranking criteria and because of an unwarranted emphasis on the importance of interbreeding as a universal causal factor controlling evolutionary diversification. The potential to interbreed is not in itself a process; it is instead a result of a diversity of processes which result in shared selective environments and common developmental programs. These types of processes act in both sexual and asexual organisms, thus the phylogenetic species concept can reflect an underlying unity that the biological species concept cannot.
Millstein [Bio. Philos. 17 (2002) 33] correctly identifies a serious problem with the view that natural selection and random drift are not conceptually distinct. She offers a solution to this problem purely in terms of differences between the processes of selection and drift. I show that this solution does not work, that it leaves the vast majority of real biological cases uncategorized. However, I do think there is a solution to the problem she raises, and I offer it here. My solution depends on solving the biological analogue of the reference class problem in probability theory and on the reality of individual fitnesses.
Note: This is not an ad hoc change at all. It’s simply the natural thing to say here – if one thinks of F as a generalization of classical logical entailment. The extra complexity I had in my original (incorrect) definition of F was there because I was foolishly trying to encode some non-classical, or “relevant,” logical structure in F. I now think this is a mistake, and that I should go with the above, classical account of F. Arguments about relevance logic need to be handled in a different way (and a different context!). And, besides, as Luca Moretti has shown (see below), the original definition of F cannot be the right basis for C! OK, now on to C.
First, a brief historical trace of the developments in confirmation theory leading up to Goodman's infamous "grue" paradox is presented. Then, Goodman's argument is analyzed from both Hempelian and Bayesian perspectives. A guiding analogy is drawn between certain arguments against classical deductive logic, and Goodman's "grue" argument against classical inductive logic. The upshot of this analogy is that the "New Riddle" is not as vexing as many commentators have claimed. Specifically, the analogy reveals an intimate connection between Goodman's problem, and the "problem of old evidence". Several other novel aspects of Goodman's argument are also discussed.
In this paper we first briefly review Bell's (1964, 1966) Theorem to see how it invalidates any deterministic "hidden variable" account of the apparent indeterminacy of quantum mechanics (QM). Then we show that quantum uncertainty, at the level of DNA mutations, can "percolate" up to have major populational effects. Interesting as this point may be it does not show any autonomous indeterminism of the evolutionary process. In the next two sections we investigate drift and natural selection as the locus of autonomous biological indeterminacy. Here we conclude that the population-level indeterminacy of natural selection and drift are ultimately based on the assumption of a fundamental indeterminacy at the level of the lives and deaths of individual organisms. The following section examines this assumption and defends it from the determinists' attack. Then we show that, even if one rejects the assumption, there is still an important reason why one might think evolutionary theory (ET) is autonomously indeterministic. In the concluding section we contrast the arguments we have mounted against a deterministic hidden variable account of ET with the proof of the impossibility of such an account of QM.
We give an analysis of the Monty Hall problem purely in terms of confirmation, without making any lottery assumptions about priors. Along the way, we show the Monty Hall problem is structurally identical to the Doomsday Argument.
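Independently of the confirmation-theoretic analysis the abstract above describes, the Monty Hall setup itself can be checked by brute-force enumeration (a sketch of the standard puzzle, not of the paper's method):

```python
# Enumerate the three equally likely car placements.  The contestant
# always picks door 0; the host then opens a goat door the contestant
# did not pick; the contestant either stays or switches.
doors = {0, 1, 2}
stay_wins = switch_wins = 0

for car in doors:
    pick = 0
    # The host opens some unpicked door that hides a goat.
    host_opens = min(doors - {pick, car})
    # Switching means taking the one remaining closed door.
    switch_to = (doors - {pick, host_opens}).pop()
    stay_wins += int(pick == car)
    switch_wins += int(switch_to == car)

print(stay_wins, switch_wins)   # → 1 2 (switching wins in 2 of 3 placements)
```

Staying wins only when the initial pick was right (1 case in 3); switching wins in the other 2, which is the familiar 2/3 answer the confirmation-theoretic analysis must recover.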
Naive deductive accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H & X, for any X—even if X is utterly irrelevant to H (and E). Bayesian accounts of confirmation also have this property (in the case of deductive evidence). Several Bayesians have attempted to soften the impact of this fact by arguing that—according to Bayesian accounts of confirmation— E will confirm the conjunction H & X less strongly than E confirms H (again, in the case of deductive evidence). I argue that existing Bayesian “resolutions” of this problem are inadequate in several important respects. In the end, I suggest a new‐and‐improved Bayesian account (and understanding) of the problem of irrelevant conjunction.
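The phenomenon the abstract above describes can be exhibited in a toy model of deductive evidence (all probabilities invented, and the log-likelihood-ratio measure chosen purely for illustration): E confirms both H and the tacked-on conjunction H & X, but confirms the conjunction less strongly.

```python
import math

# Toy model with deductive evidence: H entails E, and X is an
# irrelevant conjunct, independent of H and of E given H.
# All probabilities are invented for illustration.
p_H = 0.3
p_E_given_notH = 0.4   # E can still occur without H
p_X = 0.5              # the irrelevant conjunct

# Total probability of E (Pr(E|H) = 1 since H entails E).
p_E = p_H * 1.0 + (1 - p_H) * p_E_given_notH           # = 0.58

# Log-likelihood-ratio confirmation l(H, E) = log Pr(E|H)/Pr(E|~H).
l_H = math.log(1.0 / p_E_given_notH)

# For the conjunction H & X (which still entails E, since H does):
p_conj = p_H * p_X                                     # = 0.15
p_E_and_notconj = p_E - p_conj * 1.0                   # = 0.43
p_E_given_notconj = p_E_and_notconj / (1 - p_conj)     # ≈ 0.506
l_conj = math.log(1.0 / p_E_given_notconj)

print(l_conj > 0)      # E still confirms the irrelevant conjunction...
print(l_conj < l_H)    # ...but less strongly than it confirms H alone
```

Both comparisons print True. Note that the simple ratio measure Pr(H|E)/Pr(H) would assign H and H & X exactly the same score in this model, which is one reason the choice of measure matters for this problem.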
The Paradox of the Ravens (a.k.a., The Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper will provide a brief survey of the early history of the paradox. This will include the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper will describe attempts to resolve the paradox within a Bayesian framework, and show how to improve upon them. This part begins with a discussion of how probabilistic methods can help to clarify the statement of the paradox itself. And it describes some of the early responses to probabilistic explications. We then inspect the assumptions employed by traditional (canonical) Bayesian approaches to the paradox. These assumptions may appear to be overly strong. So, drawing on weaker assumptions, we formulate a new-and-improved Bayesian confirmation-theoretic resolution of the Paradox of the Ravens.
In this paper I argue that we can best make sense of the practice of experimental evolutionary biology if we see it as investigating contingent, rather than lawlike, regularities. This understanding is contrasted with the experimental practice of certain areas of physics. However, this presents a problem for those who accept the Logical Positivist conception of law and its essential role in scientific explanation. I address this problem by arguing that the contingent regularities of evolutionary biology have a limited range of nomic necessity and a limited range of explanatory power even though they lack the unlimited projectibility that has been seen by some as a hallmark of scientific laws.
Carnap's inductive logic (or confirmation) project is revisited from an "increase in firmness" (or probabilistic relevance) point of view. It is argued that Carnap's main desiderata can be satisfied in this setting, without the need for a theory of "logical probability." The emphasis here will be on explaining how Carnap's epistemological desiderata for inductive logic will need to be modified in this new setting. The key move is to abandon Carnap's goal of bridging confirmation and credence, in favor of bridging confirmation and evidential support.
In the past three decades a number of narrative self-concepts have appeared in the philosophical literature. A central question posed in recent literature concerns the embodiment of the narrative self. Though one of the best-known narrative self-concepts is a non-embodied one, namely Dennett’s self as ‘a center of narrative gravity’, others argue that the narrative self should include a role for embodiment. Several arguments have been made in support of the latter claim, but these can be summarized in two main points. Firstly, a logical one: without taking the body into account Dennett’s theory becomes self-refuting. Secondly, a more practical/phenomenological point: a disembodied self-concept overlooks how personal the body is, and as such should be considered part of the self. In this paper I endorse these criticisms of non-embodied narrative self-concepts, but I argue that the relationship between the narrative self and the body is far from sufficiently fleshed out. I claim that the narrative self and the body are much more interwoven than the above criticisms suggest. What I aim to show in this paper is that the relationship between the body and the narrative self is interactive rather than unidirectional: not only does our body shape our narrative self, but our narrative self also shapes our body. The upshot of this is a better conception of the self as a dynamic interaction between its various aspects.
In this discussion note, we explain how to relax some of the standard assumptions made in Garber-style solutions to the Problem of Old Evidence. The result is a more general and explanatory Bayesian approach.