The Nature of Selection is a straightforward, self-contained introduction to philosophical and biological problems in evolutionary theory. It presents a powerful analysis of the evolutionary concepts of natural selection, fitness, and adaptation and clarifies controversial issues concerning altruism, group selection, and the idea that organisms are survival machines built for the good of the genes that inhabit them. "Sober's is the answering philosophical voice, the voice of a first-rate philosopher and a knowledgeable student of contemporary evolutionary theory. His book merits broad attention among both communities. It should also inspire others to continue the conversation." -Philip Kitcher, Nature. "Elliott Sober has made extraordinarily important contributions to our understanding of biological problems in evolutionary biology and causality. The Nature of Selection is a major contribution to understanding epistemological problems in evolutionary theory. I predict that it will have a long-lasting place in the literature." -Richard C. Lewontin.
Perhaps because of its implications for our understanding of human nature, recent philosophy of biology has seen what might be the most dramatic work in the philosophies of the "special" sciences. This drama has centered on evolutionary theory, and in the second edition of this textbook, Elliott Sober introduces the reader to the most important issues in these developments. With a rare combination of technical sophistication and clarity of expression, Sober engages both the higher levels of theory and the direct implications for such controversial issues as creationism, teleology, nature versus nurture, and sociobiology. Above all, the reader will gain from this book a firm grasp of the structure of evolutionary theory, the evidence for it, and the scope of its explanatory significance.
“Absence of evidence isn’t evidence of absence” is a slogan that is popular among scientists and nonscientists alike. This article assesses its truth by using a probabilistic tool, the Law of Likelihood. Qualitative questions (“Is E evidence about H?”) and quantitative questions (“How much evidence does E provide about H?”) are both considered. The article discusses the example of fossil intermediates. If finding a fossil that is phenotypically intermediate between two extant species provides evidence that those species have a common ancestor, does failing to find such a fossil constitute evidence that there was no common ancestor? Or should the failure merely be chalked up to the imperfection of the fossil record? The transitivity of the evidence relation in simple causal chains provides a broader context, which leads to discussion of the fine-tuning argument, the anthropic principle, and observation selection effects.
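The Law of Likelihood invoked in this abstract says that an observation E favors H1 over H2 just in case Pr(E | H1) > Pr(E | H2). Here is a minimal numeric sketch of how that applies to the fossil case; the detection probability d and the assumption that no intermediate fossil exists without common ancestry are hypothetical choices made for illustration, not figures from the article.

```python
# Law of Likelihood: E favors H1 over H2 iff Pr(E | H1) > Pr(E | H2).
# d is an assumed probability of finding an intermediate fossil if
# common ancestry (CA) holds; the fossil record's imperfection keeps d small.

d = 0.10

p_nofind_given_CA = 1 - d     # even under CA, failure to find is likely
p_nofind_given_noCA = 1.0     # assumed: without CA, no intermediate exists to find

ratio = p_nofind_given_noCA / p_nofind_given_CA
print(f"Likelihood ratio favoring no-CA: {ratio:.3f}")  # ~1.111 > 1

# Not finding the fossil is evidence against common ancestry, but weak
# evidence: its strength scales with the detection probability d.
```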
Reductionism is often understood to include two theses: (1) every singular occurrence that the special sciences can explain also can be explained by physics; (2) every law in a higher-level science can be explained by physics. These claims are widely supposed to have been refuted by the multiple realizability argument, formulated by Putnam (1967, 1975) and Fodor (1968, 1975). The present paper criticizes the argument and identifies a reductionistic thesis that follows from one of the argument's premises.
Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike, which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light on the theoretical virtues of parsimoniousness, unification, and non-ad-hocness, on the dispute about Bayesianism, and on empiricism and scientific realism.
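As a rough illustration of the Akaike result the abstract describes, the sketch below fits polynomials of increasing degree to simulated noisy data and scores them with AIC; the data, the Gaussian-noise assumption, and the particular AIC bookkeeping are choices made for this example, not details taken from the paper.

```python
# A minimal AIC sketch, assuming Gaussian noise: AIC = 2k - 2 ln(L-hat),
# where for least squares -2 ln(L-hat) = n ln(RSS/n) up to a constant.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.size)  # true curve is linear

n = x.size
for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 2  # fitted coefficients plus the estimated noise variance
    aic = n * np.log(rss / n) + 2 * k
    print(f"degree {degree}: AIC = {aic:.2f}")

# Lower AIC estimates higher predictive accuracy; the linear model typically
# wins even though higher-degree curves fit the sample data more closely.
```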
To evaluate Hume's thesis that causal claims are always empirical, I consider three kinds of causal statement: “e1 caused e2”, “e1 promoted e2”, and “e1 would promote e2”. Restricting my attention to cases in which “e1 occurred” and “e2 occurred” are both empirical, I argue that Hume was right about the first two, but wrong about the third. Standard causal models of natural selection that have this third form are a priori mathematical truths. Some are obvious, others less so. Empirical work on natural selection takes the form of defending causal claims of the first two types. I provide biological examples that illustrate differences among these three kinds of causal claim.
When a scientist uses an observation to formulate a theory, it is no surprise that the resulting theory accurately captures that observation. However, when the theory makes a novel prediction—when it predicts an observation that was not used in its formulation—this seems to provide more substantial confirmation of the theory. This paper presents a new approach to the vexed problem of understanding the epistemic difference between prediction and accommodation. In fact, there are several problems that need to be disentangled; in all of them, the key is the concept of overfitting. We float the hypothesis that accommodation is a defective methodology only when the methods used to accommodate the data fail to guard against the risk of overfitting. We connect our analysis with the proposals that other philosophers have made. We also discuss its bearing on the conflict between instrumentalism and scientific realism. Outline: Introduction; Predictivisms—a taxonomy; Observations; Formulating the problem; What might Annie be doing wrong?; Solutions; Observations explained; Mayo on severe tests; The miracle argument and scientific realism; Concluding comments.
I discuss two subjects in Samir Okasha’s excellent book, Evolution and the Levels of Selection. In consonance with Okasha’s critique of the conventionalist view of the units of selection problem, I argue that conventionalists have not attended to what realists mean by group, individual, and genic selection. In connection with Okasha’s discussion of the Price equation and contextual analysis, I discuss whether the existence of these two quantitative frameworks is a challenge to realism.
Ernst Mayr has argued that Darwinian theory discredited essentialist modes of thought and replaced them with what he has called "population thinking". In this paper, I characterize essentialism as embodying a certain conception of how variation in nature is to be explained, and show how this conception was undermined by evolutionary theory. The Darwinian doctrine of evolutionary gradualism makes it impossible to say exactly where one species ends and another begins; such line-drawing problems are often taken to be the decisive reason for thinking that essentialism is untenable. However, according to the view of essentialism I suggest, this familiar objection is not fatal to essentialism. It is rather the essentialist's use of what I call the natural state model for explaining variation which clashes with evolutionary theory. This model implemented the essentialist's requirement that properties of populations be defined in terms of properties of member organisms. Requiring such constituent definitions is reductionistic in spirit; additionally, evolutionary theory shows that such definitions are not available, and, moreover, that they are not needed to legitimize population-level concepts. Population thinking involves the thesis that population concepts may be legitimized by showing their connections with each other, even when they are not reducible to concepts applying at lower levels of organization. In the paper, I develop these points by describing Aristotle's ideas on the origins of biological variation; they are a classic formulation of the natural state model. I also describe how the development of statistical ideas in the 19th century involved an abandoning of the natural state model.
Evolutionary theory is awash with probabilities. For example, natural selection is said to occur when there is variation in fitness, and fitness is standardly decomposed into two components, viability and fertility, each of which is understood probabilistically. With respect to viability, a fertilized egg is said to have a certain chance of surviving to reproductive age; with respect to fertility, an adult is said to have an expected number of offspring. There is more to evolutionary theory than the theory of natural selection, and here too one finds probabilistic concepts aplenty. When there is no selection, the theory of neutral evolution says that a gene's chance of eventually reaching fixation is 1/(2N), where N is the number of organisms in the generation of the diploid population to which the gene belongs. The evolutionary consequences of mutation are likewise conceptualized in terms of the probability per unit time a gene has of changing from one state to another. The examples just mentioned are all "forward-directed" probabilities; they describe the probability of later events, conditional on earlier events. However, evolutionary theory also uses "backwards probabilities" that describe the probability of a cause conditional on its effects; for example, coalescence theory allows one to calculate the expected number of generations back in time at which the genes in the present generation find their most recent common ancestor.
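The 1/(2N) fixation probability quoted in this abstract is easy to check by simulation. The sketch below uses a Wright-Fisher model with a single new neutral mutant; the model choice and parameter values are assumptions of this illustration, though the formula itself is robust across standard models.

```python
# Simulation sketch: a neutral mutant present in 1 of 2N gene copies
# should reach fixation with probability 1/(2N).
import numpy as np

rng = np.random.default_rng(1)
N = 50                  # diploid organisms, hence 2N = 100 gene copies
trials, fixed = 5000, 0

for _ in range(trials):
    count = 1  # one new mutant copy
    while 0 < count < 2 * N:
        count = rng.binomial(2 * N, count / (2 * N))  # Wright-Fisher resampling
    fixed += (count == 2 * N)

print(f"simulated: {fixed / trials:.4f}   theory 1/(2N): {1 / (2 * N):.4f}")
```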
Several evolutionary biologists have used a parsimony argument to argue that the single gene is the unit of selection. Since all evolution by natural selection can be represented in terms of selection coefficients attaching to single genes, it is, they say, "more parsimonious" to think that all selection is selection for or against single genes. We examine the limitations of this genic point of view, and then relate our criticisms to a broader view of the role of causal concepts and the dangers of reification in science.
The concept of fitness began its career in biology long before evolutionary theory was mathematized. Fitness was used to describe an organism's vigor, or the degree to which organisms "fit" into their environments. An organism's success in avoiding predators and in building a nest obviously contributes to its fitness and to the fitness of its offspring, but the peacock's gaudy tail seemed to be in an entirely different line of work. Fitness, as a term in ordinary language (as in "physical fitness") and in its original biological meaning, applied to the survival of an organism and its offspring, not to sheer reproductive output (Paul ////; Cronin 1991). Darwin's separation of natural from sexual selection may sound odd from a modern perspective, but it made sense from this earlier point of view.
After clarifying the probabilistic conception of causality suggested by Good (1961-2), Suppes (1970), Cartwright (1979), and Skyrms (1980), we prove a sufficient condition for transitivity of causal chains. The bearing of these considerations on the units of selection problem in evolutionary theory and on the Newcomb paradox in decision theory is then discussed.
A simple and general criterion is derived for the evolution of altruism when individuals interact in pairs. It is argued that the treatments of this problem in kin selection theory and in game theory are special cases of this general criterion.
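The abstract does not state the criterion itself, so the sketch below only illustrates the standard pairwise setup that kin selection and game theory share: altruists pay a cost c to confer a benefit b on a partner, and r measures how assortatively pairs form. The payoff model and the 50/50 baseline population are assumptions of this illustration, not the paper's own result.

```python
# Hypothetical sketch of the pairwise-interaction setup (not the paper's own
# criterion): altruism spreads when assortment is strong enough, i.e. r*b > c.

def altruism_favored(b: float, c: float, r: float) -> bool:
    """True if altruists out-earn non-altruists in expectation."""
    # With probability r your partner matches your type; otherwise the partner
    # is drawn from an assumed 50/50 population.
    w_alt = -c + (r + (1 - r) * 0.5) * b   # altruist: pays c, may receive b
    w_self = (1 - r) * 0.5 * b             # non-altruist: may still receive b
    return w_alt > w_self                  # reduces to r * b > c

print(altruism_favored(b=3.0, c=1.0, r=0.5))  # True:  r*b = 1.5 > c
print(altruism_favored(b=3.0, c=1.0, r=0.2))  # False: r*b = 0.6 < c
```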
Brandon (1984, 1990) has argued that Salmon's (1971) concept of screening-off can be used to characterize (i) the idea that natural selection acts directly on an organism's phenotype, only indirectly on its genotype, and (ii) the biological problem of the levels of selection. Brandon also suggests (iii) that screening-off events in a causal chain are better explanations than the events they screen off. This paper critically evaluates Brandon's proposals.
The propensity interpretation of fitness draws on the propensity interpretation of probability, but advocates of the former have not attended sufficiently to problems with the latter. The causal power of C to bring about E is not well represented by the conditional probability Pr(E | C). Since the viability fitness of trait T is the conditional probability Pr(the organism survives | the organism has T), the viability fitness of the trait does not represent the degree to which having the trait causally promotes surviving. The same point holds for fertility fitness. This failure of trait fitness to capture causal role can also be seen in the fact that coextensive traits must have the same fitness values even if one of them promotes survival and the other is neutral or deleterious. Although the fitness of a trait does not represent the trait's causal power to promote survival and reproduction, variation in fitness in a population causally promotes change in trait frequencies; in this sense, fitness variation is a population-level propensity.
In their 2010 book, Biology’s First Law, D. McShea and R. Brandon present a principle that they call ‘‘ZFEL,’’ the zero force evolutionary law. ZFEL says (roughly) that when there are no evolutionary forces acting on a population, the population’s complexity (i.e., how diverse its member organisms are) will increase. Here we develop criticisms of ZFEL and describe a different law of evolution; it says that diversity and complexity do not change when there are no evolutionary causes.
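To see what is at issue, consider a toy model (an illustration of ours, not from either book) in which lineages change by undirected drift alone, here modeled as independent random walks. The spread among lineages grows, which is the pattern ZFEL predicts; the rival law's point is that drift is itself an evolutionary cause, so this is not change in the absence of evolutionary causes.

```python
# Illustrative sketch: undirected change alone makes lineages diverge.
import numpy as np

rng = np.random.default_rng(2)
lineages, steps = 200, 100
traits = np.zeros(lineages)

for _ in range(steps):
    traits += rng.normal(scale=1.0, size=lineages)  # undirected "drift" step

print(f"variance among lineages after {steps} steps: {traits.var():.1f} "
      f"(expected ~{steps})")
```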
The thesis that natural selection explains the frequencies of traits in populations, but not why individual organisms have the traits they do, is here defended and elaborated. A general concept of ‘distributive explanation’ is discussed.
John Beatty (1995) and Alexander Rosenberg (1994) have argued against the claim that there are laws in biology. Beatty's main reason is that evolution is a process full of contingency, but he also takes the existence of relative significance controversies in biology and the popularity of pluralistic approaches to a variety of evolutionary questions to be evidence for biology's lawlessness. Rosenberg's main argument appeals to the idea that biological properties supervene on large numbers of physical properties, but he also develops case studies of biological controversies to defend his thesis that biology is best understood as an instrumental discipline. The present paper assesses their arguments.
When two causally independent processes each have a quantity that increases monotonically (either deterministically or in probabilistic expectation), the two quantities will be correlated, thus providing a counterexample to Reichenbach's principle of the common cause. Several philosophers have denied this, but I argue that their efforts to save the principle are unsuccessful. Still, one salvage attempt does suggest a weaker principle that avoids the initial counterexample. However, even this weakened principle is mistaken, as can be seen by exploring the concepts of homology and homoplasy used in evolutionary biology. I argue that the kernel of truth in the principle of the common cause is to be found by separating metaphysical and epistemological issues; as far as the epistemology is concerned, the Likelihood Principle is central.
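Sober's counterexample in this paper pairs Venetian sea levels with British bread prices, both rising over time. The sketch below generates two series independently, so by construction there is no common cause, yet their correlation across time is close to 1; the specific increment distributions are arbitrary choices for this illustration.

```python
# Two causally independent, monotonically increasing quantities are
# correlated over time: a counterexample to the principle of the common cause.
import numpy as np

rng = np.random.default_rng(3)
t = 200
sea_level = np.cumsum(rng.uniform(0, 1, t))     # independent process 1
bread_price = np.cumsum(rng.uniform(0, 2, t))   # independent process 2

r = np.corrcoef(sea_level, bread_price)[0, 1]
print(f"correlation across time: {r:.3f}")  # close to 1, with no common cause
```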
Historical sciences like evolutionary biology reconstruct past events by using the traces that the past has bequeathed to the present. Markov chain theory entails that the passage of time reduces the amount of information that the present provides about the past. Here we use a Moran process framework to show that some evolutionary processes destroy information faster than others. Our results connect with Darwin's principle that adaptive similarities provide scant evidence of common ancestry whereas neutral and deleterious similarities do better. We also describe how the branching in phylogenetic trees affects the information that the present supplies about the past.
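The Markov-chain point is illustrated below with a generic two-state chain rather than the paper's Moran-process framework: the mutual information between the state at time 0 and the state at time t decays toward zero, at a rate set by the transition matrix. The matrix P and the prior pi0 are assumptions of this sketch.

```python
# Sketch: the present provides ever less information about the past.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])    # assumed two-state transition matrix
pi0 = np.array([0.5, 0.5])    # assumed prior over the ancestral state

def mutual_info(t: int) -> float:
    """Mutual information (bits) between the chain's state at times 0 and t."""
    Pt = np.linalg.matrix_power(P, t)
    joint = pi0[:, None] * Pt                  # Pr(X0 = i, Xt = j)
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

for t in (1, 5, 20, 80):
    print(f"t = {t:3d}: I(X0; Xt) = {mutual_info(t):.4f} bits")
# Faster-mixing chains destroy information about the past more quickly,
# echoing the paper's comparison of evolutionary processes.
```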
This paper defends two theses about probabilistic reasoning. First, although modus ponens has a probabilistic analog, modus tollens does not – the fact that a hypothesis says that an observation is very improbable does not entail that the hypothesis is improbable. Second, the evidence relation is essentially comparative; with respect to hypotheses that confer probabilities on observation statements but do not entail them, an observation O may favor one hypothesis H1 over another hypothesis H2, but O cannot be said to confirm or disconfirm H1 without such relativization. These points have serious consequences for the Intelligent Design movement. Even if evolutionary theory entailed that various complex adaptations are very improbable, that would neither disconfirm the theory nor support the hypothesis of intelligent design. For either of these conclusions to follow, an additional question must be answered: With respect to the adaptive features that evolutionary theory allegedly says are very improbable, what is their probability of arising if they were produced by intelligent design? This crucial question has not been addressed by the ID movement.
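A standard lottery-style toy example makes the first thesis vivid; the numbers below are invented for illustration. H1 renders the observation very improbable, yet the observation supports H1 over the alternative, because evidential support is comparative.

```python
# No probabilistic modus tollens: Pr(O | H) being tiny does not make H improbable.
p_O_given_H1 = 1e-6   # H1 ("the lottery is fair"): this ticket rarely wins
p_O_given_H2 = 1e-9   # assumed rival H2 ("rigged against this ticket")
prior_H1 = prior_H2 = 0.5

post_H1 = (p_O_given_H1 * prior_H1) / (p_O_given_H1 * prior_H1 +
                                       p_O_given_H2 * prior_H2)
print(f"Pr(H1 | O) = {post_H1:.4f}")   # ~0.999 despite Pr(O | H1) being tiny
print(f"likelihood ratio H1:H2 = {p_O_given_H1 / p_O_given_H2:.0f}")  # O favors H1
```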
Quine’s publication in 1951 of “Two Dogmas of Empiricism” was a watershed event in 20th century philosophy. In that essay, Quine sought to demolish the concepts of analyticity and apriority; he also sketched a positive proposal of his own -- epistemological holism. There can be little doubt that philosophy changed as a result of Quine’s work. The question I want to address here is whether it should have. My goal is not to argue for a return to the halcyon days of the logical empiricists. Rather, I want to take stock. Now, almost fifty years after the publication of “Two Dogmas,” what view should we take of analyticity, the a priori, and epistemological holism, and of what Quine said about these topics?
That some propositions are testable, while others are not, was a fundamental idea in the philosophical program known as logical empiricism. That program is now widely thought to be defunct. Quine's (1953) "Two Dogmas of Empiricism" and Hempel's (1950) "Problems and Changes in the Empiricist Criterion of Meaning" are among its most notable epitaphs. Yet, as we know from Mark Twain's comment on an obituary that he once had the pleasure of reading about himself, the report of a death can be an exaggeration. The research program that began in Vienna and Berlin continues, even though many of the specific formulations that came out of those circles are flawed and need to be replaced.
Nancy Cartwright (1983, 1999) argues that (1) the fundamental laws of physics are true when and only when appropriate ceteris paribus modifiers are attached and that (2) ceteris paribus modifiers describe conditions that are almost never satisfied. She concludes that when the fundamental laws of physics are true, they don't apply in the real world, but only in highly idealized counterfactual situations. In this paper, we argue that (1) and (2) together with an assumption about contraposition entail the opposite conclusion — that the fundamental laws of physics do apply in the real world. Cartwright extracts from her thesis about the inapplicability of fundamental laws the conclusion that they cannot figure in covering-law explanations. We construct a different argument for a related conclusion — that forward-directed idealized dynamical laws cannot provide covering-law explanations that are causal. This argument is neutral on whether the assumption about contraposition is true. We then discuss Cartwright's simulacrum account of explanation, which seeks to describe how idealized laws can be explanatory.
Parsimony arguments are advanced in both science and philosophy. How are they related? This question is a test case for Naturalism_p, which is the thesis that philosophical theories and scientific theories should be evaluated by the same criteria. In this paper, I describe the justifications that attach to two types of parsimony argument in science. In the first, parsimony is a surrogate for likelihood. In the second, parsimony is relevant to estimating how accurately a model will predict new data when fitted to old. I then consider how these two justifications apply to parsimony arguments in philosophy concerning theism and atheism, the mind/body problem, ethical realism, the question of whether mental properties are causally efficacious, and nominalism versus Platonism about numbers.
In their book What Darwin Got Wrong, Jerry Fodor and Massimo Piattelli-Palmarini construct an a priori philosophical argument and an empirical biological argument. The biological argument aims to show that natural selection is much less important in the evolutionary process than many biologists maintain. The a priori argument begins with the claim that there cannot be selection for one but not the other of two traits that are perfectly correlated in a population; it concludes that there cannot be an evolutionary theory of adaptation. This article focuses mainly on the a priori argument.
The probability that the fitter of two alleles will increase in frequency in a population goes up as the product of N (the effective population size) and s (the selection coefficient) increases. Discovering the distribution of values for this product across different alleles in different populations is a very important biological task. However, biologists often use the product Ns to define a different concept; they say that drift "dominates" selection or that drift is "stronger than" selection when Ns is much smaller than some threshold quantity (e.g., ½) and that the reverse is true when Ns is much larger than that threshold. We argue that the question of whether drift dominates selection for a single allele in a single population makes no sense. Selection and drift are causes of evolution, but there is no fact of the matter as to which cause is stronger in the evolution of any given allele.
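The dependence on the product Ns that the abstract starts from can be made concrete with Kimura's diffusion approximation for the fixation probability of a new mutant, a textbook formula not stated in the abstract itself:

```python
# Sketch using Kimura's formula: Pr(fixation) = (1 - e^(-2s)) / (1 - e^(-4Ns))
# for a single new mutant, reducing to the neutral value 1/(2N) when s = 0.
import math

def fixation_prob(N: float, s: float) -> float:
    if s == 0:
        return 1 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

for Ns in (0.1, 0.5, 2.0, 10.0):
    N, s = 1000, Ns / 1000
    print(f"Ns = {Ns:5.1f}: Pr(fixation) = {fixation_prob(N, s):.5f} "
          f"(neutral baseline {1 / (2 * N):.5f})")

# Small Ns stays near the neutral baseline; large Ns departs from it sharply.
# The paper's claim is that this gradient still does not license saying drift
# "dominates" selection for any single allele.
```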
Ockham's razor, the principle of parsimony, states that simpler theories are better than theories that are more complex. It has a history dating back to Aristotle and it plays an important role in current physics, biology, and psychology. The razor also gets used outside of science - in everyday life and in philosophy. This book evaluates the principle and discusses its many applications. Fascinating examples from different domains provide a rich basis for contemplating the principle's promises and perils. It is obvious that simpler theories are beautiful and easy to understand; the hard problem is to figure out why the simplicity of a theory should be relevant to saying what the world is like. In this book, the ABCs of probability theory are succinctly developed and put to work to describe two 'parsimony paradigms' within which this problem can be solved.
An empirical procedure is suggested for testing a model that postulates variables that intervene between observed causes and observed effects against a model that includes no such postulate. The procedure is applied to two experiments in psychology. One involves a conditioning regimen that leads to response generalization; the other concerns the question of whether chimpanzees have a theory of mind.
When proponents of Intelligent Design theory deny that their theory is religious, the minimalistic theory they have in mind is the claim that the irreducibly complex adaptations found in nature were made by one or more intelligent designers. The denial that this theory is religious rests on the fact that it does not specify the identity of the designer—a supernatural God or a team of extra-terrestrials could have done the work. The present paper attempts to show that this reply underestimates the commitments of the mini-ID theory. The mini-ID theory, when supplemented with four independently plausible further assumptions, entails the existence of a supernatural intelligent designer. It is further argued that scientific theories, such as the Darwinian theory of evolution, are neutral on the question of whether supernatural designers exist.
Carl Hempel set the tone for subsequent philosophical work on scientific explanation by resolutely locating the problem he wanted to address outside of epistemology. "Hempel's problem," as I will call it, was not to say what counts as evidence that X is the explanation of Y. Rather, the question was what it means for X to explain Y. Hempel's theory of explanation and its successors don't tell you what to believe; instead, they tell you which of your beliefs (if any) can be said to explain a given target proposition.