The Nature of Selection is a straightforward, self-contained introduction to philosophical and biological problems in evolutionary theory. It presents a powerful analysis of the evolutionary concepts of natural selection, fitness, and adaptation and clarifies controversial issues concerning altruism, group selection, and the idea that organisms are survival machines built for the good of the genes that inhabit them. "Sober's is the answering philosophical voice, the voice of a first-rate philosopher and a knowledgeable student of contemporary evolutionary theory. His book merits broad attention among both communities. It should also inspire others to continue the conversation." - Philip Kitcher, Nature. "Elliott Sober has made extraordinarily important contributions to our understanding of biological problems in evolutionary biology and causality. The Nature of Selection is a major contribution to understanding epistemological problems in evolutionary theory. I predict that it will have a long-lasting place in the literature." - Richard C. Lewontin
How should the concept of evidence be understood? And how does the concept of evidence apply to the controversy about creationism as well as to work in evolutionary biology about natural selection and common ancestry? In this rich and wide-ranging book, Elliott Sober investigates general questions about probability and evidence and shows how the answers he develops to those questions apply to the specifics of evolutionary biology. Drawing on a set of fascinating examples, he analyzes whether claims about intelligent design are untestable; whether they are discredited by the fact that many adaptations are imperfect; how evidence bears on whether present species trace back to common ancestors; how hypotheses about natural selection can be tested; and many other issues. His book will interest all readers who want to understand philosophical questions about evidence and evolution, as they arise both in Darwin's work and in contemporary biological research.
Perhaps because of its implications for our understanding of human nature, recent philosophy of biology has seen what might be the most dramatic work in the philosophies of the "special" sciences. This drama has centered on evolutionary theory, and in the second edition of this textbook, Elliott Sober introduces the reader to the most important issues in these developments. With a rare combination of technical sophistication and clarity of expression, Sober engages both the higher level of theory and the direct implications for such controversial issues as creationism, teleology, nature versus nurture, and sociobiology. Above all, the reader will gain from this book a firm grasp of the structure of evolutionary theory, the evidence for it, and the scope of its explanatory significance.
Ockham's razor, the principle of parsimony, states that simpler theories are better than theories that are more complex. It has a history dating back to Aristotle and it plays an important role in current physics, biology, and psychology. The razor also gets used outside of science - in everyday life and in philosophy. This book evaluates the principle and discusses its many applications. Fascinating examples from different domains provide a rich basis for contemplating the principle's promises and perils. It is obvious that simpler theories are beautiful and easy to understand; the hard problem is to figure out why the simplicity of a theory should be relevant to saying what the world is like. In this book, the ABCs of probability theory are succinctly developed and put to work to describe two 'parsimony paradigms' within which this problem can be solved.
Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike, which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light on the theoretical virtues of parsimoniousness, unification, and non-ad-hocness, on the dispute about Bayesianism, and on empiricism and scientific realism. * Both of us gratefully acknowledge support from the Graduate School at the University of Wisconsin-Madison, and NSF grant DIR-8822278 (M.F.) and NSF grant SBE-9212294 (E.S.). Special thanks go to A. W. F. Edwards, William Harper, Martin Leckey, Brian Skyrms, and especially Peter Turney for helpful comments on an earlier draft.
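Akaike's result can be made concrete with a toy curve-fitting computation. Everything below is an illustrative sketch, not taken from the paper: the data, the candidate polynomial degrees, and the Gaussian-error form of the criterion, AIC = n·ln(RSS/n) + 2k, where k counts the fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # true curve is a line

def aic(degree):
    """Score a degree-d polynomial fit by AIC under Gaussian errors:
    n * ln(RSS / n) + 2k, with k = degree + 1 fitted coefficients.
    Returns (aic_score, residual_sum_of_squares)."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    n, k = x.size, degree + 1
    return n * np.log(rss / n) + 2 * k, rss

scores = {d: aic(d) for d in range(1, 6)}
```

Adding polynomial coefficients can only shrink RSS, but AIC charges 2 per parameter; this is how the criterion estimates predictive accuracy on new data rather than mere fit to old data.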
Reductionism is often understood to include two theses: (1) every singular occurrence that the special sciences can explain also can be explained by physics; (2) every law in a higher-level science can be explained by physics. These claims are widely supposed to have been refuted by the multiple realizability argument, formulated by Putnam (1967, 1975) and Fodor (1968, 1975). The present paper criticizes the argument and identifies a reductionistic thesis that follows from one of the argument's premises.
Elliott Sober is one of the leading philosophers of science and is a former winner of the Lakatos Prize, the major award in the field. This new collection of essays will appeal to a readership that extends well beyond the frontiers of the philosophy of science. Sober shows how ideas in evolutionary biology bear in significant ways on traditional problems in philosophy of mind and language, epistemology, and metaphysics. Amongst the topics addressed are psychological egoism, solipsism, and the interpretation of belief and utterance, empiricism, Ockham's razor, causality, essentialism, and scientific laws. The collection will prove invaluable to a wide range of philosophers, primarily those working in the philosophy of science, the philosophy of mind, and epistemology.
“Absence of evidence isn’t evidence of absence” is a slogan that is popular among scientists and nonscientists alike. This article assesses its truth by using a probabilistic tool, the Law of Likelihood. Qualitative questions (“Is E evidence about H?”) and quantitative questions (“How much evidence does E provide about H?”) are both considered. The article discusses the example of fossil intermediates. If finding a fossil that is phenotypically intermediate between two extant species provides evidence that those species have a common ancestor, does failing to find such a fossil constitute evidence that there was no common ancestor? Or should the failure merely be chalked up to the imperfection of the fossil record? The transitivity of the evidence relation in simple causal chains provides a broader context, which leads to discussion of the fine-tuning argument, the anthropic principle, and observation selection effects.
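The Law of Likelihood framework can be illustrated with a toy detection model for the fossil example. The probabilities below are made-up assumptions, not estimates from the fossil record; the point is only the comparative structure of the argument.

```python
# Hypothetical detection probabilities (illustrative assumptions):
p_find_given_CA = 0.30  # chance of finding an intermediate fossil if the
                        # two species share a common ancestor (CA)
p_find_given_SA = 0.05  # chance a pseudo-intermediate turns up if they
                        # have separate ancestry (SA)

# Law of Likelihood: E favors H1 over H2 iff Pr(E | H1) > Pr(E | H2);
# the likelihood ratio measures how much.
lr_found = p_find_given_CA / p_find_given_SA               # finding a fossil
lr_absent = (1 - p_find_given_SA) / (1 - p_find_given_CA)  # finding none
```

On these numbers, finding the fossil favors common ancestry by a factor of 6, while failing to find one favors separate ancestry only by a factor of about 1.36: absence of evidence is evidence of absence, just typically much weaker evidence.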
The probability that the fitter of two alleles will increase in frequency in a population goes up as the product of N (the effective population size) and s (the selection coefficient) increases. Discovering the distribution of values for this product across different alleles in different populations is a very important biological task. However, biologists often use the product Ns to define a different concept; they say that drift “dominates” selection or that drift is “stronger than” selection when Ns is much smaller than some threshold quantity (e.g., ½) and that the reverse is true when Ns is much larger than that threshold. We argue that the question of whether drift dominates selection for a single allele in a single population makes no sense. Selection and drift are causes of evolution, but there is no fact of the matter as to which cause is stronger in the evolution of any given allele.
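As background to the role the product Ns plays, the standard diffusion approximation (due to Kimura; it is presupposed rather than derived in the article) gives the fixation probability of a single new mutant allele and shows how Ns governs the fitter allele's advantage over the neutral baseline of 1/(2N). The specific parameter values below are arbitrary illustrations.

```python
import math

def p_fix(N, s):
    """Kimura's diffusion approximation for the fixation probability of a
    single new mutant (initial frequency 1/(2N)) in a diploid population
    of effective size N with selection coefficient s."""
    if s == 0:
        return 1.0 / (2 * N)  # neutral case: fixation prob = initial frequency
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

# Advantage over the neutral baseline 1/(2N), for fixed s and growing N:
advantage = {N: p_fix(N, 0.001) * (2 * N) for N in (50, 500, 5000)}
```

With s fixed at 0.001, the mutant's fixation probability is barely above neutral when Ns = 0.05, but approaches the classical 2s when Ns is large, which is why the product Ns, rather than s alone, is the biologically important quantity.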
Ernst Mayr has argued that Darwinian theory discredited essentialist modes of thought and replaced them with what he has called "population thinking". In this paper, I characterize essentialism as embodying a certain conception of how variation in nature is to be explained, and show how this conception was undermined by evolutionary theory. The Darwinian doctrine of evolutionary gradualism makes it impossible to say exactly where one species ends and another begins; such line-drawing problems are often taken to be the decisive reason for thinking that essentialism is untenable. However, according to the view of essentialism I suggest, this familiar objection is not fatal to essentialism. It is rather the essentialist's use of what I call the natural state model for explaining variation which clashes with evolutionary theory. This model implemented the essentialist's requirement that properties of populations be defined in terms of properties of member organisms. Requiring such constituent definitions is reductionistic in spirit; additionally, evolutionary theory shows that such definitions are not available, and, moreover, that they are not needed to legitimize population-level concepts. Population thinking involves the thesis that population concepts may be legitimized by showing their connections with each other, even when they are not reducible to concepts applying at lower levels of organization. In the paper, I develop these points by describing Aristotle's ideas on the origins of biological variation; they are a classic formulation of the natural state model. I also describe how the development of statistical ideas in the 19th century involved an abandoning of the natural state model.
The concept of fitness began its career in biology long before evolutionary theory was mathematized. Fitness was used to describe an organism’s vigor, or the degree to which organisms “fit” into their environments. An organism’s success in avoiding predators and in building a nest obviously contributes to its fitness and to the fitness of its offspring, but the peacock’s gaudy tail seemed to be in an entirely different line of work. Fitness, as a term in ordinary language (as in “physical fitness”) and in its original biological meaning, applied to the survival of an organism and its offspring, not to sheer reproductive output (Paul ////; Cronin 1991). Darwin’s separation of natural from sexual selection may sound odd from a modern perspective, but it made sense from this earlier point of view.
Evolutionary theory is awash with probabilities. For example, natural selection is said to occur when there is variation in fitness, and fitness is standardly decomposed into two components, viability and fertility, each of which is understood probabilistically. With respect to viability, a fertilized egg is said to have a certain chance of surviving to reproductive age; with respect to fertility, an adult is said to have an expected number of offspring. There is more to evolutionary theory than the theory of natural selection, and here too one finds probabilistic concepts aplenty. When there is no selection, the theory of neutral evolution says that a gene’s chance of eventually reaching fixation is 1/(2N), where N is the number of organisms in the generation of the diploid population to which the gene belongs. The evolutionary consequences of mutation are likewise conceptualized in terms of the probability per unit time a gene has of changing from one state to another. The examples just mentioned are all “forward-directed” probabilities; they describe the probability of later events, conditional on earlier events. However, evolutionary theory also uses “backwards probabilities” that describe the probability of a cause conditional on its effects; for example, coalescence theory allows one to calculate the expected number of generations in the past that the genes in the present generation find their most recent common ancestor.
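The neutral fixation probability of 1/(2N) mentioned above can be checked with a small Wright-Fisher simulation. The model is standard; the population size, trial count, and random seed below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
two_N = 20        # gene copies in a diploid population of size N = 10
trials = 20_000
counts = np.ones(trials, dtype=np.int64)  # each trial starts with 1 mutant copy

# Neutral Wright-Fisher dynamics: each generation, the 2N copies of the
# next generation are drawn binomially from the current allele frequency.
for _ in range(500):
    active = (counts > 0) & (counts < two_N)
    if not active.any():
        break  # every trial has hit loss (0) or fixation (2N)
    counts[active] = rng.binomial(two_N, counts[active] / two_N)

fix_rate = float(np.mean(counts == two_N))  # should be near 1/(2N) = 0.05
```

With one initial copy among 2N = 20, the fraction of trials ending in fixation lands near the theoretical 1/(2N) = 0.05, since under neutrality a gene's fixation probability equals its initial frequency.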
When scientists use an observation to formulate a theory, it is no surprise that the resulting theory accurately captures that observation. However, when the theory makes a novel prediction—when it predicts an observation that was not used in its formulation—this seems to provide more substantial confirmation of the theory. This paper presents a new approach to the vexed problem of understanding the epistemic difference between prediction and accommodation. In fact, there are several problems that need to be disentangled; in all of them, the key is the concept of overfitting. We float the hypothesis that accommodation is a defective methodology only when the methods used to accommodate the data fail to guard against the risk of overfitting. We connect our analysis with the proposals that other philosophers have made. We also discuss its bearing on the conflict between instrumentalism and scientific realism. Contents: Introduction; Predictivisms, a taxonomy; Observations; Formulating the problem; What might Annie be doing wrong?; Solutions; Observations explained; Mayo on severe tests; The miracle argument and scientific realism; Concluding comments.
Several evolutionary biologists have used a parsimony argument to argue that the single gene is the unit of selection. Since all evolution by natural selection can be represented in terms of selection coefficients attaching to single genes, it is, they say, "more parsimonious" to think that all selection is selection for or against single genes. We examine the limitations of this genic point of view, and then relate our criticisms to a broader view of the role of causal concepts and the dangers of reification in science.
To evaluate Hume's thesis that causal claims are always empirical, I consider three kinds of causal statement: 'e1 caused e2', 'e1 promoted e2', and 'e1 would promote e2'. Restricting my attention to cases in which 'e1 occurred' and 'e2 occurred' are both empirical, I argue that Hume was right about the first two, but wrong about the third. Standard causal models of natural selection that have this third form are a priori mathematical truths. Some are obvious, others less so. Empirical work on natural selection takes the form of defending causal claims of the first two types. I provide biological examples that illustrate differences among these three kinds of causal claim.
The propensity interpretation of fitness draws on the propensity interpretation of probability, but advocates of the former have not attended sufficiently to problems with the latter. The causal power of C to bring about E is not well-represented by the conditional probability Pr(E | C). Since the viability fitness of trait T is the conditional probability Pr(the organism survives | the organism has T), the viability fitness of the trait does not represent the degree to which having the trait causally promotes surviving. The same point holds for fertility fitness. This failure of trait fitness to capture causal role can also be seen in the fact that coextensive traits must have the same fitness values even if one of them promotes survival and the other is neutral or deleterious. Although the fitness of a trait does not represent the trait’s causal power to promote survival and reproduction, variation in fitness in a population causally promotes change in trait frequencies; in this sense, fitness variation is a population-level propensity.
I discuss two subjects in Samir Okasha’s excellent book, Evolution and the Levels of Selection. In consonance with Okasha’s critique of the conventionalist view of the units of selection problem, I argue that conventionalists have not attended to what realists mean by group, individual, and genic selection. In connection with Okasha’s discussion of the Price equation and contextual analysis, I discuss whether the existence of these two quantitative frameworks is a challenge to realism.
After clarifying the probabilistic conception of causality suggested by Good (1961-2), Suppes (1970), Cartwright (1979), and Skyrms (1980), we prove a sufficient condition for transitivity of causal chains. The bearing of these considerations on the units of selection problem in evolutionary theory and on the Newcomb paradox in decision theory is then discussed.
That some propositions are testable, while others are not, was a fundamental idea in the philosophical program known as logical empiricism. That program is now widely thought to be defunct. Quine’s (1953) “Two Dogmas of Empiricism” and Hempel’s (1950) “Problems and Changes in the Empiricist Criterion of Meaning” are among its most notable epitaphs. Yet, as we know from Mark Twain’s comment on an obituary that he once had the pleasure of reading about himself, the report of a death can be an exaggeration. The research program that began in Vienna and Berlin continues, even though many of the specific formulations that came out of those circles are flawed and need to be replaced.
Historical sciences like evolutionary biology reconstruct past events by using the traces that the past has bequeathed to the present. Markov chain theory entails that the passage of time reduces the amount of information that the present provides about the past. Here we use a Moran process framework to show that some evolutionary processes destroy information faster than others. Our results connect with Darwin’s principle that adaptive similarities provide scant evidence of common ancestry whereas neutral and deleterious similarities do better. We also describe how the branching in phylogenetic trees affects the information that the present supplies about the past.
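The Markov-chain point, that the passage of time erodes the information the present carries about the past, can be illustrated with a two-state chain. The transition matrices below are arbitrary stand-ins for weakly versus strongly mixing evolutionary processes, not the Moran model of the paper; mutual information between the state at time 0 and the state at time t plays the role of "information the present provides about the past."

```python
import numpy as np
from numpy.linalg import matrix_power

def mutual_info(M, t, pi):
    """Mutual information (nats) between the chain's state at time 0
    (drawn from the stationary distribution pi) and its state at time t,
    under transition matrix M."""
    Mt = matrix_power(M, t)
    joint = pi[:, None] * Mt            # P(X0 = a, Xt = b)
    indep = pi[:, None] * pi[None, :]   # product of marginals (Xt ~ pi too)
    return float(np.sum(joint * np.log(joint / indep)))

pi = np.array([0.5, 0.5])
slow = np.array([[0.95, 0.05], [0.05, 0.95]])  # weak mixing: info decays slowly
fast = np.array([[0.70, 0.30], [0.30, 0.70]])  # strong mixing: info decays fast
```

For either chain the mutual information falls monotonically with t (the data-processing inequality at work), and the fast-mixing chain destroys information about the past much sooner than the slow one.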
When two causally independent processes each have a quantity that increases monotonically (either deterministically or in probabilistic expectation), the two quantities will be correlated, thus providing a counterexample to Reichenbach's principle of the common cause. Several philosophers have denied this, but I argue that their efforts to save the principle are unsuccessful. Still, one salvage attempt does suggest a weaker principle that avoids the initial counterexample. However, even this weakened principle is mistaken, as can be seen by exploring the concepts of homology and homoplasy used in evolutionary biology. I argue that the kernel of truth in the principle of the common cause is to be found by separating metaphysical and epistemological issues; as far as the epistemology is concerned, the Likelihood Principle is central.
As every philosopher knows, “the design argument” concludes that God exists from premisses that cite the adaptive complexity of organisms or the lawfulness and orderliness of the whole universe. Since 1859, it has formed the intellectual heart of creationist opposition to the Darwinian hypothesis that organisms evolved their adaptive features by the mindless process of natural selection. Although the design argument developed as a defense of theism, the logic of the argument in fact encompasses a larger set of issues. William Paley saw clearly that we sometimes have an excellent reason to postulate the existence of an intelligent designer. If we find a watch on the heath, we reasonably infer that it was produced by an intelligent watchmaker. This design argument makes perfect sense. Why is it any different to claim that the eye was produced by an intelligent designer? Both critics and defenders of the design argument need to understand what the ground rules are for inferring that an intelligent designer is the unseen cause of an observed effect.
John Beatty (1995) and Alexander Rosenberg (1994) have argued against the claim that there are laws in biology. Beatty's main reason is that evolution is a process full of contingency, but he also takes the existence of relative significance controversies in biology and the popularity of pluralistic approaches to a variety of evolutionary questions to be evidence for biology's lawlessness. Rosenberg's main argument appeals to the idea that biological properties supervene on large numbers of physical properties, but he also develops case studies of biological controversies to defend his thesis that biology is best understood as an instrumental discipline. The present paper assesses their arguments.
In 'Two Dogmas of Empiricism', Quine attacks the analytic/synthetic distinction and defends a doctrine that I call epistemological holism. Now, almost fifty years after the article's appearance, what are we to make of these ideas? I suggest that the philosophical naturalism that Quine did so much to promote should lead us to reject Quine's brief against the analytic/synthetic distinction; I also argue that Quine misunderstood Carnap's views on analyticity. As for epistemological holism, I claim that this thesis does not follow from the logical point that Duhem and Quine made about the role of auxiliary assumptions in hypothesis testing, and that the thesis should be rejected. [Peter Hylton] Section I of this essay discusses Quine's views about reference, contrasting them with those of Russell. For the latter, our language and thought succeed in being about the world because of our acquaintance with objects; the relation of reference (roughly, the relation between a name and its bearer) is thus fundamental. For Quine, by contrast, the fundamental relation by which our language comes to be about the world, and to have empirical content, is that between a sentence and stimulations of our sensory surfaces; reference, while important, is a derivative notion. Section II shows how this view of reference as derivative makes possible the notorious Quinean doctrine of ontological relativity. Section III raises the issue of realism. It argues that somewhat different notions of realism are in play for Quine and for Russell: for Russell, objects, and our knowledge of objects, play the fundamental role, while for Quine objectivity and truth are fundamental, with ontology being derivative.
It is a challenge to explain how evolutionary altruism can evolve by the process of natural selection, since altruists in a group will be less fit than the selfish individuals in the same group who receive benefits but do not make donations of their own. Darwin proposed a theory of group selection to solve this puzzle. Very simply, even though altruists are less fit than selfish individuals within any single group, groups of altruists are more fit than groups of selfish individuals. If a population is subdivided into many groups that vary in their altruistic tendencies, altruism will be favored at the level of selection among groups even as it is being disfavored at the level of selection among individuals within groups. Darwin’s scenario became the basis for a theoretical framework called multilevel selection theory.
Parsimony arguments are advanced in both science and philosophy. How are they related? This question is a test case for Naturalism-P, which is the thesis that philosophical theories and scientific theories should be evaluated by the same criteria. In this paper, I describe the justifications that attach to two types of parsimony argument in science. In the first, parsimony is a surrogate for likelihood. In the second, parsimony is relevant to estimating how accurately a model will predict new data when fitted to old. I then consider how these two justifications apply to parsimony arguments in philosophy concerning theism and atheism, the mind/body problem, ethical realism, the question of whether mental properties are causally efficacious, and nominalism versus Platonism about numbers.
A simple and general criterion is derived for the evolution of altruism when individuals interact in pairs. It is argued that the treatments of this problem in kin selection theory and in game theory are special cases of this general criterion.
This paper defends two theses about probabilistic reasoning. First, although modus ponens has a probabilistic analog, modus tollens does not – the fact that a hypothesis says that an observation is very improbable does not entail that the hypothesis is improbable. Second, the evidence relation is essentially comparative; with respect to hypotheses that confer probabilities on observation statements but do not entail them, an observation O may favor one hypothesis H1 over another hypothesis H2, but O cannot be said to confirm or disconfirm H1 without such relativization. These points have serious consequences for the Intelligent Design movement. Even if evolutionary theory entailed that various complex adaptations are very improbable, that would neither disconfirm the theory nor support the hypothesis of intelligent design. For either of these conclusions to follow, an additional question must be answered: With respect to the adaptive features that evolutionary theory allegedly says are very improbable, what is their probability of arising if they were produced by intelligent design? This crucial question has not been addressed by the ID movement. (shrink)
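The failure of a probabilistic modus tollens can be shown with Bayes' theorem on made-up numbers. H1, H2, the priors, and the likelihoods below are illustrative assumptions; the structural point is that a tiny Pr(O | H1) disconfirms H1 only relative to an alternative that makes O more probable.

```python
# Illustrative setup: O is some observation that hypothesis H1 says is
# very improbable; H2 is a rival hypothesis. All numbers are assumptions.
p_O_given_H1 = 1e-6   # H1 renders O very improbable...
p_O_given_H2 = 1e-9   # ...but H2 renders O even more improbable
prior_H1 = prior_H2 = 0.5

# A probabilistic "modus tollens" would reject H1 because Pr(O | H1) is tiny.
# Bayes' theorem shows the comparison with H2 is what matters:
posterior_H1 = (p_O_given_H1 * prior_H1) / (
    p_O_given_H1 * prior_H1 + p_O_given_H2 * prior_H2
)
```

Despite assigning O a probability of only one in a million, H1 ends up with a posterior probability near 1, because O favors H1 over H2 by a likelihood ratio of 1000.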
The thesis that natural selection explains the frequencies of traits in populations, but not why individual organisms have the traits they do, is here defended and elaborated. A general concept of ‘distributive explanation’ is discussed.
The chapter discusses the principle of conservatism and traces how the general principle is related to the specific one. This tracing suggests that the principle of conservatism needs to be refined. Connecting the principle in cognitive science to more general questions about scientific inference also allows us to revisit the question of realism versus instrumentalism. The framework deployed in model selection theory is very general; it is not specific to the subject matter of science. The chapter outlines some non-Bayesian ideas that have been developed in model selection theory. The principle of conservatism, like C. Lloyd Morgan's canon, describes a preference concerning kinds of parameters. It says that a model that postulates only lower-level intentionality is preferable to one that postulates higher-level intentionality if both fit the data equally well. The model selection approach to parsimony helps explain why unification is a theoretical virtue.
Nancy Cartwright (1983, 1999) argues that (1) the fundamental laws of physics are true when and only when appropriate ceteris paribus modifiers are attached and that (2) ceteris paribus modifiers describe conditions that are almost never satisfied. She concludes that when the fundamental laws of physics are true, they don't apply in the real world, but only in highly idealized counterfactual situations. In this paper, we argue that (1) and (2) together with an assumption about contraposition entail the opposite conclusion — that the fundamental laws of physics do apply in the real world. Cartwright extracts from her thesis about the inapplicability of fundamental laws the conclusion that they cannot figure in covering-law explanations. We construct a different argument for a related conclusion — that forward-directed idealized dynamical laws cannot provide covering-law explanations that are causal. This argument is neutral on whether the assumption about contraposition is true. We then discuss Cartwright's simulacrum account of explanation, which seeks to describe how idealized laws can be explanatory.
We argue elsewhere that explanatoriness is evidentially irrelevant. Let H be some hypothesis, O some observation, and E the proposition that H would explain O if H and O were true. Then O screens-off E from H: Pr(H | O & E) = Pr(H | O). This thesis, hereafter “SOT”, is defended by appeal to a representative case. The case concerns smoking and lung cancer. McCain and Poston grant that SOT holds in cases, like our case concerning smoking and lung cancer, that involve frequency data. However, McCain and Poston contend that there is a wider sense of evidential relevance—wider than the sense at play in SOT—on which explanatoriness is evidentially relevant even in cases involving frequency data. This is their main point, but they also contend that SOT does not hold in certain cases not involving frequency data. We reply to each of these points and conclude with some general remarks on screening-off as a test of evidential relevance.
Quine’s publication in 1951 of “Two Dogmas of Empiricism” was a watershed event in 20th century philosophy. In that essay, Quine sought to demolish the concepts of analyticity and the a priori; he also sketched a positive proposal of his own -- epistemological holism. There can be little doubt that philosophy changed as a result of Quine’s work. The question I want to address here is whether it should have. My goal is not to argue for a return to the halcyon days of the logical empiricists. Rather, I want to take stock. Now, almost fifty years after the publication of “Two Dogmas,” what view should we take of analyticity, the a priori, and epistemological holism, and of what Quine said about these topics?