The Nature of Selection is a straightforward, self-contained introduction to philosophical and biological problems in evolutionary theory. It presents a powerful analysis of the evolutionary concepts of natural selection, fitness, and adaptation and clarifies controversial issues concerning altruism, group selection, and the idea that organisms are survival machines built for the good of the genes that inhabit them. "Sober's is the answering philosophical voice, the voice of a first-rate philosopher and a knowledgeable student of contemporary evolutionary theory. His book merits broad attention among both communities. It should also inspire others to continue the conversation." -Philip Kitcher, Nature "Elliott Sober has made extraordinarily important contributions to our understanding of biological problems in evolutionary biology and causality. The Nature of Selection is a major contribution to understanding epistemological problems in evolutionary theory. I predict that it will have a long-lasting place in the literature." -Richard C. Lewontin.
The authors demonstrate that unselfish behavior is in fact an important feature of both biological and human nature. Their book provides a panoramic view of altruism throughout the animal kingdom--from self-sacrificing parasites to the human capacity for selflessness--even as it explains the evolutionary sense of such behavior.
This Element analyzes the various forms that design arguments for the existence of God can take, but the main focus is on two such arguments. The first concerns the complex adaptive features that organisms have. Creationists who advance this argument contend that evolution by natural selection cannot be the right explanation. The second design argument - the argument from fine-tuning - begins with the fact that life could not exist in our universe if the constants found in the laws of physics had values that differed more than a little from their actual values. Since probability is the main analytical tool used, the book provides a primer on probability theory.
Perhaps because of its implications for our understanding of human nature, recent philosophy of biology has seen what might be the most dramatic work in the philosophies of the "special" sciences. This drama has centered on evolutionary theory, and in the second edition of this textbook, Elliott Sober introduces the reader to the most important issues of these developments. With a rare combination of technical sophistication and clarity of expression, Sober engages both the higher level of theory and the direct implications for such controversial issues as creationism, teleology, nature versus nurture, and sociobiology. Above all, the reader will gain from this book a firm grasp of the structure of evolutionary theory, the evidence for it, and the scope of its explanatory significance.
How should the concept of evidence be understood? And how does the concept of evidence apply to the controversy about creationism as well as to work in evolutionary biology about natural selection and common ancestry? In this rich and wide-ranging book, Elliott Sober investigates general questions about probability and evidence and shows how the answers he develops to those questions apply to the specifics of evolutionary biology. Drawing on a set of fascinating examples, he analyzes whether claims about intelligent design are untestable; whether they are discredited by the fact that many adaptations are imperfect; how evidence bears on whether present species trace back to common ancestors; how hypotheses about natural selection can be tested, and many other issues. His book will interest all readers who want to understand philosophical questions about evidence and evolution, as they arise both in Darwin's work and in contemporary biological research.
Ockham's razor, the principle of parsimony, states that simpler theories are better than theories that are more complex. It has a history dating back to Aristotle and it plays an important role in current physics, biology, and psychology. The razor also gets used outside of science - in everyday life and in philosophy. This book evaluates the principle and discusses its many applications. Fascinating examples from different domains provide a rich basis for contemplating the principle's promises and perils. It is obvious that simpler theories are beautiful and easy to understand; the hard problem is to figure out why the simplicity of a theory should be relevant to saying what the world is like. In this book, the ABCs of probability theory are succinctly developed and put to work to describe two 'parsimony paradigms' within which this problem can be solved.
Reconstructing the Past seeks to clarify and help resolve the vexing methodological issues that arise when biologists try to answer such questions as whether human beings are more closely related to chimps than they are to gorillas. It explores the case for considering the philosophical idea of simplicity/parsimony as a useful principle for evaluating taxonomic theories of evolutionary relationships. For the past two decades, evolutionists have been vigorously debating the appropriate methods that should be used in systematics, the field that aims at reconstructing phylogenetic relationships among species. This debate over phylogenetic inference, Elliott Sober observes, raises broader questions of hypothesis testing and theory evaluation that run head on into long-standing issues concerning simplicity/parsimony in the philosophy of science. Sober treats the problem of phylogenetic inference as a detailed case study in which the philosophical idea of simplicity/parsimony can be tested as a principle of theory evaluation. Bringing together philosophy and biology, as well as statistics, Sober builds a general framework for understanding the circumstances in which parsimony makes sense as a tool of phylogenetic inference. Along the way he provides a detailed critique of parsimony in the biological literature, exploring the strengths and limitations of both statistical and nonstatistical cladistic arguments.
Ernst Mayr has argued that Darwinian theory discredited essentialist modes of thought and replaced them with what he has called "population thinking". In this paper, I characterize essentialism as embodying a certain conception of how variation in nature is to be explained, and show how this conception was undermined by evolutionary theory. The Darwinian doctrine of evolutionary gradualism makes it impossible to say exactly where one species ends and another begins; such line-drawing problems are often taken to be the decisive reason for thinking that essentialism is untenable. However, according to the view of essentialism I suggest, this familiar objection is not fatal to essentialism. It is rather the essentialist's use of what I call the natural state model for explaining variation which clashes with evolutionary theory. This model implemented the essentialist's requirement that properties of populations be defined in terms of properties of member organisms. Requiring such constituent definitions is reductionistic in spirit; additionally, evolutionary theory shows that such definitions are not available, and, moreover, that they are not needed to legitimize population-level concepts. Population thinking involves the thesis that population concepts may be legitimized by showing their connections with each other, even when they are not reducible to concepts applying at lower levels of organization. In the paper, I develop these points by describing Aristotle's ideas on the origins of biological variation; they are a classic formulation of the natural state model. I also describe how the development of statistical ideas in the 19th century involved an abandoning of the natural state model.
Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike [1973], which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light on the theoretical virtues of parsimoniousness, unification, and non ad hocness, on the dispute about Bayesianism, and on empiricism and scientific realism. * Both of us gratefully acknowledge support from the Graduate School at the University of Wisconsin-Madison, and NSF grant DIR-8822278 (M.F.) and NSF grant SBE-9212294 (E.S.). Special thanks go to A. W. F. Edwards, William Harper, Martin Leckey, Brian Skyrms, and especially Peter Turney for helpful comments on an earlier draft.
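Akaike's result can be made concrete with a small sketch (our own illustration, not from the paper). For a least-squares fit with Gaussian errors, AIC reduces to n·ln(RSS/n) + 2k, where RSS is the residual sum of squares and k counts the fitted parameters; the 2k penalty is what lets the data tell against needlessly complex curves.

```python
import math
import numpy as np

def aic_least_squares(rss, n, k):
    """AIC for a Gaussian least-squares fit: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

# Fit a straight line and a quintic to noisy linear data, then compare
# AIC scores; the penalty term handicaps the 6-parameter quintic.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + rng.normal(0.0, 0.3, x.size)

scores = {}
for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    scores[degree] = aic_least_squares(rss, x.size, degree + 1)
print(scores)  # lower AIC = higher estimated predictive accuracy
```

The quintic always achieves a lower RSS on the old data; AIC's question is whether that improvement is large enough to pay the complexity penalty.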
Reductionism is often understood to include two theses: (1) every singular occurrence that the special sciences can explain also can be explained by physics; (2) every law in a higher-level science can be explained by physics. These claims are widely supposed to have been refuted by the multiple realizability argument, formulated by Putnam (1967, 1975) and Fodor (1968, 1975). The present paper criticizes the argument and identifies a reductionistic thesis that follows from one of the argument's premises.
Changes and additions in the new edition reflect the ways in which the subject has broadened and deepened on several fronts; more than half of the chapters are ...
Elliott Sober is one of the leading philosophers of science and is a former winner of the Lakatos Prize, the major award in the field. This new collection of essays will appeal to a readership that extends well beyond the frontiers of the philosophy of science. Sober shows how ideas in evolutionary biology bear in significant ways on traditional problems in philosophy of mind and language, epistemology, and metaphysics. Amongst the topics addressed are psychological egoism, solipsism, and the interpretation of belief and utterance, empiricism, Ockham's razor, causality, essentialism, and scientific laws. The collection will prove invaluable to a wide range of philosophers, primarily those working in the philosophy of science, the philosophy of mind, and epistemology.
When philosophers defend epiphenomenalist doctrines, they often do so by way of a priori arguments. Here we suggest an empirical approach that is modeled on August Weismann’s experimental arguments against the inheritance of acquired characters. This conception of how epiphenomenalism ought to be developed helps clarify some mistakes in two recent epiphenomenalist positions – Jaegwon Kim’s (1993) arguments against mental causation, and the arguments developed by Walsh (2000), Walsh, Lewens, and Ariew (2002), and Matthen and Ariew (2002) that natural selection and drift are not causes of evolution. A manipulationist account of causation (Woodward 2003) leads naturally to an account of how macro- and micro-causation are related and to an understanding of how epiphenomenalism at different levels of organization should be understood.
In both biology and the human sciences, social groups are sometimes treated as adaptive units whose organization cannot be reduced to individual interactions. This group-level view is opposed by a more individualistic one that treats social organization as a byproduct of self-interest. According to biologists, group-level adaptations can evolve only by a process of natural selection at the group level. Most biologists rejected group selection as an important evolutionary force during the 1960s and 1970s, but a positive literature began to grow during the 1970s and is rapidly expanding today. We review this recent literature and its implications for human evolutionary biology. We show that the rejection of group selection was based on a misplaced emphasis on genes as “replicators” which is in fact irrelevant to the question of whether groups can be like individuals in their functional organization. The fundamental question is whether social groups and other higher-level entities can be “vehicles” of selection. When this elementary fact is recognized, group selection emerges as an important force in nature and what seem to be competing theories, such as kin selection and reciprocity, reappear as special cases of group selection. The result is a unified theory of natural selection that operates on a nested hierarchy of units. The vehicle-based theory makes it clear that group selection is an important force to consider in human evolution. Humans can facultatively span the full range from self-interested individuals to “organs” of group-level “organisms.” Human behavior not only reflects the balance between levels of selection but it can also alter the balance through the construction of social structures that have the effect of reducing fitness differences within groups, concentrating natural selection at the group level.
These social structures and the cognitive abilities that produce them allow group selection to be important even among large groups of unrelated individuals.
The evolutionary problem of the units of selection has elicited a good deal of conceptual work from philosophers. We review this work to determine where the issues now stand.
Several evolutionary biologists have used a parsimony argument to argue that the single gene is the unit of selection. Since all evolution by natural selection can be represented in terms of selection coefficients attaching to single genes, it is, they say, "more parsimonious" to think that all selection is selection for or against single genes. We examine the limitations of this genic point of view, and then relate our criticisms to a broader view of the role of causal concepts and the dangers of reification in science.
When a scientist uses an observation to formulate a theory, it is no surprise that the resulting theory accurately captures that observation. However, when the theory makes a novel prediction—when it predicts an observation that was not used in its formulation—this seems to provide more substantial confirmation of the theory. This paper presents a new approach to the vexed problem of understanding the epistemic difference between prediction and accommodation. In fact, there are several problems that need to be disentangled; in all of them, the key is the concept of overfitting. We float the hypothesis that accommodation is a defective methodology only when the methods used to accommodate the data fail to guard against the risk of overfitting. We connect our analysis with the proposals that other philosophers have made. We also discuss its bearing on the conflict between instrumentalism and scientific realism. Contents: Introduction; Predictivisms—a taxonomy; Observations; Formulating the problem; What might Annie be doing wrong?; Solutions; Observations explained; Mayo on severe tests; The miracle argument and scientific realism; Concluding comments.
The concept of fitness began its career in biology long before evolutionary theory was mathematized. Fitness was used to describe an organism’s vigor, or the degree to which organisms “fit” into their environments. An organism’s success in avoiding predators and in building a nest obviously contributes to its fitness and to the fitness of its offspring, but the peacock’s gaudy tail seemed to be in an entirely different line of work. Fitness, as a term in ordinary language (as in “physical fitness”) and in its original biological meaning, applied to the survival of an organism and its offspring, not to sheer reproductive output (Paul; Cronin 1991). Darwin’s separation of natural from sexual selection may sound odd from a modern perspective, but it made sense from this earlier point of view.
After clarifying the probabilistic conception of causality suggested by Good (1961-2), Suppes (1970), Cartwright (1979), and Skyrms (1980), we prove a sufficient condition for transitivity of causal chains. The bearing of these considerations on the units of selection problem in evolutionary theory and on the Newcomb paradox in decision theory is then discussed.
“Absence of evidence isn’t evidence of absence” is a slogan that is popular among scientists and nonscientists alike. This article assesses its truth by using a probabilistic tool, the Law of Likelihood. Qualitative questions (“Is E evidence about H?”) and quantitative questions (“How much evidence does E provide about H?”) are both considered. The article discusses the example of fossil intermediates. If finding a fossil that is phenotypically intermediate between two extant species provides evidence that those species have a common ancestor, does failing to find such a fossil constitute evidence that there was no common ancestor? Or should the failure merely be chalked up to the imperfection of the fossil record? The transitivity of the evidence relation in simple causal chains provides a broader context, which leads to discussion of the fine-tuning argument, the anthropic principle, and observation selection effects.
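The Law of Likelihood says that observation E favors hypothesis H1 over H2 exactly when Pr(E | H1) > Pr(E | H2), with the ratio measuring strength. A toy calculation (the probabilities are invented for illustration, not taken from the article) shows why absence of evidence is typically weak but nonzero evidence of absence:

```python
def likelihood_ratio(p_e_given_h1, p_e_given_h2):
    """E favors H1 over H2 iff this ratio exceeds 1 (Law of Likelihood)."""
    return p_e_given_h1 / p_e_given_h2

# Hypothetical numbers: an intermediate fossil is more likely to be found
# if the two species share a common ancestor (CA) than if they do not.
p_find_given_ca, p_find_given_no_ca = 0.10, 0.01

lr_found = likelihood_ratio(p_find_given_ca, p_find_given_no_ca)
lr_absent = likelihood_ratio(1 - p_find_given_ca, 1 - p_find_given_no_ca)

print(lr_found)   # 10.0 -> finding the fossil strongly favors CA
print(lr_absent)  # 0.90/0.99 ~ 0.91 -> not finding it weakly favors no-CA
```

The asymmetry is the point: finding the fossil is strong evidence for common ancestry, while failing to find it is evidence against common ancestry, but only slightly (a likelihood ratio just below 1).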
Evolutionary theory is awash with probabilities. For example, natural selection is said to occur when there is variation in fitness, and fitness is standardly decomposed into two components, viability and fertility, each of which is understood probabilistically. With respect to viability, a fertilized egg is said to have a certain chance of surviving to reproductive age; with respect to fertility, an adult is said to have an expected number of offspring. There is more to evolutionary theory than the theory of natural selection, and here too one finds probabilistic concepts aplenty. When there is no selection, the theory of neutral evolution says that a gene’s chance of eventually reaching fixation is 1/(2N), where N is the number of organisms in the generation of the diploid population to which the gene belongs. The evolutionary consequences of mutation are likewise conceptualized in terms of the probability per unit time a gene has of changing from one state to another. The examples just mentioned are all “forward-directed” probabilities; they describe the probability of later events, conditional on earlier events. However, evolutionary theory also uses “backwards probabilities” that describe the probability of a cause conditional on its effects; for example, coalescence theory allows one to calculate the expected number of generations in the past at which the genes in the present generation find their most recent common ancestor.
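The 1/(2N) fixation probability for a neutral gene can be checked with a quick Wright-Fisher simulation (a minimal sketch; the population size and trial count are our own choices):

```python
import numpy as np

def fixation_probability(N, trials, seed=0):
    """Wright-Fisher drift: one neutral mutant among 2N gene copies.

    Each generation resamples the 2N copies binomially from the current
    mutant frequency; the trial ends when the mutant is lost or fixed.
    """
    rng = np.random.default_rng(seed)
    two_n = 2 * N
    fixed = 0
    for _ in range(trials):
        count = 1  # a single new mutant copy
        while 0 < count < two_n:
            count = rng.binomial(two_n, count / two_n)
        fixed += count == two_n
    return fixed / trials

p = fixation_probability(N=10, trials=20_000)
print(p)  # should be close to 1/(2*10) = 0.05
```

With N = 10 diploid organisms there are 20 gene copies, so the theory predicts a fixation chance of 0.05, which the Monte Carlo estimate recovers.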
Realists persuaded by indispensability arguments affirm the existence of numbers, genes, and quarks. Van Fraassen's empiricism remains agnostic with respect to all three. The point of agreement is that the posits of mathematics and the posits of biology and physics stand or fall together. The mathematical Platonist can take heart from this consensus; even if the existence of numbers is still problematic, it seems no more problematic than the existence of genes or quarks. If the two positions just described were the only ones possible, there could be no objection to this melding of numbers with genes and quarks. However, the position I call contrastive empiricism (Sober 1990a) stands opposed to both realism and to Van Fraassen's empiricism. As it turns out, contrastive empiricism entails that coalescing mathematics with empirical science is highly problematic. I believe that there is an important kernel of truth in abductive arguments for genes and quarks. But no counterpart argument exists for the case of numbers. Of course, the existence of this third way would be uninteresting, if contrastive empiricism were wholly implausible. However, I will argue that contrastive empiricism captures what makes sense in standard versions of realism and empiricism, while avoiding the excesses of each. This third way is a middle way; it cannot be dismissed out of hand.
How could the fundamental mental operations which facilitate scientific theorizing be the product of natural selection, since it appears that such theoretical methods were neither used nor useful "in the cave"-i.e., in the sequence of environments in which selection took place? And if these wired-in information processing techniques were not selected for, how can we view rationality as an adaptation? It will be the purpose of this paper to address such questions as these, and in the process to sketch some of the considerations that an evolutionary account of rationality may involve. By describing the broad framework within which the evolution of rationality may eventually be understood, I hope to undermine the idea that evolutionary theory is somehow incapable of dealing with this characteristic and requires supplementation by some novel principle. A more modest ambition of the paper is to try to provoke those who think that there are special problems involved in this evolutionary inquiry to say what these problems are.
The end of the nineteenth century is remembered as a time when psychology freed itself from philosophy and carved out an autonomous subject matter for itself. In fact, this time of emancipation was also a time of exile: while the psychologists were leaving, philosophers were slamming the door behind them. Frege is celebrated for having demonstrated the irrelevance of psychological considerations to philosophy. Some of Frege’s reasons for distinguishing psychological questions from philosophical ones were sound, but one of Frege’s most influential arguments, which was elaborated upon and advocated by the positivists, vastly overestimated the gap separating the two disciplines.
Testability. Elliott Sober - 1999 - Proceedings and Addresses of the American Philosophical Association 73 (2): 47-76.
That some propositions are testable, while others are not, was a fundamental idea in the philosophical program known as logical empiricism. That program is now widely thought to be defunct. Quine’s (1953) “Two Dogmas of Empiricism” and Hempel’s (1950) “Problems and Changes in the Empiricist Criterion of Meaning” are among its most notable epitaphs. Yet, as we know from Mark Twain’s comment on an obituary that he once had the pleasure of reading about himself, the report of a death can be an exaggeration. The research program that began in Vienna and Berlin continues, even though many of the specific formulations that came out of those circles are flawed and need to be replaced.
A statement of the form ‘C caused E’ obeys the requirement of proportionality precisely when C says no more than what is necessary to bring about E. The thesis that causal statements must obey this requirement might be given a semantic or a pragmatic justification. We use the idea that causal claims are contrastive to criticize both.
It is a challenge to explain how evolutionary altruism can evolve by the process of natural selection, since altruists in a group will be less fit than the selfish individuals in the same group who receive benefits but do not make donations of their own. Darwin proposed a theory of group selection to solve this puzzle. Very simply, even though altruists are less fit than selfish individuals within any single group, groups of altruists are more fit than groups of selfish individuals. If a population is subdivided into many groups that vary in their altruistic tendencies, altruism will be favored at the level of selection among groups even as it is being disfavored at the level of selection among individuals within groups. Darwin’s scenario became the basis for a theoretical framework called multilevel selection theory.
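Darwin's scenario can be put into numbers (a toy model with our own parameter choices, not taken from the literature): within each group altruists are strictly less fit, yet the altruist frequency in the whole population rises, because the altruist-rich group out-produces the altruist-poor one.

```python
# Each group: (number of altruists, number of selfish individuals).
groups = [(90, 10), (10, 90)]
BENEFIT, COST = 5.0, 1.0  # benefit scales with the group's altruist share

altruist_offspring = selfish_offspring = 0.0
for n_a, n_s in groups:
    p = n_a / (n_a + n_s)                  # altruist share in this group
    w_altruist = 1 + BENEFIT * p - COST    # altruists pay the cost
    w_selfish = 1 + BENEFIT * p            # selfish individuals free-ride
    assert w_altruist < w_selfish          # less fit WITHIN every group
    altruist_offspring += n_a * w_altruist
    selfish_offspring += n_s * w_selfish

before = sum(a for a, _ in groups) / sum(a + s for a, s in groups)
after = altruist_offspring / (altruist_offspring + selfish_offspring)
print(before, after)  # 0.5 -> ~0.683: altruism spreads overall
```

This is selection pulling in opposite directions at two levels: the within-group comparison always favors selfishness, while the between-group comparison favors the group with more altruists.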
John Beatty (1995) and Alexander Rosenberg (1994) have argued against the claim that there are laws in biology. Beatty's main reason is that evolution is a process full of contingency, but he also takes the existence of relative significance controversies in biology and the popularity of pluralistic approaches to a variety of evolutionary questions to be evidence for biology's lawlessness. Rosenberg's main argument appeals to the idea that biological properties supervene on large numbers of physical properties, but he also develops case studies of biological controversies to defend his thesis that biology is best understood as an instrumental discipline. The present paper assesses their arguments.
To evaluate Hume's thesis that causal claims are always empirical, I consider three kinds of causal statement: 'e1 caused e2', 'e1 promoted e2', and 'e1 would promote e2'. Restricting my attention to cases in which 'e1 occurred' and 'e2 occurred' are both empirical, I argue that Hume was right about the first two, but wrong about the third. Standard causal models of natural selection that have this third form are a priori mathematical truths. Some are obvious, others less so. Empirical work on natural selection takes the form of defending causal claims of the first two types. I provide biological examples that illustrate differences among these three kinds of causal claim.
The propensity interpretation of fitness draws on the propensity interpretation of probability, but advocates of the former have not attended sufficiently to problems with the latter. The causal power of C to bring about E is not well-represented by the conditional probability Pr(E | C). Since the viability fitness of trait T is the conditional probability Pr(the organism survives | the organism has T), the viability fitness of the trait does not represent the degree to which having the trait causally promotes surviving. The same point holds for fertility fitness. This failure of trait fitness to capture causal role can also be seen in the fact that coextensive traits must have the same fitness values even if one of them promotes survival and the other is neutral or deleterious. Although the fitness of a trait does not represent the trait’s causal power to promote survival and reproduction, variation in fitness in a population causally promotes change in trait frequencies; in this sense, fitness variation is a population-level propensity.
In a recent article, Kim Sterelny and Philip Kitcher defend a version of genic selectionism and attempt to refute the criticisms I made of that doctrine. Their defense has two components. First, they find fault with the account I gave of the units-of-selection controversy, an account which uses the idea of probabilistic causality as a tool of explication. Second, they provide a positive account of their own of what that controversy concerns, one which they think allows genic selectionism to emerge as a successful thesis. I believe that the position they sketch is mistaken, both in its general orientation and in its details. I believe that the Sterelny/Kitcher position misunderstands what the biological question of the units of selection is about and that their criticisms of my own proposal are mistaken as well.
A simple and general criterion is derived for the evolution of altruism when individuals interact in pairs. It is argued that the treatments of this problem in kin selection theory and in game theory are special cases of this general criterion.
The debate over the relative importance of natural selection as compared to other forces affecting the evolution of organisms is a long-standing and central controversy in evolutionary biology. The theory of adaptationism argues that natural selection contains sufficient explanatory power in itself to account for all evolution. However, there are differing views about the efficiency of the adaptation model of explanation. If the adaptationism theory is applied, are energy and resources being used to their optimum? This book presents an up-to-date view of this controversy and reflects the dramatic changes in our understanding of evolution that have occurred in the last twenty years. The volume combines contributions from biologists and philosophers, and offers a systematic treatment of foundational, conceptual, and methodological issues surrounding the theory of adaptationism. The essays examine recent developments in topics such as phylogenetic analysis, the theory of optimality and ESS models, and methods of testing models.
When two causally independent processes each have a quantity that increases monotonically (either deterministically or in probabilistic expectation), the two quantities will be correlated, thus providing a counterexample to Reichenbach's principle of the common cause. Several philosophers have denied this, but I argue that their efforts to save the principle are unsuccessful. Still, one salvage attempt does suggest a weaker principle that avoids the initial counterexample. However, even this weakened principle is mistaken, as can be seen by exploring the concepts of homology and homoplasy used in evolutionary biology. I argue that the kernel of truth in the principle of the common cause is to be found by separating metaphysical and epistemological issues; as far as the epistemology is concerned, the Likelihood Principle is central.
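The counterexample is easy to reproduce in simulation (a sketch with invented parameters): two causally independent processes, each increasing in expectation, end up highly correlated simply because both trend upward over time.

```python
import numpy as np

rng = np.random.default_rng(42)
steps = 1000

# Two independent random walks with positive drift: each step adds
# noise with mean 0.5, so each quantity increases in expectation.
x = np.cumsum(rng.normal(0.5, 1.0, steps))
y = np.cumsum(rng.normal(0.5, 1.0, steps))

r = float(np.corrcoef(x, y)[0, 1])
print(r)  # close to 1, yet x and y share no common cause
```

Sampled over time, the shared upward trend dominates both series, so the Pearson correlation is near 1 even though neither process causally influences the other and nothing causes both.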
The thesis that natural selection explains the frequencies of traits in populations, but not why individual organisms have the traits they do, is here defended and elaborated. A general concept of ‘distributive explanation’ is discussed.
Nancy Cartwright (1983, 1999) argues that (1) the fundamental laws of physics are true when and only when appropriate ceteris paribus modifiers are attached and that (2) ceteris paribus modifiers describe conditions that are almost never satisfied. She concludes that when the fundamental laws of physics are true, they don't apply in the real world, but only in highly idealized counterfactual situations. In this paper, we argue that (1) and (2) together with an assumption about contraposition entail the opposite conclusion — that the fundamental laws of physics do apply in the real world. Cartwright extracts from her thesis about the inapplicability of fundamental laws the conclusion that they cannot figure in covering-law explanations. We construct a different argument for a related conclusion — that forward-directed idealized dynamical laws cannot provide covering-law explanations that are causal. This argument is neutral on whether the assumption about contraposition is true. We then discuss Cartwright's simulacrum account of explanation, which seeks to describe how idealized laws can be explanatory.
One possible interpretation of the species concept is that species are natural kinds. Another is that species are individuals whose parts are organisms. Philip Kitcher takes seriously both these ideas; he sees a role for the genealogical/historical conception and also for the one that is “purely qualitative”. I criticize his ideas here. The genealogical conception is at work in biological discussion of species and is presupposed by an active and inventive research program, but the natural kind concept is not. I also criticize Kitcher’s criticisms of Hull’s position that species are individuals, including Kitcher’s idea that species are sets of organisms, his explanation of why “all swans are white” isn’t law-like, and his discussion of the example of multiple origination. Finally, I argue that given current developments in evolutionary theory, pluralism is the "null hypothesis" that we should attempt to refute.
Is there some general reason to expect organisms that have beliefs to have false beliefs? And after you observe that an organism occasionally occupies a given neural state that you think encodes a perceptual belief, how do you evaluate hypotheses about the semantic content that that state has, where some of those hypotheses attribute beliefs that are sometimes false while others attribute beliefs that are always true? To address the first of these questions, we discuss evolution by natural selection and show how organisms that are risk-prone in the beliefs they form can be fitter than organisms that are risk-free. To address the second question, we discuss a problem that is widely recognized in statistics – the problem of over-fitting – and one influential device for addressing that problem, the Akaike Information Criterion (AIC). We then use AIC to solve epistemological versions of the disjunction and distality problems, which are two key problems concerning what it is for a belief state to have one semantic content rather than another.
I discuss two subjects in Samir Okasha’s excellent book, Evolution and the Levels of Selection. In consonance with Okasha’s critique of the conventionalist view of the units of selection problem, I argue that conventionalists have not attended to what realists mean by group, individual, and genic selection. In connection with Okasha’s discussion of the Price equation and contextual analysis, I discuss whether the existence of these two quantitative frameworks is a challenge to realism.
Parsimony arguments are advanced in both science and philosophy. How are they related? This question is a test case for Naturalism_P, the thesis that philosophical theories and scientific theories should be evaluated by the same criteria. In this paper, I describe the justifications that attach to two types of parsimony argument in science. In the first, parsimony is a surrogate for likelihood. In the second, parsimony is relevant to estimating how accurately a model will predict new data when (...) fitted to old. I then consider how these two justifications apply to parsimony arguments in philosophy concerning theism and atheism, the mind/body problem, ethical realism, the question of whether mental properties are causally efficacious, and nominalism versus Platonism about numbers.
In 'Two Dogmas of Empiricism', Quine attacks the analytic/synthetic distinction and defends a doctrine that I call epistemological holism. Now, almost fifty years after the article's appearance, what are we to make of these ideas? I suggest that the philosophical naturalism that Quine did so much to promote should lead us to reject Quine's brief against the analytic/synthetic distinction; I also argue that Quine misunderstood Carnap's views on analyticity. As for epistemological holism, I claim that this thesis does not follow (...) from the logical point that Duhem and Quine made about the role of auxiliary assumptions in hypothesis testing, and that the thesis should be rejected. [Peter Hylton] Section I of this essay discusses Quine's views about reference, contrasting them with those of Russell. For the latter, our language and thought succeed in being about the world because of our acquaintance with objects; the relation of reference (roughly, the relation between a name and its bearer) is thus fundamental. For Quine, by contrast, the fundamental relation by which our language comes to be about the world, and to have empirical content, is that between a sentence and stimulations of our sensory surfaces; reference, while important, is a derivative notion. Section II shows how this view of reference as derivative makes possible the notorious Quinean doctrine of ontological relativity. Section III raises the issue of realism. It argues that somewhat different notions of realism are in play for Quine and for Russell: for Russell, objects, and our knowledge of objects, play the fundamental role, while for Quine objectivity and truth are fundamental, with ontology being derivative.
Michael Scriven’s (1959) example of identical twins (who are said to be equal in fitness but unequal in their reproductive success) has been used by many philosophers of biology to discuss how fitness should be defined, how selection should be distinguished from drift, and how the environment in which a selection process occurs should be conceptualized. Here it is argued that evolutionary theory has no commitment, one way or the other, as to whether the twins are equally fit. This is (...) because the theory of natural selection is fundamentally about the fitnesses of traits, not the fitnesses of token individuals. A plausible philosophical thesis about supervenience entails that the twins are equally fit if they live in identical environments, but evolutionary biology is not committed to the thesis that the twins live in identical environments. Evolutionary theory is right to focus on traits, rather than on token individuals, because the fitnesses of token organisms (as opposed to their actual survivorship and degree of reproductive success) are almost always unknowable. This point has ramifications for the question of how Darwin’s theory of evolution and R. A. Fisher’s are conceptually different.
This article reviews two standard criticisms of creationism/intelligent design (ID): it is unfalsifiable, and it is refuted by the many imperfect adaptations found in nature. Problems with both criticisms are discussed. A conception of testability is described that avoids the defects in Karl Popper’s falsifiability criterion. Although ID comes in multiple forms, which call for different criticisms, it emerges that ID fails to constitute a serious alternative to evolutionary theory.