The Nature of Selection is a straightforward, self-contained introduction to philosophical and biological problems in evolutionary theory. It presents a powerful analysis of the evolutionary concepts of natural selection, fitness, and adaptation and clarifies controversial issues concerning altruism, group selection, and the idea that organisms are survival machines built for the good of the genes that inhabit them. "Sober's is the answering philosophical voice, the voice of a first-rate philosopher and a knowledgeable student of contemporary evolutionary theory. His book merits broad attention among both communities. It should also inspire others to continue the conversation."-Philip Kitcher, Nature "Elliott Sober has made extraordinarily important contributions to our understanding of biological problems in evolutionary biology and causality. The Nature of Selection is a major contribution to understanding epistemological problems in evolutionary theory. I predict that it will have a long lasting place in the literature."-Richard C. Lewontin.
This Element analyzes the various forms that design arguments for the existence of God can take, but the main focus is on two such arguments. The first concerns the complex adaptive features that organisms have. Creationists who advance this argument contend that evolution by natural selection cannot be the right explanation. The second design argument - the argument from fine-tuning - begins with the fact that life could not exist in our universe if the constants found in the laws of physics had values that differed more than a little from their actual values. Since probability is the main analytical tool used, the book provides a primer on probability theory.
How should the concept of evidence be understood? And how does the concept of evidence apply to the controversy about creationism as well as to work in evolutionary biology about natural selection and common ancestry? In this rich and wide-ranging book, Elliott Sober investigates general questions about probability and evidence and shows how the answers he develops to those questions apply to the specifics of evolutionary biology. Drawing on a set of fascinating examples, he analyzes whether claims about intelligent design are untestable; whether they are discredited by the fact that many adaptations are imperfect; how evidence bears on whether present species trace back to common ancestors; how hypotheses about natural selection can be tested, and many other issues. His book will interest all readers who want to understand philosophical questions about evidence and evolution, as they arise both in Darwin's work and in contemporary biological research.
Perhaps because of its implications for our understanding of human nature, recent philosophy of biology has seen what might be the most dramatic work in the philosophies of the "special" sciences. This drama has centered on evolutionary theory, and in the second edition of this textbook, Elliott Sober introduces the reader to the most important issues of these developments. With a rare combination of technical sophistication and clarity of expression, Sober engages both the higher level of theory and the direct implications for such controversial issues as creationism, teleology, nature versus nurture, and sociobiology. Above all, the reader will gain from this book a firm grasp of the structure of evolutionary theory, the evidence for it, and the scope of its explanatory significance.
Ockham's razor, the principle of parsimony, states that simpler theories are better than theories that are more complex. It has a history dating back to Aristotle and it plays an important role in current physics, biology, and psychology. The razor also gets used outside of science - in everyday life and in philosophy. This book evaluates the principle and discusses its many applications. Fascinating examples from different domains provide a rich basis for contemplating the principle's promises and perils. It is obvious that simpler theories are beautiful and easy to understand; the hard problem is to figure out why the simplicity of a theory should be relevant to saying what the world is like. In this book, the ABCs of probability theory are succinctly developed and put to work to describe two 'parsimony paradigms' within which this problem can be solved.
Ernst Mayr has argued that Darwinian theory discredited essentialist modes of thought and replaced them with what he has called "population thinking". In this paper, I characterize essentialism as embodying a certain conception of how variation in nature is to be explained, and show how this conception was undermined by evolutionary theory. The Darwinian doctrine of evolutionary gradualism makes it impossible to say exactly where one species ends and another begins; such line-drawing problems are often taken to be the decisive reason for thinking that essentialism is untenable. However, according to the view of essentialism I suggest, this familiar objection is not fatal to essentialism. It is rather the essentialist's use of what I call the natural state model for explaining variation which clashes with evolutionary theory. This model implemented the essentialist's requirement that properties of populations be defined in terms of properties of member organisms. Requiring such constituent definitions is reductionistic in spirit; additionally, evolutionary theory shows that such definitions are not available, and, moreover, that they are not needed to legitimize population-level concepts. Population thinking involves the thesis that population concepts may be legitimized by showing their connections with each other, even when they are not reducible to concepts applying at lower levels of organization. In the paper, I develop these points by describing Aristotle's ideas on the origins of biological variation; they are a classic formulation of the natural state model. I also describe how the development of statistical ideas in the 19th century involved an abandoning of the natural state model.
Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike, which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light on the theoretical virtues of parsimoniousness, unification, and non ad hocness, on the dispute about Bayesianism, and on empiricism and scientific realism. * Both of us gratefully acknowledge support from the Graduate School at the University of Wisconsin-Madison, and NSF grant DIR-8822278 (M.F.) and NSF grant SBE-9212294 (E.S.). Special thanks go to A. W. F. Edwards, William Harper, Martin Leckey, Brian Skyrms, and especially Peter Turney for helpful comments on an earlier draft.
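Akaike's idea can be sketched in miniature. The following is an illustrative toy comparison, not the paper's own example: with unit-variance Gaussian noise assumed, -2 times the log-likelihood of a curve equals its sum of squared errors plus a constant shared by all models, so the score SSE + 2k (k = number of adjustable parameters) ranks models the way AIC does. The data points below are made up.

```python
# Toy AIC-style model comparison (hypothetical data, unit-variance noise
# assumed): a constant model (k = 1) versus a straight line (k = 2).
# Score = SSE + 2k; lower is estimated to be more predictively accurate.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 2.2, 3.9, 6.1, 7.9]  # roughly y = 2x; made-up numbers

def sse_constant(xs, ys):
    """Sum of squared errors for the best-fitting constant."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def sse_line(xs, ys):
    """Sum of squared errors for the least-squares line."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

score_const = sse_constant(xs, ys) + 2 * 1  # SSE + 2k with k = 1
score_line = sse_line(xs, ys) + 2 * 2       # k = 2

print(score_const, score_line)  # the line wins despite its extra parameter
```

The penalty term 2k is what lets the data themselves adjudicate the curve's form: a more complex model must earn its extra parameters by a sufficiently large drop in SSE.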
Elliott Sober is one of the leading philosophers of science and is a former winner of the Lakatos Prize, the major award in the field. This new collection of essays will appeal to a readership that extends well beyond the frontiers of the philosophy of science. Sober shows how ideas in evolutionary biology bear in significant ways on traditional problems in philosophy of mind and language, epistemology, and metaphysics. Amongst the topics addressed are psychological egoism, solipsism, and the interpretation of belief and utterance, empiricism, Ockham's razor, causality, essentialism, and scientific laws. The collection will prove invaluable to a wide range of philosophers, primarily those working in the philosophy of science, the philosophy of mind, and epistemology.
Reductionism is often understood to include two theses: (1) every singular occurrence that the special sciences can explain also can be explained by physics; (2) every law in a higher-level science can be explained by physics. These claims are widely supposed to have been refuted by the multiple realizability argument, formulated by Putnam (1967, 1975) and Fodor (1968, 1975). The present paper criticizes the argument and identifies a reductionistic thesis that follows from one of the argument's premises.
When philosophers defend epiphenomenalist doctrines, they often do so by way of a priori arguments. Here we suggest an empirical approach that is modeled on August Weismann’s experimental arguments against the inheritance of acquired characters. This conception of how epiphenomenalism ought to be developed helps clarify some mistakes in two recent epiphenomenalist positions – Jaegwon Kim’s (1993) arguments against mental causation, and the arguments developed by Walsh (2000), Walsh, Lewens, and Ariew (2002), and Matthen and Ariew (2002) that natural selection and drift are not causes of evolution. A manipulationist account of causation (Woodward 2003) leads naturally to an account of how macro- and micro-causation are related and to an understanding of how epiphenomenalism at different levels of organization should be understood.
In both biology and the human sciences, social groups are sometimes treated as adaptive units whose organization cannot be reduced to individual interactions. This group-level view is opposed by a more individualistic one that treats social organization as a byproduct of self-interest. According to biologists, group-level adaptations can evolve only by a process of natural selection at the group level. Most biologists rejected group selection as an important evolutionary force during the 1960s and 1970s but a positive literature began to grow during the 1970s and is rapidly expanding today. We review this recent literature and its implications for human evolutionary biology. We show that the rejection of group selection was based on a misplaced emphasis on genes as “replicators” which is in fact irrelevant to the question of whether groups can be like individuals in their functional organization. The fundamental question is whether social groups and other higher-level entities can be “vehicles” of selection. When this elementary fact is recognized, group selection emerges as an important force in nature and what seem to be competing theories, such as kin selection and reciprocity, reappear as special cases of group selection. The result is a unified theory of natural selection that operates on a nested hierarchy of units. The vehicle-based theory makes it clear that group selection is an important force to consider in human evolution. Humans can facultatively span the full range from self-interested individuals to “organs” of group-level “organisms.” Human behavior not only reflects the balance between levels of selection but it can also alter the balance through the construction of social structures that have the effect of reducing fitness differences within groups, concentrating natural selection at the group level.
These social structures and the cognitive abilities that produce them allow group selection to be important even among large groups of unrelated individuals.
Several evolutionary biologists have used a parsimony argument to argue that the single gene is the unit of selection. Since all evolution by natural selection can be represented in terms of selection coefficients attaching to single genes, it is, they say, "more parsimonious" to think that all selection is selection for or against single genes. We examine the limitations of this genic point of view, and then relate our criticisms to a broader view of the role of causal concepts and the dangers of reification in science.
“Absence of evidence isn’t evidence of absence” is a slogan that is popular among scientists and nonscientists alike. This article assesses its truth by using a probabilistic tool, the Law of Likelihood. Qualitative questions (“Is E evidence about H?”) and quantitative questions (“How much evidence does E provide about H?”) are both considered. The article discusses the example of fossil intermediates. If finding a fossil that is phenotypically intermediate between two extant species provides evidence that those species have a common ancestor, does failing to find such a fossil constitute evidence that there was no common ancestor? Or should the failure merely be chalked up to the imperfection of the fossil record? The transitivity of the evidence relation in simple causal chains provides a broader context, which leads to discussion of the fine-tuning argument, the anthropic principle, and observation selection effects.
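The fossil example can be put in numbers. The probabilities below are purely illustrative assumptions, not values from the article: they merely show how, under the Law of Likelihood (E favors H1 over H2 just in case Pr(E | H1) > Pr(E | H2), with the likelihood ratio measuring strength), an absence of evidence can be evidence of absence, though a weak one.

```python
# Law of Likelihood applied to the fossil-intermediate example, with
# made-up probabilities: fossilization and discovery are assumed rare
# even when an intermediate form actually existed.

p_find_given_ca = 0.2   # assumed Pr(fossil found | common ancestry)
p_find_given_sa = 0.01  # assumed Pr(fossil found | separate ancestry)

# Finding the fossil strongly favors common ancestry:
ratio_presence = p_find_given_ca / p_find_given_sa  # likelihood ratio 20

# Failing to find it favors separate ancestry -- absence of evidence IS
# evidence of absence here -- but only weakly, because the fossil record
# is imperfect under both hypotheses:
ratio_absence = (1 - p_find_given_sa) / (1 - p_find_given_ca)

print(ratio_presence, round(ratio_absence, 3))
```

The qualitative and quantitative questions come apart cleanly: the absence of the fossil is evidence (the ratio differs from 1), but very modest evidence (a ratio of about 1.24 versus 20 for its presence).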
When scientists use an observation to formulate a theory, it is no surprise that the resulting theory accurately captures that observation. However, when the theory makes a novel prediction—when it predicts an observation that was not used in its formulation—this seems to provide more substantial confirmation of the theory. This paper presents a new approach to the vexed problem of understanding the epistemic difference between prediction and accommodation. In fact, there are several problems that need to be disentangled; in all of them, the key is the concept of overfitting. We float the hypothesis that accommodation is a defective methodology only when the methods used to accommodate the data fail to guard against the risk of overfitting. We connect our analysis with the proposals that other philosophers have made. We also discuss its bearing on the conflict between instrumentalism and scientific realism. Sections: Introduction; Predictivisms—a taxonomy; Observations; Formulating the problem; What might Annie be doing wrong?; Solutions; Observations explained; Mayo on severe tests; The miracle argument and scientific realism; Concluding comments.
After clarifying the probabilistic conception of causality suggested by Good (1961-2), Suppes (1970), Cartwright (1979), and Skyrms (1980), we prove a sufficient condition for transitivity of causal chains. The bearing of these considerations on the units of selection problem in evolutionary theory and on the Newcomb paradox in decision theory is then discussed.
Evolutionary theory is awash with probabilities. For example, natural selection is said to occur when there is variation in fitness, and fitness is standardly decomposed into two components, viability and fertility, each of which is understood probabilistically. With respect to viability, a fertilized egg is said to have a certain chance of surviving to reproductive age; with respect to fertility, an adult is said to have an expected number of offspring. There is more to evolutionary theory than the theory of natural selection, and here too one finds probabilistic concepts aplenty. When there is no selection, the theory of neutral evolution says that a gene’s chance of eventually reaching fixation is 1/(2N), where N is the number of organisms in the generation of the diploid population to which the gene belongs. The evolutionary consequences of mutation are likewise conceptualized in terms of the probability per unit time a gene has of changing from one state to another. The examples just mentioned are all “forward-directed” probabilities; they describe the probability of later events, conditional on earlier events. However, evolutionary theory also uses “backwards probabilities” that describe the probability of a cause conditional on its effects; for example, coalescence theory allows one to calculate the expected number of generations in the past that the genes in the present generation find their most recent common ancestor.
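The neutral-fixation probability 1/(2N) mentioned above is easy to check by simulation. The sketch below uses a standard Wright-Fisher model with illustrative parameters (N = 10, so 2N = 20 gene copies); it is a minimal demonstration, not a tool from the paper.

```python
import random

# Wright-Fisher check of the neutral-theory claim: a new neutral mutant
# present in 1 of 2N gene copies eventually fixes with probability 1/(2N).
# Parameters are illustrative; the seed makes the run reproducible.

random.seed(0)
TWO_N = 20     # gene copies in a diploid population of N = 10
TRIALS = 4000

def fixes() -> bool:
    """Run one population to absorption; True if the mutant fixes."""
    count = 1                      # one new mutant copy
    while 0 < count < TWO_N:
        p = count / TWO_N          # binomial resampling each generation
        count = sum(1 for _ in range(TWO_N) if random.random() < p)
    return count == TWO_N

fixation_fraction = sum(fixes() for _ in range(TRIALS)) / TRIALS
print(fixation_fraction)  # should be near 1/(2N) = 0.05
```

This is a "forward-directed" probability in the article's sense: it conditions a later event (fixation) on an earlier one (the mutant's initial frequency).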
How could the fundamental mental operations which facilitate scientific theorizing be the product of natural selection, since it appears that such theoretical methods were neither used nor useful "in the cave"-i.e., in the sequence of environments in which selection took place? And if these wired-in information processing techniques were not selected for, how can we view rationality as an adaptation? It will be the purpose of this paper to address such questions as these, and in the process to sketch some of the considerations that an evolutionary account of rationality may involve. By describing the broad framework within which the evolution of rationality may eventually be understood, I hope to undermine the idea that evolutionary theory is somehow incapable of dealing with this characteristic and requires supplementation by some novel principle. A more modest ambition of the paper is to try to provoke those who think that there are special problems involved in this evolutionary inquiry to say what these problems are.
The concept of fitness began its career in biology long before evolutionary theory was mathematized. Fitness was used to describe an organism’s vigor, or the degree to which organisms “fit” into their environments. An organism’s success in avoiding predators and in building a nest obviously contributes to its fitness and to the fitness of its offspring, but the peacock’s gaudy tail seemed to be in an entirely different line of work. Fitness, as a term in ordinary language (as in “physical fitness”) and in its original biological meaning, applied to the survival of an organism and its offspring, not to sheer reproductive output (Paul ////; Cronin 1991). Darwin’s separation of natural from sexual selection may sound odd from a modern perspective, but it made sense from this earlier point of view.
Realists persuaded by indispensability arguments affirm the existence of numbers, genes, and quarks. Van Fraassen's empiricism remains agnostic with respect to all three. The point of agreement is that the posits of mathematics and the posits of biology and physics stand or fall together. The mathematical Platonist can take heart from this consensus; even if the existence of numbers is still problematic, it seems no more problematic than the existence of genes or quarks. If the two positions just described were the only ones possible, there could be no objection to this melding of numbers with genes and quarks. However, the position I call contrastive empiricism (Sober 1990a) stands opposed to both realism and to Van Fraassen's empiricism. As it turns out, contrastive empiricism entails that coalescing mathematics with empirical science is highly problematic. I believe that there is an important kernel of truth in abductive arguments for genes and quarks. But no counterpart argument exists for the case of numbers. Of course, the existence of this third way would be uninteresting, if contrastive empiricism were wholly implausible. However, I will argue that contrastive empiricism captures what makes sense in standard versions of realism and empiricism, while avoiding the excesses of each. This third way is a middle way; it cannot be dismissed out of hand.
The end of the nineteenth century is remembered as a time when psychology freed itself from philosophy and carved out an autonomous subject matter for itself. In fact, this time of emancipation was also a time of exile: while the psychologists were leaving, philosophers were slamming the door behind them. Frege is celebrated for having demonstrated the irrelevance of psychological considerations to philosophy. Some of Frege’s reasons for distinguishing psychological questions from philosophical ones were sound, but one of Frege’s most influential arguments, which was elaborated upon and advocated by the positivists, vastly overestimated the gap separating the two disciplines.
That some propositions are testable, while others are not, was a fundamental idea in the philosophical program known as logical empiricism. That program is now widely thought to be defunct. Quine’s (1953) “Two Dogmas of Empiricism” and Hempel’s (1950) “Problems and Changes in the Empiricist Criterion of Meaning” are among its most notable epitaphs. Yet, as we know from Mark Twain’s comment on an obituary that he once had the pleasure of reading about himself, the report of a death can be an exaggeration. The research program that began in Vienna and Berlin continues, even though many of the specific formulations that came out of those circles are flawed and need to be replaced.
A statement of the form ‘C caused E’ obeys the requirement of proportionality precisely when C says no more than what is necessary to bring about E. The thesis that causal statements must obey this requirement might be given a semantic or a pragmatic justification. We use the idea that causal claims are contrastive to criticize both.
In a recent article, Kim Sterelny and Philip Kitcher defend a version of genic selectionism and attempt to refute the criticisms I made of that doctrine. Their defense has two components. First, they find fault with the account I gave of the units-of-selection controversy, an account which uses the idea of probabilistic causality as a tool of explication. Second, they provide a positive account of their own of what that controversy concerns, one which they think allows genic selectionism to emerge as a successful thesis. I believe that the position they sketch is mistaken, both in its general orientation and in its details. I believe that the Sterelny/Kitcher position misunderstands what the biological question of the units of selection is about and that their criticisms of my own proposal are mistaken as well.
To evaluate Hume's thesis that causal claims are always empirical, I consider three kinds of causal statement: 'e1 caused e2', 'e1 promoted e2', and 'e1 would promote e2'. Restricting my attention to cases in which 'e1 occurred' and 'e2 occurred' are both empirical, I argue that Hume was right about the first two, but wrong about the third. Standard causal models of natural selection that have this third form are a priori mathematical truths. Some are obvious, others less so. Empirical work on natural selection takes the form of defending causal claims of the first two types. I provide biological examples that illustrate differences among these three kinds of causal claim.
A simple and general criterion is derived for the evolution of altruism when individuals interact in pairs. It is argued that the treatments of this problem in kin selection theory and in game theory are special cases of this general criterion.
John Beatty (1995) and Alexander Rosenberg (1994) have argued against the claim that there are laws in biology. Beatty's main reason is that evolution is a process full of contingency, but he also takes the existence of relative significance controversies in biology and the popularity of pluralistic approaches to a variety of evolutionary questions to be evidence for biology's lawlessness. Rosenberg's main argument appeals to the idea that biological properties supervene on large numbers of physical properties, but he also develops case studies of biological controversies to defend his thesis that biology is best understood as an instrumental discipline. The present paper assesses their arguments.
I discuss two subjects in Samir Okasha’s excellent book, Evolution and the Levels of Selection. In consonance with Okasha’s critique of the conventionalist view of the units of selection problem, I argue that conventionalists have not attended to what realists mean by group, individual, and genic selection. In connection with Okasha’s discussion of the Price equation and contextual analysis, I discuss whether the existence of these two quantitative frameworks is a challenge to realism.
The thesis that natural selection explains the frequencies of traits in populations, but not why individual organisms have the traits they do, is here defended and elaborated. A general concept of ‘distributive explanation’ is discussed.
In 'Two Dogmas of Empiricism', Quine attacks the analytic/synthetic distinction and defends a doctrine that I call epistemological holism. Now, almost fifty years after the article's appearance, what are we to make of these ideas? I suggest that the philosophical naturalism that Quine did so much to promote should lead us to reject Quine's brief against the analytic/synthetic distinction; I also argue that Quine misunderstood Carnap's views on analyticity. As for epistemological holism, I claim that this thesis does not follow from the logical point that Duhem and Quine made about the role of auxiliary assumptions in hypothesis testing, and that the thesis should be rejected. [Peter Hylton] Section I of this essay discusses Quine's views about reference, contrasting them with those of Russell. For the latter, our language and thought succeed in being about the world because of our acquaintance with objects; the relation of reference (roughly, the relation between a name and its bearer) is thus fundamental. For Quine, by contrast, the fundamental relation by which our language comes to be about the world, and to have empirical content, is that between a sentence and stimulations of our sensory surfaces; reference, while important, is a derivative notion. Section II shows how this view of reference as derivative makes possible the notorious Quinean doctrine of ontological relativity. Section III raises the issue of realism. It argues that somewhat different notions of realism are in play for Quine and for Russell: for Russell, objects, and our knowledge of objects, play the fundamental role, while for Quine objectivity and truth are fundamental, with ontology being derivative.
Nancy Cartwright (1983, 1999) argues that (1) the fundamental laws of physics are true when and only when appropriate ceteris paribus modifiers are attached and that (2) ceteris paribus modifiers describe conditions that are almost never satisfied. She concludes that when the fundamental laws of physics are true, they don't apply in the real world, but only in highly idealized counterfactual situations. In this paper, we argue that (1) and (2) together with an assumption about contraposition entail the opposite conclusion — that the fundamental laws of physics do apply in the real world. Cartwright extracts from her thesis about the inapplicability of fundamental laws the conclusion that they cannot figure in covering-law explanations. We construct a different argument for a related conclusion — that forward-directed idealized dynamical laws cannot provide covering-law explanations that are causal. This argument is neutral on whether the assumption about contraposition is true. We then discuss Cartwright's simulacrum account of explanation, which seeks to describe how idealized laws can be explanatory.
When two causally independent processes each have a quantity that increases monotonically (either deterministically or in probabilistic expectation), the two quantities will be correlated, thus providing a counterexample to Reichenbach's principle of the common cause. Several philosophers have denied this, but I argue that their efforts to save the principle are unsuccessful. Still, one salvage attempt does suggest a weaker principle that avoids the initial counterexample. However, even this weakened principle is mistaken, as can be seen by exploring the concepts of homology and homoplasy used in evolutionary biology. I argue that the kernel of truth in the principle of the common cause is to be found by separating metaphysical and epistemological issues; as far as the epistemology is concerned, the Likelihood Principle is central.
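The counterexample in the opening sentence is easy to exhibit numerically. The sketch below simulates two causally independent processes, each with a monotonically increasing expectation (think of two unrelated quantities that both trend upward over time), and shows that sampling them at the same times yields a strong correlation with no common cause. The drift and noise parameters are arbitrary.

```python
import random

# Two causally independent random walks with positive drift: each quantity
# increases monotonically in expectation. Despite the absence of any common
# cause, their sampled values are strongly correlated -- the counterexample
# to Reichenbach's principle of the common cause.

random.seed(1)
n = 200
a = b = 0.0
xs, ys = [], []
for _ in range(n):
    a += 1.0 + random.gauss(0, 0.5)   # independent upward drifts
    b += 2.0 + random.gauss(0, 0.5)
    xs.append(a)
    ys.append(b)

def pearson(u, v):
    """Pearson correlation coefficient, computed by hand."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((x - mu) * (y - mv) for x, y in zip(u, v))
    su = sum((x - mu) ** 2 for x in u) ** 0.5
    sv = sum((y - mv) ** 2 for y in v) ** 0.5
    return cov / (su * sv)

print(pearson(xs, ys))  # close to 1 despite causal independence
```

The correlation arises because both series track time, not each other; that is exactly why the principle, read as a universal claim, fails.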
This paper defends two theses about probabilistic reasoning. First, although modus ponens has a probabilistic analog, modus tollens does not – the fact that a hypothesis says that an observation is very improbable does not entail that the hypothesis is improbable. Second, the evidence relation is essentially comparative; with respect to hypotheses that confer probabilities on observation statements but do not entail them, an observation O may favor one hypothesis H1 over another hypothesis H2, but O cannot be said to confirm or disconfirm H1 without such relativization. These points have serious consequences for the Intelligent Design movement. Even if evolutionary theory entailed that various complex adaptations are very improbable, that would neither disconfirm the theory nor support the hypothesis of intelligent design. For either of these conclusions to follow, an additional question must be answered: With respect to the adaptive features that evolutionary theory allegedly says are very improbable, what is their probability of arising if they were produced by intelligent design? This crucial question has not been addressed by the ID movement.
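Both theses can be seen in a miniature lottery example. The numbers and the "rigged" alternative below are illustrative inventions, not from the paper: they show that a hypothesis can confer a tiny probability on what is observed and yet remain probable, because all that matters evidentially is the comparison with an alternative.

```python
# Why there is no probabilistic modus tollens: a fair lottery with a
# million tickets says the observed outcome ("ticket 377 won") is wildly
# improbable, yet the observation can still FAVOR fairness over a rival.
# All numbers are illustrative.

N = 1_000_000
lik_fair = 1 / N                 # Pr(ticket 377 wins | fair lottery)

# Hypothetical alternative: the lottery is rigged so that even-numbered
# tickets are twice as likely as odd ones. Ticket 377 is odd, so each odd
# ticket has probability 1 / (1.5 * N) under this hypothesis.
lik_rigged = 1 / (1.5 * N)       # Pr(ticket 377 wins | rigged)

# With equal priors, Bayes' theorem gives the posterior of fairness:
posterior_fair = (0.5 * lik_fair) / (0.5 * lik_fair + 0.5 * lik_rigged)

print(lik_fair, posterior_fair)
# lik_fair is tiny, yet the observation favors fairness (ratio 1.5) and
# leaves the fair hypothesis more probable than not.
```

The evidential verdict is fixed by the likelihood ratio (here 1.5 in favor of fairness), not by the absolute smallness of Pr(O | H); this is the comparative point the paper presses against the ID movement's argument.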
The propensity interpretation of fitness draws on the propensity interpretation of probability, but advocates of the former have not attended sufficiently to problems with the latter. The causal power of C to bring about E is not well-represented by the conditional probability Pr(E | C). Since the viability fitness of trait T is the conditional probability Pr(the organism survives | the organism has T), the viability fitness of the trait does not represent the degree to which having the trait causally promotes surviving. The same point holds for fertility fitness. This failure of trait fitness to capture causal role can also be seen in the fact that coextensive traits must have the same fitness values even if one of them promotes survival and the other is neutral or deleterious. Although the fitness of a trait does not represent the trait’s causal power to promote survival and reproduction, variation in fitness in a population causally promotes change in trait frequencies; in this sense, fitness variation is a population-level propensity.
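The coextensive-traits point can be made concrete with a toy census. The data below are hypothetical: if every organism with trait T1 also has trait T2 and vice versa, the two traits receive the same viability fitness Pr(survives | trait) by definition, even if only one of them does any causal work.

```python
# Toy illustration: coextensive traits must have identical viability
# fitness, since fitness is a conditional probability computed over the
# same set of trait-bearers. Hypothetical census data.

organisms = [
    # (has T1, has T2, survived) -- T1 and T2 coextensive by stipulation
    (True, True, True),
    (True, True, True),
    (True, True, False),
    (False, False, True),
    (False, False, False),
    (False, False, False),
]

def viability_fitness(trait_index):
    """Fraction of trait-bearers that survived: Pr(survives | trait)."""
    bearers = [o for o in organisms if o[trait_index]]
    return sum(1 for o in bearers if o[2]) / len(bearers)

fit_t1 = viability_fitness(0)
fit_t2 = viability_fitness(1)
print(fit_t1, fit_t2)  # identical, whatever the traits' causal roles
```

Nothing in the computation consults which trait caused survival; that is the sense in which trait fitness fails to capture causal role.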
Parsimony arguments are advanced in both science and philosophy. How are they related? This question is a test case for Naturalism_p, which is the thesis that philosophical theories and scientific theories should be evaluated by the same criteria. In this paper, I describe the justifications that attach to two types of parsimony argument in science. In the first, parsimony is a surrogate for likelihood. In the second, parsimony is relevant to estimating how accurately a model will predict new data when fitted to old. I then consider how these two justifications apply to parsimony arguments in philosophy concerning theism and atheism, the mind/body problem, ethical realism, the question of whether mental properties are causally efficacious, and nominalism versus Platonism about numbers.
This article reviews two standard criticisms of creationism/intelligent design (ID): it is unfalsifiable, and it is refuted by the many imperfect adaptations found in nature. Problems with both criticisms are discussed. A conception of testability is described that avoids the defects in Karl Popper’s falsifiability criterion. Although ID comes in multiple forms, which call for different criticisms, it emerges that ID fails to constitute a serious alternative to evolutionary theory.
Brandon (1984, 1990) has argued that Salmon's (1971) concept of screening-off can be used to characterize (i) the idea that natural selection acts directly on an organism's phenotype, only indirectly on its genotype, and (ii) the biological problem of the levels of selection. Brandon also suggests (iii) that screening-off events in a causal chain are better explanations than the events they screen off. This paper critically evaluates Brandon's proposals.
Is it accurate to label Darwin's theory "the theory of evolution by natural selection," given that the concept of common ancestry is at least as central to Darwin's theory? Did Darwin reject the idea that group selection causes characteristics to evolve that are good for the group though bad for the individual? How does Darwin's discussion of God in The Origin of Species square with the common view that he is the champion of methodological naturalism? These are just some of the intriguing questions raised in this volume of interconnected philosophical essays on Darwin. The author's approach is informed by modern issues in evolutionary biology, but is sensitive to the ways in which Darwin's outlook differed from that of many biologists today. The main topics that are the focus of the book--common ancestry, group selection, sex ratio, and naturalism--have rarely been discussed in their connection with Darwin in such penetrating detail.
I want to explore what happens to two philosophical issues when we assume that the mind, a functional device, is to be understood by the same sort of functional analysis that guides biological investigation of other organismic systems and characteristics. The first problem area concerns the concept of rationality, its connection with reliability and reproductive success, and the status of rationality hypotheses in attribution of beliefs. It has been argued that ascribing beliefs to someone requires the assumption that that person is rational. It also has been argued that evolutionary theory can account for the capacity we have for rational thought by appealing to the idea that rational methods for constructing beliefs are reliable and therefore represented a selective advantage. Both of these claims about rationality are critically examined. In the second part of the paper, I turn to some criticisms that have been advanced against "functionalist" solutions to the mind/body problem. These criticisms are defused when that doctrine is understood in terms of a Teleological (rather than a Turing Machine) notion of function. However, functionalism does not emerge totally unscathed. Voltaire's Dr. Pangloss saw everything in terms of its function and its perfect performance of its function. In the philosophy of mind, his influence lives on in the form of two ideas: (i) that we must possess rational methods for constructing beliefs, since our minds are the product of natural selection, and (ii) that psychological states and properties are individuated by their psychological functions. Contemporary biology provides the resources for avoiding Pangloss's wide-eyed teleology in the study of adaptation. It is to be hoped that progress in psychology will loosen the grip of Panglossian assumptions in the study of mind.