The naïve see causal connections everywhere. Consider the fact that Evelyn Marie Adams won the New Jersey lottery twice. The naïve find it irresistible to think that this cannot be a coincidence. Maybe the lottery was rigged or perhaps some uncanny higher power placed its hand upon her brow. Sophisticates respond with an indulgent smile and ask the naïve to view Adams’ double win within a larger perspective. Given all the lotteries there have been, it isn’t at all surprising that someone would win one of them twice. No need to invent conspiracy theories or invoke the paranormal – the double win was a mere coincidence.
Fifty years before Darwin defended his theory of evolution by natural selection in The Origin of Species, the French biologist Jean Baptiste Lamarck put forward an evolutionary theory of his own. According to Lamarck, life has an inherent tendency to develop from simple to complex through a preordained sequence of stages. The lineage to which human beings belong is the oldest, since we are the most complex of living things. Present-day worms belong to a lineage that is much younger, since they are simpler. For Lamarck, the human beings and worms that exist today do not share a common ancestor, even though human beings derive from worm-like ancestors.
As every philosopher knows, “the design argument” concludes that God exists from premisses that cite the adaptive complexity of organisms or the lawfulness and orderliness of the whole universe. Since 1859, it has formed the intellectual heart of creationist opposition to the Darwinian hypothesis that organisms evolved their adaptive features by the mindless process of natural selection. Although the design argument developed as a defense of theism, the logic of the argument in fact encompasses a larger set of issues. William Paley saw clearly that we sometimes have an excellent reason to postulate the existence of an intelligent designer. If we find a watch on the heath, we reasonably infer that it was produced by an intelligent watchmaker. This design argument makes perfect sense. Why is it any different to claim that the eye was produced by an intelligent designer? Both critics and defenders of the design argument need to understand what the ground rules are for inferring that an intelligent designer is the unseen cause of an observed effect.
Quine’s publication in 1951 of “Two Dogmas of Empiricism” was a watershed event in 20th century philosophy. In that essay, Quine sought to demolish the concepts of analyticity and apriority; he also sketched a positive proposal of his own -- epistemological holism. There can be little doubt that philosophy changed as a result of Quine’s work. The question I want to address here is whether it should have. My goal is not to argue for a return to the halcyon days of the logical empiricists. Rather, I want to take stock. Now, almost fifty years after the publication of “Two Dogmas,” what view should we take of analyticity, the a priori, and epistemological holism, and of what Quine said about these topics?
The design argument for the existence of God took a probabilistic turn in the 17th and 18th centuries. Earlier versions, such as Thomas Aquinas’ 5th way, usually embraced the premise that goal-directed systems (things that “act for an end” or have a function) must have been created by an intelligent designer. This idea – which we might express by the slogan “no design without a designer” – survived into the 17th and 18th centuries, and it is with us still in the writings of many creationists. The new version of the argument, inspired by the emerging mathematical theory of probability, removed the premise of necessity. It begins with the thought that goal-directed systems might have arisen by intelligent design or by chance; the problem is to discern which hypothesis is more plausible. With the epistemic concept of plausibility characterized in terms of the mathematical concept of probability, the design argument was given a new direction.
The concept of fitness began its career in biology long before evolutionary theory was mathematized. Fitness was used to describe an organism’s vigor, or the degree to which organisms “fit” into their environments. An organism’s success in avoiding predators and in building a nest obviously contributes to its fitness and to the fitness of its offspring, but the peacock’s gaudy tail seemed to be in an entirely different line of work. Fitness, as a term in ordinary language (as in “physical fitness”) and in its original biological meaning, applied to the survival of an organism and its offspring, not to sheer reproductive output (Paul ////; Cronin 1991). Darwin’s separation of natural from sexual selection may sound odd from a modern perspective, but it made sense from this earlier point of view.
Carl Hempel set the tone for subsequent philosophical work on scientific explanation by resolutely locating the problem he wanted to address outside of epistemology. “Hempel’s problem,” as I will call it, was not to say what counts as evidence that X is the explanation of Y. Rather, the question was what it means for X to explain Y. Hempel’s theory of explanation and its successors don’t tell you what to believe; instead, they tell you which of your beliefs (if any) can be said to explain a given target proposition.
The problem of simplicity involves three questions: How is the simplicity of a hypothesis to be measured? How is the use of simplicity as a guide to hypothesis choice to be justified? And how is simplicity related to other desirable features of hypotheses -- that is, how is simplicity to be traded off against them? The present paper explores these three questions from a variety of viewpoints, including Bayesianism, likelihoodism, and the framework of predictive accuracy formulated by Akaike (1973). It may turn out that simplicity has no global justification -- that its justification varies from problem to problem.
In their 2010 book, Biology’s First Law, D. McShea and R. Brandon present a principle that they call ‘‘ZFEL,’’ the zero force evolutionary law. ZFEL says (roughly) that when there are no evolutionary forces acting on a population, the population’s complexity (i.e., how diverse its member organisms are) will increase. Here we develop criticisms of ZFEL and describe a different law of evolution; it says that diversity and complexity do not change when there are no evolutionary causes.
We begin by considering two principles, each having the form causal completeness ergo screening-off. The first concerns a common cause of two or more effects; the second describes an intermediate link in a causal chain. They are logically independent of each other, each is independent of Reichenbach's principle of the common cause, and each is a consequence of the causal Markov condition. Simple examples show that causal incompleteness means that screening-off may fail to obtain. We derive a stronger result: in a rather general setting, if the composite cause C1 & C2 & ... & Cn screens-off one event from another, then each of the n component causes C1, C2, ..., Cn must fail to screen-off. The idea that a cause may be ordinally invariant in its impact on different effects is defined; it plays an important role in establishing this no-go theorem. Along the way, we describe how composite and component causes can all screen-off when ordinal invariance fails. We argue that this theorem is relevant to assessing the plausibility of the two screening-off principles. The discovery of incomplete causes that screen-off is not evidence that causal completeness must engender screening-off. Formal and epistemic analogies between screening-off and determinism are discussed.
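For readers who want the formal statement, the standard definition of screening-off (given here independently of the paper's own notation) is that a cause C screens off one event A from another event B exactly when
\[ P(A \,\&\, B \mid C) = P(A \mid C)\, P(B \mid C), \]
or equivalently, when \(P(B \,\&\, C) > 0\), \(P(A \mid B \,\&\, C) = P(A \mid C)\): conditional on C, learning B tells you nothing further about A.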
This paper is a sympathetic critique of the argument that Reichenbach develops in Chap. 2 of Experience and Prediction for the thesis that sense experience justifies belief in the existence of an external world. After discussing his attack on the positivist theory of meaning, I describe the probability ideas that Reichenbach presents. I argue that Reichenbach begins with an argument grounded in the Law of Likelihood but that he then endorses a different argument that involves prior probabilities. I try to show how this second step in Reichenbach’s approach can be strengthened by using ideas that have been developed recently for understanding causation in terms of the idea of intervention.
The probability that the fitter of two alleles will increase in frequency in a population goes up as the product of N (the effective population size) and s (the selection coefficient) increases. Discovering the distribution of values for this product across different alleles in different populations is a very important biological task. However, biologists often use the product Ns to define a different concept; they say that drift “dominates” selection or that drift is “stronger than” selection when Ns is much smaller than some threshold quantity (e.g., ½) and that the reverse is true when Ns is much larger than that threshold. We argue that the question of whether drift dominates selection for a single allele in a single population makes no sense. Selection and drift are causes of evolution, but there is no fact of the matter as to which cause is stronger in the evolution of any given allele.
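To illustrate the threshold criterion being criticized (the numbers are illustrative, not drawn from the paper): with an effective population size of \(N = 10^{4}\) and a selection coefficient of \(s = 10^{-6}\),
\[ Ns = 10^{4} \times 10^{-6} = 0.01 \ll \tfrac{1}{2}, \]
so on the usage in question drift would be said to dominate selection for that allele, whereas \(s = 10^{-3}\) gives \(Ns = 10 \gg \tfrac{1}{2}\) and selection would be said to dominate.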
A statement of the form ‘C caused E’ obeys the requirement of proportionality precisely when C says no more than what is necessary to bring about E. The thesis that causal statements must obey this requirement might be given a semantic or a pragmatic justification. We use the idea that causal claims are contrastive to criticize both.
I consider three theses that are friendly to anthropomorphism. Each makes a claim about what can be inferred about the mental life of chimpanzees from the fact that humans and chimpanzees both have behavioral trait B and humans produce this behavior by having mental trait M. The first thesis asserts that this fact makes it probable that chimpanzees have M. The second says that this fact provides strong evidence that chimpanzees have M. The third claims that the fact is evidence that chimpanzees have M. The third thesis follows from a plausible Reichenbachian model of how a common ancestor is probabilistically related to its descendants. The first two theses do not, and they have no general evolutionary justification.
“The theory of evolution is about organisms evolving, populations evolving. What does this theory tell us about the quantum mechanics of micro-particles? The answer is ‘nothing’. There’s lots of stuff that happens in the world that the theory just isn’t telling us about. The existence of a God who occasionally intervenes in nature might be one of those things.”
Philosophers have explored objective interpretations of probability mainly by considering empirical probability statements. Because of this focus, it is widely believed that the logical interpretation and the actual-frequency interpretation are unsatisfactory and the hypothetical-frequency interpretation is not much better. Probabilistic assertions in pure mathematics present a new challenge. Mathematicians prove theorems in number theory that assign probabilities. The most natural interpretation of these probabilities is that they describe actual frequencies in finite sets and limits of actual frequencies in infinite sets. This interpretation vindicates part of what the logical interpretation of probability aimed to establish.
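A simple illustration of the kind of theorem at issue (my example, not one taken from the paper): the claim that a randomly chosen positive integer has probability \(1/k\) of being divisible by \(k\) is naturally read as a statement about a limit of actual frequencies,
\[ \lim_{n \to \infty} \frac{\#\{\, m \le n : k \mid m \,\}}{n} = \frac{1}{k}. \]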
To evaluate Hume's thesis that causal claims are always empirical, I consider three kinds of causal statement: ‘e1 caused e2’, ‘e1 promoted e2’, and ‘e1 would promote e2’. Restricting my attention to cases in which ‘e1 occurred’ and ‘e2 occurred’ are both empirical, I argue that Hume was right about the first two, but wrong about the third. Standard causal models of natural selection that have this third form are a priori mathematical truths. Some are obvious, others less so. Empirical work on natural selection takes the form of defending causal claims of the first two types. I provide biological examples that illustrate differences among these three kinds of causal claim.
I discuss two subjects in Samir Okasha’s excellent book, Evolution and the Levels of Selection. In consonance with Okasha’s critique of the conventionalist view of the units of selection problem, I argue that conventionalists have not attended to what realists mean by group, individual, and genic selection. In connection with Okasha’s discussion of the Price equation and contextual analysis, I discuss whether the existence of these two quantitative frameworks is a challenge to realism.
Markov models of evolution describe changes in the probability distribution of the trait values a population might exhibit. In consequence, they also describe how entropy and conditional entropy values evolve, and how the mutual information that characterizes the relation between an earlier and a later moment in a lineage’s history depends on how much time separates them. These models therefore provide an interesting perspective on questions that usually are considered in the foundations of physics—when and why does entropy increase and at what rates do changes in entropy take place? They also throw light on an important epistemological question: are there limits on what your observations of the present can tell you about the evolutionary past?
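One standard information-theoretic fact behind the epistemological question (stated here for orientation; the paper develops the details): if the lineage's states form a Markov chain \(X_0 \to X_s \to X_t\) with \(0 < s < t\), the data-processing inequality gives
\[ I(X_0 ; X_t) \le I(X_0 ; X_s), \]
so the mutual information linking an earlier state to a later one cannot increase as the time separating them grows.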
Evolutionary theory is awash with probabilities. For example, natural selection is said to occur when there is variation in fitness, and fitness is standardly decomposed into two components, viability and fertility, each of which is understood probabilistically. With respect to viability, a fertilized egg is said to have a certain chance of surviving to reproductive age; with respect to fertility, an adult is said to have an expected number of offspring. There is more to evolutionary theory than the theory of natural selection, and here too one finds probabilistic concepts aplenty. When there is no selection, the theory of neutral evolution says that a gene’s chance of eventually reaching fixation is 1/(2N), where N is the number of organisms in the generation of the diploid population to which the gene belongs. The evolutionary consequences of mutation are likewise conceptualized in terms of the probability per unit time that a gene has of changing from one state to another. The examples just mentioned are all “forward-directed” probabilities; they describe the probability of later events, conditional on earlier events. However, evolutionary theory also uses “backwards probabilities” that describe the probability of a cause conditional on its effects; for example, coalescence theory allows one to calculate the expected number of generations in the past at which the genes in the present generation find their most recent common ancestor.
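A quick illustration of the neutral fixation probability just cited (an arithmetic example, not from the text): in a diploid population of \(N = 500\) organisms there are \(2N = 1{,}000\) copies of the gene in a generation, so a single neutral copy has probability
\[ \frac{1}{2N} = \frac{1}{1{,}000} = 0.001 \]
of eventually reaching fixation.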
In their book What Darwin Got Wrong, Jerry Fodor and Massimo Piattelli-Palmarini construct an a priori philosophical argument and an empirical biological argument. The biological argument aims to show that natural selection is much less important in the evolutionary process than many biologists maintain. The a priori argument begins with the claim that there cannot be selection for one but not the other of two traits that are perfectly correlated in a population; it concludes that there cannot be an evolutionary theory of adaptation. This article focuses mainly on the a priori argument.
A phylogeny that allows for lateral gene transfer (LGT) can be thought of as a strictly branching tree (all of whose branches are vertical) to which lateral branches have been added. Given that the goal of phylogenetics is to depict evolutionary history, we should look for the best supported phylogenetic network and not restrict ourselves to considering trees. However, the obvious extensions of popular tree-based methods such as maximum parsimony and maximum likelihood face a serious problem—if we judge networks by fit to data alone, networks that have lateral branches will always fit the data at least as well as any network that restricts itself to vertical branches. This is analogous to the well-studied problem of overfitting data in the curve-fitting problem. Analogous problems often have analogous solutions, and we propose to treat network inference as a case of model selection and use the Akaike Information Criterion (AIC). Strictly tree-like networks are more parsimonious than those that postulate lateral as well as vertical branches. This leads to the conclusion that we should not infer LGT events whenever doing so would improve our fit to data, but only when the improved fit is larger than the penalty for adding extra lateral branches.
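A rough sketch of the model-selection idea (using the textbook form of AIC rather than the paper's exact machinery): a candidate network \(M\) with maximized likelihood \(L(M)\) and \(k(M)\) adjustable parameters receives the score
\[ \mathrm{AIC}(M) = -2 \ln L(M) + 2\,k(M), \]
so postulating an additional lateral branch is warranted only if the improvement in \(\ln L\) it purchases exceeds the penalty incurred by the extra parameters it introduces.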
“Absence of evidence isn’t evidence of absence” is a slogan that is popular among scientists and nonscientists alike. This article assesses its truth by using a probabilistic tool, the Law of Likelihood. Qualitative questions (“Is E evidence about H?”) and quantitative questions (“How much evidence does E provide about H?”) are both considered. The article discusses the example of fossil intermediates. If finding a fossil that is phenotypically intermediate between two extant species provides evidence that those species have a common ancestor, does failing to find such a fossil constitute evidence that there was no common ancestor? Or should the failure merely be chalked up to the imperfection of the fossil record? The transitivity of the evidence relation in simple causal chains provides a broader context, which leads to discussion of the fine-tuning argument, the anthropic principle, and observation selection effects.
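The probabilistic tool invoked here is standardly stated as follows: observation E favors hypothesis \(H_1\) over hypothesis \(H_2\) if and only if
\[ P(E \mid H_1) > P(E \mid H_2), \]
with the strength of the favoring measured by the likelihood ratio \(P(E \mid H_1)/P(E \mid H_2)\).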
In my paper “Intelligent Design Theory and the Supernatural—the ‘God or Extra-Terrestrial’ Reply,” I argued that Intelligent Design (ID) Theory, when coupled with independently plausible further assumptions, leads to the conclusion that a supernatural intelligent designer exists. ID theory is therefore not neutral on the question of whether there are supernatural agents. In this respect, it differs from the Darwinian theory of evolution. John Beaudoin replies to my paper in his “Sober on Intelligent Design Theory and the Intelligent Designer,” arguing that my paper faces two challenges. In the present paper, I try to address Beaudoin’s challenges.
When proponents of Intelligent Design (ID) theory deny that their theory is religious, the minimalistic theory they have in mind (the mini-ID theory) is the claim that the irreducibly complex adaptations found in nature were made by one or more intelligent designers. The denial that this theory is religious rests on the fact that it does not specify the identity of the designer—a supernatural God or a team of extra-terrestrials could have done the work. The present paper attempts to show that this reply underestimates the commitments of the mini-ID theory. The mini-ID theory, when supplemented with four independently plausible further assumptions, entails the existence of a supernatural intelligent designer. It is further argued that scientific theories, such as the Darwinian theory of evolution, are neutral on the question of whether supernatural designers exist.
When a scientist uses an observation to formulate a theory, it is no surprise that the resulting theory accurately captures that observation. However, when the theory makes a novel prediction—when it predicts an observation that was not used in its formulation—this seems to provide more substantial confirmation of the theory. This paper presents a new approach to the vexed problem of understanding the epistemic difference between prediction and accommodation. In fact, there are several problems that need to be disentangled; in all of them, the key is the concept of overfitting. We float the hypothesis that accommodation is a defective methodology only when the methods used to accommodate the data fail to guard against the risk of overfitting. We connect our analysis with the proposals that other philosophers have made. We also discuss its bearing on the conflict between instrumentalism and scientific realism. Contents: Introduction; Predictivisms—a taxonomy; Observations; Formulating the problem; What might Annie be doing wrong?; Solutions; Observations explained; Mayo on severe tests; The miracle argument and scientific realism; Concluding comments.
What thesis is Hume trying to establish in his essay “On Miracles” (Section 10 of the Enquiry Concerning Human Understanding) and does he succeed? John Earman’s answer to the latter question is clearly conveyed by the title of his new book. Earman uses a Bayesian representation of the problem to make his case. For Earman, this mode of analysis is both perspicuous and nonanachronistic, in that probability reasoning was central to the 18th century debate about miracles in particular and testimony in general. Indeed, one of Hume’s most interesting antagonists, Richard Price, was the person to whom Thomas Bayes entrusted his now-famous essay for posthumous publication. For Earman, Price is the proper Bayesian, while Hume’s essay provides only “rhetoric and schein geld” (p. 73). Earman’s tone is consistently prosecutorial and sometimes snide; he says that his animus is not so much against Hume himself as against those who smugly invoke Hume’s essay as definitively settling the matter. This tone should not deter potential readers who are convinced that Hume’s essay contains something of value. Earman’s book is interesting and provocative in multiple ways—it places Hume’s essay in its historical setting, it offers an insightful close reading of the text, and it shows how the resources of Bayesianism can be powerfully put to work. Besides Earman’s own essay (94 pages long), the volume also contains Hume’s essay and relevant work by others, including Locke, Spinoza, Samuel Clarke, Price, Laplace, and Babbage. The book would be an excellent choice for an advanced undergraduate or graduate seminar.
I discuss two versions of the doomsday argument. According to “Gott's Line”, the fact that the human race has existed for 200,000 years licences the prediction that it will last between 5,100 and 7.8 million more years. According to “Leslie's Wedge”, the fact that I currently exist is evidence that increases the plausibility of the hypothesis that the human race will come to an end sooner rather than later. Both arguments rest on substantive assumptions about the sampling process that underlies our observations. These sampling assumptions have testable consequences, and so the sampling assumptions themselves must be regarded as empirical claims. The result of testing some of these consequences is that both doomsday arguments are empirically disconfirmed.
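The numbers in Gott's prediction can be reconstructed with a short calculation (shown here for illustration): if the moment of observation falls at a random point in the total lifetime of the human race, then with probability 0.95 it falls within the middle 95 percent of that lifetime, which entails that the future duration lies between \(1/39\) and \(39\) times the past duration; with a past of 200,000 years, that is
\[ \frac{200{,}000}{39} \approx 5{,}100 \text{ years} \quad \text{and} \quad 39 \times 200{,}000 = 7.8 \text{ million years}. \]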
We explore the evidential relationships that connect two standard claims of modern evolutionary biology. The hypothesis of common ancestry (which says that all organisms now on earth trace back to a single progenitor) and the hypothesis of natural selection (which says that natural selection has been an important influence on the traits exhibited by organisms) are logically independent; however, this leaves open whether testing one requires assumptions about the status of the other. Darwin noted that an extreme version of adaptationism would undercut the possibility of making inferences about common ancestry. Here we develop a converse claim—hypotheses that assert that natural selection has been an important influence on trait values are untestable unless supplemented by suitable background assumptions. The fact of common ancestry and a claim about quantitative genetics together suffice to render such hypotheses testable. Furthermore, we see no plausible alternative to these assumptions; we hypothesize that they are necessary as well as sufficient for adaptive hypotheses to be tested. This point has important implications for biological practice, since biologists standardly assume that adaptive hypotheses predict trait associations among tip species. Another consequence is that adaptive hypotheses cannot be confirmed or disconfirmed by a trait value that is universal within a single species, if that trait value deviates even slightly from the optimum. Contents: 1. Two Darwinian hypotheses; 2. Logical independence; 3. How adaptive hypotheses bear on the tree of life hypothesis; 4. How the tree of life hypothesis bears on adaptive hypotheses; 5. What do adaptive hypotheses predict?; 6. Common ancestry and quantitative genetics to the rescue; 7. Conclusion.
Nancy Cartwright (1983, 1999) argues that (1) the fundamental laws of physics are true when and only when appropriate ceteris paribus modifiers are attached and that (2) ceteris paribus modifiers describe conditions that are almost never satisfied. She concludes that when the fundamental laws of physics are true, they don't apply in the real world, but only in highly idealized counterfactual situations. In this paper, we argue that (1) and (2) together with an assumption about contraposition entail the opposite conclusion — that the fundamental laws of physics do apply in the real world. Cartwright extracts from her thesis about the inapplicability of fundamental laws the conclusion that they cannot figure in covering-law explanations. We construct a different argument for a related conclusion — that forward-directed idealized dynamical laws cannot provide covering-law explanations that are causal. This argument is neutral on whether the assumption about contraposition is true. We then discuss Cartwright's simulacrum account of explanation, which seeks to describe how idealized laws can be explanatory.
Unified explanations seek to situate the traits of human beings in a causal framework that also explains the trait values found in nonhuman species. Disunified explanations claim that the traits of human beings are due to causal processes not at work in the rest of nature. This paper outlines a methodology for testing hypotheses of these two types. Implications are drawn concerning evolutionary psychology, adaptationism, and anti-adaptationism.
This paper defends two theses about probabilistic reasoning. First, although modus ponens has a probabilistic analog, modus tollens does not – the fact that a hypothesis says that an observation is very improbable does not entail that the hypothesis is improbable. Second, the evidence relation is essentially comparative; with respect to hypotheses that confer probabilities on observation statements but do not entail them, an observation O may favor one hypothesis H1 over another hypothesis H2, but O cannot be said to confirm or disconfirm H1 without such relativization. These points have serious consequences for the Intelligent Design movement. Even if evolutionary theory entailed that various complex adaptations are very improbable, that would neither disconfirm the theory nor support the hypothesis of intelligent design. For either of these conclusions to follow, an additional question must be answered: With respect to the adaptive features that evolutionary theory allegedly says are very improbable, what is their probability of arising if they were produced by intelligent design? This crucial question has not been addressed by the ID movement.
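The asymmetry the first thesis asserts can be displayed schematically (a standard illustration, not a quotation from the paper). Modus ponens survives probabilification: from \(P(O \mid H)\) is high and \(H\), infer that \(O\) is probable. The tollens analog, which would infer that \(H\) is improbable from \(P(O \mid H)\) being very low together with \(O\), does not: a fair lottery with a million tickets assigns each ticket a very low probability of winning, yet the observation that a particular ticket won does not make the hypothesis that the lottery was fair improbable.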
Akaike’s framework for thinking about model selection in terms of the goal of predictive accuracy and his criterion for model selection have important philosophical implications. Scientists often test models whose truth values they already know, and they often decline to reject models that they know full well are false. Instrumentalism helps explain this pervasive feature of scientific practice, and Akaike’s framework helps provide instrumentalism with the epistemology it needs. Akaike’s criterion for model selection also throws light on the role of parsimony considerations in hypothesis evaluation. I explain the basic ideas behind Akaike’s framework and criterion; several biological examples, including the use of maximum likelihood methods in phylogenetic inference, are considered.
We have two main objections to Kerr and Godfrey-Smith's (2002) meticulous analysis. First, they misunderstand the position we took in Unto Others – we do not claim that individual-level statements about the evolution of altruism are always unexplanatory and always fail to capture causal relationships. Second, Kerr and Godfrey-Smith characterize the individual and the multi-level perspectives in terms of different sets of parameters. In particular, they do not allow the multi-level perspective to use the individual fitness parameters. We don't see why the multi-level perspective prevents one from thinking in these terms. Kerr and Godfrey-Smith's argument that Uyenoyama and Feldman's (1980, 1992) definition of altruism belongs more to the individualist perspective than it does to the multi-level perspective is an artifact of their choice of parameters; the same point applies to their argument about the individualism inherent in the idea of Class I and Class II fitness structures.
Instrumentalism is usually understood as a semantic thesis: scientific theories are neither true nor false, but are merely instruments for making predictions. Scientific realists are on firm ground when they reject this semantic claim. This paper focuses on epistemological rather than semantic instrumentalism. This form of instrumentalism claims that theories are to be judged by their ability to make accurate predictions, and that predictive accuracy is the only consideration that matters in the end. I consider how instrumentalism is related to a quite different proposal concerning how theories should be evaluated—scientific realism. Instrumentalism allows for the fact that a false model can get one closer to the truth than a true one.
When two causally independent processes each have a quantity that increases monotonically (either deterministically or in probabilistic expectation), the two quantities will be correlated, thus providing a counterexample to Reichenbach's principle of the common cause. Several philosophers have denied this, but I argue that their efforts to save the principle are unsuccessful. Still, one salvage attempt does suggest a weaker principle that avoids the initial counterexample. However, even this weakened principle is mistaken, as can be seen by exploring the concepts of homology and homoplasy used in evolutionary biology. I argue that the kernel of truth in the principle of the common cause is to be found by separating metaphysical and epistemological issues; as far as the epistemology is concerned, the Likelihood Principle is central.
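A minimal formal illustration of the initial counterexample (a toy example of my own, not the paper's): let \(X_t = t + \epsilon_t\) and \(Y_t = t + \delta_t\), where the noise terms are independent of each other and of everything else; sampling the pairs \((X_t, Y_t)\) at times \(t = 1, \dots, n\) yields a strong positive correlation between the two quantities, even though neither causes the other and they share no common cause.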
Perhaps because of its implications for our understanding of human nature, recent philosophy of biology has seen what might be the most dramatic work in the philosophies of the “special” sciences. This drama has centered on evolutionary theory, and in the second edition of this textbook, Elliott Sober introduces the reader to the most important issues of these developments. With a rare combination of technical sophistication and clarity of expression, Sober engages both the higher level of theory and the direct implications for such controversial issues as creationism, teleology, nature versus nurture, and sociobiology. Above all, the reader will gain from this book a firm grasp of the structure of evolutionary theory, the evidence for it, and the scope of its explanatory significance.
We address the following issues raised by the commentators of our target article and book: (1) the problem of multiple perspectives; (2) how to define group selection; (3) distinguishing between the concepts of altruism and organism; (4) genetic versus cultural group selection; (5) the dark side of group selection; (6) the relationship between psychological and evolutionary altruism; (7) the question of whether the psychological questions can be answered; (8) psychological experiments. We thank the contributors for their commentaries, which provide a diverse agenda for future study of evolution and morality. Our response will follow the organization of our book, distinguishing between evolutionary issues that concern fitness effects and psychological issues that concern motives.
The hypothesis of group selection fell victim to a seemingly devastating critique in 1960s evolutionary biology. In Unto Others (1998), we argue to the contrary, that group selection is a conceptually coherent and empirically well documented cause of evolution. We suggest, in addition, that it has been especially important in human evolution. In the second part of Unto Others, we consider the issue of psychological egoism and altruism -- do human beings have ultimate motives concerning the well-being of others? We argue that previous psychological and philosophical work on this question has been inconclusive. We propose an evolutionary argument for the claim that human beings have altruistic ultimate motives.
Human beings are peculiar. In laboratory experiments, they often cooperate in one-shot prisoners’ dilemmas, they frequently offer 1/2 and reject low offers in the ultimatum game, and they often bid 1/2 in the game of divide-the-cake. All these behaviors are puzzling from the point of view of game theory. The first two are irrational, if utility is measured in a certain way. The last isn’t positively irrational, but it is no more rational than other possible actions, since there are infinitely many other Nash equilibria besides the one in which both players bid 1/2. At the same time, these behaviors seem to indicate that people are sometimes inclined to be cooperative, fair, and just. In his stimulating new book, Brian Skyrms sets himself the task of showing why these inclinations evolved, or how they might have evolved, under the pressure of natural selection. The goal is not to justify our ethical intuitions, but to explain why we have them.
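To make the point about divide-the-cake explicit (a standard game-theoretic observation, not a quotation from the review): if each player demands a share of the cake and the demands are honored only when they sum to no more than the whole, then every profile of demands \((x,\, 1-x)\) with \(0 < x < 1\) is a Nash equilibrium, so the equal split \((\tfrac{1}{2}, \tfrac{1}{2})\) is just one point in a continuum of equilibria.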
Modus Darwin is a principle of inference that licenses the conclusion that two species have a common ancestor, based on the observation that they are similar. The present paper investigates the principle's probabilistic foundations.
That some propositions are testable, while others are not, was a fundamental idea in the philosophical program known as logical empiricism. That program is now widely thought to be defunct. Quine’s (1953) “Two Dogmas of Empiricism” and Hempel’s (1950) “Problems and Changes in the Empiricist Criterion of Meaning” are among its most notable epitaphs. Yet, as we know from Mark Twain’s comment on an obituary that he once had the pleasure of reading about himself, the report of a death can be an exaggeration. The research program that began in Vienna and Berlin continues, even though many of the specific formulations that came out of those circles are flawed and need to be replaced.
Reductionism is often understood to include two theses: (1) every singular occurrence that the special sciences can explain also can be explained by physics; (2) every law in a higher-level science can be explained by physics. These claims are widely supposed to have been refuted by the multiple realizability argument, formulated by Putnam (1967, 1975) and Fodor (1968, 1975). The present paper criticizes the argument and identifies a reductionistic thesis that follows from one of the argument's premises.
In Chapter 12 of Warrant and Proper Function, Alvin Plantinga constructs two arguments against evolutionary naturalism, which he construes as a conjunction E&N. The hypothesis E says that “human cognitive faculties arose by way of the mechanisms to which contemporary evolutionary thought directs our attention” (p. 220). With respect to proposition N, Plantinga (p. 270) says “it isn’t easy to say precisely what naturalism is,” but then adds that “crucial to metaphysical naturalism, of course, is the view that there is no such person as the God of traditional theism.” Plantinga tries to cast doubt on the conjunction E&N in two ways. His “preliminary argument” aims to show that the conjunction is probably false, given the fact (R) that our psychological mechanisms for forming beliefs about the world are generally reliable. His “main argument” aims to show that the conjunction E&N is self-defeating — if you believe E&N, then you should stop believing that conjunction. Plantinga further develops the main argument in his unpublished paper “Naturalism Defeated” (Plantinga 1994). We will try to show that both arguments contain serious errors.
An empirical procedure is suggested for testing a model that postulates variables that intervene between observed causes and observed effects against a model that includes no such postulate. The procedure is applied to two experiments in psychology. One involves a conditioning regimen that leads to response generalization; the other concerns the question of whether chimpanzees have a theory of mind.
This paper proposes a game-theoretic solution of the surprise examination problem. It is argued that the game of “matching pennies” provides a useful model for the interaction of a teacher who wants her exam to be surprising and students who want to avoid being surprised. A distinction is drawn between prudential and evidential versions of the problem. In both, the teacher should not assign a probability of zero to giving the exam on the last day. This representation of the problem provides a diagnosis of where the backwards induction argument, which “proves” that no surprise exam is possible, is mistaken.
No matter what we do, however kind or generous our deeds may seem, a hidden motive of selfishness lurks--or so science has claimed for years. This book, whose publication promises to be a major scientific event, tells us differently. In Unto Others philosopher Elliott Sober and biologist David Sloan Wilson demonstrate once and for all that unselfish behavior is in fact an important feature of both biological and human nature. Their book provides a panoramic view of altruism throughout the animal kingdom--from self-sacrificing parasites to insects that subsume themselves in the superorganism of a colony to the human capacity for selflessness--even as it explains the evolutionary sense of such behavior.
John Beatty (1995) and Alexander Rosenberg (1994) have argued against the claim that there are laws in biology. Beatty's main reason is that evolution is a process full of contingency, but he also takes the existence of relative significance controversies in biology and the popularity of pluralistic approaches to a variety of evolutionary questions to be evidence for biology's lawlessness. Rosenberg's main argument appeals to the idea that biological properties supervene on large numbers of physical properties, but he also develops case studies of biological controversies to defend his thesis that biology is best understood as an instrumental discipline. The present paper assesses their arguments.
If a parsimony criterion may be used to choose between theories that make different predictions, may the same criterion be used to choose between theories that are predictively equivalent? The work of the statistician H. Akaike (1973) is discussed in connection with this question. The results are applied to two examples in which parsimony has been invoked to choose between philosophical theories: Shoemaker's (1969) discussion of the possibility of time without change and the discussion by Smart (1959) and Brandt and Kim (1967) of mind/body dualism and the identity theory.
The thesis that natural selection explains the frequencies of traits in populations, but not why individual organisms have the traits they do, is here defended and elaborated. A general concept of ‘distributive explanation’ is discussed.
Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike, which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light on the theoretical virtues of parsimoniousness, unification, and non ad hocness, on the dispute about Bayesianism, and on empiricism and scientific realism.
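The Akaike result referred to can be stated compactly (the standard form; the paper supplies the assumptions and caveats): an approximately unbiased estimate of a model family's expected predictive accuracy is
\[ \log L(\hat{\theta}) - k, \]
where \(L(\hat{\theta})\) is the likelihood of the family's best-fitting member and \(k\) is the number of adjustable parameters; a higher-order polynomial therefore earns its better fit only if the gain in log-likelihood exceeds the extra parameters it requires.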
Only one traditional objection to Pascal's wager is telling: Pascal assumes a particular theology, but without justification. We produce two new objections that go deeper. We show that even if Pascal's theology is assumed to be probable, Pascal's argument does not go through. In addition, we describe a wager that Pascal never considered, which leads away from Pascal's conclusion. We then consider the impact of these considerations on other prudential arguments concerning what one should believe, and on the more general question of when and why belief formation ought to be based solely on the evidence.
Elliott Sober is one of the leading philosophers of science and is a former winner of the Lakatos Prize, the major award in the field. This new collection of essays will appeal to a readership that extends well beyond the frontiers of the philosophy of science. Sober shows how ideas in evolutionary biology bear in significant ways on traditional problems in philosophy of mind and language, epistemology, and metaphysics. Amongst the topics addressed are psychological egoism, solipsism, and the interpretation of belief and utterance, empiricism, Ockham's razor, causality, essentialism, and scientific laws. The collection will prove invaluable to a wide range of philosophers, primarily those working in the philosophy of science, the philosophy of mind, and epistemology.