The naïve see causal connections everywhere. Consider the fact that Evelyn Marie Adams won the New Jersey lottery twice. The naïve find it irresistible to think that this cannot be a coincidence. Maybe the lottery was rigged or perhaps some uncanny higher power placed its hand upon her brow. Sophisticates respond with an indulgent smile and ask the naïve to view Adams’ double win within a larger perspective. Given all the lotteries there have been, it isn’t at all surprising that someone would win one of them twice. No need to invent conspiracy theories or invoke the paranormal – the double win was a mere coincidence.
Fifty years before Darwin defended his theory of evolution by natural selection in The Origin of Species, the French biologist Jean-Baptiste Lamarck put forward an evolutionary theory of his own. According to Lamarck, life has an inherent tendency to develop from simple to complex through a preordained sequence of stages. The lineage to which human beings belong is the oldest, since we are the most complex of living things. Present-day worms belong to a lineage that is much younger, since they are simpler. For Lamarck, the human beings and worms that exist today do not share a common ancestor, even though human beings derive from worm-like ancestors.
Quine’s publication in 1951 of “Two Dogmas of Empiricism” was a watershed event in 20th-century philosophy. In that essay, Quine sought to demolish the concepts of analyticity and the a priori; he also sketched a positive proposal of his own -- epistemological holism. There can be little doubt that philosophy changed as a result of Quine’s work. The question I want to address here is whether it should have. My goal is not to argue for a return to the halcyon days of the logical empiricists. Rather, I want to take stock. Now, almost fifty years after the publication of “Two Dogmas,” what view should we take of analyticity, the a priori, and epistemological holism, and of what Quine said about these topics?
The design argument for the existence of God took a probabilistic turn in the 17th and 18th centuries. Earlier versions, such as Thomas Aquinas’ 5th way, usually embraced the premise that goal-directed systems (things that “act for an end” or have a function) must have been created by an intelligent designer. This idea – which we might express by the slogan “no design without a designer” – survived into the 17th and 18th centuries, and it is with us still in the writings of many creationists. The new version of the argument, inspired by the emerging mathematical theory of probability, removed the premise of necessity. It begins with the thought that goal-directed systems might have arisen by intelligent design or by chance; the problem is to discern which hypothesis is more plausible. With the epistemic concept of plausibility characterized in terms of the mathematical concept of probability, the design argument was given a new direction.
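To make the likelihood reading concrete (a standard reconstruction, not a quotation from the abstract): letting O be the observation that a goal-directed system exists, the probabilistic version of the argument claims that O favors Design over Chance because

```latex
\Pr(O \mid \mathrm{Design}) \;>\; \Pr(O \mid \mathrm{Chance})
```

On this reading, the argument no longer asserts that a designer is necessary for goal-directedness; it asserts only that the observation is more probable under the design hypothesis than under the chance hypothesis.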
The concept of fitness began its career in biology long before evolutionary theory was mathematized. Fitness was used to describe an organism’s vigor, or the degree to which organisms “fit” into their environments. An organism’s success in avoiding predators and in building a nest obviously contributes to its fitness and to the fitness of its offspring, but the peacock’s gaudy tail seemed to be in an entirely different line of work. Fitness, as a term in ordinary language (as in “physical fitness”) and in its original biological meaning, applied to the survival of an organism and its offspring, not to sheer reproductive output (Paul ////; Cronin 1991). Darwin’s separation of natural from sexual selection may sound odd from a modern perspective, but it made sense from this earlier point of view.
Carl Hempel set the tone for subsequent philosophical work on scientific explanation by resolutely locating the problem he wanted to address outside of epistemology. “Hempel’s problem,” as I will call it, was not to say what counts as evidence that X is the explanation of Y. Rather, the question was what it means for X to explain Y. Hempel’s theory of explanation and its successors don’t tell you what to believe; instead, they tell you which of your beliefs (if any) can be said to explain a given target proposition.
The problem of simplicity involves three questions: How is the simplicity of a hypothesis to be measured? How is the use of simplicity as a guide to hypothesis choice to be justified? And how is simplicity related to other desirable features of hypotheses -- that is, how is simplicity to be traded off? The present paper explores these three questions, from a variety of viewpoints, including Bayesianism, likelihoodism, and the framework of predictive accuracy formulated by Akaike (1973). It may turn out that simplicity has no global justification -- that its justification varies from problem to problem.
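For concreteness, Akaike's criterion (Akaike 1973), which the abstract invokes, scores a model M by penalized fit; the notation below is the standard one, not necessarily the paper's:

```latex
\mathrm{AIC}(M) \;=\; 2k \;-\; 2\ln \Pr\!\big(\mathrm{data} \mid L(M)\big)
```

where L(M) is the best-fitting (maximum likelihood) member of M and k is the number of adjustable parameters; lower scores estimate greater predictive accuracy. Simplicity enters through k, which is one way the third question -- how simplicity trades off against fit -- receives a precise answer.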
Egoism and altruism need not be characterized as single factor theories of motivation, according to which there is a single kind of preference (self-regarding or other-regarding) that moves people to action. Rather, each asserts a claim of causal primacy--a claim as to which sort of preference is the more powerful influence on behavior. This paper shows that this idea of causal primacy can be clarified in a standard scientific way. This formulation explains why many observed behaviors fail to discriminate between the hypothesis that the agent is altruistic and the hypothesis that the agent is egoistic. A distinction between altruistic motivation and altruistic action is then drawn, from which it follows that it is an open question how often altruists behave altruistically.
The probability that the fitter of two alleles will increase in frequency in a population goes up as the product of N (the effective population size) and s (the selection coefficient) increases. Discovering the distribution of values for this product across different alleles in different populations is a very important biological task. However, biologists often use the product Ns to define a different concept; they say that drift “dominates” selection or that drift is “stronger than” selection when Ns is much smaller than some threshold quantity (e.g., ½) and that the reverse is true when Ns is much larger than that threshold. We argue that the question of whether drift dominates selection for a single allele in a single population makes no sense. Selection and drift are causes of evolution, but there is no fact of the matter as to which cause is stronger in the evolution of any given allele.
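As a numerical companion (my illustration, not the paper's), Kimura's diffusion approximation gives the fixation probability of an allele as a function of N and s; it also shows that alleles with the same product Ns can have different fixation probabilities:

```python
import math

def fixation_probability(N, s, p=None):
    """Kimura's diffusion approximation for the probability that an allele
    at initial frequency p fixes in a diploid population of effective size N
    with selection coefficient s."""
    if p is None:
        p = 1.0 / (2 * N)  # a single new mutant copy
    if s == 0:
        return p  # neutral case: fixation probability equals initial frequency
    return (1 - math.exp(-4 * N * s * p)) / (1 - math.exp(-4 * N * s))

# Two new mutants with the same Ns = 0.5 but different N and s:
for N, s in [(100, 0.005), (10_000, 0.00005)]:
    print(f"N={N}, s={s}, Ns={N * s}: u = {fixation_probability(N, s):.6g}")
```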
In the world of philosophy of science, the dominant theory of confirmation is Bayesian. In the wider philosophical world, the idea of inference to the best explanation exerts a considerable influence. Here we place the two worlds in collision, using Bayesian confirmation theory to argue that explanatoriness is evidentially irrelevant.
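In the Bayesian framework the abstract refers to, evidential relevance is probability-raising, and the irrelevance thesis can be put in screening-off form (this rendering is a reconstruction; E_expl is my label for the proposition that H would explain the observations O):

```latex
\text{E confirms H iff } \Pr(H \mid E) > \Pr(H),
\qquad
\text{irrelevance: } \Pr(H \mid O \wedge E_{\mathrm{expl}}) = \Pr(H \mid O)
```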
In their 2010 book, Biology’s First Law, D. McShea and R. Brandon present a principle that they call ‘‘ZFEL,’’ the zero force evolutionary law. ZFEL says (roughly) that when there are no evolutionary forces acting on a population, the population’s complexity (i.e., how diverse its member organisms are) will increase. Here we develop criticisms of ZFEL and describe a different law of evolution; it says that diversity and complexity do not change when there are no evolutionary causes.
I consider three theses that are friendly to anthropomorphism. Each makes a claim about what can be inferred about the mental life of chimpanzees from the fact that humans and chimpanzees both have behavioral trait B and humans produce this behavior by having mental trait M. The first thesis asserts that this fact makes it probable that chimpanzees have M. The second says that this fact provides strong evidence that chimpanzees have M. The third claims that the fact is evidence that chimpanzees have M. The third thesis follows from a plausible Reichenbachian model of how a common ancestor is probabilistically related to its descendants. The first two theses do not, and they have no general evolutionary justification.
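One way to spell out the Reichenbachian model mentioned in the abstract (my notation): let A be the common ancestor's state, and suppose the human and chimpanzee states are conditionally independent given A, with each descendant positively correlated with the ancestor. A standard common-cause result then yields

```latex
\Pr(M_{\mathrm{chimp}} \mid M_{\mathrm{human}}) \;>\; \Pr(M_{\mathrm{chimp}})
```

so the human observation raises the probability that chimpanzees have M. That delivers evidence (the third thesis) without guaranteeing that the resulting probability is high or that the evidence is strong (the first two theses).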
“The theory of evolution is about organisms evolving, populations evolving. What does this theory tell us about the quantum mechanics of micro-particles? The answer is ‘nothing’. There’s lots of stuff that happens in the world that the theory just isn’t telling us about. The existence of a God who occasionally intervenes in nature might be one of those things.”
To evaluate Hume's thesis that causal claims are always empirical, I consider three kinds of causal statement: “e1 caused e2”, “e1 promoted e2”, and “e1 would promote e2”. Restricting my attention to cases in which “e1 occurred” and “e2 occurred” are both empirical, I argue that Hume was right about the first two, but wrong about the third. Standard causal models of natural selection that have this third form are a priori mathematical truths. Some are obvious, others less so. Empirical work on natural selection takes the form of defending causal claims of the first two types. I provide biological examples that illustrate differences among these three kinds of causal claim.
I discuss two subjects in Samir Okasha’s excellent book, Evolution and the Levels of Selection. In consonance with Okasha’s critique of the conventionalist view of the units of selection problem, I argue that conventionalists have not attended to what realists mean by group, individual, and genic selection. In connection with Okasha’s discussion of the Price equation and contextual analysis, I discuss whether the existence of these two quantitative frameworks is a challenge to realism.
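For reference, the Price equation that the abstract mentions is standardly written (notation mine, not the book's):

```latex
\bar{w}\,\Delta\bar{z} \;=\; \mathrm{Cov}(w_i, z_i) \;+\; \mathrm{E}\big(w_i\,\Delta z_i\big)
```

where z_i and w_i are the character value and fitness of the i-th entity; the covariance term is usually read as the contribution of selection at the focal level, the expectation term as transmission and lower-level change.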
This paper is a sympathetic critique of the argument that Reichenbach develops in Chap. 2 of Experience and Prediction for the thesis that sense experience justifies belief in the existence of an external world. After discussing his attack on the positivist theory of meaning, I describe the probability ideas that Reichenbach presents. I argue that Reichenbach begins with an argument grounded in the Law of Likelihood but that he then endorses a different argument that involves prior probabilities. I try to show how this second step in Reichenbach's approach can be strengthened by using ideas that have been developed recently for understanding causation in terms of the idea of intervention.
Markov models of evolution describe changes in the probability distribution of the trait values a population might exhibit. In consequence, they also describe how entropy and conditional entropy values evolve, and how the mutual information that characterizes the relation between an earlier and a later moment in a lineage’s history depends on how much time separates them. These models therefore provide an interesting perspective on questions that usually are considered in the foundations of physics—when and why does entropy increase and at what rates do changes in entropy take place? They also throw light on an important epistemological question: are there limits on what your observations of the present can tell you about the evolutionary past?
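A toy illustration (mine, with made-up transition probabilities) of how a Markov model of trait evolution induces a trajectory of entropy values:

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy of a probability vector, in bits."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A two-state Markov model of trait evolution; the per-generation
# transition probabilities u and v are illustrative assumptions.
u, v = 0.10, 0.05
P = np.array([[1 - u, u],
              [v, 1 - v]])  # P[i, j] = Pr(state j at t+1 | state i at t)

dist = np.array([1.0, 0.0])  # the lineage starts in state 0 with certainty
for t in range(51):
    if t % 10 == 0:
        print(f"t={t:2d}  Pr(states)={np.round(dist, 4)}  "
              f"H={entropy_bits(dist):.4f} bits")
    dist = dist @ P  # advance the distribution one generation
```

In this chain the distribution converges to the stationary distribution (1/3, 2/3), whose entropy is about 0.918 bits; along the way the entropy climbs to nearly 1 bit as the distribution passes close to (1/2, 1/2) and then declines, so entropy need not increase monotonically.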
Evolutionary theory is awash with probabilities. For example, natural selection is said to occur when there is variation in fitness, and fitness is standardly decomposed into two components, viability and fertility, each of which is understood probabilistically. With respect to viability, a fertilized egg is said to have a certain chance of surviving to reproductive age; with respect to fertility, an adult is said to have an expected number of offspring. There is more to evolutionary theory than the theory of natural selection, and here too one finds probabilistic concepts aplenty. When there is no selection, the theory of neutral evolution says that a gene’s chance of eventually reaching fixation is 1/(2N), where N is the number of organisms in the generation of the diploid population to which the gene belongs. The evolutionary consequences of mutation are likewise conceptualized in terms of the probability per unit time a gene has of changing from one state to another. The examples just mentioned are all “forward-directed” probabilities; they describe the probability of later events, conditional on earlier events. However, evolutionary theory also uses “backwards probabilities” that describe the probability of a cause conditional on its effects; for example, coalescence theory allows one to calculate the expected number of generations in the past at which the genes in the present generation find their most recent common ancestor.
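Two of the probabilistic quantities mentioned here have simple standard forms (stated for concreteness; the notation is mine):

```latex
\Pr(\text{a new neutral mutation eventually fixes}) = \frac{1}{2N},
\qquad
\mathrm{E}[T_2] = 2N \ \text{generations}
```

where T_2 is the time back to the most recent common ancestor of two gene copies sampled from the present generation of a diploid population of size N; the first is a forward-directed probability, the second a backwards-looking coalescent expectation.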
In their book What Darwin Got Wrong, Jerry Fodor and Massimo Piattelli-Palmarini construct an a priori philosophical argument and an empirical biological argument. The biological argument aims to show that natural selection is much less important in the evolutionary process than many biologists maintain. The a priori argument begins with the claim that there cannot be selection for one but not the other of two traits that are perfectly correlated in a population; it concludes that there cannot be an evolutionary theory of adaptation. This article focuses mainly on the a priori argument.
A phylogeny that allows for lateral gene transfer (LGT) can be thought of as a strictly branching tree (all of whose branches are vertical) to which lateral branches have been added. Given that the goal of phylogenetics is to depict evolutionary history, we should look for the best supported phylogenetic network and not restrict ourselves to considering trees. However, the obvious extensions of popular tree-based methods such as maximum parsimony and maximum likelihood face a serious problem—if we judge networks by fit to data alone, networks that have lateral branches will always fit the data at least as well as any network that restricts itself to vertical branches. This is analogous to the well-studied problem of overfitting data in the curve-fitting problem. Analogous problems often have analogous solutions and we propose to treat network inference as a case of model selection and use the Akaike Information Criterion (AIC). Strictly tree-like networks are more parsimonious than those that postulate lateral as well as vertical branches. This leads to the conclusion that we should not always infer LGT events whenever it would improve our fit-to-data, but should do so only when the improved fit is larger than the penalty for adding extra lateral branches.
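A minimal sketch of the proposed bookkeeping, with hypothetical numbers (the log-likelihoods and parameter counts below are invented for illustration):

```python
def aic(log_likelihood: float, k: int) -> float:
    """Akaike Information Criterion; lower scores are better."""
    return 2 * k - 2 * log_likelihood

# A strict tree versus the same tree with three added lateral (LGT) branches.
tree_score = aic(log_likelihood=-1234.0, k=20)
lgt_score = aic(log_likelihood=-1232.0, k=23)  # better fit, more parameters

# The LGT network improves fit by 2 log-likelihood units but pays a penalty
# for 3 extra parameters, so AIC here favors the strict tree.
print(f"tree AIC = {tree_score:.1f}, LGT network AIC = {lgt_score:.1f}")
```

The design choice mirrors the abstract's conclusion: an LGT event is inferred only when its improvement in fit exceeds the parameter penalty it incurs.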
“Absence of evidence isn’t evidence of absence” is a slogan that is popular among scientists and nonscientists alike. This article assesses its truth by using a probabilistic tool, the Law of Likelihood. Qualitative questions (“Is E evidence about H?”) and quantitative questions (“How much evidence does E provide about H?”) are both considered. The article discusses the example of fossil intermediates. If finding a fossil that is phenotypically intermediate between two extant species provides evidence that those species have a common ancestor, does failing to find such a fossil constitute evidence that there was no common ancestor? Or should the failure merely be chalked up to the imperfection of the fossil record? The transitivity of the evidence relation in simple causal chains provides a broader context, which leads to discussion of the fine-tuning argument, the anthropic principle, and observation selection effects.
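The probabilistic tool in question has a standard formulation (the rendering is mine, but the principle is the one the article uses): observation E favors hypothesis H1 over H2 if and only if

```latex
\Pr(E \mid H_1) \;>\; \Pr(E \mid H_2)
```

Since Pr(not-E | H) = 1 - Pr(E | H), it follows that if E favors H1 over H2, then the absence of E favors H2 over H1; absence of evidence is evidence of absence, though it may be extremely weak evidence when the two likelihoods are close.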
Parsimony arguments are advanced in both science and philosophy. How are they related? This question is a test case for Naturalism_P, the thesis that philosophical theories and scientific theories should be evaluated by the same criteria. In this paper, I describe the justifications that attach to two types of parsimony argument in science. In the first, parsimony is a surrogate for likelihood. In the second, parsimony is relevant to estimating how accurately a model will predict new data when fitted to old. I then consider how these two justifications apply to parsimony arguments in philosophy concerning theism and atheism, the mind/body problem, ethical realism, the question of whether mental properties are causally efficacious, and nominalism versus Platonism about numbers.
In my paper “Intelligent Design Theory and the Supernatural—the ‘God or Extra-Terrestrial’ Reply,” I argued that Intelligent Design (ID) Theory, when coupled with independently plausible further assumptions, leads to the conclusion that a supernatural intelligent designer exists. ID theory is therefore not neutral on the question of whether there are supernatural agents. In this respect, it differs from the Darwinian theory of evolution. John Beaudoin replies to my paper in his “Sober on Intelligent Design Theory and the Intelligent Designer,” arguing that my paper faces two challenges. In the present paper, I try to address Beaudoin’s challenges.
When proponents of Intelligent Design (ID) theory deny that their theory is religious, the minimalistic theory they have in mind (the mini-ID theory) is the claim that the irreducibly complex adaptations found in nature were made by one or more intelligent designers. The denial that this theory is religious rests on the fact that it does not specify the identity of the designer—a supernatural God or a team of extra-terrestrials could have done the work. The present paper attempts to show that this reply underestimates the commitments of the mini-ID Theory. The mini-ID theory, when supplemented with four independently plausible further assumptions, entails the existence of a supernatural intelligent designer. It is further argued that scientific theories, such as the Darwinian theory of evolution, are neutral on the question of whether supernatural designers exist.
When scientists use an observation to formulate a theory, it is no surprise that the resulting theory accurately captures that observation. However, when the theory makes a novel prediction—when it predicts an observation that was not used in its formulation—this seems to provide more substantial confirmation of the theory. This paper presents a new approach to the vexed problem of understanding the epistemic difference between prediction and accommodation. In fact, there are several problems that need to be disentangled; in all of them, the key is the concept of overfitting. We float the hypothesis that accommodation is a defective methodology only when the methods used to accommodate the data fail to guard against the risk of overfitting. We connect our analysis with the proposals that other philosophers have made. We also discuss its bearing on the conflict between instrumentalism and scientific realism.
Introduction
Predictivisms—a taxonomy
Observations
Formulating the problem
What might Annie be doing wrong?
Solutions
Observations explained
Mayo on severe tests
The miracle argument and scientific realism
Concluding comments
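To make the overfitting idea vivid (a standard toy example, not taken from the paper): a model flexible enough to accommodate every data point captures the noise along with the signal, and so predicts new data worse than a simpler model does:

```python
import numpy as np

rng = np.random.default_rng(0)
x_old = np.linspace(-1, 1, 10)
y_old = 2 * x_old + rng.normal(0, 0.2, size=x_old.size)  # linear signal + noise
x_new = np.linspace(-1, 1, 100)
y_new = 2 * x_new + rng.normal(0, 0.2, size=x_new.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_old, y_old, degree)  # degree 9 interpolates all 10 points
    mse_old = np.mean((np.polyval(coeffs, x_old) - y_old) ** 2)
    mse_new = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: MSE on old data {mse_old:.4f}, on new data {mse_new:.4f}")
```

The degree-9 polynomial fits the old data essentially perfectly yet predicts the new data worse than the straight line; on the hypothesis floated above, accommodation goes wrong exactly when the accommodating method ignores this risk.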
What thesis is Hume trying to establish in his essay “On Miracles” (Section 10 of the Enquiry Concerning Human Understanding), and does he succeed? John Earman’s answer to the latter question is clearly conveyed by the title of his new book. Earman uses a Bayesian representation of the problem to make his case. For Earman, this mode of analysis is both perspicuous and nonanachronistic, in that probability reasoning was central to the 18th-century debate about miracles in particular and testimony in general. Indeed, one of Hume’s most interesting antagonists, Richard Price, was the person to whom Thomas Bayes entrusted his now-famous essay for posthumous publication. For Earman, Price is the proper Bayesian, while Hume’s essay provides only “rhetoric and schein geld” (p. 73). Earman’s tone is consistently prosecutorial and sometimes snide; he says that his animus is not so much against Hume himself as against those who smugly invoke Hume’s essay as definitively settling the matter. This tone should not deter potential readers who are convinced that Hume’s essay contains something of value. Earman’s book is interesting and provocative in multiple ways—it places Hume’s essay in its historical setting, it offers an insightful close reading of the text, and it shows how the resources of Bayesianism can be powerfully put to work. Besides Earman’s own essay (94 pages long), the volume also contains Hume’s essay and relevant work by others, including Locke, Spinoza, Samuel Clarke, Price, Laplace, and Babbage. The book would be an excellent choice for an advanced undergraduate or graduate seminar.
We explore the evidential relationships that connect two standard claims of modern evolutionary biology. The hypothesis of common ancestry (which says that all organisms now on earth trace back to a single progenitor) and the hypothesis of natural selection (which says that natural selection has been an important influence on the traits exhibited by organisms) are logically independent; however, this leaves open whether testing one requires assumptions about the status of the other. Darwin noted that an extreme version of adaptationism would undercut the possibility of making inferences about common ancestry. Here we develop a converse claim—hypotheses that assert that natural selection has been an important influence on trait values are untestable unless supplemented by suitable background assumptions. The fact of common ancestry and a claim about quantitative genetics together suffice to render such hypotheses testable. Furthermore, we see no plausible alternative to these assumptions; we hypothesize that they are necessary as well as sufficient for adaptive hypotheses to be tested. This point has important implications for biological practice, since biologists standardly assume that adaptive hypotheses predict trait associations among tip species. Another consequence is that adaptive hypotheses cannot be confirmed or disconfirmed by a trait value that is universal within a single species, if that trait value deviates even slightly from the optimum.
1 Two Darwinian hypotheses
2 Logical independence
3 How adaptive hypotheses bear on the tree of life hypothesis
4 How the tree of life hypothesis bears on adaptive hypotheses
5 What do adaptive hypotheses predict?
6 Common ancestry and quantitative genetics to the rescue
7 Conclusion
Nancy Cartwright (1983, 1999) argues that (1) the fundamental laws of physics are true when and only when appropriate ceteris paribus modifiers are attached and that (2) ceteris paribus modifiers describe conditions that are almost never satisfied. She concludes that when the fundamental laws of physics are true, they don't apply in the real world, but only in highly idealized counterfactual situations. In this paper, we argue that (1) and (2) together with an assumption about contraposition entail the opposite conclusion — that the fundamental laws of physics do apply in the real world. Cartwright extracts from her thesis about the inapplicability of fundamental laws the conclusion that they cannot figure in covering-law explanations. We construct a different argument for a related conclusion — that forward-directed idealized dynamical laws cannot provide covering-law explanations that are causal. This argument is neutral on whether the assumption about contraposition is true. We then discuss Cartwright's simulacrum account of explanation, which seeks to describe how idealized laws can be explanatory.