By combining experimental interventions with search procedures for graphical causal models we show that under familiar assumptions, with perfect data, N - 1 experiments suffice to determine the causal relations among N > 2 variables when each experiment randomizes at most one variable. We show the same bound holds for adaptive learners, but does not hold for N > 4 when each experiment can simultaneously randomize more than one variable. This bound provides a type of ideal for the measure of success of heuristic approaches in active learning methods of causal discovery, which currently use less informative measures.
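To illustrate (with our own toy construction, not the paper's proof) why N - 1 single-variable randomizations can suffice under these assumptions: for N = 3 variables generated by a hypothetical linear Gaussian chain, two experiments pin down the structure, because randomizing a variable cuts its incoming edges, so any remaining correlation must reflect a direct or downstream path.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

def simulate(randomize):
    # True structure (our assumption): X -> Y -> Z, linear Gaussian, no direct X -> Z edge.
    X = rng.normal(size=n)  # X is exogenous, so "randomizing" it changes nothing
    Y = rng.normal(size=n) if randomize == "Y" else 0.8 * X + rng.normal(size=n)
    Z = 0.8 * Y + rng.normal(size=n)
    return X, Y, Z

def dependent(a, b, threshold=0.05):
    # crude dependence check: sample correlation clearly away from zero
    return abs(np.corrcoef(a, b)[0, 1]) > threshold

# Experiment 1: randomize X. X correlates with Y and Z, so X is an ancestor of both.
X, Y, Z = simulate("X")
assert dependent(X, Y) and dependent(X, Z)

# Experiment 2: randomize Y, cutting the X -> Y edge.
X, Y, Z = simulate("Y")
assert dependent(Y, Z)       # Y is an ancestor of Z
assert not dependent(X, Z)   # no remaining correlation: no direct X -> Z edge

print("recovered structure: X -> Y -> Z")
```

With perfect (here, very large sample) data, the two experiments jointly determine the ancestor relation and rule out the extra edge, matching the N - 1 bound for this small case.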
We argue that current discussions of criteria for actual causation are ill-posed in several respects. (1) The methodology of current discussions is by induction from intuitions about an infinitesimal fraction of the possible examples and counterexamples; (2) cases with larger numbers of causes generate novel puzzles; (3) “neuron” and causal Bayes net diagrams are, as deployed in discussions of actual causation, almost always ambiguous; (4) actual causation is (intuitively) relative to an initial system state since state changes are relevant, but most current accounts ignore state changes through time; (5) more generally, there is no reason to think that philosophical judgements about these sorts of cases are normative; but (6) there is a dearth of relevant psychological research that bears on whether various philosophical accounts are descriptive. Our skepticism is not directed towards the possibility of a correct account of actual causation; rather, we argue that standard methods will not lead to such an account. A different approach is required. Once upon a time a hungry wanderer came into a village. He filled an iron cauldron with water, built a fire under it, and dropped a stone into the water. “I do like a tasty stone soup,” he announced. Soon a villager added a cabbage to the pot, another added some salt and others added potatoes, onions, carrots, mushrooms, and so on, until there was a meal for all.
Hans Reichenbach has been not only one of the founding fathers of logical empiricism but also one of the most prominent figures in the philosophy of science of the past century. While some of his ideas continue to be of interest in current philosophical programs, an important part of his early work has been neglected, and some of it has been unavailable to English readers. Among Reichenbach’s overlooked (and untranslated) early works, his doctoral thesis of 1915, The Concept of Probability in the Mathematical Representation of Reality, deserves special attention, both for the topics covered and for its significance for a proper understanding of his intellectual trajectory. This volume anticipates most of the fundamental themes of his later philosophy. In particular, it addresses the issue of the application of probability statements to reality, as well as the relationship between probability and causality—questions that have been at the core of his research throughout his life.
Since the introduction of mathematical population genetics, its machinery has shaped our fundamental understanding of natural selection. Selection is taken to occur when differential fitnesses produce differential rates of reproductive success, where fitnesses are understood as parameters in a population genetics model. To understand selection is to understand what these parameter values measure and how differences in them lead to frequency changes. I argue that this traditional view is mistaken. The descriptions of natural selection rendered by population genetics models are in general neither predictive nor explanatory and introduce avoidable conceptual confusions. I conclude that a correct understanding of natural selection requires explicitly causal models of reproductive success.
Glymour (Philos Sci 73:369–389, 2006) claims that classical population genetic models can reliably predict short- and medium-run population dynamics only given information about future fitnesses those models cannot themselves predict, and that in consequence the causal, ecological models which can predict future fitnesses afford a more foundational description of natural selection than do population genetic models. This paper defends the first claim from objections offered by Gildenhuys (Biol Philos, 2011).
The literature on causal discovery has focused on interventions that involve randomly assigning values to a single variable. But such a randomized intervention is not the only possibility, nor is it always optimal. In some cases it is impossible or it would be unethical to perform such an intervention. We provide an account of ‘hard’ and ‘soft’ interventions and discuss what they can contribute to causal discovery. We also describe how the choice of the optimal intervention(s) depends heavily on the particular experimental setup and the assumptions that can be made.
The causal Bayes net framework specifies a set of axioms for causal discovery. This article explores the set of causal variables that function as relata in these axioms. Spirtes showed how a causal system can be equivalently described by two different sets of variables that stand in a non-trivial translation-relation to each other, suggesting that there is no “correct” set of causal variables. I extend Spirtes’ result to the general framework of linear structural equation models and then explore the extent to which the possibility of intervention or a preference for simpler causal systems may help in selecting among sets of causal variables.
Bayesian models of human learning are becoming increasingly popular in cognitive science. We argue that their purported confirmation largely relies on a methodology that depends on premises that are inconsistent with the claim that people are Bayesian about learning and inference. Bayesian models in cognitive science derive their appeal from their normative claim that the modeled inference is in some sense rational. Standard accounts of the rationality of Bayesian inference imply that an agent selects the option that maximizes the posterior expected utility. Experimental confirmation of the models, however, has been claimed on the basis of groups of agents that probability-match the posterior. Probability matching constitutes support for the Bayesian claim only if additional unobvious and untested (but testable) assumptions are invoked. The alternative strategy of weakening the underlying notion of rationality no longer distinguishes the Bayesian model uniquely. A new account of rationality—either for inference or for decision-making—is required to successfully confirm Bayesian models in cognitive science.
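The gap between expected-utility maximization and probability matching can be made concrete with a small simulation (our own illustration; the posterior values are arbitrary assumptions, not data from the literature): when nature draws outcomes from a calibrated posterior, a maximizer who always picks the more probable option is strictly more accurate than a matcher who samples options with their posterior probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000

# Hypothetical posterior over two options; option 0 is more probable.
posterior = np.array([0.7, 0.3])

# Nature samples the correct option from the posterior, so the posterior is calibrated.
truth = rng.choice(2, size=trials, p=posterior)

# A posterior-expected-utility maximizer always picks the more probable option (index 0).
maximizer = np.zeros(trials, dtype=int)

# A probability matcher picks each option with its posterior probability.
matcher = rng.choice(2, size=trials, p=posterior)

acc_max = np.mean(maximizer == truth)    # approx. 0.70
acc_match = np.mean(matcher == truth)    # approx. 0.7**2 + 0.3**2 = 0.58
print(f"maximizer accuracy: {acc_max:.3f}, matcher accuracy: {acc_match:.3f}")
```

The point the abstract presses is visible here: observing the matcher's behavior in a group supports the Bayesian model only under further assumptions, since the matcher is not maximizing posterior expected utility.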
Bogen and Woodward (1988) advance a distinction between data and phenomena. Roughly, the former are the observations reported by experimental scientists, the latter are objective, stable features of the world to which scientists infer based on patterns in reliable data. While phenomena are explained by theories, data are not, and so the empirical basis for an inference to a theory consists in claims about phenomena. McAllister (1997) has recently offered a critique of their version of this distinction, offering in its place a version on which phenomena are theory-laden, and hence on which the empirical support for inferences to theories is also, unavoidably, theory-laden. In this commentary I argue that McAllister and Bogen and Woodward are mistaken in thinking that the distinction is necessary, and that the empirical support for inferences to theories is not necessarily theory-laden in the way McAllister's account entails it is.
Hans Reichenbach is well known for his limiting frequency view of probability, with his most thorough account given in The Theory of Probability in 1935/1949. Perhaps less known are Reichenbach's early views on probability and its epistemology. In his doctoral thesis from 1915, Reichenbach espouses a Kantian view of probability, where the convergence limit of an empirical frequency distribution is guaranteed to exist thanks to the synthetic a priori principle of lawful distribution. Reichenbach claims to have given a purely objective account of probability, while integrating the concept into a more general philosophical and epistemological framework. A brief synopsis of Reichenbach's thesis and a critical analysis of the problematic steps of his argument will show that the roots of many of his most influential insights on probability and causality can be found in this early work.
An interventionist account of causation characterizes causal relations in terms of changes resulting from particular interventions. I provide a new example of a causal relation for which there does not exist an intervention satisfying the common interventionist standard. I consider adaptations that would save this standard and describe their implications for an interventionist account of causation. No adaptation preserves all the aspects that make the interventionist account appealing. Part of the fallout is a clearer account of the difficulties in characterizing so-called “soft” interventions.
Models that fail to satisfy the Markov condition are unstable in the sense that changes in state variable values may cause changes in the values of background variables, and these changes in background variables lead to predictive error. This sort of error arises exactly from the failure of non-Markovian models to track the set of causal relations upon which the values of response variables depend. The result has implications for discussions of the level of selection: under certain plausible conditions the models of selection presented in such debates will not satisfy the Markov condition when fit to data from real populations. Since this is true both for group and individual level models, neither sort of model correctly represents the causal structure generating the phenomena of interest, nor correctly explains them.
I argue that results from foraging theory give us good reason to think some evolutionary phenomena are indeterministic and hence that evolutionary theory must be probabilistic. Foraging theory implies that random search is sometimes selectively advantageous, and experimental work suggests that it is employed by a variety of organisms. There are reasons to think such search will sometimes be genuinely indeterministic. If it is, then individual reproductive success will also be indeterministic, and so too will frequency change in populations of organisms employing such search.
Using a variety of different results from the literature, I show how causal discovery with experiments is limited unless substantive assumptions about the underlying causal structure are made. These results undermine the view that experiments, such as randomized controlled trials, can independently provide a gold standard for causal discovery. Moreover, I present a concrete example in which causal underdetermination persists despite exhaustive experimentation and argue that such cases undermine the appeal of an interventionist account of causation as its dependence on other assumptions is not spelled out.
This survey presents some of the main principles involved in discovering causal relations. They belong to a large array of possible assumptions and conditions about causal relations, whose various combinations limit the possibilities of acquiring causal knowledge in different ways. How much and in what detail the causal structure can be discovered from what kinds of data depends on the particular set of assumptions one is able to make. The assumptions considered here provide a starting point to explore further the foundations of causal discovery procedures, and how they can be improved.
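A minimal sketch of how such assumptions license discovery (our own toy example, not the survey's): under the causal Markov and faithfulness assumptions, conditional-independence facts in purely observational data already determine the undirected skeleton of a linear Gaussian chain. The model, the partial-correlation test, and the 0.05 threshold below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Observational data from an assumed chain X -> Y -> Z (linear Gaussian).
X = rng.normal(size=n)
Y = 0.8 * X + rng.normal(size=n)
Z = 0.8 * Y + rng.normal(size=n)

def pcorr(a, b, c):
    # partial correlation of a and b given c, via residuals of linear fits
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

# Under Markov and faithfulness, the skeleton follows from these facts alone:
assert abs(np.corrcoef(X, Y)[0, 1]) > 0.05   # X, Y dependent: edge X - Y
assert abs(np.corrcoef(Y, Z)[0, 1]) > 0.05   # Y, Z dependent: edge Y - Z
assert abs(pcorr(X, Z, Y)) < 0.05            # X independent of Z given Y: no edge X - Z

print("recovered skeleton: X - Y - Z")
```

Drop faithfulness and the inference fails: a vanishing partial correlation could then reflect exactly cancelling pathways rather than a missing edge, which is one instance of the general point that what can be discovered depends on what one is willing to assume.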
I argue that the orthodox account of probabilistic causation, on which probabilistic causes determine the probability of their effects, is inconsistent with certain ontological assumptions implicit in scientific practice. In particular, scientists recognize the possibility that properties of populations can cause the behavior of members of the populations. Such emergent population‐level causation is metaphysically impossible on the orthodoxy.
Standard models of statistical explanation face two intractable difficulties. Salmon (1984) argues that because statistical explanations are essentially probabilistic, we can make sense of statistical explanation only by rejecting the intuition that scientific explanations are contrastive. Further, frequently the point of a statistical explanation is to identify the etiology of its explanandum, but on standard models probabilistic explanations often fail to do so. This paper offers an alternative conception of statistical explanations on which explanations of the frequency of a property consist in the derivation of that frequency from a statistical specification of the mechanism by which instances of the relevant property are produced. Such explanations are contrastive precisely because they identify the determinate causal etiologies of their explananda.
argues that correlated interactions are necessary for group selection. His argument turns on a particular procedure for measuring the strength of selection, and employs a restricted conception of correlated interaction. It is here shown that the procedure in question is unreliable, and that while related procedures are reliable in special contexts, they do not require correlated interactions for group selection to occur. It is also shown that none of these procedures, all of which employ partial regression methods, are reliable when correlated interactions of a specific kind arise, and it is argued that such correlated interactions will likely be ubiquitous in natural populations.
In this article I explore some statistical difficulties confronting going conceptions of ‘group’ as understood in accounts of group selection. Most such theories require real groups but define the reality of groups in ways that make it impossible to test for their reality. There are alternatives, but they either require or invite a nominalism about groups that many theorists abjure.
Sober (1984) presents an account of selection motivated by the view that one property can causally explain the occurrence of another only if the first plays a unique role in the causal production of the second. Sober holds that a causal property will play such a unique role if it is a population level cause of its effect, and on this basis argues that there is selection for a trait T only if T is a population level cause of survival and reproductive success. Sterelny and Kitcher (1988) claim against Sober that some traits directly subject to selection will not satisfy the probabilistic condition on population level causation. In this paper I show that Sober has the resources to resist the Sterelny-Kitcher complaint, but I argue that not all traits that satisfy the probabilistic condition play the required unique role in the production of their effects.
This paper describes the application of eight statistical and machine-learning methods to derive computer models for predicting mortality of hospital patients with pneumonia from their findings at initial presentation. The eight models were each constructed based on 9847 patient cases and they were each evaluated on 4352 additional cases. The primary evaluation metric was the error in predicted survival as a function of the fraction of patients predicted to survive. This metric is useful in assessing a model’s potential to assist a clinician in deciding whether to treat a given patient in the hospital or at home. We examined the error rates of the models when predicting that a given fraction of patients will survive. We examined survival fractions between 0.1 and 0.6. Over this range, each model’s predictive error rate was within 1% of the error rate of every other model. When predicting that approximately 30% of the patients will survive, all the models have an error rate of less than 1.5%. The models are distinguished more by the number of variables and parameters that they contain than by their error rates; these differences suggest which models may be the most amenable to future implementation as paper-based guidelines.
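One natural reading of the evaluation metric described, error in predicted survival as a function of the fraction predicted to survive, can be sketched as follows. This ranking-based reading, the function name, and the toy numbers are our own illustrative assumptions, not the paper's definitions or data.

```python
import numpy as np

def error_at_fraction(scores, survived, frac):
    """Predict survival for the top `frac` of patients ranked by predicted
    survival probability; return the fraction of those predictions that are
    wrong, i.e. patients predicted to survive who actually died."""
    n = len(scores)
    k = int(round(frac * n))
    top = np.argsort(scores)[::-1][:k]          # k most confident survival predictions
    return float(np.mean(survived[top] == 0))   # error rate among them

# Toy illustration with made-up scores and outcomes (1 = survived, 0 = died).
scores = np.array([0.95, 0.9, 0.8, 0.6, 0.4, 0.3, 0.2, 0.1, 0.05, 0.01])
survived = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])

print(error_at_fraction(scores, survived, 0.4))  # top 4 all survived -> 0.0
print(error_at_fraction(scores, survived, 0.6))  # top 6 include one death -> 1/6
```

Under this reading, sweeping `frac` from 0.1 to 0.6 traces out the curve on which the paper reports all eight models falling within 1% of one another.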
Lennox and Wilson (1994) critique dispositional accounts of selection on the grounds that such accounts will class evolutionary events as cases of selection whether or not the environment constrains population growth. Lennox and Wilson claim that pure r-selection involves no environmental checks on growth, and that accounts of natural selection ought to distinguish between the two sorts of cases. I argue that Lennox and Wilson are mistaken in claiming that pure r-selection involves no environmental checks, but suggest that two related cases support their substantive complaint, namely that dispositional accounts of selection have resources insufficient for making important distinctions in causal structure.
Oaksford & Chater (O&C) aim to provide teleological explanations of behavior by giving an appropriate normative standard: Bayesian inference. We argue that there is no uncontroversial independent justification for the normativity of Bayesian inference, and that O&C fail to satisfy a necessary condition for teleological explanations: demonstration that the normative prescription played a causal role in the behavior's existence.
An interventionist account of causation characterizes causal relations in terms of changes resulting from particular interventions. We provide an example of a causal relation for which there does not exist an intervention satisfying the common interventionist standard. We consider adaptations that would save this standard and describe their implications for an interventionist account of causation. No adaptation preserves all the aspects that make the interventionist account appealing.
We consider the problems arising from using sequences of experiments to discover the causal structure among a set of variables, none of which is known ahead of time to be an “outcome”. In particular, we present various approaches to resolving conflicts in the experimental results arising from sampling variability in the experiments. We provide a sufficient condition that allows for pooling of data from experiments with different joint distributions over the variables. Satisfaction of the condition allows for an independence test with greater sample size that may resolve some of the conflicts in the experimental results. The pooling condition has its own problems, but should, due to its generality, be informative to techniques for meta-analysis.
We argue that the authors’ call to integrate Bayesian models more strongly with algorithmic- and implementational-level models must go hand in hand with a call for a fully developed account of algorithmic rationality. Without such an account, the integration of levels would come at the expense of the explanatory benefit that rational models provide.
We present an algorithm to infer causal relations between a set of measured variables on the basis of experiments on these variables. The algorithm assumes that the causal relations are linear, but is otherwise completely general: It provides consistent estimates when the true causal structure contains feedback loops and latent variables, while the experiments can involve surgical or ‘soft’ interventions on one or multiple variables at a time. The algorithm is ‘online’ in the sense that it combines the results from any set of available experiments, can incorporate background knowledge and resolves conflicts that arise from combining results from different experiments. In addition we provide a necessary and sufficient condition that determines when the algorithm can uniquely return the true graph, and can be used to select the next best experiment until this condition is satisfied. We demonstrate the method by applying it to simulated data and the flow cytometry data of Sachs et al.
Increasingly, epistemologists are becoming interested in social structures and their effect on epistemic enterprises, but little attention has been paid to the proper distribution of experimental results among scientists. This paper will analyze a model first suggested by two economists, which nicely captures one type of learning situation faced by scientists. The results of a computer simulation study of this model provide two interesting conclusions. First, in some contexts, a community of scientists is, as a whole, more reliable when its members are less aware of their colleagues' experimental results. Second, there is a robust tradeoff between the reliability of a community and the speed with which it reaches a correct conclusion.
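The kind of model at issue, a community of myopic learners facing a two-armed bandit and sharing results over a communication network, can be sketched as follows. Every detail here (the beta-count updating, the parameter values, the network shapes) is an illustrative assumption of ours, not the paper's exact specification; single runs are stochastic, so no particular outcome is guaranteed.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_community(n_agents, neighbors_of, p_good=0.51, p_bad=0.5, rounds=2000):
    """Each round, every agent pulls the arm it currently believes better and
    updates beta-style success/failure counts on the outcomes of everyone it
    can see. Returns True if all agents end up favoring the better arm."""
    a = np.ones((n_agents, 2))  # success counts per agent per arm
    b = np.ones((n_agents, 2))  # failure counts per agent per arm
    p = np.array([p_bad, p_good])  # arm 1 is objectively (slightly) better
    for _ in range(rounds):
        choices = (a / (a + b)).argmax(axis=1)          # myopic arm choice
        outcomes = rng.random(n_agents) < p[choices]    # stochastic payoffs
        for i in range(n_agents):
            for j in neighbors_of[i]:                   # j's result is visible to i
                arm = choices[j]
                if outcomes[j]:
                    a[i, arm] += 1
                else:
                    b[i, arm] += 1
    return bool((a / (a + b)).argmax(axis=1).min() == 1)

n = 8
complete = {i: list(range(n)) for i in range(n)}               # everyone sees everyone
cycle = {i: [(i - 1) % n, i, (i + 1) % n] for i in range(n)}   # sparse communication

print("complete graph converged correctly:", run_community(n, complete))
print("cycle converged correctly:", run_community(n, cycle))
```

Running many such simulations and comparing the complete graph against sparser networks is the style of study from which the reliability-versus-speed tradeoff reported in the abstract emerges.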
Frederick Douglass, the abolitionist, the civil rights advocate and the great rhetorician, has been the focus of much academic research. Only more recently has Douglass's work on aesthetics begun to receive its due, and even then its philosophical scope is rarely appreciated. Douglass's aesthetic interest was notably not so much in art itself, but in understanding aesthetic presentation as an epistemological and psychological aspect of the human condition and thereby as a social and political tool. He was fascinated by the power of images, and took particular interest in the emerging technologies of photography. He often returned to the themes of art, pictures and aesthetic perception in his speeches. Even after the end of slavery, he saw himself first and foremost as a human rights advocate, and he suggested that his work and thoughts as a public intellectual always in some way related to this end. In this regard, his interest in the power of photographic images to impact the human soul was a lifelong concern. His reflections accordingly center on the psychological and political potentials of images and the relationship between art, culture, and human dignity. In this chapter we discuss Douglass's views and practical use of photography and other forms of imagery, and tease out his view of their transformational potential, particularly with respect to combating racist attitudes. We propose that his views and actions suggest that he intuitively, if not explicitly, anticipated many later philosophical, pragmatist and ecological insights regarding the generative habits of mind and affordance perception: that is, that we perceive the world through our values and habitual ways of engaging with it, and thus that our perception is active and creative, not passive and objective. Our understanding of the world is simultaneously shaped by and shaping our perceptions.
Douglass saw that in a racist and bigoted society this means that change through facts and rational arguments will be hard. A distorted lens distorts, and accordingly reproduces and perceives its own distortion. His interest in aesthetics is intimately connected to this conundrum of knowledge and change, perception and action. In part precisely because of his understanding of how stereotypical categories and dominant relations work on our minds, he sees a radical transformational potential in certain art and imagery. We see in his work a profound understanding of the value-laden and action-oriented nature of perception, and of what we today call the perception of affordances (that is, what our environment permits or invites us to do). Douglass is particularly interested in the social environment and the social affordances of how we perceive other humans, and he thinks that photographs can impact the human intellect in a transformative manner. In terms of the very process of aesthetic perception, his views interestingly cohere with and supplement a recent theory about the conditions and consequences of being an aesthetic beholder. The main idea is that artworks typically invite an asymmetric engagement in which one can behold them without being the object of reciprocal attention. This might allow for a kind of vulnerability and openness that holds transformational potential not typically available in more strategic and goal-directed modes of perception. As mentioned, Douglass's main interest is in social change, and specifically in combating racist social structures and negative stereotypes of black people. He is fascinated by the potential of photography in particular as a means of correcting fallacious stereotypes, as it allows a more direct and less distorted image of the individuality and multidimensionality of black people.
We end with a discussion of how, given this interpretation of aesthetic perception, we can understand the specific imagery used by Douglass himself, and how he tried to use aesthetic modes to subvert and change the racist habitus in the individual and collective mind of his society. We suggest that Frederick Douglass, the human rights activist, had a sophisticated philosophy of aesthetics, mind, epistemology and, particularly, of the transformative and political power of images. His works in many ways anticipate, and sometimes go beyond, later scholars in these and other fields, such as psychology and critical theory. Overall, we propose that our world could benefit from revisiting Douglass's art and thought.
This paper presents the leading idea of my doctoral dissertation and thus has been shaped by the reactions of all the members of my thesis committee: Charles Chastain, Walter Edelberg, W. Kent Wilson, Dorothy Grover, and Charles Marks. I am especially grateful for the help of Professors Chastain, Edelberg, and Wilson; each worked closely with me at one stage or another in the development of the ideas contained in the present work. Shorter versions of this paper were presented at the 47th Annual Northwest Conference on Philosophy (1995), the 1996 Mid-South Philosophy Conference, the 1997 meeting of the Central Division of the American Philosophical Association, and at the University of Washington, Seattle; thanks to all audiences for their insightful comments and questions and also to my conference commentators, Eric Gampel, Jonathan Cohen, and Bruce Glymour, respectively, each of whom offered a thoughtful critique. Lastly, I extend my gratitude to anonymous referees, including two from Mind and Language, whose remarks led to significant improvements in the paper.
In this paper I offer an appraisal of James Bogen and James Woodward’s distinction between data and phenomena which pursues two objectives. First, I aim to clarify the notion of a scientific phenomenon. Such a clarification is required because, despite its intuitive plausibility, it is not exactly clear how Bogen and Woodward’s distinction has to be understood. I reject one common interpretation of the distinction, endorsed for example by James McAllister and Bruce Glymour, which identifies phenomena with patterns in data sets. Furthermore, I point out that other interpretations of Bogen and Woodward’s distinction do not specify the relationship between phenomena and theories in a satisfying manner. In order to avoid this problem I propose a contextual understanding of scientific phenomena according to which phenomena are states of affairs which play specific roles in scientific practice and to which we adopt a special epistemic attitude. Second, I evaluate the epistemological significance of Bogen and Woodward’s distinction with respect to the debate between scientific realists and constructive empiricists. Contrary to what Bogen and Woodward claim, I argue that the distinction does not provide a convincing argument against constructive empiricism.
In a recent article, “Wayward Modeling: Population Genetics and Natural Selection,” Bruce Glymour claims that population genetics is burdened by serious predictive and explanatory inadequacies and that the theory itself is to blame. Because Glymour overlooks a variety of formal modeling techniques in population genetics, his arguments do not quite undermine a major scientific theory. However, his arguments are extremely valuable as they provide definitive proof that those who would deploy classical population genetics over natural systems must do so with careful attention to interactions between individual population members and environmental causes. Glymour’s arguments have deep implications for causation in classical population genetics.