Woodward's long-awaited book is an attempt to construct a comprehensive account of causation and explanation that applies to a wide variety of causal and explanatory claims in different areas of science and everyday life. The book engages some of the relevant literature from other disciplines, as Woodward weaves together examples, counterexamples, criticisms, defenses, objections, and replies into a convincing defense of the core of his theory: that we can analyze causation by appeal to the notion of manipulation.
This paper attempts to elucidate three characteristics of causal relationships that are important in biological contexts. Stability has to do with whether a causal relationship continues to hold under changes in background conditions. Proportionality has to do with whether changes in the state of the cause “line up” in the right way with changes in the state of the effect and with whether the cause and effect are characterized in a way that contains irrelevant detail. Specificity is connected both to David Lewis’s notion of “influence” and to the extent to which a causal relation approximates the ideal of one cause–one effect. Interrelations among these notions and their possible biological significance are also discussed.
A number of writers, myself included, have recently argued that an “interventionist” treatment of causation of the sort defended in Woodward, 2003 can be used to cast light on so-called “causal exclusion” arguments. This interventionist treatment of causal exclusion has in turn been criticized by other philosophers. This paper responds to these criticisms. It describes an interventionist framework for thinking about causal relationships when supervenience relations are present. I contend that this framework helps us to see that standard arguments for causal exclusion involve mistaken assumptions about what it is appropriate to control for or hold fixed in assessing causal claims. The framework also provides a natural way of capturing the idea that properties that supervene on but that are not identical with realizing properties can be causally efficacious.
This paper describes an alternative to the common view that explanation in the special sciences involves subsumption under laws. According to this alternative, whether or not a generalization can be used to explain has to do with whether it is invariant rather than with whether it is lawful. A generalization is invariant if it is stable or robust in the sense that it would continue to hold under a relevant class of changes. Unlike lawfulness, invariance comes in degrees and has other features that are well suited to capture the characteristics of explanatory generalizations in the special sciences. For example, a generalization can be invariant even if it has exceptions or holds only over a limited spatio-temporal interval. The notion of invariance can be used to resolve a number of dilemmas that arise in standard treatments of explanatory generalizations in the special sciences.
This paper explores the question of whether all or most explanations in biology are, or ideally should be, ‘mechanistic’. I begin by providing an account of mechanistic explanation, making use of the interventionist ideas about causation I have developed elsewhere. This account emphasizes the way in which mechanistic explanations, at least in the biological sciences, integrate difference‐making and spatio‐temporal information, and exhibit what I call fine‐tunedness of organization. I also emphasize the role played by modularity conditions in mechanistic explanation. I will then argue, in agreement with John Dupré, that, given this account, it is plausible that many biological systems require explanations that are relatively non‐mechanical or depart from expectations one associates with the behaviour of machines.
This paper explores some issues about the choice of variables for causal representation and explanation. Depending on which variables a researcher employs, many causal inference procedures and many treatments of causation will reach different conclusions about which causal relationships are present in some system of interest. The assumption of this paper is that some choices of variables are superior to other choices for the purpose of causal analysis. A number of possible criteria for variable choice are described and defended within a broadly interventionist approach to causation.
This paper discusses some issues concerning the relationship between the mental and the physical, including the so-called causal exclusion argument, within the framework of a broadly interventionist approach to causation.
It is widely believed that robustness (of inferences, measurements, models, phenomena and relationships discovered in empirical investigation etc.) is a Good Thing. However, there are many different notions of robustness. These often differ both in their normative credentials and in the conditions that warrant their deployment. Failure to distinguish among these notions can result in the uncritical transfer of considerations which support one notion to contexts in which another notion is being deployed. This paper surveys several different notions of robustness and tries to identify why (and in what circumstances) each is valuable or appealing. I begin by discussing the notion of robustness addressed in Aldrich's paper (robustness as insensitivity of the results of inference to alternative specifications) and then discuss how this relates to robustness of derivations, robustness of measurement results, and robustness as a mark of causal as opposed to (merely) correlational relationships.
This paper responds to recent criticisms of the idea that true causal claims, satisfying a minimal “interventionist” criterion for causation, can differ in the extent to which they satisfy other conditions—called stability and proportionality—that are relevant to their use in explanatory theorizing. It reformulates the notion of proportionality so as to avoid problems with previous formulations. It also introduces the notion of conditional independence or irrelevance, which I claim is central to understanding the respects in which, and the extent to which, upper-level explanations can be “autonomous”.
Manipulability theories of causation, according to which causes are to be regarded as handles or devices for manipulating effects, have considerable intuitive appeal and are popular among social scientists and statisticians. This article surveys several prominent versions of such theories advocated by philosophers, and the many difficulties they face. Philosophical statements of the manipulationist approach are generally reductionist in aspiration and assign a central role to human action. These contrast with recent discussions employing a broadly manipulationist framework for understanding causation, such as those due to the computer scientist Judea Pearl and others, which are non-reductionist and rely instead on the notion of an intervention. This is simply an appropriately exogenous causal process; it has no essential connection with human action. This interventionist framework manages to avoid at least some of the difficulties faced by traditional philosophical versions of the manipulability theory and helps to clarify the content of causal claims.
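The notion of an intervention described above can be illustrated with a toy structural causal model. The following sketch is purely hypothetical (the variables, linear mechanisms, and numbers are invented, not drawn from the surveyed papers): an intervention on X is modeled as replacing X's usual causal mechanism with an exogenously fixed value, in the spirit of the Pearl-style framework the abstract mentions.

```python
# Hypothetical sketch of an intervention in a toy structural causal model.
# U -> X -> Y; intervening on X severs X's dependence on U.
import random

def sample(intervene_x=None):
    """Draw one (x, y) sample; intervene_x, if given, overrides X's mechanism."""
    u = random.gauss(0, 1)                           # exogenous noise
    x = u if intervene_x is None else intervene_x    # intervention cuts X off from U
    y = 2 * x + random.gauss(0, 0.1)                 # Y depends causally on X
    return x, y

random.seed(0)
# Observationally, X simply tracks U; under do(X = 1), X is set exogenously.
observed = [sample() for _ in range(1000)]
intervened = [sample(intervene_x=1.0) for _ in range(1000)]
mean_y_do_1 = sum(y for _, y in intervened) / len(intervened)
# In this invented model, E[Y | do(X = 1)] is approximately 2.
```

The design choice worth noting is that the intervention is itself part of the model's causal structure (an exogenous process that overrides one mechanism while leaving the others intact), rather than anything tied to human agency.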
What is the relationship between, on the one hand, the sorts of causal claims found in the special sciences (and in common sense) and, on the other hand, the world as described by physics? A standard picture goes like this: the fundamental laws of physics are causal laws in the sense that they can be interpreted as telling us that realizations of one set of physical factors or properties “causes” realizations of other properties. Causal claims in the special sciences are then true (to the extent that they are) in virtue of “instantiating” these underlying causal laws; as it is often put, the latter serve as “truth-makers” for the former. The picture is thus one according to which the notion of cause, as it occurs in the special sciences, is reflected or “grounded” in a fairly straightforward and transparent way in a similar notion that occurs in fundamental physics. This paper explores some alternatives to this picture.
Hierarchical Bayesian models (HBMs) provide an account of Bayesian inference in a hierarchically structured hypothesis space. Scientific theories are plausibly regarded as organized into hierarchies in many cases, with higher levels sometimes called ‘paradigms’ and lower levels encoding more specific or concrete hypotheses. Therefore, HBMs provide a useful model for scientific theory change, showing how higher‐level theory change may be driven by the impact of evidence on lower levels. HBMs capture features described in the Kuhnian tradition, particularly the idea that higher‐level theories guide learning at lower levels. In addition, they help resolve certain issues for Bayesians, such as scientific preference for simplicity and the problem of new theories.
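The mechanism by which evidence at the lower level drives change at the higher level can be sketched with a minimal discrete example. Everything here is invented for illustration (the paradigm names, the coin-bias hypotheses, and all the numbers are assumptions, not the models discussed in the paper): two "paradigms" each assign priors to specific hypotheses about a coin's bias, and evidence about flips updates the specific hypotheses and, through them, the paradigms themselves.

```python
# Hypothetical two-level hierarchical Bayesian update.
# paradigm -> {specific hypothesis (coin bias): prior prob given that paradigm}
paradigms = {
    "fair-ish": {0.4: 0.5, 0.6: 0.5},
    "extreme":  {0.1: 0.5, 0.9: 0.5},
}
paradigm_prior = {"fair-ish": 0.5, "extreme": 0.5}

def likelihood(bias, heads, flips):
    """Binomial likelihood, omitting the constant binomial coefficient."""
    return bias ** heads * (1 - bias) ** (flips - heads)

def paradigm_posterior(heads, flips):
    # Marginal likelihood of each paradigm = sum over its specific hypotheses.
    joint = {
        p: paradigm_prior[p] * sum(w * likelihood(b, heads, flips)
                                   for b, w in hyps.items())
        for p, hyps in paradigms.items()
    }
    z = sum(joint.values())
    return {p: v / z for p, v in joint.items()}

# Nine heads in ten flips favors the "extreme" paradigm via its 0.9 hypothesis:
# the higher level shifts only because a lower-level hypothesis fits the data.
post = paradigm_posterior(heads=9, flips=10)
```

The point the sketch illustrates is structural: no evidence bears on a paradigm directly; paradigms gain or lose credence only through the performance of the concrete hypotheses they sponsor.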
This paper defends an interventionist treatment of mechanisms and contrasts this with Waskan (forthcoming). Interventionism embodies a difference-making conception of causation. I contrast such conceptions with geometrical/mechanical or “actualist” conceptions, associating Waskan’s proposals with the latter. It is argued that geometrical/mechanical conceptions of causation cannot replace difference-making conceptions in characterizing the behavior of mechanisms, but that some of the intuitions behind the geometrical/mechanical approach can be captured by thinking in terms of spatio-temporally organized difference-making information.
Issues concerning scientific explanation have been a focus of philosophical attention from Pre-Socratic times through the modern period. However, recent discussion really begins with the development of the Deductive-Nomological (DN) model. This model has had many advocates (including Popper 1935, 1959; Braithwaite 1953; Gardiner 1959; Nagel 1961) but unquestionably the most detailed and influential statement is due to Carl Hempel (Hempel 1942, 1965, and Hempel & Oppenheim 1948). These papers and the reaction to them have structured subsequent discussion concerning scientific explanation to an extraordinary degree. After some general remarks by way of background and orientation (Section 1), this entry describes the DN model and its extensions, and then turns to some well-known objections (Section 2). It next describes a variety of subsequent attempts to develop alternative models of explanation, including Wesley Salmon's Statistical Relevance (Section 3) and Causal Mechanical (Section 4) models and the Unificationist models due to Michael Friedman and Philip Kitcher (Section 5). Section 6 provides a summary and discusses directions for future work.
In this paper I criticize the commonly accepted idea that the generalizations of the special sciences should be construed as ceteris paribus laws. This idea rests on mistaken assumptions about the role of laws in explanation and their relation to causal claims. Moreover, the major proposals in the literature for the analysis of ceteris paribus laws are, on their own terms, complete failures. I sketch a more adequate alternative account of the content of causal generalizations in the special sciences which, I argue, should replace the ceteris paribus conception.
This paper defends an interventionist account of causation by construing this account as a contribution to methodology, rather than as a set of theses about the ontology or metaphysics of causation. It also uses the topic of causation to raise some more general issues about the relation between, on the one hand, methodology, and, on the other hand, ontology and metaphysics, as these are understood in contemporary philosophical discussion, particularly among so-called analytic metaphysicians. It concludes with the suggestion that issues about the ontology of causation often can be fruitfully reconstrued as methodological proposals.
This chapter explores the possibility of weakening the criteria for causal explanation in Making Things Happen to yield various forms of non-causal explanation. These include: (1) retaining the idea that explanations must answer what-if-things-had-been-different questions but dropping the requirement that answers to such questions must take the form of claims about what would happen under interventions; (2) retaining the w-question requirement but allowing generalizations that hold for mathematical or conceptual reasons to figure in explanations; and (3) dropping the w-question requirement to accommodate the role of information about irrelevance in explanation.
Cartwright's criticisms expose some gaps and difficulties in the argument for the causal Markov condition in our essay ‘Independence, Invariance and the Causal Markov Condition’ (), and we are grateful for the opportunity to reformulate our position. In particular, Cartwright disagrees vigorously with many of the theses we advance about the connection between causation and manipulation. Although we are not persuaded by some of her criticisms, we shall confine ourselves to showing how our central argument can be reconstructed and to casting doubt on Cartwright's claim that the causal Markov condition typically fails when there are indeterministic by-products. Sections: Why believe the causal Markov condition?; Causation and manipulation; The argument; Indeterministic by-products and the causal Markov condition; The chemical factory counterexample and PM2; Conclusions: causation and manipulability.
This essay advocates a “functional” approach to causation and causal reasoning: these are to be understood in terms of the goals and purposes of causal thinking. This approach is distinguished from accounts based on metaphysical considerations or on reconstruction of “intuitions.”
This paper provides a restatement and defense of the data/phenomena distinction introduced by Jim Bogen and me several decades ago (e.g., Bogen and Woodward, The Philosophical Review, 303–352, 1988). Additional motivation for the distinction is introduced, ideas surrounding the distinction are clarified, and an attempt is made to respond to several criticisms.
We argue that Koch’s postulates are best understood within an interventionist account of causation, in the sense described in Woodward. We show how this treatment helps to resolve interpretive puzzles associated with Koch’s work and how it clarifies the different roles the postulates play in providing useful, yet not universal criteria for disease causation. Our paper is an effort at rational reconstruction; we attempt to show how Koch’s postulates and reasoning make sense and are normatively justified within an interventionist framework and more difficult to understand within alternative frameworks for thinking about causation.
We use the phrase "moral intuition" to describe the appearance in consciousness of moral judgments or assessments without any awareness of having gone through a conscious reasoning process that produces this assessment. This paper investigates the neural substrates of moral intuition. We propose that moral intuitions are part of a larger set of social intuitions that guide us through complex, highly uncertain and rapidly changing social interactions. Such intuitions are shaped by learning. The neural substrates for moral intuition include fronto-insular, cingulate, and orbito-frontal cortices and associated subcortical structures such as the septum, basal ganglia, and amygdala. Understanding the role of these structures undercuts many philosophical doctrines concerning the status of moral intuitions, but vindicates the claim that they can sometimes play a legitimate role in moral decision-making.
This article defends the use of interventionist counterfactuals to elucidate causal and explanatory claims against criticisms advanced by James Bogen and Peter Machamer. Against Bogen, I argue that counterfactual claims concerning what would happen under interventions are meaningful and have determinate truth values, even in a deterministic world. I also argue, against both Machamer and Bogen, that we need to appeal to counterfactuals to capture notions like causal relevance and causal mechanism. Contrary to what both authors suppose, counterfactuals are not "unscientific": a substantial tradition within statistics and the causal modelling literature makes heavy use of them.
This paper explores the relationship between a manipulability conception of causation and the causal Markov condition (CM). We argue that violations of CM also violate widely shared expectations—implicit in the manipulability conception—having to do with the absence of spontaneous correlations. They also violate expectations concerning the connection between independence or dependence relationships in the presence and absence of interventions.
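The expectation about the absence of spontaneous correlations can be made concrete with a small simulation. The sketch below is a hypothetical illustration (the fork structure, probabilities, and variable names are invented): under the causal Markov condition, two effects of a common cause are correlated unconditionally but become independent once the common cause is held fixed ("screening off").

```python
# Hypothetical fork C -> A, C -> B with binary variables: A and B correlate
# only through C, so conditioning on C removes the correlation.
import random

random.seed(0)

def flip(p):
    return 1 if random.random() < p else 0

samples = []
for _ in range(20000):
    c = flip(0.5)
    a = flip(0.9 if c else 0.1)   # A tracks C, with some noise
    b = flip(0.9 if c else 0.1)   # B tracks C, with some noise
    samples.append((c, a, b))

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

unconditional = corr([(a, b) for _, a, b in samples])
within_c1 = corr([(a, b) for c, a, b in samples if c == 1])
# unconditional is substantial; within_c1 is close to zero.
```

A persistent correlation between A and B after conditioning on their common cause would be exactly the sort of "spontaneous" dependence that, on the paper's argument, violates manipulability-based expectations.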
This paper explores some issues concerning the nature and structure of causal explanation in psychiatry and psychology from the point of view of the “interventionist” theory defended in my book, Making Things Happen. Among the issues explored is the extent to which candidate causal explanations involving “upper level” or relatively coarse-grained or macroscopic variables such as mental/psychological states (e.g. highly self-critical beliefs or low self-esteem) or environmental factors (e.g. parental abuse) compete with explanations that instead appeal to underlying, “lower level” or more fine-grained neural, genetic, or biochemical mechanisms.
Following Jacques Hadamard, applied mathematicians typically investigate their models in the form of well-set problems, which actually consist of a family of applicational circumstances that vary in specific ways with respect to their initial and boundary values. The chief motive for investigating models in this wider manner is to avoid the improper behavioral conclusions one might reach from the consideration of a more restricted range of cases. Suitable specifications of the required initial and boundary variability typically appeal to previously established experimental conclusions as to how the target system will behave under a range of externally applied manipulations of the form “If the conditions pertaining to S were altered in manner M, internal features X would/would not alter”. In his investigations of causal reasoning within other parts of science, our first author has emphasized the conceptual importance of counterfactuals of this nature, for which he has often been criticized by authors of a self-styled “metaphysical” inclination. The purpose of this note is to argue, pace these objections, that closely analogous considerations have long been part of the practice of investigating differential equation models in a sensible way.
This paper explores the idea that laws express relationships between properties or universals as defended in Michael Tooley's recent book Causation: A Realist Approach. I suggest that the most plausible version of realism will take a different form than that advocated by Tooley. According to this alternative, laws are grounded in facts about the capacities and powers of particular systems, rather than facts about relations between universals. The notion of lawfulness is linked to the notion of invariance, rather than to the metaphysical notion of a necessary connection.
This paper sketches one possible form that a pragmatist philosophy of science might take. It defends general philosophy of science, although not in the form it has traditionally taken, and along with this, a focus on methodology as a legitimate concern for philosophers of science. Connections are made between some classical pragmatist themes and issues in contemporary philosophy of science. My intention is to be provocative.
This paper explores some general questions about the sorts of abilities that are involved in tool use and “causal cognition”, both in humans and in non-human primates. An attempt is made to relate the empirical literature on these topics to various philosophical theories of causation.
Counterfactual theories of causation of the sort presented in Mackie (1974) and Lewis (1973) are a familiar part of the philosophical landscape. Such theories are typically advanced primarily as accounts of the metaphysics of causation. But they also raise empirical psychological issues concerning the processes and representations that underlie human causal reasoning. For example, do human subjects internally represent causal claims in terms of counterfactual judgments, and when they engage in causal reasoning, does this involve reasoning about counterfactual claims? This paper explores several such issues from a broadly interventionist perspective.
This chapter explores some issues having to do with the structure of the evidential reasoning we use to infer causal and lawful claims. It is argued that such reasoning always makes use of prior, causally, or nomologically committed information, thus undercutting various views that attempt to reduce causal and lawful claims to claims about regularities. A non-reductive account of laws and causes built around the notion of invariance is advanced as an alternative.
This paper defends an invariance-based account of laws of nature: Laws are generalizations that remain invariant under various sorts of changes. Alternatively, laws are generalizations that exhibit a certain kind of independence from background conditions.