We propose that children employ specialized cognitive systems that allow them to recover an accurate “causal map” of the world: an abstract, coherent, learned representation of the causal relations among events. This kind of knowledge can be perspicuously understood in terms of the formalism of directed graphical causal models, or “Bayes nets”. Children’s causal learning and inference may involve computations similar to those for learning causal Bayes nets and for predicting with them. Experimental results suggest that 2- to 4-year-old children construct new causal maps and that their learning is consistent with the Bayes net formalism.
We argue that current discussions of criteria for actual causation are ill-posed in several respects. (1) The methodology of current discussions is by induction from intuitions about an infinitesimal fraction of the possible examples and counterexamples; (2) cases with larger numbers of causes generate novel puzzles; (3) "neuron" and causal Bayes net diagrams are, as deployed in discussions of actual causation, almost always ambiguous; (4) actual causation is (intuitively) relative to an initial system state since state changes are relevant, but most current accounts ignore state changes through time; (5) more generally, there is no reason to think that philosophical judgements about these sorts of cases are normative; but (6) there is a dearth of relevant psychological research that bears on whether various philosophical accounts are descriptive. Our skepticism is not directed towards the possibility of a correct account of actual causation; rather, we argue that standard methods will not lead to such an account. A different approach is required.
Recent literature in philosophy of science has addressed purported notions of explanatory virtues—‘explanatory power’, ‘unification’, and ‘coherence’. In each case, a probabilistic relation between a theory and data is said to measure the power of an explanation, or degree of unification, or degree of coherence. This essay argues that the measures do not capture cases that are paradigms of scientific explanation, that the available psychological evidence indicates that the measures do not capture judgements of explanatory power, and, finally, that the measures do not provide useful methods for selecting hypotheses. 1. Introduction 2. Some Proposed Measures of Explanatory Virtues 3. Descriptive Inadequacy 3.1 Excellent but false explanations 3.2 Causal explanation 4. Psychological Inadequacy 5. Finding the Truth 6. Conclusion.
"Goodness of Fit": Clinical Applications from Infancy through Adult Life. By Stella Chess & Alexander Thomas. Brunner/Mazel, Philadelphia, PA, 1999. pp. 229. £24.95 (hb). Chess and Thomas's pioneering longitudinal studies of temperamental individuality started over 40 years ago (Thomas et al., 1963). Their publications soon became and remain classics. Their concept of "goodness of fit" emerges out of this monumental work but has had a long gestation period. In their new book, the authors distinguish between behaviour disorders that are reactive to the child's life circumstances, including life events, and which are self-correcting or responsive to the relevant changes in their environment, and more serious disorders.
We consider the dispute between causal decision theorists and evidential decision theorists over Newcomb-like problems. We introduce a framework relating causation and directed graphs developed by Spirtes et al. (1993) and evaluate several arguments in this context. We argue that much of the debate between the two camps is misplaced; the disputes turn on the distinction between conditioning on an event E as against conditioning on an event I which is an action to bring about E. We give the essential machinery for calculating the effect of an intervention and consider recent work which extends the basic account given here to the case where causal knowledge is incomplete.
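The distinction the abstract turns on, conditioning on an event versus intervening to bring it about, can be illustrated with a small worked example. This is a minimal sketch with my own toy numbers, not the paper's machinery: a hidden common cause U influences both an act E and an outcome Y, so P(Y | E) and P(Y | do(E)) come apart. The intervention is computed by the standard truncated-factorization idea, deleting the arrow into E so that U keeps its marginal distribution.

```python
# Toy binary causal Bayes net: U -> E, U -> Y, E -> Y (illustrative numbers).
P_U = {0: 0.5, 1: 0.5}                     # P(U = u)
P_E_given_U = {0: 0.2, 1: 0.8}             # P(E = 1 | U = u)
P_Y_given_U_E = {(0, 0): 0.1, (0, 1): 0.2,
                 (1, 0): 0.6, (1, 1): 0.7}  # P(Y = 1 | U = u, E = e)

def p_joint(u, e, y):
    """Joint probability from the factored model."""
    pe = P_E_given_U[u] if e == 1 else 1 - P_E_given_U[u]
    py = P_Y_given_U_E[(u, e)] if y == 1 else 1 - P_Y_given_U_E[(u, e)]
    return P_U[u] * pe * py

# Conditioning: P(Y = 1 | E = 1), computed by ordinary Bayes-style division.
num = sum(p_joint(u, 1, 1) for u in (0, 1))
den = sum(p_joint(u, 1, y) for u in (0, 1) for y in (0, 1))
p_cond = num / den

# Intervening: P(Y = 1 | do(E = 1)). The U -> E edge is cut, so U retains
# its marginal distribution instead of being updated by the observed act.
p_do = sum(P_U[u] * P_Y_given_U_E[(u, 1)] for u in (0, 1))

print(p_cond, p_do)  # the two quantities differ when U confounds E and Y
```

In Newcomb-like setups this gap is the whole dispute: evidential reasoning tracks the conditional probability, while causal reasoning tracks the interventional one.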
Reverse inference in cognitive neuropsychology has been characterized as inference to ‘psychological processes’ from ‘patterns of activation’ revealed by functional magnetic resonance or other scanning techniques. Several arguments have been provided against the possibility. Focusing on Machery’s presentation, we attempt to clarify the issues, rebut the impossibility arguments, and propose and illustrate a strategy for reverse inference. 1 The Problem of Reverse Inference in Cognitive Neuropsychology 2 The Arguments 2.1 The anti-Bayesian argument 3 Patterns of Activation 4 Reverse Inference Practiced 5 Seek and Ye Shall Find, Maybe 6 Conclusion.
I argue that psychologists interested in human causal judgment should understand and adopt a representation of causal mechanisms by directed graphs that encode conditional independence (screening off) relations. I illustrate the benefits of that representation, now widely used in computer science and increasingly in statistics, by (i) showing that a dispute in psychology between ‘mechanist’ and ‘associationist’ psychological theories of causation rests on a false and confused dichotomy; (ii) showing that a recent, much-cited experiment, purporting to show that human subjects incorrectly let large causes ‘overshadow’ small causes, misrepresents the most likely, and warranted, causal explanation available to the subjects, in the light of which their responses were normative; (iii) showing how a recent psychological theory (due to P. Cheng) of human judgment of causal power can be considerably generalized; and (iv) suggesting a range of possible experiments comparing human and computer abilities to extract causal information from associations.
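The screening-off relation the abstract recommends can be seen directly in simulated data. This is a hedged sketch with toy parameters of my own, not an experiment from the paper: in a causal chain X -> Y -> Z, X and Z are associated, but the association vanishes once Y is held fixed.

```python
import random

random.seed(0)

def sample():
    """One draw from the chain X -> Y -> Z (toy binary probabilities)."""
    x = random.random() < 0.5
    y = random.random() < (0.8 if x else 0.2)   # Y depends on X
    z = random.random() < (0.8 if y else 0.2)   # Z depends only on Y
    return x, y, z

data = [sample() for _ in range(100_000)]

def p_z_given(pred):
    """Estimate P(Z = 1) among rows satisfying the predicate."""
    rows = [r for r in data if pred(r)]
    return sum(r[2] for r in rows) / len(rows)

# Unconditionally, X is informative about Z ...
print(p_z_given(lambda r: r[0]), p_z_given(lambda r: not r[0]))
# ... but conditional on Y, X adds (approximately) nothing: Y screens X off from Z.
print(p_z_given(lambda r: r[1] and r[0]), p_z_given(lambda r: r[1] and not r[0]))
```

A directed graph encodes exactly these conditional independence facts, which is what makes it a compact representation of a mechanism.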
The ultimate focus of the current essay is on methods of “creative abduction” that have some guarantees as reliable guides to the truth, and those that do not. Emphasizing work by Richard Englehart using data from the World Values Survey, Gerhard Schurz has analyzed literature surrounding Samuel Huntington’s well-known claims that civilization is divided into eight contending traditions, some of which resist “modernization” – democracy, civil rights, equality of rights of women and minorities, secularism. Schurz suggests an evolutionary model of modernization and identifies opposing social forces. In a later essay, citing Englehart’s work as an example, Schurz identifies factor analysis as an example of “creative abduction”. The theories of Englehart and his collaborators are reviewed again in the current essay. Published simulations and standard statistical desiderata for causal inference show the methods Englehart used, factor analysis in particular, are not guides to truth for the kind of data Schurz recognizes as common in political science. Recent work in statistics, philosophy and computer science that makes advances towards such methods is briefly reviewed.
The rationality of human causal judgments has been the focus of a great deal of recent research. We argue against two major trends in this research, and for a quite different way of thinking about causal mechanisms and probabilistic data. Our position rejects a false dichotomy between "mechanistic" and "probabilistic" analyses of causal inference -- a dichotomy that both overlooks the nature of the evidence that supports the induction of mechanisms and misses some important probabilistic implications of mechanisms. This dichotomy has obscured an alternative conception of causal learning: for discrete events, a central adaptive task is to induce causal mechanisms in the environment from probabilistic data and prior knowledge. Viewed from this perspective, it is apparent that the probabilistic norms assumed in the human causal judgment literature often do not map onto the mechanisms generating the probabilities. Our alternative conception of causal judgment is more congruent with both scientific uses of the notion of causation and observed causal judgments of untutored reasoners. We illustrate some of the relevant variables under this conception, using a framework for causal representation now widely adopted in computer science and, increasingly, in statistics. We also review the formulation and evidence for a theory of human causal induction (Cheng, 1997) that adopts this alternative conception.
Few people have thought so hard about the nature of the quantum theory as has Jeff Bub, and so it seems appropriate to offer in his honor some reflections on that theory. My topic is an old one, the consistency of our microscopic theories with our macroscopic theories; my example, the Aspect experiments (Aspect et al., 1981, 1982, 1982a; Clauser and Shimony, 1978; Duncan and Kleinpoppen, 1998), is familiar; and my simplification of it is borrowed. All that is new here is a kind of diagonalization: an argument that the fundamental principles found to be violated by the quantum theory must be assumed to be true of the experimental apparatus used in the experiments.
Taking seriously the arguments of Earman, Roberts and Smith that ceteris paribus laws have no semantics and cannot be tested, I suggest that ceteris paribus claims have a kind of formal pragmatics, and that at least some of them can be verified or refuted in the limit.
Nancy Cartwright's recent criticisms of efforts and methods to obtain causal information from sample data using automated search are considered. In addition to reviewing that effort, I argue that almost all of her criticisms are false and rest on misreading, overgeneralization, or neglect of the relevant literature.
Contemporary cognitive neuropsychology attempts to infer unobserved features of normal human cognition, or “cognitive architecture”, from experiments with normals and with brain-damaged subjects in whom certain normal cognitive capacities are altered, diminished, or absent. Fundamental methodological issues about the enterprise of cognitive neuropsychology concern the characterization of methods by which features of normal cognitive architecture can be identified from such data, the assumptions upon which the reliability of such methods is premised, and the limits of such methods, even granting their assumptions, in resolving uncertainties about that architecture. With some idealization, the question of the capacities of various experimental designs in cognitive neuropsychology to uncover cognitive architecture can be reduced to comparatively simple questions about the prior assumptions investigators are willing to make. This paper presents some of the simplest of those reductions. Research for this paper was made possible by a fellowship from the John Simon Guggenheim Memorial Foundation and by grant number SBE-9212264 from the National Science Foundation. I thank Martha Farah for teaching me what little I know of cognitive neuropsychology, Jeffrey Bub for stimulating me to think about these issues and for commenting on drafts of this paper, and Peter Slezak for additional comments. Alfonso Caramazza and Michael McCloskey provided very helpful comments on a second draft.
is every bit as intelligible and philosophically respectable as many other doctrines currently in favor, e.g., the doctrine that mental events are identical with brain events; the attempt to give a linguistic construal of this latter doctrine meets many of the same sorts of difficulties encountered above (see Hempel, op. cit.). Secondly, I think that evidence for universal determinism may not, as a matter of fact, be so hard to come by as one might imagine. It is a striking fact about our world that we never observe any genuine cases of parallelism; it always seems possible to design some sort of interaction between any two genuine empirical magnitudes. If this is correct, then a true theory T can be deterministic only if universal determinism reigns.
We outline a cognitive and computational account of causal learning in children. We propose that children employ specialized cognitive systems that allow them to recover an accurate “causal map” of the world: an abstract, coherent representation of the causal relations among events. This kind of knowledge can be perspicuously represented by the formalism of directed graphical causal models, or “Bayes nets”. Human causal learning and inference may involve computations similar to those for learning causal Bayes nets and for predicting with them. Preliminary experimental results suggest that 2- to 4-year-old children spontaneously construct new causal maps and that their learning is consistent with the Bayes net formalism.
Using Gebharter’s representation, we consider aspects of the problem of discovering the structure of unmeasured submechanisms when the variables in those submechanisms have not been measured. Exploiting an early insight of Sober’s, we provide a correct algorithm for identifying latent, endogenous structure—submechanisms—for a restricted class of structures. The algorithm can be merged with other methods for discovering causal relations among unmeasured variables, and feedback relations between measured variables and unobserved causes can sometimes be learned.
Twenty years ago, Nancy Cartwright wrote a perceptive essay in which she clearly distinguished causal relations from associations, introduced philosophers to Simpson’s paradox, articulated the difficulties for reductive probabilistic analyses of causation that flow from these observations, and connected causal relations with strategies of action (Cartwright 1979). Five years later, without appreciating her essay, I and my (then) students began to develop formal representations of causal and probabilistic relations, which, subsequently informed by the work of computer scientists and statisticians, led eventually to a practical theory of causal inference and prediction, a theory incorporating some of the sensibilities Cartwright had voiced (Glymour et al. 1987; Spirtes et al. 1993). That theory, and ideas related to it, have become a subfield of computer science with contributions far deeper than mine from many sources, and its inferential and predictive techniques have been successfully applied in biology, economics, educational research, geology and space physics.
Reflectance spectroscopy is a standard tool for studying the mineral composition of rock and soil samples and for remote sensing of terrestrial and extraterrestrial surfaces. We describe research on automated methods of mineral identification from reflectance spectra and give evidence that a simple algorithm, adapted from a well-known search procedure for Bayes nets, identifies the most frequently occurring classes of carbonates with reliability equal to or greater than that of human experts. We compare the reliability of the procedure to the reliability of several other automated methods adapted to the same purpose. Evidence is given that the procedure can be applied to some other mineral classes as well. Since the procedure is fast with low memory requirements, it is suitable for on-board scientific analysis by orbiters or surface rovers.
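The abstract does not name the search procedure it adapts, so the following is only a hedged illustration of the general idea behind well-known constraint-based Bayes net searches (such as the PC algorithm's skeleton phase): start from a complete graph and delete an edge whenever its endpoints are independent conditional on some subset of the remaining variables. The independence "oracle" here is hand-coded for a toy chain A -> B -> C; nothing below reproduces the paper's classifier.

```python
from itertools import combinations

variables = ["A", "B", "C"]

def independent(x, y, given):
    """Hand-coded independence oracle for the true chain A -> B -> C:
    the only conditional independence is A _||_ C given {B}."""
    return {x, y} == {"A", "C"} and "B" in given

# Skeleton search: begin with the complete undirected graph, then remove
# any edge whose endpoints are screened off by some conditioning set.
edges = {frozenset(pair) for pair in combinations(variables, 2)}
for x, y in combinations(variables, 2):
    others = [v for v in variables if v not in (x, y)]
    for k in range(len(others) + 1):
        if any(independent(x, y, set(s)) for s in combinations(others, k)):
            edges.discard(frozenset((x, y)))
            break

print(sorted(sorted(e) for e in edges))  # the spurious A - C edge is gone
```

With real spectral data the oracle would be replaced by a statistical independence test, which is where the reliability questions the paper evaluates arise.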
Halpern's Actual Causality is an extended development of an account of causal relations among individual events in the tradition that analyzes causation as difference making. The book is notable for its efforts at formal clarity, its exploration of "normality" conditions, and the wealth of examples it uses and whose provenance it traces. Unfortunately, the various normality conditions considered undermine the capacity of the basic theory to plausibly treat various cases Halpern considers, and the unalloyed basic theory yields implausible results in simple cases of overdetermination, which are not remedied by Halpern's probabilistic version of his theory or unambiguously by the variety of normality conditions Actual Causality entertains.
A reprint of the Prentice-Hall edition of 1992. Prepared by nine distinguished philosophers and historians of science, this thoughtful reader represents a cooperative effort to provide an introduction to the philosophy of science focused on cultivating an understanding of both the workings of science and its historical and social context. Selections range from discussions of topics in general methodology to a sampling of foundational problems in various physical, biological, behavioral, and social sciences. Each chapter contains a list of suggested readings and study questions.
Time series of macroscopic quantities that are aggregates of microscopic quantities, with unknown one-many relations between macroscopic and microscopic states, are common in applied sciences, from economics to climate studies. When such time series of macroscopic quantities are claimed to be causal, the causal relations postulated are representable by a directed acyclic graph and associated probability distribution—sometimes called a dynamical Bayes net. Causal interpretations of such series imply claims that hypothetical manipulations of macroscopic variables have unambiguous effects on variables “downstream” in the graph, and such macroscopic variables may be predictably produced or altered even while particular microstates are not. This paper argues that such causal time series of macroscopic aggregates of microscopic processes are the appropriate model for mental causation.
It is “well known” that in linear models: (1) testable constraints on the marginal distribution of observed variables distinguish certain cases in which an unobserved cause jointly influences several observed variables; (2) the technique of “instrumental variables” sometimes permits an estimation of the influence of one variable on another even when the association between the variables may be confounded by unobserved common causes; (3) the association (or conditional probability distribution of one variable given another) of two variables connected by a path or pair of paths with a single common vertex (a trek) can be computed directly from the parameter values associated with each edge in the trek; (4) the association of two variables produced by multiple treks can be computed from the parameters associated with each trek; and (5) the independence of two variables conditional on a third implies the corresponding independence of the sums of the variables over all units conditional on the sums over all units of each of the original conditioning variables.
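Points (3) and (4), the trek rules, can be checked numerically. Below is a minimal sketch with coefficients of my own choosing, not from the paper: a latent L causes X and Y, and X also causes Y directly, so two treks connect X and Y. The trek rule then predicts cov(X, Y) = a·b·var(L) + c·var(X), each term being the product of a trek's edge coefficients times the variance of that trek's source.

```python
import random

random.seed(1)

# Linear model (toy values):
#   X = a*L + eX,   Y = b*L + c*X + eY,   L, eX, eY ~ N(0, 1) independent.
# Treks between X and Y:  X <- L -> Y  (source L)  and  X -> Y  (source X).
a, b, c = 0.7, 0.5, 0.9
n = 200_000

xs, ys = [], []
for _ in range(n):
    L = random.gauss(0, 1)
    X = a * L + random.gauss(0, 1)
    Y = b * L + c * X + random.gauss(0, 1)
    xs.append(X)
    ys.append(Y)

# Sample covariance of X and Y, and sample variance of X.
mx, my = sum(xs) / n, sum(ys) / n
cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
var_x = sum((x - mx) ** 2 for x in xs) / n

# Trek-rule prediction: one term per trek, var(L) = 1 by construction.
predicted = a * b * 1.0 + c * var_x

print(round(cov_xy, 2), round(predicted, 2))  # the two agree up to sampling error
```

The same bookkeeping over treks is what underwrites point (1): products of edge parameters constrain observable covariances, yielding testable signatures of latent common causes.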
In everyday matters, as well as in law, we allow that someone’s reasons can be causes of her actions, and often are. That correct reasoning accords with Bayesian principles is now so widely held in philosophy, psychology, computer science and elsewhere that the contrary is beginning to seem obtuse, or at best quaint. And that rational agents should learn about the world from energies striking sensory input nerves seems beyond question. Even rats seem to recognize the difference between correlation and causation, and accordingly make different inferences from passive observation than from interventions. A few statisticians aside, so do most of us. To square these views with the demands of computability, increasing numbers of psychologists and others have embraced a particular formalization, causal Bayes nets, as an account of human reasoning about and to causal connections. Such structures can be used by rational agents, including humans in so far as they are rational, to have degrees of belief in various conceptual contents, which they use to reason to expectations, which are realized or defeated by sensory inputs, which cause them to change their degrees of belief in other contents in accord with Bayes' Rule, or some generalization of it. How is all of this supposed to be carried out? 1. Representing Causal Structures. The causal Bayes net framework adopted by a growing number of psychologists goes like this: our representations of causal relations are captured in a graphical causal model.