By combining experimental interventions with search procedures for graphical causal models we show that under familiar assumptions, with perfect data, N - 1 experiments suffice to determine the causal relations among N > 2 variables when each experiment randomizes at most one variable. We show the same bound holds for adaptive learners, but does not hold for N > 4 when each experiment can simultaneously randomize more than one variable. This bound provides a type of ideal for the measure of success of heuristic approaches in active learning methods of causal discovery, which currently use less informative measures.
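A minimal sketch of the underlying idea, not the authors' procedure: randomizing a variable severs the edges into it, so dependence between the randomized variable and another variable survives the intervention only if the first causes the second. The two-variable case below, with illustrative coefficients and sample sizes of my own choosing, shows how a single randomized experiment orients an edge.

```python
# Sketch (not from the paper): one experiment randomizing X settles the
# direction of an X-Y edge. Under do(X), dependence with Y survives iff
# X causes Y, because randomization severs any edge *into* X.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def correlated(a, b, threshold=0.05):
    """Crude dependence check: |sample correlation| above a threshold."""
    return abs(np.corrcoef(a, b)[0, 1]) > threshold

def randomize_x_when_x_causes_y():
    # Ground truth X -> Y: Y still listens to the randomized X.
    x = rng.normal(size=n)            # set by the experimenter
    y = 0.8 * x + rng.normal(size=n)
    return correlated(x, y)

def randomize_x_when_y_causes_x():
    # Ground truth Y -> X: randomization overrides Y's influence on X.
    x = rng.normal(size=n)            # set by the experimenter
    y = rng.normal(size=n)            # now varies independently of X
    return correlated(x, y)

print(randomize_x_when_x_causes_y())  # dependence survives
print(randomize_x_when_y_causes_x())  # dependence is broken
```

Repeating this for each of N - 1 variables, with the remaining structure recovered from observational constraints, is the intuition behind the N - 1 bound; the paper's actual argument is more general.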
Models that fail to satisfy the Markov condition are unstable because changes in state variable values may cause changes in the values of background variables, and these changes in the background lead to predictive error. Such error arises because non-Markovian models fail to track the causal relations generating the values of response variables. This has implications for discussions of the level of selection: under certain plausible conditions most standard models of group selection will not satisfy the Markov condition when fit to data from real populations. These models neither correctly represent the causal structure generating the phenomena of interest nor correctly explain them. †To contact the author, please write to: Bruce Glymour, Department of Philosophy, 201 Dickens Hall, Kansas State University, Manhattan, KS 66506.
Since the introduction of mathematical population genetics, its machinery has shaped our fundamental understanding of natural selection. Selection is taken to occur when differential fitnesses produce differential rates of reproductive success, where fitnesses are understood as parameters in a population genetics model. To understand selection is to understand what these parameter values measure and how differences in them lead to frequency changes. I argue that this traditional view is mistaken. The descriptions of natural selection rendered by population genetics models are in general neither predictive nor explanatory and introduce avoidable conceptual confusions. I conclude that a correct understanding of natural selection requires explicitly causal models of reproductive success. *Received May 2006; revised December 2006. †To contact the author, please write to: Department of Philosophy, Kansas State University, 201 Dickens Hall, Manhattan, KS 66506.
Glymour (Philos Sci 73:369–389, 2006) claims that classical population genetic models can reliably predict short and medium run population dynamics only given information about future fitnesses those models cannot themselves predict, and that in consequence the causal, ecological models which can predict future fitnesses afford a more foundational description of natural selection than do population genetic models. This paper defends the first claim from objections offered by Gildenhuys (Biol Philos, 2011).
This survey presents some of the main principles involved in discovering causal relations. They belong to a large array of possible assumptions and conditions about causal relations, whose various combinations limit the possibilities of acquiring causal knowledge in different ways. How much and in what detail the causal structure can be discovered from what kinds of data depends on the particular set of assumptions one is able to make. The assumptions considered here provide a starting point to explore further the foundations of causal discovery procedures, and how they can be improved.
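A toy illustration, my own rather than the survey's, of the kind of fact constraint-based discovery procedures exploit: in a chain X -> Y -> Z, X and Z are marginally dependent but independent conditional on Y. With linear-Gaussian data, partial correlation serves as the conditional independence test; all coefficients below are illustrative.

```python
# Sketch: Y screens X off from Z in the chain X -> Y -> Z.
# Partial correlation (correlation of residuals) as the CI test.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)
y = 0.9 * x + rng.normal(size=n)
z = 0.9 * y + rng.normal(size=n)

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c from each."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

marginal = np.corrcoef(x, z)[0, 1]
conditional = partial_corr(x, z, y)
print(round(marginal, 2))     # clearly nonzero: X and Z covary
print(round(conditional, 2))  # near zero: Y screens X off from Z
```

Which structures such independence facts pin down, and how reliably, is exactly what depends on the background assumptions (Markov, faithfulness, no latent confounding, and so on) the survey catalogues.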
I argue that the orthodox account of probabilistic causation, on which probabilistic causes determine the probability of their effects, is inconsistent with certain ontological assumptions implicit in scientific practice. In particular, scientists recognize the possibility that properties of populations can cause the behavior of members of the populations. Such emergent population‐level causation is metaphysically impossible on the orthodoxy.
Bayesian models of human learning are becoming increasingly popular in cognitive science. We argue that their purported confirmation largely relies on a methodology that depends on premises that are inconsistent with the claim that people are Bayesian about learning and inference. Bayesian models in cognitive science derive their appeal from their normative claim that the modeled inference is in some sense rational. Standard accounts of the rationality of Bayesian inference imply predictions that an agent selects the option that maximizes the posterior expected utility. Experimental confirmation of the models, however, has been claimed because of groups of agents that probability match the posterior. Probability matching only constitutes support for the Bayesian claim if additional unobvious and untested (but testable) assumptions are invoked. The alternative strategy of weakening the underlying notion of rationality no longer distinguishes the Bayesian model uniquely. A new account of rationality—either for inference or for decision-making—is required to successfully confirm Bayesian models in cognitive science.
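A schematic contrast, our own construction rather than a model from the paper: given a posterior over two options, a posterior-expected-utility maximizer always picks the more probable option, while a population of probability matchers picks each option at its posterior probability, and so does strictly worse in expectation. The posterior values are illustrative.

```python
# Sketch: expected accuracy of maximizing vs probability matching,
# with payoff 1 for a correct choice and an illustrative posterior.
posterior = {"A": 0.7, "B": 0.3}

# Maximizer: always chooses the argmax, so expected accuracy is max p.
maximizer_accuracy = max(posterior.values())

# Matchers: choose option o with probability p(o), and o is correct with
# probability p(o), so expected accuracy is the sum of p(o)^2.
matcher_accuracy = sum(p * p for p in posterior.values())

print(maximizer_accuracy)           # 0.7
print(round(matcher_accuracy, 2))   # 0.58
```

The gap (0.7 vs 0.58 here) is why observed probability matching, on its own, sits awkwardly with the maximization prediction that standard rationality accounts deliver.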
Hans Reichenbach is well known for his limiting frequency view of probability, with his most thorough account given in The Theory of Probability in 1935/1949. Perhaps less known are Reichenbach's early views on probability and its epistemology. In his doctoral thesis from 1915, Reichenbach espouses a Kantian view of probability, where the convergence limit of an empirical frequency distribution is guaranteed to exist thanks to the synthetic a priori principle of lawful distribution. Reichenbach claims to have given a purely objective account of probability, while integrating the concept into a more general philosophical and epistemological framework. A brief synopsis of Reichenbach's thesis and a critical analysis of the problematic steps of his argument will show that the roots of many of his most influential insights on probability and causality can be found in this early work.
Oaksford & Chater (O&C) aim to provide teleological explanations of behavior by giving an appropriate normative standard: Bayesian inference. We argue that there is no uncontroversial independent justification for the normativity of Bayesian inference, and that O&C fail to satisfy a necessary condition for teleological explanations: demonstration that the normative prescription played a causal role in the behavior's existence.
The literature on causal discovery has focused on interventions that involve randomly assigning values to a single variable. But such a randomized intervention is not the only possibility, nor is it always optimal. In some cases it is impossible or it would be unethical to perform such an intervention. We provide an account of ‘hard’ and ‘soft’ interventions and discuss what they can contribute to causal discovery. We also describe how the choice of the optimal intervention(s) depends heavily on the particular experimental setup and the assumptions that can be made. ‡The first author is funded by the Causal Learning Collaborative Initiative supported by the James S. McDonnell Foundation. Many aspects of this paper were inspired by discussions with members of the collaborative. †To contact the authors, please write to: Department of Philosophy, Carnegie Mellon University, Pittsburgh, PA 15213.
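A structural-equation sketch of the hard/soft contrast, my own gloss rather than the authors' formalism, for a system U -> X: a hard intervention on X replaces X's equation wholesale, while a soft intervention adds an exogenous input but leaves X's dependence on its parents intact. Coefficients and sample sizes are illustrative.

```python
# Sketch: hard vs soft interventions on X in U -> X.
# Hard: X's structural equation is replaced; the U -> X arrow is cut.
# Soft: an exogenous push is added; X still listens to U.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
u = rng.normal(size=n)

def x_natural(u):
    """X's undisturbed structural equation: X = 0.8 U + noise."""
    return 0.8 * u + rng.normal(size=n)

x_hard = rng.normal(size=n)                 # hard: ignores U entirely
x_soft = x_natural(u) + rng.normal(size=n)  # soft: extra randomized input

corr_hard = np.corrcoef(u, x_hard)[0, 1]
corr_soft = np.corrcoef(u, x_soft)[0, 1]
print(abs(corr_hard) < 0.05)  # U -> X dependence severed
print(abs(corr_soft) > 0.3)   # U -> X dependence survives
```

That a soft intervention preserves the dependence of X on its parents is what lets it probe causal structure in settings where wholesale randomization is impossible or unethical.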
Bogen and Woodward (1988) advance a distinction between data and phenomena. Roughly, the former are the observations reported by experimental scientists, the latter are objective, stable features of the world to which scientists infer based on patterns in reliable data. While phenomena are explained by theories, data are not, and so the empirical basis for an inference to a theory consists in claims about phenomena. McAllister (1997) has recently offered a critique of their version of this distinction, offering in its place a version on which phenomena are theory laden, and hence on which the empirical support for inferences to theories is also, unavoidably, theory laden. In this commentary I argue that McAllister and Bogen and Woodward are mistaken in thinking that the distinction is necessary, and that the empirical support for inferences to theories is not necessarily theory laden in the way McAllister's account entails it is.
argues that correlated interactions are necessary for group selection. His argument turns on a particular procedure for measuring the strength of selection, and employs a restricted conception of correlated interaction. It is here shown that the procedure in question is unreliable, and that while related procedures are reliable in special contexts, they do not require correlated interactions for group selection to occur. It is also shown that none of these procedures, all of which employ partial regression methods, are reliable when correlated interactions of a specific kind arise, and it is argued that such correlated interactions will likely be ubiquitous in natural populations.
I argue that results from foraging theory give us good reason to think some evolutionary phenomena are indeterministic and hence that evolutionary theory must be probabilistic. Foraging theory implies that random search is sometimes selectively advantageous, and experimental work suggests that it is employed by a variety of organisms. There are reasons to think such search will sometimes be genuinely indeterministic. If it is, then individual reproductive success will also be indeterministic, and so too will frequency change in populations of (...) organisms employing such search. (shrink)
We consider the problems arising from using sequences of experiments to discover the causal structure among a set of variables, none of which are known ahead of time to be an “outcome”. In particular, we present various approaches to resolve conflicts in the experimental results arising from sampling variability in the experiments. We provide a sufficient condition that allows for pooling of data from experiments with different joint distributions over the variables. Satisfaction of the condition allows for an independence test with greater sample size that may resolve some of the conflicts in the experimental results. The pooling condition has its own problems, but should—due to its generality—be informative to techniques for meta-analysis.
Standard models of statistical explanation face two intractable difficulties. In his 1984, Salmon argues that because statistical explanations are essentially probabilistic we can make sense of statistical explanation only by rejecting the intuition that scientific explanations are contrastive. Further, frequently the point of a statistical explanation is to identify the etiology of its explanandum, but on standard models probabilistic explanations often fail to do so. This paper offers an alternative conception of statistical explanations on which explanations of the frequency of a property consist in the derivation of that frequency from a statistical specification of the mechanism by which instances of the relevant property are produced. Such explanations are contrastive precisely because they identify the determinate causal etiologies of their explananda.
Sober (1984) presents an account of selection motivated by the view that one property can causally explain the occurrence of another only if the first plays a unique role in the causal production of the second. Sober holds that a causal property will play such a unique role if it is a population level cause of its effect, and on this basis argues that there is selection for a trait T only if T is a population level cause of survival and reproductive success. Sterelny and Kitcher (1988) claim against Sober that some traits directly subject to selection will not satisfy the probabilistic condition on population level causation. In this paper I show that Sober has the resources to resist the Sterelny-Kitcher complaint, but I argue that not all traits that satisfy the probabilistic condition play the required unique role in the production of their effects.
An interventionist account of causation characterizes causal relations in terms of changes resulting from particular interventions. I provide a new example of a causal relation for which there does not exist an intervention satisfying the common interventionist standard. I consider adaptations that would save this standard and describe their implications for an interventionist account of causation. No adaptation preserves all the aspects that make the interventionist account appealing. Part of the fallout is a clearer account of the difficulties in characterizing so-called “soft” interventions.
Lennox and Wilson (1994) critique dispositional accounts of selection on the grounds that such accounts will class evolutionary events as cases of selection whether or not the environment constrains population growth. Lennox and Wilson claim that pure r-selection involves no environmental checks on growth, and that accounts of natural selection ought to distinguish between the two sorts of cases. I argue that Lennox and Wilson are mistaken in claiming that pure r-selection involves no environmental checks, but suggest that two related cases support their substantive complaint, namely that dispositional accounts of selection have resources insufficient for making important distinctions in causal structure.
Twenty years ago, Nancy Cartwright wrote a perceptive essay in which she clearly distinguished causal relations from associations, introduced philosophers to Simpson’s paradox, articulated the difficulties for reductive probabilistic analyses of causation that flow from these observations, and connected causal relations with strategies of action (Cartwright 1979). Five years later, without appreciating her essay, I and my (then) students began to develop formal representations of causal and probabilistic relations, which, subsequently informed by the work of computer scientists and statisticians, led eventually to a practical theory of causal inference and prediction, a theory incorporating some of the sensibilities Cartwright had voiced (Glymour et al. 1987; Spirtes et al. 1993). That theory, and ideas related to it, have become a subfield of computer science with contributions far deeper than mine from many sources, and its inferential and predictive techniques have been successfully applied in biology, economics, educational research, geology and space physics.
In "The Epistemology of Geometry" Glymour proposed a necessary structural condition for the synonymy of two space-time theories. David Zaret has recently challenged this proposal, by arguing that Newtonian gravitational theory with a flat, non-dynamic connection (FNGT) is intuitively synonymous with versions of the theory using a curved dynamical connection (CNGT), even though these two theories fail to satisfy Glymour's proposed necessary condition for synonymy. Zaret allowed that if FNGT and CNGT were not equally well (bootstrap) tested by the relevant phenomena, the two theories would in fact not be synonymous. He argued, however, that when electrodynamic phenomena are considered, the two theories are equally well tested. We show that it is not FNGT and CNGT which are equally well tested when the electrodynamic phenomena are considered, but only suitable extensions of FNGT and CNGT. Thus, there is good reason to consider FNGT and CNGT to be non-synonymous. We further show that the two extensions of FNGT and CNGT which are equally well tested when electrodynamic phenomena are considered (and which could be considered intuitively synonymous) not only satisfy Glymour's original proposed necessary condition for the synonymy of space-time theories, they satisfy a plausible stronger condition as well.
In this paper we present six criteria for assessing proposed solutions to environmental risk problems. To assess the final criterion, the criterion of ethical responsibility, we suggest another series of criteria. However, before these criteria can be used to address ethical problems, business persons must be willing to discuss the problem in ethical terms. Yet many decision makers are unwilling to do so. Drawing on research by James Waters and Frederick Bird, we discuss this “moral muteness”, the inability or unwillingness to use moral language to solve moral problems, and suggest some underlying causes of moral muteness.
We argue that current discussions of criteria for actual causation are ill-posed in several respects. (1) The methodology of current discussions is by induction from intuitions about an infinitesimal fraction of the possible examples and counterexamples; (2) cases with larger numbers of causes generate novel puzzles; (3) “neuron” and causal Bayes net diagrams are, as deployed in discussions of actual causation, almost always ambiguous; (4) actual causation is (intuitively) relative to an initial system state since state changes are relevant, but most current accounts ignore state changes through time; (5) more generally, there is no reason to think that philosophical judgements about these sorts of cases are normative; but (6) there is a dearth of relevant psychological research that bears on whether various philosophical accounts are descriptive. Our skepticism is not directed towards the possibility of a correct account of actual causation; rather, we argue that standard methods will not lead to such an account. A different approach is required. Once upon a time a hungry wanderer came into a village. He filled an iron cauldron with water, built a fire under it, and dropped a stone into the water. “I do like a tasty stone soup” he announced. Soon a villager added a cabbage to the pot, another added some salt and others added potatoes, onions, carrots, mushrooms, and so on, until there was a meal for all.
One construal of convergent realism is that for each clear question, scientific inquiry eventually answers it. In this paper we adapt the techniques of formal learning theory to determine in a precise manner the circumstances under which this ideal is achievable. In particular, we define two criteria of convergence to the truth on the basis of evidence. The first, which we call EA convergence, demands that the theorist converge to the complete truth "all at once". The second, which we call AE convergence, demands only that for every sentence in the theorist's language, there is a time at which the theorist settles the status of the sentence. The relative difficulties of these criteria are compared for effective and ineffective agents. We then examine in detail how the enrichment of an agent's hypothesis language makes the task of converging to the truth more difficult. In particular, we parametrize first-order languages by predicate and function symbol arity, presence or absence of identity, and quantifier prefix complexity. For nearly every choice of values of these parameters, we determine the senses in which effective and ineffective agents can converge to the complete truth on an arbitrary structure for the language. Finally, we sketch directions in which our learning theoretic setting can be generalized or made more realistic.
Contemporary cognitive neuropsychology attempts to infer unobserved features of normal human cognition, or 'cognitive architecture', from experiments with normals and with brain-damaged subjects in whom certain normal cognitive capacities are altered, diminished, or absent. Fundamental methodological issues about the enterprise of cognitive neuropsychology concern the characterization of methods by which features of normal cognitive architecture can be identified from such data, the assumptions upon which the reliability of such methods are premised, and the limits of such methods, even granting their assumptions, in resolving uncertainties about that architecture. With some idealization, the question of the capacities of various experimental designs in cognitive neuropsychology to uncover cognitive architecture can be reduced to comparatively simple questions about the prior assumptions investigators are willing to make. This paper presents some of the simplest of those reductions. 1Research for this paper was made possible by a fellowship from the John Simon Guggenheim Memorial Foundation and by grant number SBE-9212264 from the National Science Foundation. I thank Martha Farah for teaching me what little I know of cognitive neuropsychology, Jeffrey Bub for stimulating me to think about these issues and for commenting on drafts of this paper, and Peter Slezak for additional comments. Alfonso Caramazza and Michael McCloskey provided very helpful comments on a second draft.
Ethical guidelines for multinational corporations are included in several international accords adopted during the past four decades. These guidelines attempt to influence the practices of multinational enterprises in such areas as employment relations, consumer protection, environmental pollution, political participation, and basic human rights. Their moral authority rests upon the competing principles of national sovereignty, social equity, market integrity, and human rights. Both deontological principles and experience-based value systems undergird and justify the primacy of human rights as the fundamental moral authority of these transnational and transcultural compacts. Although difficulties and obstacles abound in gaining operational acceptance of such codes of conduct, it is possible to argue that their guidelines betoken the emergence of a transcultural corporate ethic.
This paper explores the relationship between gift giving, guanxi and corruption through a study of the relationships between UK manufacturing companies in China and their local component suppliers. The analysis is based on interviews in the China-based operations of 49 UK companies. Interviews were carried out both with senior (often expatriate) staff and with local line managers who were responsible for everyday purchasing decisions and for managing relationships with suppliers. The results suggest that gift giving is perceived to be a significant problem in UK-owned companies in China. However, the relationship between these payments and established understanding of gift giving within guanxi-networks appears to be weak. Gift giving appears to be associated with illicit payments, corruption and the pursuit of self-interest. Firms seek to reduce the incidence of illicit transactions by changing staff roles, instituting joint responsibilities, which include the separation of different aspects of sourcing/purchasing, increasing the involvement of senior staff in the process and through the education of employees and suppliers.
In this paper we consider whether one type of individual investor, which we call at risk investors, should be denied access to securities markets to prevent them from suffering serious financial harm. We consider one kind of paternalistic justification for prohibiting at risk investors from participating in securities markets, and argue that it is not successful. We then argue that restricting access to markets is justified in some circumstances to protect the rights of at risk investors. We conclude with some suggestions about how this might be done.
_Words, Thoughts and Theories_ argues that infants and children discover the physical and psychological features of the world by a process akin to scientific inquiry, more or less as conceived by philosophers of science in the 1960s (the theory theory). This essay discusses some of the philosophical background to an alternative, more popular, “modular” or “maturational” account of development, dismisses an array of philosophical objections to the theory theory, suggests that the theory theory offers an undeveloped project for artificial intelligence, and, relying on recent psychological work on causation, offers suggestions about how principles of causal inference may provide a developmental solution to the “frame problem”.
Scientists often claim that an experiment or observation tests certain hypotheses within a complex theory but not others. Relativity theorists, for example, are unanimous in the judgment that measurements of the gravitational red shift do not test the field equations of general relativity; psychoanalysts sometimes complain that experimental tests of Freudian theory are at best tests of rather peripheral hypotheses; astronomers do not regard observations of the positions of a single planet as a test of Kepler's third law, even though those observations may test Kepler's first and second laws. Observations are regarded as relevant to some hypotheses in a theory but not relevant to others in that same theory. There is another kind of scientific judgment that may or may not be related to such judgments of relevance: determinations of the accuracy of the predictions of some theories are not held to provide tests of those theories, or, at least, positive results are not held to support or confirm the theories in question. There are, for example, special relativistic theories of gravity that predict the same phenomena as does general relativity, yet the theories are regarded as...
Rather than attempting to characterize a relation of confirmation between evidence and theory, epistemology might better consider which methods of forming conjectures from evidence, or of altering beliefs in the light of evidence, are most reliable for getting to the truth. A logical framework for such a study was constructed in the early 1960s by E. Mark Gold and Hilary Putnam. This essay describes some of the results that have been obtained in that framework and their significance for philosophy of science, artificial intelligence, and for normative epistemology when truth is relative.
In everyday matters, as well as in law, we allow that someone’s reasons can be causes of her actions, and often are. That correct reasoning accords with Bayesian principles is now so widely held in philosophy, psychology, computer science and elsewhere that the contrary is beginning to seem obtuse, or at best quaint. And that rational agents should learn about the world from energies striking sensory input nerves seems beyond question. Even rats seem to recognize the difference between correlation and causation, and accordingly make different inferences from passive observation than from interventions. A few statisticians aside, so do most of us. To square these views with the demands of computability, increasing numbers of psychologists and others have embraced a particular formalization, causal Bayes nets, as an account of human reasoning about and to causal connections. Such structures can be used by rational agents, including humans in so far as they are rational, to have degrees of belief in various conceptual contents, which they use to reason to expectations, which are realized or defeated by sensory inputs, which cause them to change their degrees of belief in other contents in accord with Bayes’ Rule, or some generalization of it. How is all of this supposed to be carried out? 1. Representing Causal Structures. The causal Bayes net framework adopted by a growing number of psychologists goes like this: our representations of causal relations are captured in a graphical causal...
Time series of macroscopic quantities that are aggregates of microscopic quantities, with unknown one-many relations between macroscopic and microscopic states, are common in applied sciences, from economics to climate studies. When such time series of macroscopic quantities are claimed to be causal, the causal relations postulated are representable by a directed acyclic graph and associated probability distribution—sometimes called a dynamical Bayes net. Causal interpretations of such series imply claims that hypothetical manipulations of macroscopic variables have unambiguous effects on variables “downstream” in the graph, and such macroscopic variables may be predictably produced or altered even while particular microstates are not. This paper argues that such causal time series of macroscopic aggregates of microscopic processes are the appropriate model for mental causation.
We consider the dispute between causal decision theorists and evidential decision theorists over Newcomb-like problems. We introduce a framework relating causation and directed graphs developed by Spirtes et al. (1993) and evaluate several arguments in this context. We argue that much of the debate between the two camps is misplaced; the disputes turn on the distinction between conditioning on an event E as against conditioning on an event I which is an action to bring about E. We give the essential machinery for calculating the effect of an intervention and consider recent work which extends the basic account given here to the case where causal knowledge is incomplete.
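A toy Newcomb-like model in the Spirtes et al. spirit, with numbers of our own choosing: a hidden disposition D causes both the agent's action A and the predictor's box filling M. Conditioning on the event A = one-box is evidence about D; intervening to make A = one-box cuts the D -> A arrow and is not.

```python
# Sketch: P(box full | A = one-box) vs P(box full | do(A = one-box))
# in a common-cause structure  A <- D -> M.  All numbers illustrative.
P_modest = 0.5                                   # prior on the disposition D
P_onebox_given = {"modest": 0.9, "greedy": 0.1}  # the D -> A mechanism
# The predictor fills the opaque box exactly when D = modest (D -> M).

# Conditioning: Bayes' rule gives P(modest | A = one-box).
p_onebox = (P_modest * P_onebox_given["modest"]
            + (1 - P_modest) * P_onebox_given["greedy"])
p_full_given_onebox = P_modest * P_onebox_given["modest"] / p_onebox

# Intervening: do(A = one-box) severs D -> A, so M keeps its prior.
p_full_given_do_onebox = P_modest

print(p_full_given_onebox)     # 0.9: observing one-boxing is news about D
print(p_full_given_do_onebox)  # 0.5: forcing one-boxing is not
```

The gap between the two quantities (0.9 vs 0.5 here) is exactly the conditioning-versus-intervening distinction on which, the paper argues, the causal/evidential dispute turns.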
The notion of reduction in the natural sciences has been assimilated to the notion of inter-theoretical explanation. Many philosophers of science (following Nagel) have held that the apparently ontological issues involved in reduction should be replaced by analyses of the syntactic and semantic connections involved in explaining one theory on the basis of another. The replacement does not seem to have been especially successful, for we still lack a plausible account of inter-theoretical explanation. I attempt to provide one.