Causality offers the first comprehensive coverage of causal analysis in many sciences, including recent advances using graphical methods. Pearl presents a unified account of the probabilistic, manipulative, counterfactual and structural approaches to causation, and devises simple mathematical tools for analyzing the relationships between causal connections, statistical associations, actions and observations. The book will open the way for including causal analysis in the standard curriculum of statistics, artificial intelligence, business, epidemiology, social science and economics.
Written by one of the preeminent researchers in the field, this book provides a comprehensive exposition of the modern analysis of causation. It shows how causality has grown from a nebulous concept into a mathematical theory with significant applications in the fields of statistics, artificial intelligence, economics, philosophy, cognitive science, and the health and social sciences. Judea Pearl presents and unifies the probabilistic, manipulative, counterfactual, and structural approaches to causation and devises simple mathematical tools for studying the relationships between causal connections and statistical associations. Cited in more than 2,100 scientific publications, it continues to liberate scientists from the traditional molds of statistical thinking. In this revised edition, Judea Pearl elucidates thorny issues, answers readers' questions, and offers a panoramic view of recent advances in this field of research. Causality will be of interest to students and professionals in a wide variety of fields. Dr. Judea Pearl received the 2011 Rumelhart Prize from The Cognitive Science Society for his leading research in Artificial Intelligence and systems.
Judea Pearl has been at the forefront of research in the burgeoning field of causal modeling, and Causality is the culmination of his work over the last dozen or so years. For philosophers of science with a serious interest in causal modeling, Causality is simply mandatory reading. Chapter 2, in particular, addresses many of the issues familiar from works such as Causation, Prediction and Search by Peter Spirtes, Clark Glymour, and Richard Scheines. But philosophers with a more general interest in causation will also profit from reading Pearl’s book, especially the material in chapters 7, 9, and 10, which is self-contained and less technical than other parts of the book. The present review is aimed primarily at readers of the second type.
We propose a new definition of actual causes, using structural equations to model counterfactuals. We show that the definition yields a plausible and elegant account of causation that handles well examples which have caused problems for other definitions and resolves major difficulties in the traditional account.
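The structural-equations approach described above can be illustrated with a minimal sketch. The code below is not the authors' formalism, only an illustrative toy: each endogenous variable is a function of other variables, interventions override equations, and a simple "but-for" counterfactual test is run in a disjunctive fire example (names `solve`, `M`, `L`, `F` are my own).

```python
# Illustrative sketch of a structural equation model (SCM) and a
# "but-for" counterfactual test; not the Halpern-Pearl definition itself.

def solve(equations, exogenous, interventions=None):
    """Evaluate a recursive (acyclic) structural model.

    `equations` maps each endogenous variable to a function of the
    current value assignment; dict order is assumed topological.
    `interventions` forcibly sets variables, overriding their equations.
    """
    interventions = interventions or {}
    values = {**exogenous, **interventions}
    for var, fn in equations.items():
        values[var] = interventions.get(var, fn(values))
    return values

# Toy example: a match (M) or lightning (L) each suffices for fire (F).
equations = {"F": lambda v: v["M"] or v["L"]}
context = {"M": True, "L": False}

actual = solve(equations, context)
counterfactual = solve(equations, context, interventions={"M": False})

# Here M=True is a "but-for" cause of F: forcing M=False flips F.
print(actual["F"], counterfactual["F"])  # True False
```

Overdetermined cases (both `M` and `L` true) are exactly where the naive but-for test fails and the fuller definition of actual cause is needed.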
We show in this paper that the AGM postulates are too weak to ensure the rational preservation of conditional beliefs during belief revision, thus permitting improper responses to sequences of observations. We remedy this weakness by proposing four additional postulates, which are sound relative to a qualitative version of probabilistic conditioning. Contrary to the AGM framework, the proposed postulates characterize belief revision as a process which may depend on elements of an epistemic state that are not necessarily captured by a belief set. We also show that a simple modification to the AGM framework can allow belief revision to be a function of epistemic states. We establish a model-based representation theorem which characterizes the proposed postulates and constrains, in turn, the way in which entrenchment orderings may be transformed under iterated belief revision.
This paper studies the causal interpretation of counterfactual sentences using a modifiable structural equation model. It is shown that two properties of counterfactuals, namely, composition and effectiveness, are sound and complete relative to this interpretation, when recursive (i.e., feedback-less) models are considered. Composition and effectiveness also hold in Lewis's closest-world semantics, which implies that for recursive models the causal interpretation imposes no restrictions beyond those embodied in Lewis's framework. A third property, called reversibility, holds in nonrecursive causal models but not in Lewis's closest-world semantics, which implies that Lewis's axioms do not capture some properties of systems with feedback. Causal inferences based on counterfactual analysis are exemplified and compared to those based on graphical models.
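The two properties named in the abstract can be checked concretely on a toy recursive model. The sketch below is my own illustration (the model X → W → Y and the function `evaluate` are hypothetical, not from the paper): effectiveness says an intervened variable takes its assigned value, and composition says that re-imposing a value a variable already attains leaves downstream variables unchanged.

```python
# Illustrative check of "composition" and "effectiveness" on a tiny
# recursive structural model X -> W -> Y (toy example, not the paper's).

def evaluate(x=None, w=None):
    """Recursive model W := X, Y := W; keyword args act as interventions."""
    X = 1                          # exogenously fixed context
    X = x if x is not None else X  # do(X=x), if requested
    W = w if w is not None else X  # do(W=w), if requested
    Y = W
    return {"X": X, "W": W, "Y": Y}

# Effectiveness: under do(X=0), X indeed equals 0.
assert evaluate(x=0)["X"] == 0

# Composition: if W attains value w under do(X=0), then additionally
# forcing do(W=w) leaves Y unchanged.
w_val = evaluate(x=0)["W"]
assert evaluate(x=0)["Y"] == evaluate(x=0, w=w_val)["Y"]
print("composition and effectiveness hold in this model")
```

Reversibility, by contrast, only becomes non-trivial in nonrecursive (feedback) models, which this acyclic toy cannot exhibit.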
We propose new definitions of (causal) explanation, using structural equations to model counterfactuals. The definition is based on the notion of actual cause, as defined and motivated in a companion article. Essentially, an explanation is a fact that is not known for certain but, if found to be true, would constitute an actual cause of the fact to be explained, regardless of the agent's initial uncertainty. We show that the definition handles well a number of problematic examples from the literature.
This paper presents a formalism that combines useful properties of both logic and probabilities. Like logic, the formalism admits qualitative sentences and provides symbolic machinery for deriving deductively closed beliefs and, like probability, it permits us to express if-then rules with different levels of firmness and to retract beliefs in response to changing observations. Rules are interpreted as order-of-magnitude approximations of conditional probabilities which impose constraints over the rankings of worlds. Inferences are supported by a unique priority ordering on rules which is syntactically derived from the knowledge base. This ordering accounts for rule interactions, respects specificity considerations and facilitates the construction of coherent states of beliefs. Practical algorithms are developed and analyzed for testing consistency, computing rule ordering, and answering queries. Imprecise observations are incorporated using qualitative versions of Jeffrey's rule and Bayesian updating, with the result that coherent belief revision is embodied naturally and tractably. Finally, causal rules are interpreted as imposing Markovian conditions that further constrain world rankings to reflect the modularity of causal organizations. These constraints are shown to facilitate reasoning about causal projections, explanations, actions and change.
Non-manipulable factors, such as gender or race, have posed conceptual and practical challenges to causal analysts. On the one hand, these factors do have consequences; on the other, they do not fit into the experimentalist conception of causation. This paper addresses this challenge in the context of public debates over the health cost of obesity and offers a new perspective based on the theory of Structural Causal Models.
Recent advances in causal reasoning have given rise to a computational model that emulates the process by which humans generate, evaluate, and distinguish counterfactual sentences. Contrasted with the “possible worlds” account of counterfactuals, this “structural” model enjoys the advantages of representational economy, algorithmic simplicity, and conceptual clarity. This introduction traces the emergence of the structural model and gives a panoramic view of several applications where counterfactual reasoning has benefited problem areas in the empirical sciences.
According to a common judicial standard, judgment in favor of the plaintiff should be made if and only if it is more probable than not that the defendant's action was the cause of the plaintiff's damage (or death). This paper provides formal semantics, based on structural models of counterfactuals, for the probability that event x was a necessary or sufficient cause (or both) of another event y. The paper then explicates conditions under which the probability of necessary (or sufficient) causation can be learned from statistical data, and shows how data from both experimental and nonexperimental studies can be combined to yield information that neither study alone can provide. Finally, we show that necessity and sufficiency are two independent aspects of causation, and that both should be invoked in the construction of causal explanations for specific scenarios.
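Under the identifying assumptions discussed in this line of work (exogeneity and monotonicity), the probabilities of necessary and of sufficient causation reduce to closed forms in observable conditional probabilities. The sketch below states those standard closed forms; the function names and the numeric inputs are hypothetical, chosen only to illustrate the "more probable than not" threshold.

```python
# Closed forms for probabilities of causation, valid only under the
# assumptions of exogeneity and monotonicity (illustrative sketch).

def prob_necessity(p_y_x, p_y_not_x):
    """PN = [P(y|x) - P(y|x')] / P(y|x)  (the excess risk ratio)."""
    return (p_y_x - p_y_not_x) / p_y_x

def prob_sufficiency(p_y_x, p_y_not_x):
    """PS = [P(y|x) - P(y|x')] / [1 - P(y|x')]."""
    return (p_y_x - p_y_not_x) / (1 - p_y_not_x)

# Hypothetical numbers for illustration only: P(y|x)=0.5, P(y|x')=0.2.
pn = prob_necessity(0.5, 0.2)    # 0.6 -> clears "more probable than not"
ps = prob_sufficiency(0.5, 0.2)  # 0.375
print(round(pn, 3), round(ps, 3))
```

The point of the abstract's final claim shows up here: the same data yield different PN and PS values, so the two notions genuinely come apart.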
We demonstrate how counterfactuals can be used to compute the probability that one event was/is a sufficient cause of another, and how counterfactuals emerge organically from basic scientific knowledge, rather than manipulative experiments. We contrast this demonstration with the potential outcome framework and address the distinction between causes and enablers.
Among the many peculiarities that have been dubbed “paradoxes” by well-meaning statisticians, the one reported by Frederic M. Lord in 1967 has earned a special status. Although it can be viewed, formally, as a version of Simpson’s paradox, its reputation has fared much worse. Unlike Simpson’s reversal, Lord’s is easier to state, harder to disentangle, and, for some reason, has been lingering for almost four decades, under several interpretations and re-interpretations, and it keeps coming up in new situations and under new lights. Most peculiar yet, while some of its variants have received a satisfactory resolution, the original version presented by Lord has, to the best of my knowledge, not been given a proper treatment, not to mention a resolution. The purpose of this paper is to trace Lord’s paradox back to its original formulation, resolve it using modern tools of causal analysis, explain why it resisted prior attempts at resolution, and, finally, address the general methodological question of whether adjustment for preexisting conditions is justified in group-comparison applications.
This paper provides an empirical interpretation of the do-operator when applied to non-manipulable variables such as race, obesity, or cholesterol level. We view do(x) as an ideal intervention that provides valuable information on the effects of manipulable variables and is thus empirically testable. We draw parallels between this interpretation and ways of enabling machines to learn the effects of untried actions from those tried. We end with the conclusion that researchers need not distinguish manipulable from non-manipulable variables; both types are equally eligible to receive the do-operator and to produce useful information for decision makers.
I contrast the “data fitting” vs “data interpreting” approaches to data science along three dimensions: Expediency, Transparency, and Explainability. “Data fitting” is driven by the faith that the secret to rational decisions lies in the data itself. In contrast, the data-interpreting school views data, not as a sole source of knowledge but as an auxiliary means for interpreting reality, and “reality” stands for the processes that generate the data. I argue for restoring balance to data science through a task-dependent symbiosis (...) of fitting and interpreting, guided by the Logic of Causation. (shrink)