The Gestalt Bubble model describes subjective phenomenal experience (what is seen) without taking into account the extraphenomenal constraints of perceptual experience (why it is seen as it is). If it is intended as an explanatory model, it must include stimulus constraints, neural constraints, or both.
Doppelt argues that the democratic socialist conception of human freedom expressed in some recent works of mine lacks philosophical justification and fails to get to the roots of the socialist ideals of dignity, human worth, and self-respect. Doppelt claims to provide a new approach to the grounding of human freedom which allows him to avoid what he regards as the narrowness of my own conception. Not only does Doppelt fail to show that my own conception of freedom is confined to self-management and cannot embrace the dimensions of social life his own paradigm is claimed to take care of; his article also fails to raise or resolve the question of the conflict between the democratic socialist and the Rawlsian components that make up his 'new' paradigm. In this reply I discuss the issues Doppelt himself raises in connection with my own work: (1) how to ground human freedom; (2) whether my conception of freedom in democratic socialism is rationally preferable to the conception embodied in contemporary capitalist society; and (3) whether my idea of freedom does indeed exclude those dimensions of life to which Doppelt refers.
Marxism is often claimed to be incompatible with any kind of ethical theory, because of its assumptions of economic determinism, of the class character of morals, and of the subordination of morality to politics. But the author proposes that these assumptions can be interpreted in such a flexible way as not to rule out the freedom of choice and responsibility, the relative independence of morals from economic conditions and political ends, and concepts of universal human value and a specifically moral ideal. A humanist philosophy, centered in Marx's analysis of alienation, provides a sufficiently rich theoretical basis for the solution of both ethical and meta-ethical problems.
The elementary, liberal form of democracy has been criticized for being purely political, predominantly representative, centralistic, involving struggle for power among oligarchic political parties, maintaining professional politics and the domination of wealthy classes. A more rational and radical form, council democracy, is projected as a historically possible and better alternative. It extends democratic principles to economy and culture, combines direct participation with representation, replaces centralism with federalism, develops political pluralism without ruling parties, deprofessionalizes politics, and dismantles any monopoly of power. In the light of existing historical experiences, the structure of council democracy is analyzed, possible solutions to crucial practical problems are indicated, and different strategies for its realization are examined.
There are two different senses of the rationality of methodological rules: one is instrumental rationality, the other is rationality of goals. In the first sense methodological rules are mere means to an apparently neutral true description of a given reality. Such a description, no matter how adequate, involves hidden value-assumptions and may be used for irrational purposes. A different notion of ends-means rationality characterizes the methodological rules of critical science, which analyses limitations of the given reality from an explicitly stated value-standpoint. The ultimate purpose of such critical research is to produce changes in human behaviour and in objective reality, as in the case of medical activity: diagnosis is followed by therapy. Methodological rules of critical inquiry are only a special case of a general methodology of human practice, the rationality of which presupposes a universal emancipatory goal.
Nancy Cartwright exposes some gaps and difficulties in the argument for the causal Markov condition in our essay 'Independence, Invariance and the Causal Markov Condition', and we are grateful for the opportunity to reformulate our position. In particular, Cartwright disagrees vigorously with many of the theses we advance about the connection between causation and manipulation. Although we are not persuaded by some of her criticisms, we shall confine ourselves to showing how our central argument can be reconstructed and to casting doubt on Cartwright's claim that the causal Markov condition typically fails when there are indeterministic by-products. Outline: Why believe the causal Markov condition?; Causation and manipulation; The argument; Indeterministic by-products and the causal Markov condition; The chemical factory counterexample and PM2; Conclusions: causation and manipulability.
In their rich and intricate paper 'Independence, Invariance, and the Causal Markov Condition', Daniel Hausman and James Woodward put forward two independent theses, which they label 'level invariance' and 'manipulability', and they claim that, given a specific set of assumptions, manipulability implies the causal Markov condition. These claims are interesting and important, and this paper is devoted to commenting on them. With respect to level invariance, I argue that Hausman and Woodward's discussion is confusing because, as I point out, they use different senses of 'intervention' and 'invariance' without saying so. I shall remark on these various uses and point out that the thesis is true in at least two versions. The second thesis, however, is not true. I argue that in their formulation, the manipulability thesis is patently false and that a modified version does not fare better. Furthermore, I think their proof that manipulability implies the causal Markov condition is not conclusive. In the deterministic case it is valid but vacuous, whereas it is invalid in the probabilistic case. Outline: 1. Introduction; 2. Intervention, invariance and modularity; 3. The causal Markov condition: CM1 and CM2; 4. From MOD to the causal Markov condition and back; 5. A second argument for CM2; 6. The proof of the causal Markov condition for probabilistic causes; 7. 'Cartwright's objection' defended; 8. Metaphysical defenses of the causal Markov condition; 9. Conclusion.
This essay explains what the Causal Markov Condition says and defends the condition from the many criticisms that have been launched against it. Although we are skeptical about some of the applications of the Causal Markov Condition, we argue that it is implicit in the view that causes can be used to manipulate their effects and that it cannot be surrendered without surrendering this view of causation.
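For reference, the condition can be stated in its standard form (a common textbook formulation, not necessarily the authors' exact wording): a distribution P over a variable set V satisfies the Causal Markov Condition relative to a causal graph G just in case

\[
X \perp\!\!\!\perp \big(\mathrm{ND}(X) \setminus \mathrm{PA}(X)\big) \,\big|\, \mathrm{PA}(X) \qquad \text{for every } X \in V,
\]

where \(\mathrm{PA}(X)\) are X's parents (direct causes) in G, \(\mathrm{ND}(X)\) its non-descendants (non-effects), and \(\perp\!\!\!\perp\) denotes conditional independence in P.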
Markov models of evolution describe changes in the probability distribution of the trait values a population might exhibit. In consequence, they also describe how entropy and conditional entropy values evolve, and how the mutual information that characterizes the relation between an earlier and a later moment in a lineage’s history depends on how much time separates them. These models therefore provide an interesting perspective on questions that usually are considered in the foundations of physics—when and why does entropy increase and at what rates do changes in entropy take place? They also throw light on an important epistemological question: are there limits on what your observations of the present can tell you about the evolutionary past?
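A minimal sketch in Python of the kind of calculation described, assuming an illustrative two-state trait model (the transition matrix and all numbers are mine, not the paper's): entropy of the trait distribution is tracked over time, and the mutual information between the initial and the later state decays with the lag.

import numpy as np

# Illustrative two-state trait model (numbers are hypothetical).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # row-stochastic transition matrix

def entropy(p):
    # Shannon entropy in bits, ignoring zero entries.
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(p0, Tt):
    # I(X_0; X_t) from the joint P(X_0=i, X_t=j) = p0[i] * Tt[i, j].
    joint = p0[:, None] * Tt
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / np.outer(px, py)[mask]))

p0 = np.array([0.9, 0.1])   # nearly certain ancestral trait value
for t in [0, 1, 5, 10, 20, 40]:
    Tt = np.linalg.matrix_power(T, t)
    print(f"t={t:2d}  H(X_t)={entropy(p0 @ Tt):.3f} bits"
          f"  I(X_0;X_t)={mutual_information(p0, Tt):.3f} bits")

Entropy rises toward that of the stationary distribution (about 0.918 bits here), while the information the present carries about the past decays toward zero, matching the abstract's two themes.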
The present text comments on Steel (2005), in which the author claims to extend, from the deterministic to the general case, the result according to which the causal Markov condition is satisfied by systems with jointly independent exogenous variables. I show that Steel's claim cannot be accepted unless one is prepared to abandon standard causal modeling terminology. Correlatively, I argue that the most fruitful aspect of Steel (2005) consists in a realist conception of error terms, and I show how this conception sheds new light on the relationship between determinism and the causal Markov condition.
Daniel Hausman and James Woodward claim to prove that the causal Markov condition, so important to Bayes-nets methods for causal inference, is the 'flip side' of an important metaphysical fact about causation—that causes can be used to manipulate their effects. This paper disagrees. First, the premise of their proof does not demand that causes can be used to manipulate their effects but rather that if a relation passes a certain specific kind of test, it is causal. Second, the proof is invalid. Third, the kind of testability they require can easily be had without the causal Markov condition. Outline: Introduction; Earlier views: manipulability v testability; Increasingly weaker theses; The proof is invalid; MOD* is implausible; Two alternative claims and their defects; A true claim and a valid argument; Indeterminism; Overall conclusion.
This paper explores the relationship between a manipulability conception of causation and the causal Markov condition (CM). We argue that violations of CM also violate widely shared expectations—implicit in the manipulability conception—having to do with the absence of spontaneous correlations. They also violate expectations concerning the connection between independence or dependence relationships in the presence and absence of interventions.
Carnap's Inductive Logic, like most philosophical discussions of induction, is designed for the case of independent trials. To take account of periodicities, and more generally of order, the account must be extended. From both a physical and a probabilistic point of view, the first and fundamental step is to extend Carnap's inductive logic to the case of finite Markov chains. Kuipers (1988) and Martin (1967) suggest a natural way in which this can be done. The probabilistic character of Carnapian inductive logic(s) for Markov chains and their relationship to Carnap's inductive logic(s) is discussed at various levels of Bayesian analysis.
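One natural way to carry out the extension sketched here (on my reading the gist of the Kuipers/Martin proposal, though the details below are only an illustration) is to apply Carnap's lambda-continuum separately to each row of the transition matrix: the prediction for the next state conditions on the current state and adds lam/k "virtual" observations to each of the k possible successors.

from collections import Counter

def carnap_markov_predict(sequence, states, lam=2.0):
    # Carnap's lambda-rule applied row-wise to transition counts:
    # P(next = s | last) = (n(last -> s) + lam/k) / (n(last -> .) + lam).
    k = len(states)
    transitions = Counter(zip(sequence, sequence[1:]))
    last = sequence[-1]
    n_from_last = sum(c for (a, _), c in transitions.items() if a == last)
    return {s: (transitions[(last, s)] + lam / k) / (n_from_last + lam)
            for s in states}

seq = list("ABBABBABBA")
print(carnap_markov_predict(seq, states=["A", "B"]))
# {'A': 0.2, 'B': 0.8}: the periodic structure is picked up, unlike in
# Carnap's original exchangeable systems, which are insensitive to order.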
Hausman and Woodward present an argument for the Causal Markov Condition (CMC) on the basis of a principle they dub 'modularity' ([1999, 2004]). I show that the conclusion of their argument is not in fact the CMC but a substantially weaker proposition. In addition, I show that their argument is invalid and trace this invalidity to two features of modularity, namely, that it is stated in terms of pairwise independence and 'arrow-breaking' interventions. Hausman & Woodward's argument can be rendered valid through a reformulation of modularity, but it is doubtful that the argument so revised provides any substantially new insight regarding the basis of the CMC. Outline: Introduction; The CMC versus Hausman & Woodward's conclusion; Hausman & Woodward's argument; Modularity and independent error terms; Conclusion; Appendix: D-separation.
The causal Markov condition (CMC) plays an important role in much recent work on the problem of causal inference from statistical data. It is commonly thought that the CMC is a more problematic assumption for genuinely indeterministic systems than for deterministic ones. In this essay, I critically examine this proposition. I show how the usual motivation for the CMC—that it is true of any acyclic, deterministic causal system in which the exogenous variables are independent—can be extended to the indeterministic case. In light of this result, I consider several arguments for supposing indeterminism a particularly hostile environment for the CMC, but conclude that none are persuasive. Outline: Introduction; Functional models and directed graphs; The causal Markov theorem; The causal Markov theorem and genuine indeterminism; Are the exogenous variables independent?; EPR; Conclusion.
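The motivating result (the "causal Markov theorem" of the abstract) can be put as follows, in its standard formulation, with notation mine: suppose each variable is generated by a functional mechanism

\[
X_i = f_i(\mathrm{PA}_i, U_i), \qquad i = 1, \dots, n.
\]

If the graph over \(X_1, \dots, X_n\) is acyclic and the exogenous terms \(U_1, \dots, U_n\) are jointly independent, then the induced distribution satisfies the causal Markov condition relative to the graph. The essay's question is whether anything essential changes when the deterministic \(f_i\) are replaced by genuinely indeterministic mechanisms, e.g., mechanisms that output a probability distribution over values of \(X_i\) given \(\mathrm{PA}_i\).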
It is still a matter of controversy whether the Principle of the Common Cause (PCC) can be used as a basis for sound causal inference. It is thus to be expected that its application to quantum mechanics should be a correspondingly controversial issue. Indeed, the early 1990s saw a flurry of papers addressing just this issue in connection with the EPR correlations. Yet that debate does not seem to have caught up with the most recent literature on causal inference generally, which has moved on to consider the virtues of a generalised PCC-inspired condition, the so-called Causal Markov Condition (CMC). In this paper we argue that the CMC is an appropriate benchmark for debating possible causal explanations of the EPR correlations. But we go on to take issue with some pronouncements on EPR by defenders of the CMC.
Nancy Cartwright believes that we live in a Dappled World: a world in which theories, principles, and methods applicable in one domain may be inapplicable in others; in which there are no universal principles. One of the targets of Cartwright's arguments for this conclusion is the Causal Markov condition, a condition which has been proposed as a universal condition on causal structures. The Causal Markov condition, Cartwright argues, is applicable only in a limited domain of special cases, and thus cannot be used as a universal principle in causal discovery. I have no dispute with any of these claims here. Rather, I wish to argue for a very limited thesis: that the Causal Markov condition is applicable in the specific domain of microscopic quantum mechanical systems; further, that the condition can fruitfully be applied to the much discussed EPR setup. This is perhaps a surprising conclusion, for it is precisely in this domain that Cartwright's arguments against the Causal Markov condition have been considered to be the most successful.
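For the EPR case, the screening-off at stake is usually written as the factorizability condition (standard notation in this literature; the gloss is mine, not the paper's):

\[
P(a, b \mid \alpha, \beta, \lambda) \;=\; P(a \mid \alpha, \lambda)\, P(b \mid \beta, \lambda),
\]

where \(a, b\) are the outcomes on the two wings, \(\alpha, \beta\) the local measurement settings, and \(\lambda\) the complete state of the postulated common cause. Bell's theorem shows that factorizability, together with the independence of \(\lambda\) from the settings, contradicts the quantum predictions, which is why the EPR correlations have been treated as the decisive counterexample to the Causal Markov condition in this domain.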
Recent discussions in the philosophy of science have devoted considerable attention to the analysis of conceptual issues relating to the methodology of explanation and prediction in the sciences. Part of this literature has been devoted to clarifying the very ideas of explanation and prediction. But the discussion has also ranged over various related topics, including the status of laws to be used for explanatory and predictive purposes, the logical interrelationships between explanatory and predictive reasonings, the differences in the strategy of explanatory argumentation in different branches of science, the nature and possibility of teleological explanation, etc. The aim of the present article is to examine the issues involved in such questions from the specialized perspective afforded by one particular kind of physical system, namely discrete state systems, whose behavior has been studied extensively in the scientific literature under the general heading of Markov chains. These systems have been chosen as our focus because their behavior over time can be analyzed at once with great ease and with extraordinary precision.
A model of a Markov process is presented in which observing the present state of a system is asymmetrically related to inferring the system's future and inferring its past. A likelihood inference about the system's past state, based on observing its present state, is justified no matter what the parameter values in the model happen to be. In contrast, a probability inference of the system's future state, based on observing its present state, requires further information about the parameter values.
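A toy illustration of this asymmetry in Python, with a two-state chain and made-up parameter values (a is the per-step probability of leaving state 0, b of leaving state 1). Across parameter settings, the observed present state favors the hypothesis that the past state was the same, so the backward likelihood comparison is parameter-robust as long as change probabilities are modest (a + b < 1); by contrast, even the qualitative forward prediction, whether the present state makes the same future state more probable than not, flips with the parameter values.

import numpy as np

def chain(a, b):
    # Two-state Markov chain: a = P(0 -> 1), b = P(1 -> 0).
    return np.array([[1 - a, a],
                     [b, 1 - b]])

t = 5  # steps separating past/future from the observed present
for a, b in [(0.05, 0.3), (0.2, 0.1), (0.4, 0.4)]:
    Tt = np.linalg.matrix_power(chain(a, b), t)
    # Backward: likelihoods of the two past-state hypotheses given present = 0.
    # Forward: probability of future = 0 given present = 0.
    print(f"a={a}, b={b}:  L(past=0)={Tt[0, 0]:.3f} > L(past=1)={Tt[1, 0]:.3f};"
          f"  P(future=0 | present=0)={Tt[0, 0]:.3f}")

In the middle setting the forward probability drops below 1/2 (a present 0 predicts a likely change), while the likelihood ordering L(past=0) > L(past=1) persists in all three.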
In this paper, we study the performance of a baseline hidden Markov model (HMM) for the segmentation of speech signals. It is applied to a single-speaker segmentation task using a Hindi speech database. The automatic phoneme segmentation framework developed here imitates the human phoneme segmentation process. A set of 44 Hindi phonemes was chosen for the segmentation experiment, wherein we used continuous density hidden Markov models (CDHMMs) with a mixture of Gaussian distributions. The left-to-right topology with no skip states was selected, as it is effective in speech recognition due to its consistency with the natural way of articulating spoken words. This system accepts speech utterances along with their orthographic transcriptions and generates segmentation information for the speech. This corpus was used to develop context-independent hidden Markov models (HMMs) for each of the Hindi phonemes. The system was trained using numerous sentences that are relevant to providing information to the passengers of the Metro Rail. The system was validated against a few manually segmented speech utterances. The evaluation of the experiments shows that the best performance is obtained by using a mixture of two Gaussians and five HMM states. A category-wise phoneme error analysis has been performed, and the performance of the phonetic segmentation has been reported. The modeling of HMMs was implemented in Microsoft Visual Studio 2005 (C++), and the system is designed to work on the Windows operating system. The goal of this study is automatic segmentation of speech at the phonetic level.
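The decoding step at the core of such a segmentation system can be sketched in a few lines. This toy Python version uses single-Gaussian emissions on a one-dimensional signal and a strict left-to-right topology (the paper's actual system used two-component mixtures, five states per phone, and C++; everything below is illustrative only):

import numpy as np

def viterbi_left_to_right(obs, means, variances):
    # Log-domain Viterbi for a strict left-to-right HMM with no skips:
    # from state s one may only stay in s or advance to s + 1.  Stay/advance
    # probabilities are taken as equal, so they drop out of the maximization.
    n_states, n_frames = len(means), len(obs)
    log_emit = -0.5 * (np.log(2 * np.pi * variances)[:, None]
                       + (obs[None, :] - means[:, None]) ** 2 / variances[:, None])
    delta = np.full((n_states, n_frames), -np.inf)
    delta[0, 0] = log_emit[0, 0]          # decoding must start in state 0
    back = np.zeros((n_states, n_frames), dtype=int)
    for t in range(1, n_frames):
        for s in range(n_states):
            stay = delta[s, t - 1]
            advance = delta[s - 1, t - 1] if s > 0 else -np.inf
            back[s, t] = s if stay >= advance else s - 1
            delta[s, t] = max(stay, advance) + log_emit[s, t]
    path = [n_states - 1]                  # decoding must end in the last state
    for t in range(n_frames - 1, 0, -1):
        path.append(back[path[-1], t])
    return path[::-1]

# Toy 1-D "speech": three segments that differ in mean, one state per phone.
np.random.seed(0)
obs = np.concatenate([np.random.normal(0, 1, 30),
                      np.random.normal(4, 1, 40),
                      np.random.normal(8, 1, 30)])
path = viterbi_left_to_right(obs, means=np.array([0.0, 4.0, 8.0]),
                             variances=np.array([1.0, 1.0, 1.0]))
print("boundaries at frames:",
      [t for t in range(1, len(path)) if path[t] != path[t - 1]])  # near [30, 70]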
Page's manifesto makes a case for localist representations in neural networks, one of the advantages being ease of interpretation. However, even localist networks can be hard to interpret, especially when distributed representations are employed at some hidden layer of the network, as is often the case. Hidden Markov models can be used to provide useful, interpretable representations.
The theory of Markov decision processes (MDP) can be used to analyze a wide variety of stopping time problems in economics. In this paper, the nature of such problems is discussed and then the underlying theory is applied to the question of arranged marriages. We construct a stylized model of arranged marriages and, inter alia, it is shown that a decision maker's optimal policy depends only on the nature of the current marriage proposal, independent of whether there is recall (storage) of previous marriage proposals.
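The flavor of the underlying MDP argument can be shown with a sketch of the value recursion for a hypothetical stationary search problem: proposal qualities arrive i.i.d. Uniform(0, 1), accepting ends the search with payoff equal to the proposal's quality, and rejecting discounts the future. Every modeling detail below is illustrative, not the paper's model.

import numpy as np

def reservation_threshold(discount=0.95, grid=100001, iters=1000):
    # Infinite-horizon search: each period one proposal q ~ Uniform(0, 1).
    # Value recursion (solved by fixed-point iteration): V = E[max(q, discount * V)].
    q = np.linspace(0, 1, grid)
    v = 0.0
    for _ in range(iters):
        v = np.mean(np.maximum(q, discount * v))
    return discount * v  # optimal rule: accept q iff q >= this threshold

print(f"reservation quality: {reservation_threshold():.3f}")
# The threshold is the same every period, so a proposal once rejected would be
# rejected again later: storing (recalling) past proposals adds nothing, which
# echoes the abstract's independence-of-recall result.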
The conditional independence relations present in a data set usually admit multiple causal explanations — typically represented by directed graphs — which are Markov equivalent in that they entail the same conditional independence relations among the observed variables. Markov equivalence between directed acyclic graphs (DAGs) has been characterized in various ways, each of which has been found useful for certain purposes. In particular, Chickering's transformational characterization is useful in deriving properties shared by Markov equivalent DAGs and, with a certain generalization, is needed to justify a search procedure over Markov equivalence classes, known as the GES algorithm. Markov equivalence between DAGs with latent variables has also been characterized, in the spirit of Verma and Pearl (1990), via maximal ancestral graphs (MAGs). The latter can represent the observable conditional independence relations as well as some causal features of DAG models with latent variables. However, no characterization of Markov equivalent MAGs is yet available that is analogous to the transformational characterization for Markov equivalent DAGs. The main contribution of the current paper is to establish such a characterization for directed MAGs, which we expect will have uses similar to those Chickering's characterization has for DAGs.
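The baseline DAG case mentioned here is easy to operationalize: by the Verma-Pearl characterization, two DAGs are Markov equivalent iff they have the same skeleton and the same v-structures (unshielded colliders). A Python sketch:

def skeleton(dag):
    # dag: dict mapping each node to the set of its parents.
    return {frozenset((u, v)) for v, parents in dag.items() for u in parents}

def v_structures(dag):
    # Unshielded colliders a -> c <- b with a and b non-adjacent.
    skel = skeleton(dag)
    return {(frozenset((a, b)), c)
            for c, parents in dag.items()
            for a in parents for b in parents
            if a != b and frozenset((a, b)) not in skel}

def markov_equivalent(g1, g2):
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

# X -> Y -> Z and X <- Y <- Z are equivalent; X -> Y <- Z is not.
chain1 = {"X": set(), "Y": {"X"}, "Z": {"Y"}}
chain2 = {"Z": set(), "Y": {"Z"}, "X": {"Y"}}
collider = {"X": set(), "Z": set(), "Y": {"X", "Z"}}
print(markov_equivalent(chain1, chain2))    # True
print(markov_equivalent(chain1, collider))  # False

The paper's contribution is precisely that no comparably simple transformational story was available for MAGs with latent variables.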
Exploring how people represent natural categories is a key step toward developing a better understanding of how people learn, form memories, and make decisions. Much research on categorization has focused on artificial categories that are created in the laboratory, since studying natural categories defined on high-dimensional stimuli such as images is methodologically challenging. Recent work has produced methods for identifying these representations from observed behavior, such as reverse correlation (RC). We compare RC against an alternative method for inferring the structure of natural categories called Markov chain Monte Carlo with People (MCMCP). Based on an algorithm used in computer science and statistics, MCMCP provides a way to sample from the set of stimuli associated with a natural category. We apply MCMCP and RC to the problem of recovering natural categories that correspond to two kinds of facial affect (happy and sad) from realistic images of faces. Our results show that MCMCP requires fewer trials to obtain a higher quality estimate of people’s mental representations of these two categories.
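The core of MCMCP is a Metropolis-style chain in which the human choice serves as the acceptance step: if, on each trial, the participant picks between the current stimulus and a proposed one with probability proportional to how well each exemplifies the category (a Barker/Luce choice rule), the chain's stationary distribution is the category distribution itself. A sketch with a simulated participant whose "mental" category is a Gaussian over a one-dimensional stimulus (the real experiments used face images; everything below is illustrative):

import math, random

random.seed(0)

def category_fit(x):
    # Stand-in for the participant's mental representation of the category;
    # in a real MCMCP experiment this quantity lives in the person's head.
    return math.exp(-0.5 * ((x - 2.0) / 0.7) ** 2)

def participant_chooses_proposal(current, proposal):
    # Barker/Luce choice rule: pick each option with probability
    # proportional to how well it exemplifies the category.
    p = category_fit(proposal) / (category_fit(proposal) + category_fit(current))
    return random.random() < p

x = 0.0  # arbitrary starting stimulus
samples = []
for _ in range(20000):
    proposal = x + random.gauss(0, 0.5)  # symmetric proposal distribution
    if participant_chooses_proposal(x, proposal):
        x = proposal
    samples.append(x)

kept = samples[5000:]  # discard burn-in
mean = sum(kept) / len(kept)
sd = (sum((s - mean) ** 2 for s in kept) / len(kept)) ** 0.5
print(f"recovered category: mean={mean:.2f}, sd={sd:.2f}")  # close to 2.0 and 0.7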
The development of causal modelling since the 1950s has been accompanied by a number of controversies, the most striking of which concerns the Markov condition. Reichenbach's conjunctive forks did satisfy the Markov condition, while Salmon's interactive forks did not. Subsequently some experts in the field have argued that adequate causal models should always satisfy the Markov condition, while others have claimed that non-Markovian causal models are needed in some cases. This paper argues for the second position by considering multi-causal forks, which are widespread in contemporary medicine.
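The contrast driving the controversy can be put formally. Reichenbach's conjunctive fork requires the common cause \(C\) to screen off its joint effects \(A\) and \(B\):

\[
P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C), \qquad
P(A \wedge B \mid \neg C) = P(A \mid \neg C)\,P(B \mid \neg C),
\]

whereas in Salmon's interactive fork a residual correlation survives the conditioning, \(P(A \wedge B \mid C) > P(A \mid C)\,P(B \mid C)\), in violation of the screening-off that the Markov condition demands. The multi-causal forks discussed in this paper are argued to be of the second, non-Markovian kind.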
The Modern Synthesis of Darwinism and genetics regards non-genetic factors as merely constraints on the genetic variations that result in the characteristics of organisms. Even though the environment (including social interactions and culture) is as necessary as genes in terms of selection and inheritance, it does not contain the information that controls the development of the traits. S. Oyama’s account of the Parity Thesis, however, states that one cannot conceivably distinguish in a meaningful way between nature-based (i.e., gene-based) and nurture-based (i.e., environment-based) characteristics in development because the information necessary for the resulting characteristics is contained at both levels. Oyama and others argue that the Parity Thesis has far-reaching implications for developmental psychology, in that both nativist and interactionist developmental accounts of motor, cognitive, affective, social, and linguistic capacities that presuppose a substantial nature/nurture dichotomy are inadequate. After considering these arguments, we conclude that either Oyama’s version of the Parity Thesis does not differ from the version advocated by liberal interactionists, or it renders precarious any analysis involving abilities present at birth (despite her claim to the contrary). More importantly, developmental psychologists need not discard the distinction between innate characteristics present at birth and those acquired by learning, even if they abandon genocentrism. Furthermore, we suggest a way nativists can disentangle the concept of maturation from a genocentric view of biological nature. More specifically, we suggest they can invoke the maturational segment of the developmental process (which involves genetic, epigenetic and environmental causes) that results in the biological “machinery” (e.g. language acquisition device) which is necessary for learning as a subsequent segment of the developmental process.
S. Oyama’s prominent account of the Parity Thesis states that one cannot distinguish in a meaningful way between nature-based (i.e. gene-based) and nurture-based (i.e. environment-based) characteristics in development because the information necessary for the resulting characteristics is contained at both levels. Oyama as well as P. E. Griffiths and K. Stotz argue that the Parity Thesis has far-reaching implications for developmental psychology in that both nativist and interactionist developmental accounts of psychological capacities that presuppose a substantial nature/nurture dichotomy are inadequate. We argue that well-motivated abandoning of the nature/nurture dichotomy, as advocated in converging versions of the Parity Thesis in biology, does not necessarily entail abandoning the distinction between biologically given abilities necessary for the development of higher psychological capacities and the learning process they enable. Thus, contrary to the claims of the aforementioned authors, developmental psychologists need not discard a substantial distinction between innate (biologically given) characteristics and those acquired by learning, even if they accept the Parity Thesis. We suggest a two-stage account of development: the first stage is maturational and involves interaction of genetic, epigenetic and environmental causes, resulting in the endogenous biological ‘machinery’ (e.g. language acquisition device), responsible for learning in the subsequent stage of the developmental process by determining the organism’s responses to the environment. This account retains the crux of nativism (the endogenous biological structure determines the way the organism learns/responds to an environment) whilst adopting the developmentalist view of biology by characterizing environments as distinctly different in terms of structure and function in two developmental stages.
Advancing the reductionist conviction that biology must be in agreement with the assumptions of reductive physicalism (the upward hierarchy of causal powers, the upward fixing of facts concerning biological levels) A. Rosenberg argues that downward causation is ontologically incoherent and that it comes into play only when we are ignorant of the details of biological phenomena. Moreover, in his view, a careful look at relevant details of biological explanations will reveal the basic molecular level that characterizes biological systems, defined by wholly physical properties, e.g., geometrical structures of molecular aggregates (cells). In response, we argue that contrary to his expectations one cannot infer reductionist assumptions even from detailed biological explanations that invoke the molecular level, as interlevel causal reciprocity is essential to these explanations. Recent very detailed explanations that concern the structure and function of chromatin—the intricacies of supposedly basic molecular level—demonstrate this. They show that what seem to be basic physical parameters extend into a more general biological context, thus rendering elusive the concepts of the basic level and causal hierarchy postulated by the reductionists. In fact, relevant phenomena are defined across levels by entangled, extended parameters. Nor can the biological context be explained away by basic physical parameters defining molecular level shaped by evolution as a physical process. Reductionists claim otherwise only because they overlook the evolutionary significance of initial conditions best defined in terms of extended biological parameters. Perhaps the reductionist assumptions (as well as assumptions that postulate any particular levels as causally fundamental) cannot be inferred from biological explanations because biology aims at manipulating organisms rather than producing explanations that meet the coherence requirements of general ontological models. Or possibly the assumptions of an ontology not based on the concept of causal powers stratified across levels can be inferred from biological explanations. The incoherence of downward causation is inevitable, given reductionist assumptions, but an ontological alternative might avoid this. We outline desiderata for the treatment of levels and properties that realize interlevel causation in such an ontology.
Few people have thought so hard about the nature of the quantum theory as has Jeff Bub, and so it seems appropriate to offer in his honor some reflections on that theory. My topic is an old one, the consistency of our microscopic theories with our macroscopic theories; my example, the Aspect experiments (Aspect et al., 1981, 1982, 1982a; Clauser and Shimony, 1978; Duncan and Kleinpoppen, 1998), is familiar, and my simplification of it is borrowed. All that is new here is a kind of diagonalization: an argument that the fundamental principles found to be violated by the quantum theory must be assumed to be true of the experimental apparatus used in the experiments.
The success of particle detection in high energy physics colliders critically depends on the criteria for selecting a small number of interactions from an overwhelming number that occur in the detector. It also depends on the selection of the exact data to be analyzed and the techniques of analysis. The introduction of automation into the detection process has traded the direct involvement of the physicist at each stage of selection and analysis for the efficient handling of vast amounts of data. This tradeoff, in combination with the organizational changes in laboratories of increasing size and complexity, has resulted in automated and semi-automated systems of detection. Various aspects of the semi-automated regime were greatly diminished in more generic automated systems, but turned out to be essential to a number of surprising discoveries of anomalous processes that led to theoretical breakthroughs, notably the establishment of the Standard Model of particle physics. The automated systems are much more efficient in confirming specific hypotheses in narrow energy domains than in performing broad exploratory searches. Thus, in the main, detection processes relying excessively on automation are more likely to miss potential anomalies and impede potential theoretical advances. I suggest that putting substantially more effort into the study of electron–positron colliders and increasing their funding could minimize the likelihood of missing potential anomalies, because detection in such an environment can be handled by the semi-automated regime—unlike detection in hadron colliders. Despite virtually unavoidable excessive reliance on automated detection in hadron colliders, their development has been deemed a priority because they can operate at the currently highest energy levels. I suggest, however, that a focus on collisions at the highest achievable energy levels diverts funds from searches for potential anomalies overlooked due to tradeoffs at previous energy thresholds. I also note that even in the same collision environment, different research strategies will opt for different tradeoffs and thus achieve different experimental outcomes. Finally, I briefly discuss current searches for anomalous processes in the context of the previous analysis.
This paper considers the role of mathematics in the process of acquiring new knowledge in physics and astronomy. The defining of the notions of continuum and discreteness in mathematics and the natural sciences is examined. The basic forms of representing the heuristic function of mathematics at theoretical and empirical levels of knowledge are studied: deducing consequences from the axiomatic system of theory, the method of generating mathematical hypotheses, “pure” proofs for the existence of objects and processes, mathematical modelling, the formation of mathematics on the basis of internal mathematical principles and the mathematical theory of experiment.
Jaegwon Kim’s exclusion argument is a general ontological argument, applicable to any properties deemed supervenient on a microproperty basis, including biological properties. It implies that the causal power of any higher-level property must be reducible to the subset of the causal powers of its lower-level properties. Moreover, as Kim’s recent version of the argument indicates, a higher-level property can be causally efficient only to the extent of the efficiency of its micro-basis. In response, I argue that the ontology that aims to capture experimentally based explanations of metabolic control systems and morphogenetic systems must involve causally relevant contextual properties. Such an ontology challenges the exclusiveness of micro-based causal efficiency that grounds Kim’s reductionism, since configurations themselves are inherently causally efficient constituents. I anticipate and respond to the reductionist’s objection that the nonreductionist ontology’s account of causes and inter-level causal relations is incoherent. I also argue that such an ontology is not open to Kim’s overdetermination objection.
E. Schrödinger's ideas on interpreting quantum mechanics have been recently re-examined by historians and revived by philosophers of quantum mechanics. Such recent re-evaluations have focused on Schrödinger's retention of space–time continuity and his relinquishment of the corpuscularian understanding of microphysical systems. Several of these historical re-examinations claim that Schrödinger refrained from pursuing his 1926 wave-mechanical interpretation of quantum mechanics under pressure from the Copenhagen and Göttingen physicists, who misinterpreted his ideas in their dogmatic pursuit of the complementarity doctrine and the principle of uncertainty. My analysis points to very different reasons for Schrödinger's decision and, accordingly, to a rather different understanding of the dialogue between Schrödinger and N. Bohr, who refuted Schrödinger's arguments. Bohr's critique of Schrödinger's arguments predominantly focused on the results of experiments on the scattering of electrons performed by Bothe and Geiger, and by Compton and Simon. Although he shared Schrödinger's rejection of full-blown classical entities, Bohr argued that these results demonstrated the corpuscular nature of atomic interactions. I argue that it was Schrödinger's agreement with Bohr's critique, not the dogmatic pressure, which led him to give up pursuing his interpretation for seven years. Bohr's critique reflected his deep understanding of Schrödinger's ideas and motivated, at least in part, his own pursuit of his complementarity principle. However, in 1935 Schrödinger revived and reformulated the wave-mechanical interpretation. The revival reflected N. F. Mott's novel wave-mechanical treatment of particle-like properties. R. Shankland's experiment, which demonstrated an apparent conflict with the results of Bothe–Geiger and Compton–Simon, may have been additional motivation for the revival. Subsequent measurements have proven the original experimental results accurate, and I argue that Schrödinger may have perceived even the reformulated wave-mechanical approach as too tenuous in light of Bohr's critique.
A recent rethinking of the early history of Quantum Mechanics deemed the late 1920s agreement on the equivalence of Matrix Mechanics and Wave Mechanics, prompted by Schrödinger's 1926 proof, a myth. Schrödinger supposedly failed to prove isomorphism, or even a weaker equivalence (“Schrödinger-equivalence”), of the mathematical structures of the two theories; developments in the early 1930s, especially the work of the mathematician von Neumann, provided sound proof of mathematical equivalence. The alleged agreement about the Copenhagen Interpretation, predicated to a large extent on this equivalence, was deemed a myth as well. In response, I argue that Schrödinger's proof concerned primarily a domain-specific ontological equivalence, rather than the isomorphism or a weaker mathematical equivalence. It stemmed initially from the agreement of the eigenvalues of Wave Mechanics and energy-states of Bohr's Model that was discovered and published by Schrödinger in his first and second communications of 1926. Schrödinger demonstrated in this proof that the laws of motion arrived at by the method of Matrix Mechanics are satisfied by assigning the auxiliary role to eigenfunctions in the derivation of matrices (while he only outlined the reversed derivation of eigenfunctions from Matrix Mechanics, which was necessary for the proof of both isomorphism and Schrödinger-equivalence of the two theories). This result was intended to demonstrate the domain-specific ontological equivalence of Matrix Mechanics and Wave Mechanics, with respect to the domain of Bohr's atom. And although the mathematical equivalence of the theories did not seem out of the reach of existing theories and methods, Schrödinger never intended to fully explore such a possibility in his proof paper. In a further development of Quantum Mechanics, Bohr's complementarity and Copenhagen Interpretation captured a more substantial convergence of the subsequently revised (in light of the experimental results) Wave and Matrix Mechanics. I argue that both the equivalence and Copenhagen Interpretation can be deemed myths if one predicates the philosophical and historical analysis on a narrow model of physical theory which disregards its historical context, and focuses exclusively on its formal aspects and the exploration of the logical models supposedly implicit in it.
A recent rethinking of the early history of Quantum Mechanics deemed the late 1920s agreement on the equivalence of Matrix Mechanics and Wave Mechanics, prompted by Schrödinger’s 1926 proof, a myth. Schrödinger supposedly failed to achieve the goal of proving isomorphism of the mathematical structures of the two theories, while only later developments in the early 1930s, especially the work of the mathematician John von Neumann (1932), provided sound proof of equivalence. The alleged agreement about the Copenhagen Interpretation, predicated to a large extent on this equivalence, was deemed a myth as well. If such analysis is correct, it provides considerable evidence that, in its critical moments, the foundations of scientific practice might not live up to the minimal standards of rigor, as such standards are established in the practice of logic, mathematics, and mathematical physics, thereby prompting one to question the rationality of the practice of physics. In response, I argue that Schrödinger’s proof concerned primarily a domain-specific ontological equivalence, rather than the isomorphism. It stemmed initially from the agreement of the eigenvalues of Wave Mechanics and energy-states of Bohr’s Model that was discovered and published by Schrödinger in his First and Second Communications of 1926. Schrödinger demonstrated in this proof that the laws of motion arrived at by the method of Matrix Mechanics could be derived successfully from eigenfunctions as well (while he only outlined the reversed derivation of eigenfunctions from Matrix Mechanics, which was necessary for the proof of isomorphism of the two theories). This result was intended to demonstrate the domain-specific ontological equivalence of Matrix Mechanics and Wave Mechanics, with respect to the domain of Bohr’s atom. And although the full-fledged mathematico-logical equivalence of the theories did not seem out of the reach of existing theories and methods, Schrödinger never intended to fully explore such a possibility in his proof paper. In a further development of Quantum Mechanics, Bohr’s complementarity and Copenhagen Interpretation captured a more substantial convergence of the subsequently revised (in light of the experimental results) Wave and Matrix Mechanics. I argue that both the equivalence and Copenhagen Interpretation can be deemed myths if one predicates the philosophical and historical analysis on a narrow model of physical theory which disregards its historical context, and focuses exclusively on its formal aspects and the exploration of the logical models supposedly implicit in it.
Communism, in Marx' mind, did not mean simple liberation, but the economics of liberation. The realm of necessity (technē) was to become the primary field for emancipation (praxis), the latter taking form in new institutions, responsive to real socio-economic needs. In this sense, the problem of technocracy and the corporatist ethos in Marx are part of a broader discursive structure, which links the experiences of workers through the industrial revolution with the philosophies of praxis as they reach from Hegel through Marković.
This study elucidates and appraises a conception of praxis developed by the Yugoslav Marxist Mihailo Marković. This notion is first distinguished from everyday and alternative theoretical uses of 'practice', 'practical', and 'praxis'. Marković's view is then characterized as a normative, pluralistic theory of both human being and doing. Praxis, for Marković, is activity which realizes one's best potentialities: (i) the humanly generic dispositions of intentionality, self-determination, creativity, sociality, and rationality, and (ii) one's relatively distinctive abilities and bents compatible with (i). Following a critical analysis of Marković's attempts to justify praxis as norm, two substantive criticisms are advanced. The theory needs (i) priority rules for the relative weighting of praxis components when they cannot all be (fully) realized in an action, and (ii) a specification of the genus praxis so as to recognize important differences among optimal activities which shape things, construct theories, rear children, and share with mature persons.
This paper introduces a class of graphical independence models that is closed under marginalization and conditioning but that contains all DAG independence models. This class of graphs, called maximal ancestral graphs, has two attractive features: there is at most one edge between each pair of vertices; every missing edge corresponds to an independence relation. These features lead to a simple parameterization of the corresponding set of distributions in the Gaussian case.
If asked to name career diplomats who have tackled some very difficult international crises, many foreign policy makers would put Richard Holbrooke near the top of the list. Not many negotiators have wielded moral principle, power, and reason as well as Holbrooke. His book on the Bosnia negotiations leading up to the 1995 Dayton Peace Agreement is timely, given the ethnic cleansing that is being carried out in Kosovo, a southern province of Yugoslavia's Serb Republic. Once again we are faced with unrest in the Balkans. We have seen the daily newspaper headlines change from "24 Albanian Men Killed in Kosovo" and "Hopes Fade for New Kosovo Talks" to "NATO Air Campaign Expanded" and "Chinese Embassy Bombed in Belgrade." Although talk of "Bosnian Muslims," "the Bosnian Army," and "Srebrenica" has been replaced with "Kosovars," "the Kosovo Liberation Army," and "Rogovo," two of the main actors in the Bosnia negotiations have returned to put their stamp on the Kosovo negotiations: President Slobodan Milosevic and U.S. envoy Richard Holbrooke. Unfortunately, Holbrooke's words that begin the last paragraph of his book seem to have come true: "There will be other Bosnias in our lives." With that in mind, Holbrooke's book will best be appreciated as a harbinger of things to come in Kosovo and elsewhere.
Critique of idealistic naturalism: methodological pollution in the main stream of American philosophy, by D. Riepe.--Ex nihilo nihil fit: philosophy's "starting point," by D. H. DeGrood.--An historical critique of empiricism, by J. E. Hansen.--Epilogue on Berkeley, by R. W. Sellars.--Mandala thinking, by A. Mackay.--An empirical conception of freedom, by E. D'Angelo.--Heidegger on the essence of truth, by M. Farber.--Minding as a material force, by H. L. Parsons.--The crisis of the 1890's and the shaping of twentieth century America, by R. B. Carson.--Ideology, scientific philosophy, and Marxism, by J. Somerville.--Marx and critical scientific thought, by M. Marković.--Experimentalism extended to politics, by E. Guevara.--The unity of opposites: a dialectical principle, by V. J. McGill and W. T. Parry.--A need definition of "value," by R. Handy.--Alienation and social action, by A. Schaff.--Naturalism in the Tao of Confucius and Lao Tzu, by D. H.-F. Poe.--Bibliography (p. 260-269).
The essential precondition of implementing interventionist techniques of causal reasoning is that particular variables are identified as so-called intervention variables. While the pertinent literature standardly brackets the question how this can be accomplished in concrete contexts of causal discovery, the first part of this paper shows that the interventionist nature of variables cannot, in principle, be established based only on an interventionist notion of causation. The second part then demonstrates that standard observational methods that draw on Bayesian networks identify intervention variables only if they also answer the questions that can be answered by interventionist techniques—which are thus rendered dispensable. The paper concludes by suggesting a way of identifying intervention variables that allows for exploiting the whole inferential potential of interventionist techniques.
Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus.
Is the common cause principle merely one of a set of useful heuristics for discovering causal relations, or is it rather a piece of heavy-duty metaphysics, capable of grounding the direction of causation itself? Since the principle was introduced in Reichenbach’s groundbreaking work The Direction of Time (1956), there have been a series of attempts to pursue the latter program—to take the probabilistic relationships constitutive of the principle of the common cause and use them to ground the direction of causation. These attempts have not all explicitly appealed to the principle as originally formulated; it has also appeared in the guise of independence conditions, counterfactual overdetermination, and, in the causal modelling literature, as the causal Markov condition. In this paper, I identify a set of difficulties for grounding the asymmetry of causation on the principle and its descendants. The first difficulty, concerning what I call the vertical placement of causation, consists of a tension between considerations that drive towards the macroscopic scale and considerations that drive towards the microscopic scale—the worry is that these considerations cannot both be comfortably accommodated. The second difficulty consists of a novel potential counterexample to the principle based on the familiar Einstein-Podolsky-Rosen (EPR) correlations in quantum mechanics.
We clarify the status of the so-called causal minimality condition in the theory of causal Bayesian networks, which has received much attention in the recent literature on the epistemology of causation. In doing so, we argue that the condition is well motivated in the interventionist (or manipulability) account of causation, assuming the causal Markov condition which is essential to the semantics of causal Bayesian networks. Our argument has two parts. First, we show that the causal minimality condition, rather than an add-on methodological assumption of simplicity, necessarily follows from the substantive interventionist theses, provided that the actual probability distribution is strictly positive. Second, we demonstrate that the causal minimality condition can fail when the actual probability distribution is not positive, as is the case in the presence of deterministic relationships. But we argue that the interventionist account still entails a pragmatic justification of the causal minimality condition. Our argument in the second part exemplifies a general perspective that we think commendable: when evaluating methods for inferring causal structures and their underlying assumptions, it is relevant to consider how the inferred causal structure will be subsequently used for counterfactual reasoning.
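For reference, the condition at issue, in its usual form: a DAG \(G\) satisfies the causal minimality condition with respect to a distribution \(P\) just in case \(P\) is Markov to \(G\) but to no proper subgraph of \(G\) over the same variables. Given the Markov condition, this is standardly taken to be equivalent to requiring that every edge marks a genuine dependence:

\[
X \not\perp\!\!\!\perp Y \mid \mathrm{PA}(Y) \setminus \{X\} \qquad \text{for every edge } X \rightarrow Y \text{ in } G.
\]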
Comprehensive account of constructive theory of first-order predicate calculus. Covers formal methods including algorithms and epi-theory, brief treatment of Markov’s approach to algorithms, elementary facts about lattices and similar algebraic systems, more. Philosophical and reflective as well as mathematical. Graduate-level course. 1963 ed. Exercises.
We put together several observations on constructive negation. First, Russell anticipated intuitionistic logic by clearly distinguishing propositional principles implying the law of the excluded middle from remaining valid principles. He stated what was later called Peirce’s law. This is important in connection with the method used later by Heyting for developing his axiomatization of intuitionistic logic. Second, a work by Dragalin and his students provides easy embeddings of classical arithmetic and analysis into intuitionistic negationless systems. In the last section, we present in some detail a stepwise construction of negation which essentially concluded the formation of the logical base of the Russian constructivist school. Markov’s own proof of Markov’s principle (different from later proofs by Friedman and Dragalin) is described.
We present a survey of proof theory in the USSR beginning with the paper by Kolmogorov and ending (mostly) in 1969; the last two sections deal with work done by A. A. Markov and N. A. Shanin in the early seventies, providing a kind of effective interpretation of negative arithmetic formulas. The material is arranged in chronological order and subdivided according to topics of investigation. The exposition is more detailed when the work is little known in the West or the original presentation can be improved using notions or results which appeared later. This includes such topics as Novikov's cut-elimination method (regular formulas) and Maslov's inverse method for the predicate logic.
We provide a formally rigorous framework for integrating singular causation, as understood by Nuel Belnap's theory of causae causantes, and objective single case probabilities. The central notion is that of a causal probability space whose sample space consists of causal alternatives. Such a probability space is generally not isomorphic to a product space. We give a causally motivated statement of the Markov condition and an analysis of the concept of screening-off. Outline: 1. Causal dependencies and probabilities; 1.1 Background: causation in branching space-times; 1.2 What are probabilities defined for?; 2. Basic transitions; 2.1 Basics of basic transitions; 2.2 Sets of basic transitions; 3. Causal probability theory; 3.1 Some simple cases; 3.2 General causal probabilities; 3.3 Application: probability of suprema of a chain.
It has recently been suggested that philosophy – in particular epistemology – has a contribution to make to the analysis of criminal and military intelligence. The present article pursues this suggestion, taking three phenomena that have recently been studied by philosophers, and showing that they have important implications for the gathering and sharing of intelligence, and for the use of intelligence in the determining of military strategy. The phenomena discussed are: (1) Simpson's Paradox, (2) the distinction between resiliency and reliability of data, and (3) the Causal Markov Condition.
The central question of this paper is: are deterministic and indeterministic descriptions observationally equivalent in the sense that they give the same predictions? I tackle this question for measure-theoretic deterministic systems and stochastic processes, both of which are ubiquitous in science. I first show that for many measure-theoretic deterministic systems there is a stochastic process which is observationally equivalent to the deterministic system. Conversely, I show that for all stochastic processes there is a measure-theoretic deterministic system which is observationally equivalent to the stochastic process. Still, one might guess that the measure-theoretic deterministic systems which are observationally equivalent to stochastic processes used in science do not include any deterministic systems used in science. I argue that this is not so, because deterministic systems used in science even give rise to Bernoulli processes. Despite this, one might guess that measure-theoretic deterministic systems used in science cannot give the same predictions at every observation level as stochastic processes used in science. By proving results in ergodic theory, I show that this guess is also misguided: there are several deterministic systems used in science which give the same predictions at every observation level as Markov processes. All these results show that measure-theoretic deterministic systems and stochastic processes are observationally equivalent more often than one might perhaps expect. Furthermore, I criticise the claims about observational equivalence made in the previous philosophy papers by Suppes (1993, 1999), Suppes and de Barros (1996), and Winnie (1998).
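The abstract's strongest claim, that deterministic systems used in science give rise to Bernoulli processes, can be illustrated with the doubling map x -> 2x mod 1: observed through the two-cell partition [0, 1/2) and [1/2, 1), almost every trajectory produces a symbol sequence distributed like independent fair coin tosses. A sketch (exact rational arithmetic avoids the floating-point collapse that binary doubling would otherwise cause; the seed and denominator are arbitrary):

from fractions import Fraction
import random

# Doubling map T(x) = 2x mod 1, observed through the partition
# {[0, 1/2), [1/2, 1)}.  For a "generic" starting point the observed
# symbol sequence behaves like i.i.d. fair coin flips.
random.seed(0)
denom = 2 ** 2000 + 1                      # odd denominator: no collapse to 0
x = Fraction(random.randrange(1, denom), denom)

symbols = []
for _ in range(1000):
    symbols.append(0 if x < Fraction(1, 2) else 1)
    x = (2 * x) % 1

ones = sum(symbols)
pairs = sum(1 for a, b in zip(symbols, symbols[1:]) if (a, b) == (1, 1))
print(f"freq of 1: {ones / len(symbols):.3f}")                   # about 0.5
print(f"freq of '11' pairs: {pairs / (len(symbols) - 1):.3f}")   # about 0.25

At this level of coarse-graining the deterministic dynamics and the Bernoulli process are observationally equivalent, which is the paper's point.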
Much of the recent work on the epistemology of causation has centered on two assumptions, known as the Causal Markov Condition and the Causal Faithfulness Condition. Philosophical discussions of the latter condition have exhibited situations in which it is likely to fail. This paper studies the Causal Faithfulness Condition as a conjunction of weaker conditions. We show that some of the weaker conjuncts can be empirically tested, and hence do not have to be assumed a priori. Our results lead to two methodologically significant observations: (1) some common types of counterexamples to the Faithfulness condition constitute objections only to the empirically testable part of the condition; and (2) some common defenses of the Faithfulness condition do not provide justification or evidence for the testable parts of the condition. It is thus worthwhile to study the possibility of reliable causal inference under weaker Faithfulness conditions. As it turns out, the modification needed to make standard procedures work under a weaker version of the Faithfulness condition also has the practical effect of making them more robust when the standard Faithfulness condition actually holds. This, we argue, is related to the possibility of controlling error probabilities with finite sample size (“uniform consistency”) in causal inference.
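The decomposition at issue, in what I take to be the standard terminology of this literature, splits Faithfulness, the requirement that every conditional independence in \(P\) be entailed by the causal Markov condition applied to \(G\), into at least two weaker conjuncts: (i) Adjacency-Faithfulness, which says that if \(X\) and \(Y\) are adjacent in \(G\), then \(X \not\perp\!\!\!\perp Y \mid S\) for every \(S \subseteq V \setminus \{X, Y\}\); and (ii) Orientation-Faithfulness, which says that for every unshielded triple \(\langle X, Z, Y \rangle\), if it is a collider \(X \rightarrow Z \leftarrow Y\) then \(X \not\perp\!\!\!\perp Y \mid S\) for every conditioning set \(S\) containing \(Z\), and otherwise \(X \not\perp\!\!\!\perp Y \mid S\) for every \(S\) excluding \(Z\). The second conjunct is the empirically checkable part: given the adjacencies, a search procedure can test whether the independence pattern around each unshielded triple is consistent, rather than assume it.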
The common cause principle states that common causes produce correlations amongst their effects, but that common effects do not produce correlations amongst their causes. I claim that this principle, as explicated in terms of probabilistic relations, is false in classical statistical mechanics. Indeterminism in the form of stationary Markov processes rather than quantum mechanics is found to be a possible saviour of the principle. In addition I argue that if causation is to be explicated in terms of probabilities, then it should be done in terms of probabilistic relations which are invariant under changes of initial distributions. Such relations can also give rise to an asymmetric cause-effect relationship which always runs forwards in time.
A main message from the causal modelling literature of the last several decades is that under some plausible assumptions, there can be statistically consistent procedures for inferring (features of) the causal structure of a set of random variables from observational data. But whether we can control the error probabilities with a finite sample size depends on the kind of consistency the procedures can achieve. It has been shown that in general, under the standard causal Markov and Faithfulness assumptions, the procedures can only be pointwise but not uniformly consistent without substantial background knowledge. This implies the impossibility of choosing a finite sample size to control the worst-case error probabilities. In this paper, I consider the simpler task of inferring causal directions when the skeleton of the causal structure is known, and establish a similarly negative result concerning the possibility of controlling error probabilities. Although the result is negative in form, it has an interesting positive implication for causal discovery methods.
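To see informally why pointwise consistency falls short of worst-case error control, consider this back-of-the-envelope sketch (mine, using the standard Fisher-z sample-size approximation rather than anything from the paper): the sample size needed to detect a dependence at fixed error rates grows without bound as the dependence shrinks, so no single finite n bounds the error over all models at once.

```python
# Hedged numeric illustration: detecting a correlation of size rho at fixed
# size and power requires roughly n ~ ((z_alpha + z_beta) / atanh(rho))^2 + 3
# samples (Fisher-z approximation). As rho -> 0 the required n diverges,
# which is why pointwise consistency does not give uniform error control.
import math

z_alpha, z_beta = 1.96, 0.84        # 5% size, 80% power
for rho in (0.3, 0.1, 0.03, 0.01, 0.003):
    n = ((z_alpha + z_beta) / math.atanh(rho)) ** 2 + 3
    print(f"rho = {rho:<6} -> n ≈ {int(n):>9}")
```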
The idea that the changing entropy of a system is relevant to explaining why we know more about the system's past than about its future has been criticized on several fronts. This paper assesses the criticisms and clarifies the epistemology of the inference problem. It deploys a Markov process model to investigate the relationship between entropy and temporally asymmetric inference.
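A toy Markov-process model in this spirit (the Ehrenfest urn; the example and parameters are my choices, not necessarily the paper's) makes the puzzle vivid: for the bare stationary dynamics, a far-from-equilibrium state licenses the same inference toward the past as toward the future, so something beyond the dynamics must underwrite our greater knowledge of the past.

```python
# The Ehrenfest urn: N balls in two urns; each step, a uniformly chosen ball
# switches urns. The chain is stationary and reversible, so conditional on a
# low-entropy (far-from-equilibrium) state, the chain was typically closer
# to equilibrium both BEFORE and AFTER: retrodiction mirrors prediction.
import random

random.seed(2)
N, steps = 50, 2_000_000
state, path = N // 2, []
for _ in range(steps):
    path.append(state)
    # the chosen ball leaves whichever urn it currently occupies
    state += -1 if random.random() < state / N else 1

lag = 25
lows = [t for t in range(lag, steps - lag) if path[t] <= N // 4]
before = sum(abs(path[t - lag] - N / 2) for t in lows) / len(lows)
after  = sum(abs(path[t + lag] - N / 2) for t in lows) / len(lows)
print(f"conditioning deviation is >= {N / 4:.1f}")
print(f"mean |deviation| {lag} steps before: {before:.2f}")
print(f"mean |deviation| {lag} steps after:  {after:.2f}")
# Both averages come out well below N/4 and roughly equal to each other.
```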
In order to make scientific results relevant to practical decision making, it is often necessary to transfer a result obtained in one set of circumstances—an animal model, a computer simulation, an economic experiment—to another that may differ in relevant respects—for example, to humans, the global climate, or an auction. Such inferences, which we can call extrapolations, are a type of argument by analogy. This essay sketches a new approach to analogical inference that utilizes chain graphs, which resemble directed acyclic graphs (DAGs) except in allowing that nodes may be connected by lines as well as arrows. This chain graph approach generalizes the account of extrapolation I provided in my (2008) book and leads to new insights that integrate the contributions of the other participants of this symposium. More specifically, this approach explicates the role of “fingerprints,” or distinctive markers, as a strategy for avoiding an underdetermination problem having to do with spurious analogies. Moreover, it shows how the extrapolator’s circle, one of the central challenges for extrapolation highlighted in my book, is closely tied to distinctive markers and the Markov condition as it applies to chain graphs. Finally, the approach suggests additional ways in which investigations of a model can provide information about a target, illustrated here by examples concerning nanomaterials in sunscreens and Wendy Parker’s discussion of fingerprints in climate science.
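Since the essay's machinery may be unfamiliar, here is a minimal data-structure sketch (the toy graph and node names are mine) of what distinguishes a chain graph from a DAG: edges come in two kinds, and the line-only part partitions the nodes into chain components, the blocks to which the chain-graph Markov condition applies recursively.

```python
# A chain graph has arrows (directed edges) and lines (undirected edges).
# Its "chain components" are the connected components of the line-only
# subgraph; the Markov condition applies block-recursively to these.
from collections import defaultdict

arrows = {("model", "marker"), ("target", "marker")}   # directed: cause -> effect
lines = {("model", "target")}                          # undirected: association

# Chain components: connected components under lines alone.
adj = defaultdict(set)
for u, v in lines:
    adj[u].add(v); adj[v].add(u)

nodes = {x for edge in arrows | lines for x in edge}
seen, components = set(), []
for start in nodes:
    if start in seen:
        continue
    comp, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u in comp:
            continue
        comp.add(u); seen.add(u)
        stack.extend(adj[u] - comp)
    components.append(comp)

print("chain components:", components)
# -> [{'model', 'target'}, {'marker'}]  (ordering may vary)
```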
Models that fail to satisfy the Markov condition are unstable because changes in state variable values may cause changes in the values of background variables, and these changes in background lead to predictive error. Such error arises because non-Markovian models fail to track the causal relations generating the values of response variables. This has implications for discussions of the level of selection: under certain plausible conditions, most standard models of group selection will not satisfy the Markov condition when fit to data from real populations. These models neither correctly represent the causal structure generating the phenomena of interest nor correctly explain them.
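A hedged numerical sketch of the instability claim (my construction, not one of the paper's group-selection models): fit a response on a state variable while omitting a causal background variable, then change how the background tracks the state, and watch the predictive error grow.

```python
# A model that omits a causal background variable B fits well while the
# X-B coupling is stable, but its predictions degrade once that coupling
# changes, because it never tracked Y's actual causal parents (X and B).
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

def population(coupling):
    X = rng.normal(size=n)
    B = coupling * X + rng.normal(size=n)    # background tracks X (or not)
    Y = X + 2.0 * B + rng.normal(size=n)     # Y's causal parents: X and B
    return X, Y

# Fit Y ~ X alone on data where B is coupled to X...
X0, Y0 = population(coupling=1.0)
beta = (X0 @ Y0) / (X0 @ X0)                 # OLS slope, ~ 3.0

# ...then predict in a population where the coupling has changed.
X1, Y1 = population(coupling=0.0)
mse = np.mean((Y1 - beta * X1) ** 2)
print(f"fitted slope = {beta:.2f}, out-of-population MSE = {mse:.2f}")
# An oracle regression on the true parents (X and B) would have MSE ~ 1;
# the misspecified model's MSE comes out near 9.
```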
How ought we to learn causal relationships? While Popper advocated a hypothetico-deductive logic of causal discovery, inductive accounts are currently in vogue. Many inductive approaches depend on the causal Markov condition as a fundamental assumption. This condition, I maintain, is not universally valid, though it is justifiable as a default assumption. In that case, the results of the inductive causal learning procedure must be tested before they can be accepted. This yields a synthesis of the hypothetico-deductive and inductive accounts, which forms the focus of this paper. I discuss the justification of this synthesis and draw an analogy between objective Bayesianism and the account of causal learning presented here.
The comparison between biological and social macroevolution is a very important (though insufficiently studied) subject whose analysis opens significant new possibilities for comprehending the processes, trends, mechanisms, and peculiarities of each of the two types of macroevolution. Of course, there are a few rather important (and quite understandable) differences between them; however, it appears possible to identify a number of fundamental similarities. One may single out at least three fundamental sets of factors determining those similarities. First of all, the similarities stem from the fact that in both cases we are dealing with very complex non-equilibrium (but rather stable) systems whose principles of functioning and evolution are described by General Systems Theory, as well as by a number of cybernetic principles and laws.

Secondly, in neither case are we dealing with isolated systems; in both cases we deal with a complex interaction between systems and their external environment, and the reaction of systems to external challenges can be described in terms of certain general principles (which, however, express themselves rather differently within biological reality, on the one hand, and within social reality, on the other).

Thirdly, it is necessary to mention a direct ‘genetic’ link between the two types of macroevolution and their mutual influence.

It is important to emphasize that the very similarity of the principles and regularities of the two types of macroevolution does not imply their identity. Rather significant similarities are frequently accompanied by enormous differences. For example, the genomes of chimpanzees and humans are very similar, differing by just a few per cent; yet enormous intellectual and social differences between chimpanzees and humans lie hidden behind this apparently ‘insignificant’ genomic difference. Thus, in certain respects it appears reasonable to consider biological and social macroevolution as a single macroevolutionary process. This implies the need to comprehend the general laws and regularities that describe this process, though their manifestations may vary significantly depending on the properties of the concrete evolving entity (biological or social). An important notion that may help operationalize the comparison between the two types of macroevolution is one we suggested some time ago: social aromorphosis, developed as a counterpart to the notion of biological aromorphosis, which is well established within Russian evolutionary biology. We regard social aromorphosis as a rare qualitative macrochange that significantly increases the complexity, adaptability, and mutual influence of social systems, and that opens new possibilities for social macrodevelopment. In our paper we discuss a number of regularities that describe biological and social macroevolution and that employ the notions of social and biological aromorphosis, such as module evolution (or the evolutionary ‘block assemblage’), ‘payment for arogenic progress’, etc.
nature of modern data collection and storage techniques, and the increases in the speed and storage capacities of computers. Statistics books from 30 years ago often presented examples with fewer than 10 variables, in domains where some background knowledge was plausible. In contrast, in new domains, such as climate research (where satellite data now provide daily quantities of data unthinkable a few decades ago), fMRI brain imaging, and microarray measurements of gene expression, the number of variables can range into the tens of thousands, and there is often limited background knowledge to reduce the space of alternative causal hypotheses. In such domains, non-automated causal discovery techniques appear to be hopeless, while the availability of faster computers with larger memories and disc space allows for the practical implementation of computationally intensive automated search algorithms over large search spaces. Contemporary science is not your grandfather’s science, or Karl Popper’s. Causal inference without experimental controls has long seemed as if it must somehow be capable of being cast as a kind of statistical inference involving estimators with some kind of convergence and accuracy properties under some kind of assumptions. Until recently, the statistical literature said otherwise. While parameter estimation and experimental design for the effective use of data developed throughout the 20th century, as recently as 20 years ago the methodology of causal inference without experimental controls remained relatively primitive. Besides a cessation of hostilities from the majority of the statistical and philosophical communities (which has still only partially happened), several things were needed for theories of causal estimation to appear and to flower: well-defined mathematical objects to represent causal relations; well-defined connections between aspects of these objects and sample data; and a way to compute those connections. A sequence of studies beginning with Dempster’s work on the factorization of probability distributions [Dempster 1972] and culminating with Kiiveri and Speed’s study of linear structural equation models [Kiiveri & Speed 1982] provided the first, in the form of directed acyclic graphs, and the second, in the form of the “local” Markov condition.
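The two ingredients named here can be stated compactly. For a DAG, the local Markov condition says each variable is independent of its non-descendants given its parents; equivalently, the joint distribution factorizes into per-node conditionals. The following sketch (my toy chain X → Y → Z, not an example from the text) checks the factorization P(x, y, z) = P(x) P(y|x) P(z|y) numerically.

```python
# Numerical check of the DAG factorization implied by the local Markov
# condition for the chain X -> Y -> Z:
#     P(x, y, z) = P(x) * P(y | x) * P(z | y)
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
n = 200_000
X = rng.integers(0, 2, n)
Y = (X ^ (rng.random(n) < 0.25)).astype(int)   # noisy copy of X
Z = (Y ^ (rng.random(n) < 0.25)).astype(int)   # noisy copy of Y

joint = Counter(zip(X, Y, Z))
for x, y, z in sorted(joint):
    lhs = joint[(x, y, z)] / n                  # empirical joint
    rhs = (np.mean(X == x)                      # P(x)
           * np.mean(Y[X == x] == y)            # P(y | x)
           * np.mean(Z[Y == y] == z))           # P(z | y)
    print(f"P({x},{y},{z}) = {lhs:.4f}  vs  factorized {rhs:.4f}")
```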