This paper presents an attempt to integrate theories of causal processes—of the kind developed by Wesley Salmon and Phil Dowe—into a theory of causal models using Bayesian networks. We suggest that arcs in causal models must correspond to possible causal processes. Moreover, we suggest that when processes are rendered physically impossible by what occurs on distinct paths, the original model must be restricted by removing the relevant arc. These two techniques suffice to explain cases of late preëmption and other cases that have proved problematic for causal models.
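To make the arc-removal technique concrete, here is a minimal sketch, assuming a toy representation of a model as a set of arcs and using the standard Suzy/Billy late-preemption case; the names and representation are ours for illustration, not the paper's.

```python
# Toy illustration of restricting a causal model when a process is
# rendered physically impossible (late preemption: Suzy's rock hits
# the bottle first, so Billy's rock-to-bottle process never completes).

arcs = {
    ("suzy_throws", "bottle_shatters"),
    ("billy_throws", "bottle_shatters"),
}

def restrict(arcs, impossible_arc):
    """Remove an arc whose underlying causal process is precluded
    by what happens on a distinct path."""
    return arcs - {impossible_arc}

# Suzy's hit precludes Billy's rock from ever reaching an intact bottle,
# so the corresponding arc is removed from the model.
restricted = restrict(arcs, ("billy_throws", "bottle_shatters"))
print(restricted)  # {("suzy_throws", "bottle_shatters")}
```

In the restricted model, only Suzy's throw remains connected to the shattering, matching the intuitive verdict about the actual cause.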
We present a probabilistic extension to active path analyses of token causation (Halpern & Pearl 2001, forthcoming; Hitchcock 2001). The extension uses the generalized notion of intervention presented in (Korb et al. 2004): we allow an intervention to set any probability distribution over the intervention variables, not just a single value. The resulting account can handle a wide range of examples. We do not claim the account is complete --- only that it fills an obvious gap in previous active-path approaches. It still succumbs to recent counterexamples by Hiddleston (2005), because it does not explicitly consider causal processes. We claim three benefits: a detailed comparison of three active-path approaches, a probabilistic extension for each, and an algorithmic formulation.
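As a rough illustration of the generalized intervention (a sketch under our own assumptions, not the authors' implementation): the intervention replaces a variable's own mechanism with an arbitrary distribution, rather than clamping it to a single value, while leaving downstream mechanisms untouched.

```python
import random

# Toy model X -> Y; all numbers and names are ours, for illustration.
P_Y_GIVEN_X = {True: 0.9, False: 0.2}

def sample_observational(p_x=0.5):
    x = random.random() < p_x
    y = random.random() < P_Y_GIVEN_X[x]
    return x, y

def sample_intervened(p_x_new):
    # A generalized intervention imposes an arbitrary distribution on X
    # (here P(X = true) = p_x_new) instead of a single clamped value;
    # Y's mechanism is left as it was.
    x = random.random() < p_x_new
    y = random.random() < P_Y_GIVEN_X[x]
    return x, y

samples = [sample_intervened(0.3) for _ in range(100_000)]
print(sum(y for _, y in samples) / len(samples))
# approx 0.3 * 0.9 + 0.7 * 0.2 = 0.41
```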
Carter and Leslie's Doomsday Argument maintains that reflection upon the number of humans born thus far, when that number is viewed as having been uniformly randomly selected from amongst all humans, past, present and future, leads to a dramatic rise in the probability of an early end to the human experiment. We examine the Bayesian structure of the Argument and find that the drama is largely due to its oversimplification.
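The Bayesian core of the Argument can be made explicit with a small worked example (all numbers invented for illustration): under uniform random selection, a birth rank r has likelihood 1/N given N total humans, so hypotheses with smaller N gain sharply on conditionalization.

```python
# Bayesian core of the Doomsday Argument; numbers are illustrative only.
# Hypotheses: the total number of humans ever born.
N_SOON, N_LATE = 2e11, 2e14          # "doom soon" vs "doom late"
prior = {N_SOON: 0.5, N_LATE: 0.5}
r = 1e11                             # your birth rank (~100 billionth human)

# Uniform self-sampling: P(rank = r | N humans in total) = 1/N for r <= N.
likelihood = {N: (1.0 / N if r <= N else 0.0) for N in prior}

evidence = sum(prior[N] * likelihood[N] for N in prior)
posterior = {N: prior[N] * likelihood[N] / evidence for N in prior}
print(posterior[N_SOON])             # ~0.999: the dramatic shift toward early doom
```

The "drama" is visible in the output: a 50/50 prior becomes near-certainty of early doom, driven entirely by the uniform-selection assumption the paper scrutinizes.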
I consider three respects in which machine learning and philosophy of science can illuminate each other: methodology, inductive simplicity and theoretical terms. I examine the relations between the two subjects and conclude that they are very close.
The Lottery Paradox has been thought to provide a reductio argument against probabilistic accounts of inductive inference. As a result, much work in artificial intelligence has concentrated on qualitative methods of inference, including default logics, which are intended to model some varieties of inductive inference. It has recently been shown that the paradox can be generated within qualitative default logics. However, John Pollock's qualitative system of defeasible inference does avoid the Lottery Paradox by incorporating a rule designed specifically for that purpose. I shall argue that Pollock's system instead succumbs to a worse disease: it fails to allow for induction at all.
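For readers unfamiliar with the paradox, a minimal numerical sketch (the threshold and lottery size are illustrative, not from the paper):

```python
# The Lottery Paradox in numbers: a fair lottery with exactly one
# winning ticket and a probabilistic acceptance threshold of 0.99.
n_tickets = 1000
threshold = 0.99

p_i_loses = 1 - 1 / n_tickets        # 0.999 for each individual ticket
print(p_i_loses > threshold)         # True: accept "ticket i loses", for every i

# Accepting all n claims commits us to "no ticket wins", yet the
# lottery guarantees a winner, so that conjunction has probability 0.
p_all_lose = 0.0                     # exactly one ticket wins, by stipulation
print(p_all_lose > threshold)        # False: the conjunction itself is rejected
```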
The common opinion has been that evolution results in the continuing development of more complex forms of life, generally understood as more complex organisms. The arguments supporting that opinion have recently come under scrutiny and been found wanting. Nevertheless, the appearance of increasing complexity remains. So, is there some sense in which evolution does grow complexity? Artificial life simulations have consistently failed to reproduce even the appearance of increasing complexity, which poses a challenge. Simulations, as much as scientific theories, are obligated at least to save the appearances! We suggest a relation between these two problems, understanding biological complexity growth and the failure to model even its appearances. We present a different understanding of that complexity which evolution grows, one that genuinely runs counter to entropy and has thus far eluded proper analysis in information-theoretic terms. This complexity is reflected best in the increase in niches within the biosystem as a whole. Past and current artificial life simulations lack the resources with which to grow niches and so to reproduce evolution’s complexity. We propose a more suitable simulation design integrating environments and organisms, allowing old niches to change and new ones to emerge.
This book describes the application of Artificial Life simulation to evolutionary scenarios of wide ethical interest, including the evolution of altruism, rape and abortion, providing a new meaning to “experimental philosophy”. The authors also apply evolutionary ALife techniques to explore contentious issues within evolutionary theory itself, such as the evolution of aging. They justify these uses of simulation in science and philosophy, both in general and in their specific applications here.

Evolving Ethics will be of interest to researchers, enthusiasts, students and interested lay readers in the fields of Artificial Life, philosophy of science, ethics, agent- and individual-based modeling in ecology and the social sciences, computer simulation, evolutionary biology and evolutionary psychology.

Dr Steven Mascaro is a researcher in computer simulation and Artificial Life. Dr Kevin Korb is a Reader in the Clayton School of Information Technology, Monash University. Dr Ann Nicholson is an Associate Professor in the Clayton School of Information Technology, Monash University. Owen Woodberry is a researcher in the Clayton School of Information Technology, Monash University.
The investigation of probabilistic causality has been plagued by a variety of misconceptions and misunderstandings. One has been the thought that the aim of the probabilistic account of causality is the reduction of causal claims to probabilistic claims. Nancy Cartwright (1979) has clearly rebutted that idea. Another ill-conceived idea continues to haunt the debate, namely the idea that contextual unanimity can do the work of objective homogeneity. It cannot. We argue that only objective homogeneity in combination with a causal interpretation of Bayesian networks can provide the desired criterion of probabilistic causality.
We further develop the mathematical theory of causal interventions, extending earlier results of Korb, Twardy, Handfield and Oppy (2005) and Spirtes, Glymour and Scheines (2000). Some of the skepticism surrounding causal discovery has concerned the fact that using only observational data can radically underdetermine the best explanatory causal model, with the true causal model appearing inferior to a simpler, faithful model (cf. Cartwright 2001). Our results show that experimental data, together with some plausible assumptions, can reduce the space of viable explanatory causal models to one.
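A simple simulation illustrates the underlying point, though it is our own sketch rather than the paper's formal results: the models X -> Y and Y -> X are observationally indistinguishable, but an intervention that randomizes X separates them.

```python
import random

random.seed(0)

def sample(true_model, do_x=None):
    """One joint sample; do_x, if given, sets X by external randomization."""
    if true_model == "X->Y":
        x = do_x if do_x is not None else random.gauss(0, 1)
        y = 2 * x + random.gauss(0, 1)
    else:  # true model Y -> X
        y = random.gauss(0, 1)
        x = do_x if do_x is not None else 2 * y + random.gauss(0, 1)
    return x, y

def corr(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

obs = {m: corr([sample(m) for _ in range(5000)]) for m in ("X->Y", "Y->X")}
exp = {m: corr([sample(m, do_x=random.gauss(0, 1)) for _ in range(5000)])
       for m in ("X->Y", "Y->X")}
print(obs)  # both models show strong X-Y correlation: observation can't separate them
print(exp)  # X->Y keeps the correlation; Y->X loses it under the intervention
```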
I analyze the frame problem and its relation to other epistemological problems for artificial intelligence, such as the problem of induction, the qualification problem and the "general" AI problem. I dispute the claim that extensions to logic (default logic and circumscription) will ever offer a viable way out of the problem. In the discussion it will become clear that the original frame problem is really a fairy tale: as originally presented, and as tools for its solution are circumscribed by Pat Hayes, the problem is entertaining, but incapable of resolution. The solution to the frame problem becomes available, and even apparent, when we remove artificial restrictions on its treatment and understand the interrelation between the frame problem and the many other problems of artificial epistemology. I present the solution to the frame problem: an adequate theory and method for the machine induction of causal structure. While this solution is clearly satisfactory in principle, and in practice real progress has been made in recent years in its application, its ultimate implementation is in prospect only for future generations of AI researchers.
This is an exercise in the metaphysics of causation, an essay loosely in the empiricist tradition that defends a full-blooded realism for both material and abstract objects. Fales begins, as have most empiricists, with introspectively accessible phenomena. Although he claims to take doubts about the "given" seriously, the upshot appears to be that we must accommodate the fact that the philosophically misled have had such doubts: the given has foundational status, and so is known, but that fact may itself be obscured by the complexity of the phenomenal world and the difficulty of the philosophical enterprise. As Fales says, "foundationalism is by no means dead".
Bayesian networks are computer programs which represent probabilistic relationships graphically, as directed acyclic graphs, and which can use those graphs to reason probabilistically, often at relatively low computational cost. Almost every expert system in the past tried to support probabilistic reasoning, but because of the computational difficulties they took approximating short-cuts, such as those afforded by MYCIN's certainty factors. That all changed with the publication of Judea Pearl's Probabilistic Reasoning in Intelligent Systems in 1988, which synthesized a decade of research making accurate graphical probabilistic reasoning computationally achievable.

Bayesian network technology is now one of the fastest growing fields of research in artificial intelligence; that it has become a publication industry in its own right is shown by a search on Google Scholar. This development, together with a parallel growth in the use of causal discovery algorithms, which automate the learning of Bayesian networks from sample data, has generated considerable interest, and controversy, within the philosophy-of-science community.

Three central questions bringing together AI researchers and philosophers of science are: Are Bayesian networks Bayesian? What is the relation between probability and causality? Are the assumptions behind causal discovery of Bayesian networks realistic or fantastical?

Jon Williamson, as a philosopher of science with a keen interest in the technology, asks and answers these questions in his new book. Although it is self-contained, his book is not well suited as an introduction to the technology, nor is it optimal even as an introduction to the philosophical problems in interpreting Bayesian networks. Rather, Williamson's book is an attempt to move the debate forward by solving the central problems of the …
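For orientation, here is a minimal Bayesian network and inference by enumeration in Python; the structure and numbers are invented for illustration and are not from the book.

```python
from itertools import product

# Minimal Bayesian network (invented numbers):
# Rain -> Sprinkler, Rain -> WetGrass, Sprinkler -> WetGrass.
def p_rain(r):
    return 0.2 if r else 0.8

def p_sprinkler(s, r):
    p = 0.01 if r else 0.4            # sprinkler rarely runs when it rains
    return p if s else 1 - p

def p_wet(w, s, r):
    p = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.05}[(s, r)]
    return p if w else 1 - p

def joint(r, s, w):
    # Chain rule along the DAG: P(r, s, w) = P(r) * P(s|r) * P(w|s, r).
    return p_rain(r) * p_sprinkler(s, r) * p_wet(w, s, r)

# Inference by enumeration: P(Rain | WetGrass = true).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)  # posterior probability of rain given wet grass, ~0.34
```

Enumeration is exponential in the number of variables; the "relatively low computational cost" noted above comes from exploiting the graph structure, as in Pearl's propagation algorithms.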
This work provides a conceptual foundation for a Bayesian approach to artificial inference and learning. I argue that Bayesian confirmation theory provides a general normative theory of inductive learning and therefore should have a role in any artificially intelligent system that is to learn inductively about its world. I modify the usual Bayesian theory in three ways directly pertinent to an eventual research program in artificial intelligence. First, I construe Bayesian inference rules as defeasible, allowing them to be overridden in certain contexts and therefore allowing them to play a part in a hybrid system, coexisting with non-Bayesian modes of inference. Second, I take seriously the need to find meaningful prior probabilities for hypotheses, and elaborate means of supplying an artificial intelligence with them. Third, I address the computational complexity of Bayesian inference by reference to simplifications using causal networks and by allowing the probabilistic acceptance of hypotheses, and subsequent qualitative inference, to supplement Bayesian reasoning. The result is a Pragmatic Bayesian model of induction.
Rolls's presentation of emotion as integral to cognition is a welcome counter to a long tradition of treating the two as antagonists. His eduction of experimental evidence in support of this view is impressive. However, we find his excursion into the philosophy of consciousness less successful. Rolls gives syntactical manipulation the central role in consciousness (in stark contrast to Searle, for whom “mere” syntax inevitably falls short of consciousness), and leaves us wondering what roles are left for emotion after all.