We present a theoretical account of implicit and explicit learning in terms of ACT-R, an integrated architecture of human cognition, as a computational supplement to Dienes & Perner's conceptual analysis of knowledge. Explicit learning is explained in ACT-R by the acquisition of new symbolic knowledge, whereas implicit learning amounts to statistically adjusting subsymbolic quantities associated with that knowledge. We discuss the common foundation of a set of models that are able to explain data gathered in several signature paradigms of implicit learning.
A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing artificial morality and the differing criteria for success that are appropriate to different strategies.
Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation.
We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the "ought" of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling a system from the bottom up that is capable of accommodating moral considerations draws attention to the importance of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties in order to function satisfactorily in responding to morally significant situations. But working through methods for building AMAs will have a profound effect in deepening an appreciation for the many mechanisms that contribute to moral acumen, and the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.
The implementation of moral decision-making abilities in artificial intelligence (AI) is a natural and necessary extension to the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of systems that aim at goals or standards which may or may not be specified in explicitly theoretical terms. In this paper we wish to provide some direction for continued research by outlining the value and limitations inherent in each of these approaches.
"An invaluable guide to avoiding the stuff of science-fiction nightmares." --John Gilby, Times Higher Education

"Moral Machines is a fine introduction to the emerging field of robot ethics. There is much here that will interest ethicists, philosophers, cognitive scientists, and roboticists." --Peter Danielson, Notre Dame Philosophical Reviews

"Written with an abundance of examples and lessons learned, scenarios of incidents that may happen, and elaborate discussions on existing artificial agents on the cutting edge of research/practice, Moral Machines goes beyond what is known as computer ethics into what will soon be called the discipline of machine morality. Highly recommended." --G. Trajkovski, CHOICE

"...the book does succeed in making the essential point that the phrase 'moral machine' is not an oxymoron. It also provides a window onto an area of research with which psychologists are unlikely to be familiar and one from which, at some point, we may be able to learn quite a lot." --PsycCRITIQUES

"Moral Machines represents a valuable addition to, and extension of, the current literature on machine morality. As the development of autonomous artificial moral agents becomes closer to being realized, I suspect that this book will only gain in importance." --Metapsychology
Review by Andrea Staiti (Department of Philosophy, Boston College, Chestnut Hill, MA, USA) of Dieter Lohmar, Phänomenologie der schwachen Phantasie: Untersuchungen der Psychologie, Cognitive Science, Neurologie und Phänomenologie zur Funktion der Phantasie in der Wahrnehmung. Husserl Studies 26(2). DOI: 10.1007/s10743-010-9069-3.