Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
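One way to picture the abstract's claim that moral and general decision making share a single mechanism is an action-selection step in which affective valence is simply another input to the choice. The sketch below is illustrative only, under our own naming (Option, select_action, affect_weight); it is not drawn from the actual LIDA codebase.

```python
from dataclasses import dataclass

# Illustrative sketch only: all names are hypothetical, not the real LIDA API.

@dataclass
class Option:
    """A candidate action carrying both a deliberative score and a feeling."""
    name: str
    expected_utility: float   # top-down, rational assessment
    affective_valence: float  # bottom-up feeling attached to the option

def select_action(options, affect_weight=0.5):
    """Pick the next action; reasons and feelings feed one mechanism.

    Moral considerations enter the same way any consideration does:
    as valenced content competing for influence over selection.
    """
    def activation(opt):
        return ((1 - affect_weight) * opt.expected_utility
                + affect_weight * opt.affective_valence)
    return max(options, key=activation)

if __name__ == "__main__":
    options = [
        Option("report the error", expected_utility=0.4, affective_valence=0.9),
        Option("stay silent",      expected_utility=0.7, affective_valence=-0.6),
    ]
    print(select_action(options).name)
```

On this reading, an "ethical" decision needs no dedicated module: changing what gets felt and how strongly, rather than swapping in a different selection mechanism, is what makes the outcome morally sensitive.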
Baars (1988, 1997) has proposed a psychological theory of consciousness, called global workspace theory. The present study describes a software agent implementation of that theory, called "Conscious" Mattie (CMattie). CMattie operates in a clerical domain from within a UNIX operating system, sending and interpreting natural-language messages in order to organize seminars at a university. CMattie fleshes out global workspace theory with a detailed computational model that integrates contemporary architectures in cognitive science and artificial intelligence. In the appendix to In the Theater of Consciousness, Baars (1997) lists the psychological "facts that any complete theory of consciousness must explain"; global workspace theory was designed to explain these "facts." The present article discusses how the design of CMattie accounts for these facts, and thereby the extent to which it implements global workspace theory.
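The core computational move in global workspace theory, in which specialized processes compete for a limited-capacity workspace and the winner's content is broadcast to every process, can be caricatured in a few lines. This is a toy illustration under our own naming (Codelet, workspace_cycle), not CMattie's actual design.

```python
# Toy global-workspace broadcast; names are ours, not CMattie's.

class Codelet:
    """A small special-purpose process bidding for 'conscious' access."""
    def __init__(self, name, content, activation):
        self.name, self.content, self.activation = name, content, activation

    def receive_broadcast(self, content):
        # Each process may recruit itself in response to the broadcast.
        print(f"{self.name} received: {content}")

def workspace_cycle(codelets):
    """One cycle: the most active codelet wins the workspace,
    and its content is broadcast globally to all codelets."""
    winner = max(codelets, key=lambda c: c.activation)
    for codelet in codelets:
        codelet.receive_broadcast(winner.content)
    return winner

codelets = [
    Codelet("seminar-organizer", "speaker confirmed for Friday", 0.8),
    Codelet("email-parser", "new message in inbox", 0.5),
]
workspace_cycle(codelets)
```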
Top-down dynamical models of cognitive processes, such as the one presented by Thelen et al., are important pieces in understanding the development of cognitive abilities in humans and other biological organisms. Unlike standard symbolic computational approaches to cognition, such dynamical models offer the hope that they can be connected with more bottom-up, neurologically inspired dynamical models to provide a complete view of cognition at all levels. We raise some questions about the details of their simulation and about potential limitations of top-down dynamical models.
In the target article, Baars has offered both a theory of consciousness and a strategy for scientifically testing the theory. This commentary is intended as an addendum. I'd like to suggest implementing global workspace agents both as an additional strategy for scientific testing and as a means of fleshing out the theory.
After discussing various types of consciousness, several approaches to machine consciousness, software agents, and global workspace theory, we describe a software agent, IDA, that is 'conscious' in the sense of implementing that theory of consciousness. IDA perceives, remembers, deliberates, negotiates, and selects actions, sometimes 'consciously'. She uses a variety of mechanisms, each of which is briefly described. It's tempting to think of her as a conscious artifact. Is such a view in any way justified? The remainder of the paper considers this question.
The importance of the Stability Problem in neurocomputing is discussed, as well as the need for the study of infinite networks. Stability must be the key ingredient in the solution of a problem by a neural network without external intervention. Infinite discrete networks seem to be the proper objects of study for a theory of neural computability which aims at characterizing problems solvable, in principle, by a neural network. Precise definitions of such problems and their solutions are given. Some consequences are explored, in particular the neural unsolvability of the Stability Problem for neural networks.
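The unsolvability claim has the flavor of a classical undecidability result. A minimal formalization, on our own reading and not necessarily the paper's exact definitions, might run as follows.

```latex
% A discrete network viewed as a dynamical system (our reconstruction,
% not necessarily the paper's exact definitions).
\[
  F : S \to S, \qquad s_{t+1} = F(s_t),
\]
where $S$ is the (possibly infinite) set of global states of the network.
A computation from $s_0$ \emph{stabilizes} if its trajectory reaches a
fixed point:
\[
  \exists\, t \;\; F(s_t) = s_t .
\]
The \emph{Stability Problem} asks, given $(F, s_0)$, whether the
trajectory from $s_0$ stabilizes. If infinite discrete networks can
simulate Turing machines, with stabilization playing the role of
halting, then no neural network can decide this problem in general.
```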
Cognition, writ broadly to include motivation and emotion, is best conceived of as a control structure for autonomous agents. Autonomous agents are situated in an environment. They both sense and act on that environment, over time, so as to effect subsequent sensing. Examples of such agents include humans, animals, some mobile robots, some artificial life creatures (who "live" in a simulated environment on a computer), and some software agents (who "live" in a file system, a database, or on a network). Their actions are in pursuit of their own agendas, as designed in by their maker or programmer, or as evolved and shaped by culture. Each such agent employs some control mechanism whose continual duty is to select the next action. The term "cognition," in its broad sense, refers to the workings of such control mechanisms.
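This definition invites a one-interface reading: whatever the agent is, human, animal, robot, or software agent, cognition is the loop that turns sensing into the next action, which in turn shapes later sensing. A minimal sketch, with all names (Environment, Agent, Thermostat, Room) our own:

```python
from abc import ABC, abstractmethod

# Illustrative only: every name here is ours, not from the paper.

class Environment(ABC):
    """Anything that can be sensed and acted on: a physical room,
    a file system, a database, a network, or a simulated world."""
    @abstractmethod
    def sense(self): ...
    @abstractmethod
    def act(self, action): ...

class Agent:
    """Cognition, broadly construed, as a control structure whose
    continual duty is to select the next action."""
    def select_action(self, percept):
        raise NotImplementedError

    def run(self, env: Environment, steps: int) -> None:
        for _ in range(steps):
            percept = env.sense()
            action = self.select_action(percept)
            env.act(action)  # acting changes what is sensed next

class Thermostat(Agent):
    """A deliberately trivial 'agent', just to show the loop closing."""
    def select_action(self, temp):
        return "heat" if temp < 20 else "idle"

class Room(Environment):
    def __init__(self): self.temp = 18.0
    def sense(self): return self.temp
    def act(self, action):
        self.temp += 0.5 if action == "heat" else -0.1

room = Room()
Thermostat().run(room, steps=10)
print(f"final temperature: {room.temp:.1f}")
```

On this view, the differences among agent types lie in the richness of select_action and of the environment, not in the shape of the loop itself.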
This commentary first connects some of Glenberg's ideas to similar ideas from artificial intelligence. Second, it briefly discusses hidden assumptions relating to meaning, representations, and projectable properties. Finally, questions about mechanisms, mental imagery, and conceptualization in animals are posed.
Robots, as well as software agents, can be of use in biology as implementations of a theory rather than as simulations of specific real-world target systems. Such implementations generate hypotheses rather than representing them. Their behavior is not predicted, but rather observed, and is not expected to duplicate that of a target system. Scientific knowledge is gained through the testing of generated hypotheses.
In his article on The Liabilities of Mobility, Merker asserts that "Consciousness presents us with a stable arena for our actions—the world …" and argues that this property provided evolutionary pressure for the evolution of consciousness. In this commentary, I will explore the implications of Merker's ideas for consciousness in artificial agents as well as animals, and meet some possible objections to his evolutionary-pressure claim.