We present a theoretical account of implicit and explicit learning in terms of ACT-R, an integrated architecture of human cognition, as a computational supplement to Dienes & Perner's conceptual analysis of knowledge. Explicit learning is explained in ACT-R by the acquisition of new symbolic knowledge, whereas implicit learning amounts to statistically adjusting subsymbolic quantities associated with that knowledge. We discuss the common foundation of a set of models that are able to explain data gathered in several signature paradigms of implicit learning.
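As a rough illustration of the symbolic/subsymbolic distinction drawn in this abstract, the sketch below (not taken from the paper; class names and parameter values are illustrative) represents explicit learning as the addition of a new symbolic chunk and implicit learning as the statistical adjustment of that chunk's base-level activation from its presentation history, following ACT-R's standard base-level learning equation B = ln(Σ_j t_j^(-d)).

```python
import math

class Chunk:
    """A symbolic declarative chunk carrying a subsymbolic activation trace."""
    def __init__(self, name, slots, decay=0.5):
        self.name = name          # symbolic content: explicit knowledge
        self.slots = slots
        self.decay = decay        # d in the base-level learning equation
        self.presentations = []   # times at which the chunk was encountered or used

    def record_presentation(self, time):
        """Each use adjusts the subsymbolic statistics (implicit learning)."""
        self.presentations.append(time)

    def base_level_activation(self, now):
        """B = ln( sum_j (now - t_j)^(-d) ), ACT-R's base-level learning rule."""
        lags = [now - t for t in self.presentations if now > t]
        if not lags:
            return float("-inf")
        return math.log(sum(lag ** -self.decay for lag in lags))

# Explicit learning: a new symbolic chunk enters declarative memory.
chunk = Chunk("A-follows-B", {"first": "B", "second": "A"})

# Implicit learning: repeated exposure strengthens the chunk statistically.
for t in (1.0, 3.0, 7.0, 12.0):
    chunk.record_presentation(t)
print(round(chunk.base_level_activation(now=20.0), 3))
```

The point of the sketch is only to show where the two kinds of learning live: in the chunk's symbolic content versus in the numerical quantities attached to it.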
Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast-paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue that even if full moral agency for machines is a long way off, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity. But the standard ethical theories don't seem adequate, and more socially engaged and engaging robots will be needed. As the authors show, the quest to build machines that are capable of telling right from wrong has begun. Moral Machines is the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics.
Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
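To make the kind of cycle described in this abstract concrete, here is a minimal, illustrative sketch of a global-workspace-style decision step (hypothetical class and function names; this is not the LIDA codebase): bottom-up percepts form coalitions that compete for the workspace, the most salient coalition is broadcast, and candidate actions, whose relevance is assumed to have been scored against the current situation, bid for selection with an affective weight factored in.

```python
from dataclasses import dataclass

@dataclass
class Coalition:
    content: str
    salience: float      # bottom-up strength plus top-down relevance

@dataclass
class Action:
    name: str
    relevance: float     # fit of the action to the current (broadcast) situation
    affect: float        # affective valence attached to the anticipated outcome

def cognitive_cycle(percepts, candidate_actions):
    # 1. Understanding: coalitions of percepts compete for access to the workspace.
    winner = max(percepts, key=lambda c: c.salience)
    # 2. Broadcast: the winning coalition's content is made globally available.
    broadcast = winner.content
    # 3. Action selection: behaviors bid; affect modulates each bid.
    chosen = max(candidate_actions, key=lambda a: a.relevance + a.affect)
    return broadcast, chosen.name

percepts = [Coalition("patient refuses medication", 0.9),
            Coalition("room temperature is 21C", 0.2)]
actions = [Action("administer anyway", relevance=0.6, affect=-0.8),
           Action("notify caregiver and wait", relevance=0.5, affect=0.4)]
print(cognitive_cycle(percepts, actions))
```

The example is deliberately toy-sized: its only purpose is to show how affective weights can enter the same selection mechanism that handles ordinary, non-moral choices.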
Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling a system from the bottom up that is capable of accommodating moral considerations draws attention to the importance of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties in order to function satisfactorily in responding to morally significant situations. But working through methods for building AMAs will have a profound effect in deepening an appreciation for the many mechanisms that contribute to moral acumen, and the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.
The implementation of moral decision-making abilities in artificial intelligence (AI) is a natural and necessary extension to the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of systems that aim at goals or standards which may or may not be specified in explicitly theoretical terms. In this paper we wish to provide some direction for continued research by outlining the value and limitations inherent in each of these approaches.
This article examines the effect of material evidence upon historiographic hypotheses. Through a series of successive Bayesian conditionalizations, I analyze the extended competition among several hypotheses that offered different accounts of the transition between the Bronze Age and the Iron Age in Palestine, and in particular of the “emergence of Israel”. The model reconstructs, with low sensitivity to initial assumptions, the actual outcomes, including a complete alteration of the scientific consensus. Several known issues of Bayesian confirmation, including the problem of old evidence, the introduction and confirmation of novel theories, and the sensitivity of convergence to uncertain and disputed evidence, are discussed in relation to the model's results and the actual historical process. The most important result is that convergence of probabilities and of scientific opinion is indeed possible when advocates of rival hypotheses hold similar judgments about the factual content of evidence, even if they differ sharply in their historiographic interpretation. This speaks against the contention that understanding of present remains is so irrevocably biased by theoretical and cultural presumptions as to make an objective assessment impossible.
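The successive conditionalizations described here can be pictured with a small sketch (the priors and likelihoods below are hypothetical placeholders, not the paper's actual figures): each new piece of material evidence reweights the competing historiographic hypotheses via Bayes' theorem, and repeated updating can drive convergence toward one of them.

```python
def bayes_update(priors, likelihoods):
    """Posterior P(H|E) is proportional to P(E|H) * P(H), normalized over all hypotheses."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical competing accounts of the Bronze Age / Iron Age transition.
posterior = {"conquest": 1/3, "peaceful_infiltration": 1/3, "indigenous_emergence": 1/3}

# Hypothetical likelihoods P(evidence | hypothesis) for a sequence of finds.
evidence_stream = [
    {"conquest": 0.1, "peaceful_infiltration": 0.4, "indigenous_emergence": 0.6},
    {"conquest": 0.2, "peaceful_infiltration": 0.3, "indigenous_emergence": 0.7},
    {"conquest": 0.1, "peaceful_infiltration": 0.5, "indigenous_emergence": 0.8},
]

# Successive conditionalization: yesterday's posterior is today's prior.
for likelihoods in evidence_stream:
    posterior = bayes_update(posterior, likelihoods)
print({h: round(p, 3) for h, p in posterior.items()})
```

With these illustrative numbers the probability mass concentrates on one hypothesis after only a few updates, which is the convergence behavior the abstract describes; disagreement about the likelihoods, rather than about the priors, is what would block such convergence.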
The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, and computational morality. Most references to the challenge elucidate one facet or another of what is a very rich topic. This paper will offer a brief overview of the many dimensions of this new field of inquiry.
The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how to verify and monitor the capabilities of rapidly changing technologies. In this article we describe concepts from our previous work about autonomy and ethics for robots and apply them to military robots and robot arms control. We conclude with a proposal for a first step toward limiting the deployment of autonomous weapons capable of initiating lethal force.
This paper traces the history of uses of the word “gender”. It suggests that though “gender” has been recuperated and become commonplace, many issues persist around the way “women” and “men”, and the power relations between them, are defined and are evolving. Provided it still allows us to question the meanings attached to the sexes, how they are established and in what contexts, gender remains a useful, because critical, analytical category.
Niche Construction Theory (NCT) has been gaining acceptance as an explanatory framework for processes in biological and human evolution. Human cultural niche construction, in particular, is suggested as a basis for understanding many phenomena that involve human genetic and cultural evolution. Herein I assess the ability of the cultural niche construction framework to meet this explanatory role by looking into several NCT-inspired accounts that have been offered for two important episodes of human evolution, and by examining the contribution of NCT to the elucidation of two “primary examples” mentioned often in the NCT literature. The result, I claim, is rather disappointing: While NCT may serve as a descriptive framework for these phenomena, it cannot be said to explain them in any substantive sense. Especially disturbing is NCT's failure to account for differing developments in very similar situations, and to facilitate evaluation and discrimination between divergent and contradictory causal accounts of particular phenomena. I argue that these problems are inherent, and that they render NCT unsuitable to serve as an explanatory framework for human phenomena. NCT's value, at least as related to human phenomena, is therefore descriptive and heuristic rather than explanatory. In conclusion, I discuss and reject comparisons made between NCT and the theory of natural selection, and examine several potential sources of NCT's explanatory weakness.
A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing artificial morality and the differing criteria for success that are appropriate to different strategies.
Despite often being condemned for having a paradigmatically unrealistic or dangerous conception of power, Plato expends much effort in constructing his distinctive conception of power. In the wake of Socrates' trial and execution, Plato writes about conventional, elitist, and radically unethical conceptions of power only to ‘refute' them on behalf of a favoured conception of power allied with justice. Are his arguments as pathetic or wrong-headed as many theorists make them out to be – from Machiavelli to contemporary political realists, from ‘political' critics of Plato ranging from Popper to Arendt? And if not, has our understanding of power been impoverished? This question has been surprisingly unasked, and it is one I address by asking Plato and his critics: What are the dialectical moves Plato makes in refuting Socrates' opponents and constructing his own conception of legitimate power? Exactly how does he interweave his conception of power with a kind of ethics? How does it compare to recent conceptions of political realism and the power-politics/ethics relationship – e.g., after Marx and Foucault? While addressing these questions I also attend to the issue of Plato's historicity: to what extent do the limits of his language and world affect our reading of Plato and his political critics? Ultimately, I argue that Plato's conception of power and its political dimensions realistically have much to teach us that we have not learned, and I show how.
In this first comprehensive treatment of Plato's political thought in a long time, John Wallach offers a "critical historicist" interpretation of Plato. Wallach shows how Plato's theory, while a radical critique of the conventional ethical and political practice of his own era, can be seen as having the potential for contributing to democratic discourse about ethics and politics today. The author argues that Plato articulates and "solves" his Socratic Problem in his various dialogues in different but potentially complementary ways. The book effectively extracts Plato from the straitjacket of Platonism and from the interpretive perspectives of the past fifty years—principally those of Karl Popper, Leo Strauss, Hannah Arendt, M. I. Finley, Jacques Derrida, and Gregory Vlastos. The author's distinctive approach for understanding Plato—and, he argues, for the history of political theory in general—can inform contemporary theorizing about democracy, opening pathways for criticizing democracy on behalf of virtue, justice, and democracy itself.
Dieter Birnbacher is professor of philosophy at the University of Düsseldorf and a member of the Foundation for the Rights of Future Generations' scientific board. In 1988 he published the book Verantwortung für zukünftige Generationen, which was translated into French and Polish. Hanna Schudy is an ethicist and environmentalist interested in questions of intergenerational responsibility concerning the natural environment. She is a doctoral student at the University of Wroclaw and a DAAD scholarship holder. The interview was conducted in December 2011 at the Heinrich Heine Universität, Düsseldorf. It is part of Ms. Schudy's current research into "The principle of responsibility in Hans Jonas' and Dieter Birnbacher's environmental ethics".
Essays, most of which were published between 1978 and 1985. Evans writes social, not political, history. These essays do not give a coherent picture of 19th (and early 20th) century Germany, but they make interesting supplementary reading.
The concepts of complementarity and entanglement are considered with respect to their significance in and beyond physics. A formally generalized, weak version of quantum theory, more general than ordinary quantum theory of physical systems, is outlined and tentatively applied to two examples.
Thanks to the editorial work of David Pacini, the lectures appear here with annotations linking them to editions of the masterworks of German philosophy as they ...
Hence, there is still controversy over which of the two versions of the deduction deserves priority and whether indeed any distinction between them can be maintained that would go beyond questions of presentation and involve the structure of the proof itself. Schopenhauer and Heidegger held that the first edition alone fully expresses Kant's unique philosophy, while Kant himself, as well as many other Kantians, saw only a difference in the method of presentation.
The relation between ethics and social science is often conceived as complementary, both disciplines cooperating in the solution of concrete moral problems. Against this, the paper argues that not only applied ethics but even certain parts of general ethics have to incorporate sociological and psychological data and theories from the start. Applied ethics depends on social science in order to assess the impact of its own principles on the concrete realities which these principles are to regulate, as well as in order to propose practice rules suited to adapt these principles to their respective contexts of application. Examples from medical ethics (embryo research) and ecological ethics (Leopold's land ethic) illustrate both the contingency of practice rules in relation to their underlying basic principles and the corresponding need for a co-operation between philosophy and empirical disciplines in judging their functional merits and demerits. In conclusion, the relevance of empirical hypotheses even for some of the perennial problems of ethics is shown by clarifying the role played by empirical theories in the controversies about the ethical differentiation between positive and negative responsibility and the relation between utility maximisation and (seemingly) independent criteria of distributive justice in the context of social distributions.
The scope of my considerations here is defined along two lines, which seem to me of essential relevance for a theory of dialectic. On the one hand, the form of negation that – as self-referring antinomical negation – gains a quasi-semantic expulsory force [Sprengkraft] and therewith a forwarding [weiterverweisenden] character; on the other hand, the notion that every logical category is defective insofar as the explicit meaning of a category does not express everything that is already implicitly presupposed for its meaning. Both lines are tightly interwoven. This I would like to demonstrate with the example of the dialectic of Being and Non-Being at the beginning of the Hegelian Logic. I will first make visible the basic structures of dialectical argumentation (sections II and III), whereby certain revisions will turn out to be necessary in comparison with Hegel's actual argument. Thereby it proves essential that the whole apparatus of logical categories and principles must be always already available and utilized for the dialectical explication: this shows dialectic to be a self-explication of logic by logical means – dialectic, as it were, as the self-fulfillment of logic (sections IV–VI).