That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: to identify an ethical framework that is both implementable into machines and whose tenets permit the creation of such AMAs in the first place. Without consistency between ethics and engineering, the resulting AMAs would not be genuine ethical robots, and hence the discipline of Machine Ethics would be a failure in this regard. Here this challenge is articulated through a critical analysis of the development of Kantian AMAs, as one of the leading contenders for being the ethic that can be implemented into machines. In the end, however, the development of Kantian artificial moral machines is found to be anti-Kantian. The upshot is that machine ethicists need to look elsewhere for an ethic to implement into their machines.
This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics, which brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
Many authors have proposed constraining the behaviour of intelligent systems with ‘machine ethics’ to ensure positive social outcomes from the development of such systems. This paper critically analyses the prospects for machine ethics, identifying several inherent limitations. While machine ethics may increase the probability of ethical behaviour in some situations, it cannot guarantee it, due to the nature of ethics, the computational limitations of computational agents, and the complexity of the world. In addition, machine ethics, even if it were to be ‘solved’ at a technical level, would be insufficient to ensure positive social outcomes from intelligent systems.
Approaches to programming ethical behavior for computer systems face challenges that are both technical and philosophical in nature. In response, an incrementalist account of machine ethics is developed: a successive adaptation of programmed constraints to new, morally relevant abilities in computers. This approach allows progress under conditions of limited knowledge in both ethics and computer systems engineering and suggests reasons that we can circumvent broader philosophical questions about computer intelligence and autonomy.
Are trust relationships involving humans and artificial agents (AAs) possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents (HAs) and AAs. In defending this view, I show how James Moor’s model for distinguishing four levels of ethical agents in the context of machine ethics (2006) can help us to develop a framework that differentiates four levels of trust. Via a series of hypothetical scenarios, I illustrate each level of trust involved in HA–AA relationships. Finally, I argue that these levels of trust reflect three key factors or variables: the level of autonomy of the individual AAs involved, the degree of risk/vulnerability on the part of the HAs who place their trust in the AAs, and the kind of interactions that occur between the HAs and AAs in the trust environments.
This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks on how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. With the body of work that they have contributed to the study of moral agency, philosophers may contribute to the growing literature on artificial moral agency. While doing so, they could also think about how the said concept could affect other important philosophical concepts.
The value sensitive design (VSD) approach to designing transformative technologies for human values is taken as the object of study in this chapter. VSD has traditionally been conceptualized as another type of technology, or instrumentally as a tool. The various parts of VSD’s principled approach would then aim to discern the various policy requirements that any given technological artifact under consideration would implicate. Yet little to no consideration has been given to how laws, regulations, policies and social norms engage within VSD practices, or to how the interactive nature of the VSD approach can, in turn, influence those directives. This is exacerbated when we consider machine ethics policies that have global consequences outside their development spheres. What constructs and models will position AI designers to engage in policy concerns? How can the design of AI policy be integrated with technical design? How might VSD be used to develop AI policy? How might law, regulations, social norms, and other kinds of policy regarding AI systems be engaged within value sensitive design? This chapter takes VSD as its starting point and aims to determine how laws, regulations and policies come to influence how value trade-offs can be managed within VSD practices. It shows that the iterative and interactional nature of VSD both permits and encourages existing policies to be integrated both early on and throughout the design process. The chapter concludes with some potential future research programs.
The advent of the intelligent robot has occupied a significant position in society over the past decades and has given rise to new issues in society. As we know, the primary aim of artificial intelligence or robotic research is not only to develop advanced programs to solve our problems but also to reproduce mental qualities in machines. The critical claim of artificial intelligence advocates is that there is no distinction between mind and machines, and thus they argue that there are possibilities for machine ethics, just as there is human ethics. Unlike computer ethics, which has traditionally focused on ethical issues surrounding human use of machines, AI or machine ethics is concerned with the behaviour of machines towards human users and perhaps other machines as well, and with the ethicality of these interactions. The ultimate goal of machine ethics, according to AI scientists, is to create a machine that itself follows an ideal ethical principle or a set of principles; that is to say, it is guided by this principle or these principles in decisions it makes about possible courses of action it could take. Thus, machine ethics has the task of ensuring the ethical behaviour of an artificial agent. Although there are many philosophical issues related to artificial intelligence, our attempt in this paper is to discuss, first, whether ethics is the sort of thing that can be computed. Second, if we ascribe mind to machines, this gives rise to ethical issues regarding machines. And if we do not draw a distinction between mind and machines, we are not only redefining the specifically human mind but also society as a whole. Having a mind is, among other things, having the capacity to make voluntary decisions and actions. The notion of mind is central to our ethical thinking, and this is because the human mind is self-conscious, a property that machines lack, as yet.
Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to the reduction of morality to mechanical causation, and for redoubled efforts toward a comprehensive vision of human ethics to guide machine ethicists on the issue of moral agency. Options thus present themselves: reinterpret traditional ethics in a way that affords a comprehensive account of moral agency inclusive of both artificial and natural agents, “muddle through” regardless, or give up on the possibility. This paper pursues the first option, meets Tonkens' "challenge" and addresses Wallach's concerns through Beavers' proposed means, by "landscaping" traditional moral theory into the necessary comprehensive and inclusive account, one that at once draws into question the stated goals of Machine Ethics itself.
Rule-based ethical theories like Kant's appear to be promising for machine ethics because of the computational structure of their judgments. On one formalist interpretation of Kant's categorical imperative, for instance, a machine could place prospective actions into the traditional deontic categories (forbidden, permissible, obligatory) by a simple consistency test on the maxim of action. We might enhance this test by adding a declarative set of subsidiary maxims and other "buttressing" rules. The ethical judgment is then an outcome of the consistency test. While this kind of test can generate results, it may be vacuous in the sense that it would do no more than forbid obviously contradictory maxims of action. It is also possible that the kind of inference in such a rule-based system may be non-monotonic. I discuss these challenges to a rule-based machine ethics, starting from the framework of Kantian ethics.
Intelligent systems are reaching the point where they can take very significant decisions on behalf of humans and society. The moral and ethical impact of such systems needs to be taken very seriously, both internally and externally in respect of such systems. Although some work into defining and systematizing machine ethics has begun, a great deal of work remains to be done and many research questions remain open.
One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Much recent attention has been devoted to the "animal question" -- consideration of the moral status of nonhuman animals. In this book, David Gunkel takes up the "machine question": whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration. The machine question poses a fundamental challenge to moral thinking, questioning the traditional philosophical conceptualization of technology as a tool or instrument to be used by human agents. Gunkel begins by addressing the question of machine moral agency: whether a machine might be considered a legitimate moral agent that could be held responsible for decisions and actions. He then approaches the machine question from the other side, considering whether a machine might be a moral patient due legitimate moral consideration. Finally, Gunkel considers some recent innovations in moral philosophy and critical theory that complicate the machine question, deconstructing the binary agent--patient opposition itself. Technological advances may prompt us to wonder if the science fiction of computers and robots whose actions affect their human companions could become science fact. Gunkel's argument promises to influence future considerations of ethics, ourselves, and the other entities who inhabit this world.
We can learn about human ethics from machines. We discuss the design of a working machine for making ethical decisions, the N-Reasons platform, applied to the ethics of robots. The N-Reasons platform builds on web-based surveys and experiments to enable participants to make better ethical decisions. Their decisions are better than those of our existing surveys in three ways. First, they are social decisions supported by reasons. Second, these results are based on weaker premises, as no exogenous expertise (aside from that provided by the participants) is needed to seed the survey. Third, N-Reasons is designed to support experiments so we can learn how to improve the platform. We sketch experimental results that show the platform is a success as well as pointing to ways it can be improved.
In medical settings, machines are in close proximity with human beings: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. Machines in these contexts are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, and respect for human dignity and privacy.

As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines be modeled? Is a capacity for empathy and emotion detection necessary? What about consciousness?

The essays in this collection by researchers from both the humanities and the sciences describe various theoretical and experimental approaches to adding medical ethics to a machine, the design features necessary in order to achieve this, philosophical and practical questions concerning justice, rights, decision-making and responsibility, and accurate modeling of essential physician-machine-patient relationships.

This collection is the first book to address these 21st-century concerns.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society. Instead of searching for a fixed set of criteria of a robot’s moral competence I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolve at least some of them.
American literary realism burgeoned during a period of tremendous technological innovation. Because the realists evinced not only a fascination with this new technology but also an ethos that seems to align itself with science, many have paired the two fields rather unproblematically. But this book demonstrates that many realist writers, from Mark Twain to Stephen Crane, Charles W. Chesnutt to Edith Wharton, felt a great deal of anxiety about the advent of new technologies – precisely at the crucial intersection of ethics and language. For these writers, the communication revolution was a troubling phenomenon, not only because of the ways in which the new machines had changed and increased the circulation of language but, more pointedly, because of the ways in which language itself had effectively become a machine: a vehicle perpetuating some of society’s most pernicious clichés and stereotypes – particularly stereotypes of race – in unthinking iteration. This work takes a close look at how the realists tried to forge an ethical position between the two poles of science and sentimentality, attempting to create an alternative mode of speech that, avoiding the trap of codifying iteration, could enable ethical action.
The extent to which machine metaphors are used in synthetic biology is striking. These metaphors contain a specific perspective on organisms as well as on scientific and technological progress. Expressions such as “genetically engineered machine”, “genetic circuit”, and “platform organism”, taken from the realms of electronic engineering, car manufacturing, and information technology, highlight specific aspects of the functioning of living beings while at the same time hiding others, such as evolutionary change and interdependencies in ecosystems. Since these latter aspects are relevant, for example, for the risk evaluation of uncontained uses of synthetic organisms, it is ethically imperative to resist the thrust of machine metaphors in this respect. In addition, from the perspective of the machine metaphor, viewing an entity as a moral agent or patient becomes dubious. If one were to regard living beings, including humans, as machines, it becomes difficult to justify ascriptions of moral status. Finally, the machine metaphor reinforces beliefs in the potential of synthetic biology to play a decisive role in solving societal problems, and downplays the role of alternative technological, social and political measures.
The frequent use of metaphors in health care communication in general and clinical ethics cases in particular calls for a more mindful and competent use of figurative speech. Metaphors are powerful tools that enable different ways of thinking about complex issues in health care. However, depending on how and in which context they are used, they can also be harmful and undermine medical decision-making. Given this contingent nature of metaphors, this article discusses two approaches that suggest how medical health care professionals may systematically and imaginatively work with metaphors. The first approach is informed by a model developed by cognitive scientists George Lakoff and Mark Turner. The second approach is a close reading and thus a text-immanent, hermeneutical strategy. Using the double perspective of an ethics consultant and a researcher in literature studies, we take a case from Richard M. Zaner in which a metaphor is central to the clinical-ethical problem. The article shows that the approach...
Ethics is ordinarily understood as being concerned with questions of responsibility for and in the face of an other. This other is more often than not conceived of as another human being and, as such, necessarily excludes others – most notably animals and machines. This essay examines the ethics of such exclusivity. It is divided into three parts. The first part investigates the exclusive anthropocentrism of traditional forms of moral thinking and, following the example of recent innovations in animal rights philosophy, questions the mechanisms of such exclusion. Although recent work in animal- and bio-ethics has successfully implemented strategies for the inclusion of the animal as a legitimate subject of moral consideration, its other, the machine, has remained conspicuously excluded. The second part looks at recent attempts to include these machinic others in moral thinking and critiques the assumptions, values, and strategies that have been employed by these various innovations. And the third part proposes a means for thinking otherwise. That is, it introduces an alternative way to consider these other forms of otherness that is not simply reducible to the conceptual order that has structured and limited moral philosophy’s own concern with and for others.
Jon Bing was not only a pioneer in the field of artificial intelligence and law and the legal regulation of technology. He was also an accomplished author of fiction, with an oeuvre spanning from short stories and novels to theatre plays and even an opera. As reality catches up with the imagination of science fiction writers who have anticipated a world shared by humans and non-human intelligences of their creation, some of the copyright issues he has discussed in his academic capacity take on new resonance. How will we regulate copyright when robots are producers and consumers of art? This paper tries to give a sketch of the problem and hints at possible answers that are to a degree inspired by Bing’s academic and creative writing.
Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons and firm conditions on moral agency/patienthood; all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don’t hold out hope for machines that can both always do the right thing (on some general ethic) and produce explanations for their behavior that would be understandable to a human confederate. Our tentative solution involves understanding the folk concepts associated with our moral intuitions regarding these matters, and how they might be dependent upon the nature of human cognitive architecture. It is in this spirit that we begin to explore the complexities inherent in human moral judgment via computational theories of the human cognitive architecture, rather than under the extreme constraints imposed by rational-actor models assumed throughout much of the literature on philosophical ethics. After discussing the various advantages and challenges of taking this particular perspective on the development of artificial moral agents, we computationally explore a case study of human intuitions about the self and causal responsibility. We hypothesize that a significant portion of the variance in reported intuitions for this case might be explained by appeal to an interplay between the human ability to mindread and the way that knowledge is organized conceptually in the cognitive system. In the present paper, we build on a pre-existing computational model of mindreading (Bello et al. 2007) by adding constraints related to psychological distance (Trope and Liberman 2010), a well-established psychological theory of conceptual organization.
Our initial results suggest that studies of folk concepts involved in moral intuitions lead us to an enriched understanding of cognitive architecture and a more systematic method for interpreting the data generated by such studies.
The American justice system, from police departments to the courts, is increasingly turning to information technology for help identifying potential offenders, determining where, geographically, to allocate enforcement resources, assessing flight risk and the potential for recidivism amongst arrestees, and making other judgments about when, where, and how to manage crime. In particular, there is a focus on machine learning and other data analytics tools, which promise to accurately predict where crime will occur and who will perpetrate it. Activists and academics have begun to raise critical questions about the use of these tools in policing contexts. In this chapter, I review the emerging critical literature on predictive policing and contribute to it by raising ethical questions about the use of predictive analytics tools to identify potential offenders. Drawing from work on the ethics of profiling, I argue that the much-lauded move from reactive to preemptive policing can mean wrongfully generalizing about individuals, making harmful assumptions about them, instrumentalizing them, and failing to respect them as full ethical persons. I suggest that these problems stem both from the nature of predictive policing tools and from the sociotechnical contexts in which they are implemented. Which is to say, the set of ethical issues I describe arises not only from the fact that these tools are predictive, but also from the fact that they are situated in the hands of police. To mitigate these problems, I suggest we place predictive policing tools in the hands of those whose ultimate responsibility is to individuals (such as counselors and social workers), rather than in the hands of those, like the police, whose ultimate duty is to protect the public at large.
Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling a system from the bottom-up which is capable of accommodating moral considerations draws attention to the importance of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties in order to function satisfactorily in responding to morally significant situations. But working through methods for building AMAs will have a profound effect in deepening an appreciation for the many mechanisms that contribute to a moral acumen, and the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.
Artificial Life (ALife) has two goals. The first is to describe fundamental qualities of living systems through agent-based computer models. The second is to study whether or not we can artificially create living things in computational mediums that can be realized either virtually in software or through biotechnology. The study of ALife has recently branched into two further subdivisions: “dry” ALife, which is the study of living systems “in silico” through the use of computer simulations, and “wet” ALife, which uses biological material to realize what has only been simulated on computers; effectively, wet ALife uses biological material as a kind of computer. This is challenging to the field of computer ethics as it points towards a future in which computer ethics and bioethics might have shared concerns. The emerging studies into wet ALife are likely to provide strong empirical evidence for ALife’s most challenging hypothesis: that life is a certain set of computable functions that can be duplicated in any medium. I believe this will propel ALife into the midst of the mother of all cultural battles that has been gathering around the emergence of biotechnology. Philosophers need to pay close attention to this debate and can serve a vital role in clarifying and resolving the dispute. But even if ALife is merely a computer modeling technique that sheds light on living systems, it still has a number of significant ethical implications, such as its use in the modeling of moral and ethical systems, as well as in the creation of artificial moral agents.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, that such machines are better moral reasoners than humans, and that building these machines would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed because of the amount of funding currently being allocated to the development of AMAs, coupled with the amount of attention researchers and industry leaders receive in the media for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society, requiring answers to a host of pending questions about what counts as an AMA and whether AMAs are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.
Utilizing the film I, Robot as a springboard, I here consider the feasibility of robot utilitarians, the moral responsibilities that come with the creation of ethical robots, and the possibility of distinct ethics for robot-robot interaction as opposed to robot-human interaction. (This is a revised and expanded version of an essay that originally appeared in IEEE Intelligent Systems.)