About this topic
Summary Machine ethics is about artificial moral agency. Machine ethicists ask why agents (human beings and other organisms) do what they do when they do it, what makes those actions the right ones to perform, and how to articulate this process, ideally, in an independent artificial system (rather than in a biological child). So this category includes entries on agency, especially moral agency, and on what it means to be an agent in general. On the empirical side, machine ethicists interpret rapidly advancing work in cognitive science and psychology, alongside work in robotics and AI, through traditional ethical frameworks, helping to frame robotics research in terms of ethical theory. For example, intelligent machines are most often modeled after biological systems, and in any event are often "made sense of" in terms of biological systems, so there is interpretive and integrative work to be done here. More theoretical work asks about the relative status afforded artificial agents given their degree of autonomy, origin, level of complexity, and corporate-institutional and legal standing, and it inquires into the essence of consciousness and of moral agency regardless of natural or artificial instantiation. So understood, machine ethics sits in the middle of a maelstrom of current research activity, with direct bearing on traditional ethics and extensive popular implications as well.
Key works Allen et al. 2005; Wallach et al. 2008; Tonkens 2012; Tonkens 2009; Müller & Bostrom 2014; White 2013; White 2015
285 found (1 — 50 shown)
  1. Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches. [REVIEW]Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing (...)
  2. Prolegomena to Any Future Artificial Moral Agent.Colin Allen & Gary Varner - unknown
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from (...)
  3. Rethinking Autonomy.Richard Alterman - 2000 - Minds and Machines 10 (1):15-30.
    This paper explores the assumption of autonomy. Several arguments are presented against the assumption of runtime autonomy as a principle of design for artificial intelligence systems. The arguments vary from being theoretical, to practical, and to analytic. The latter parts of the paper focus on one strategy for building non-autonomous systems (the practice view). One critical theme is that intelligence is not located in the system alone, it emerges from a history of interactions among user, builder, and designer over a (...)
  4. Machine Ethics.M. Anderson & S. Anderson (eds.) - 2011 - Cambridge Univ. Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  5. Association for the Advancement of Artificial Intelligence Fall Symposium Technical Report.M. Anderson, S. L. Anderson & C. Armen (eds.) - 2005
  6. The Status of Machine Ethics: A Report From the AAAI Symposium. [REVIEW]Michael Anderson & Susan Leigh Anderson - 2007 - Minds and Machines 17 (1):1-10.
    This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
  7. Philosophical Concerns with Machine Ethics.Susan Leigh Anderson - 2011 - In M. Anderson S. Anderson (ed.), Machine Ethics. Cambridge Univ. Press.
  8. Machine Metaethics.Susan Leigh Anderson - 2011 - In M. Anderson S. Anderson (ed.), Machine Ethics. Cambridge Univ. Press.
  9. Once People Understand That Machine Ethics is Concerned with How Intelligent Machines Should Behave, They Often Maintain That Isaac Asimov has Already Given Us an Ideal Set of Rules for Such Machines. They Have in Mind Asimov's Three Laws of Robotics: 1. A Robot May Not Injure a Human Being, or, Through Inaction, Allow a Human.Susan Leigh Anderson - 2011 - In M. Anderson S. Anderson (ed.), Machine Ethics. Cambridge Univ. Press.
  10. Asimov's “Three Laws of Robotics” and Machine Metaethics.Susan Leigh Anderson - 2008 - AI and Society 22 (4):477-493.
    Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we (...)
  11. A Prima Facie Duty Approach to Machine Ethics Machine Learning of Features of Ethical Dilemmas, Prima Facie Duties, and Decision Principles Through a Dialogue with Ethicists.Susan Leigh Anderson & Michael Anderson - 2011 - In M. Anderson S. Anderson (ed.), Machine Ethics. Cambridge Univ. Press.
  12. How Machines Can Advance Ethics.Susan Leigh Anderson & Michael Anderson - 2009 - Philosophy Now 72:17-19.
  13. The Philosophical Importance of the Problem of Natural and Artificial Intellects.P. K. Anokhin - 1976 - Russian Studies in Philosophy 14 (4):3-27.
    It would be difficult to name a more interesting scientific problem than that of knowledge of the brain, its overall mechanisms and its molecular nature. Rational management of the brain in the future and utilization of the principles of its functioning to construct various mechanisms to undergird present-day technological progress should follow as direct consequences of development of that sphere of knowledge.
  14. The Robot Didn't Do It: A Position Paper for the Workshop on Anticipatory Ethics, Responsibility and Artificial Agents.Ronald C. Arkin - 2013 - Workshop on Anticipatory Ethics, Responsibility and Artificial Agents 2013.
    This position paper addresses the issue of responsibility in the use of autonomous robotic systems. We are nowhere near autonomy in the philosophical sense, i.e., where there exists free agency and moral culpability for a non-human artificial agent. Sentient robots and the singularity are not concerns in the near to mid-term. While agents such as corporations can be held legally responsible for their actions, these are organizations under the direct control of humans. Intelligent robots, by virtue of their autonomous (...)
  15. Thinking Inside the Box: Controlling and Using an Oracle AI.Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  16. Mental Time-Travel, Semantic Flexibility, and A.I. Ethics.Marcus Arvan - forthcoming - AI and Society:1-20.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ _GenEth_. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  17. What Should We Want From a Robot Ethic.Peter M. Asaro - 2006 - International Review of Information Ethics 6 (12):9-16.
    There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies (...)
  18. Hans Moravec, Robot. Mere Machine to Transcendent Mind, New York, NY: Oxford University Press, Inc., 1999, IX + 227 Pp., $25.00 (Cloth), ISBN 0-19-511630-. [REVIEW]Peter M. Asaro - 2001 - Minds and Machines 11 (1):143-147.
  19. Can Artificial Intelligences Suffer From Mental Illness? A Philosophical Matter to Consider.Hutan Ashrafian - 2017 - Science and Engineering Ethics 23 (2):403-412.
    The potential for artificial intelligences and robotics in achieving the capacity of consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have either robot or human mental disorders, (...)
  20. AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics.Hutan Ashrafian - 2015 - Science and Engineering Ethics 21 (1):29-40.
    The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, (...)
  21. Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights.Hutan Ashrafian - 2015 - Science and Engineering Ethics 21 (2):317-326.
    The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies has been considered to merit rights; however, these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernible next step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives of moral responsibility for artificial intelligence and robotics. A contrast to the moral status of animals may be (...)
  22. The Morality Machine.Phil Badger - 2014 - Philosophy Now 104:24-27.
  23. Whole-Personality Emulation.William Sims Bainbridge - 2012 - International Journal of Machine Consciousness 4 (01):159-175.
  24. Technology of Culture: The Roadmap of a Journey Undertaken. [REVIEW]Parthasarathi Banerjee - 2007 - AI and Society 21 (4):411-419.
    Artificial intelligence (AI) impacts society and an individual in many subtler and deeper ways than machines based upon the physics and mechanics of descriptive objects. The AI project thus involves culture and provides scope for liberational undertakings. Most importantly, AI implicates human ethical and attitudinal bearings. This essay explores how previous authors in this journal have explored related issues and how such discourses have provided to the present world a roadmap that can be followed to engage in discourses with ethical (...)
  25. Defining Agency: Individuality, Normativity, Asymmetry, and Spatio-Temporality in Action.Xabier Barandiaran, E. Di Paolo & M. Rohde - 2009 - Adaptive Behavior 17 (5):367-386.
    The concept of agency is of crucial importance in cognitive science and artificial intelligence, and it is often used as an intuitive and rather uncontroversial term, in contrast to more abstract and theoretically heavy-weighted terms like “intentionality”, “rationality” or “mind”. However, most of the available definitions of agency are either too loose or unspecific to allow for a progressive scientific program. They implicitly and unproblematically assume the features that characterize agents, thus obscuring the full potential and challenge of modeling agency. (...)
  26. Machines as Moral Patients We Shouldn't Care About (Yet): The Interests and Welfare of Current Machines.John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they (...)
  27. The Ethics of Creating Artificial Consciousness.John Basl - 2013 - APA Newsletter on Philosophy and Computers 13 (1):23-29.
  28. Technikethik.Fiorella Battaglia & Nikil Mukerji - 2015 - In Julian Nida-Rümelin, Irina Spiegel & Markus Tiedemann (eds.), Handbuch Philosophie und Ethik - Band 2: Disziplinen und Themen. UTB. pp. 288-295.
  29. Science, Technology, and Responsibility.Fiorella Battaglia, Nikil Mukerji & Julian Nida-Rümelin - 2014 - In Fiorella Battaglia, Nikil Mukerji & Julian Nida-Rümelin (eds.), Rethinking Responsibility in Science and Technology. Pisa University Press. pp. 7-11.
    The empirical circumstances in which human beings ascribe responsibility to one another are subject to change. Science and technology play a great part in this transformation process. Therefore, it is important for us to rethink the idea, the role and the normative standards behind responsibility in a world that is constantly changing under the influence of scientific and technological progress. This volume is a contribution to that joint societal effort.
  30. Two Challenges for CI Trustworthiness and How to Address Them.Kevin Baum, Eva Schmidt & A. Köhl Maximilian - 2017 - Proceedings of the 1st Workshop on Explainable Computational Intelligence (XCI 2017).
    We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be (...)
  31. What Can A Robot Teach Us About Kantian Ethics? (In process.)Anthony F. Beavers - unknown
    In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building (...)
  32. Moral Machines and the Threat of Ethical Nihilism.Anthony F. Beavers - 2011 - In Patrick Lin, George Bekey & Keith Abney (eds.), Robot Ethics: The Ethical and Social Implication of Robotics.
    In his famous 1950 paper where he presents what became the benchmark for success in artificial intelligence, Turing notes that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950, 442). Kurzweil (1990) suggests that Turing's prediction was correct, even if no machine has yet to pass the Turing Test. In the wake of the (...)
  33. Between Angels and Animals: The Question of Robot Ethics, or is Kantian Moral Agency Desirable?Anthony F. Beavers - unknown
    In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building (...)
  34. Responsibility and Decision Making in the Era of Neural Networks.William Bechtel - 1996 - Social Philosophy and Policy 13 (2):267.
    Many of the mathematicians and scientists who guided the development of digital computers in the late 1940s, such as Alan Turing and John von Neumann, saw these new devices not just as tools for calculation but as devices that might employ the same principles as are exhibited in rational human thought. Thus, a subfield of what came to be called computer science assumed the label artificial intelligence. The idea of building artificial systems which could exhibit intelligent behavior comparable to that (...)
  35. Social Robots-Emotional Agents: Some Remarks on Naturalizing Man-Machine Interaction.Barbara Becker - 2006 - International Review of Information Ethics 6:37-45.
    The construction of embodied conversational agents - robots as well as avatars - seems to be a new challenge in the field of both cognitive AI and human-computer-interface development. On the one hand, one aims at gaining new insights into the development of cognition and communication by constructing intelligent, physically instantiated artefacts. On the other hand, people are driven by the idea that humanlike mechanical dialog partners will have a positive effect on human-machine communication. In this contribution I put up for discussion whether (...)
  36. On How to Build a Moral Machine.Paul Bello & Selmer Bringsjord - 2013 - Topoi 32 (2):251-266.
    Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons and firm conditions on moral agency/patienthood; all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don’t hold out hope for (...)
  37. Considerations About the Relationship Between Animal and Machine Ethics.Oliver Bendel - 2016 - AI and Society 31 (1):103-108.
  38. Autonomous Machine Agency.Don Berkich - 2002 - Dissertation, University of Massachusetts Amherst
    Is it possible to construct a machine that can act of its own accord? There are a number of skeptical arguments which conclude that autonomous machine agency is impossible. Yet if autonomous machine agency is impossible, then serious doubt is cast on the possibility of autonomous human action, at least on the widely held assumption that some form of materialism is true. The purpose of this dissertation is to show that autonomous machine agency is possible, thereby showing that the autonomy (...)
  39. Ethical Considerations in the Conduct of Electronic Surveillance Research.Ashok J. Bharucha, Alex John London, David Barnard, Howard Wactlar, Mary Amanda Dew & Charles F. Reynolds - 2006 - Journal of Law, Medicine and Ethics 34 (3):611-619.
    The extant clinical literature indicates profound problems in the assessment, monitoring, and documentation of care in long-term care facilities. The lack of adequate resources to accommodate higher staff-to-resident ratios adds additional urgency to the goal of identifying more costeffective mechanisms to provide care oversight. The ever expanding array of electronic monitoring technologies in the clinical research arena demands a conceptual and pragmatic framework for the resolution of ethical tensions inherent in the use of such innovative tools. CareMedia is a project (...)
  40. Autonomous Weapons Systems: Law, Ethics, Policy.Nehal Bhuta, Susanne Beck, Robin Geiss, Hin-Yan Liu & Claus Kress (eds.) - 2016 - Cambridge University Press.
    The intense and polemical debate over the legality and morality of weapons systems to which human cognitive functions are delegated (up to and including the capacity to select targets and release weapons without further human intervention) addresses a phenomenon which does not yet exist but which is widely claimed to be emergent. This groundbreaking collection combines contributions from roboticists, legal scholars, philosophers and sociologists of science in order to recast the debate in a manner that clarifies key areas and articulates (...)
  41. Intelligence Unbound: The Future of Uploaded and Machine Minds.Russell Blackford & Damien Broderick (eds.) - 2014 - Wiley-Blackwell.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. Compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds Features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more Offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest Illuminates the nature (...)
  42. When is Any Agent a Moral Agent?: Reflections on Machine Consciousness and Moral Agency.Whitby Blay - 2013 - International Journal of Machine Consciousness 5 (1).
  43. Implementation of Moral Uncertainty in Intelligent Machines.Kyle Bogosian - 2017 - Minds and Machines 27 (4):591-608.
    The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common (...)
  44. Machine Metaphors and Ethics in Synthetic Biology.Joachim Boldt - 2018 - Life Sciences, Society and Policy 14 (1):1-13.
    The extent to which machine metaphors are used in synthetic biology is striking. These metaphors contain a specific perspective on organisms as well as on scientific and technological progress. Expressions such as “genetically engineered machine”, “genetic circuit”, and “platform organism”, taken from the realms of electronic engineering, car manufacturing, and information technology, highlight specific aspects of the functioning of living beings while at the same time hiding others, such as evolutionary change and interdependencies in ecosystems. Since these latter aspects are (...)
  45. Norms in Artificial Decision Making.Magnus Boman - 1999 - Artificial Intelligence and Law 7 (1):17-35.
    A method for forcing norms onto individual agents in a multi-agent system is presented. The agents under study are supersoft agents: autonomous artificial agents programmed to represent and evaluate vague and imprecise information. Agents are further assumed to act in accordance with advice obtained from a normative decision module, with which they can communicate. Norms act as global constraints on the evaluations performed in the decision module and hence no action that violates a norm will be suggested to any agent. (...)
  46. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. (...)
  47. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical (...)
  48. Ethical Robots: The Future Can Heed Us. [REVIEW]Selmer Bringsjord - 2008 - AI and Society 22 (4):539-550.
    Bill Joy’s deep pessimism is now famous. Why the Future Doesn’t Need Us, his defense of that pessimism, has been read by, it seems, everyone—and many of these readers, apparently, have been converted to the dark side, or rather more accurately, to the future-is-dark side. Fortunately (for us; unfortunately for Joy), the defense, at least the part of it that pertains to AI and robotics, fails. Ours may be a dark future, but we cannot know that on the basis of (...)
  49. Piagetian Roboethics Via Category Theory Moving Beyond Mere Formal Operations to Engineer Robots Whose Decisions Are Guaranteed to Be Ethically Correct.Selmer Bringsjord, Joshua Taylor, Bram van Heuveln, Konstantine Arkoudas, Micah Clark & Ralph Wojtowicz - 2011 - In M. Anderson S. Anderson (ed.), Machine Ethics. Cambridge Univ. Press.
  50. Twilight Zones and Cornerstones.Rodney A. Brooks & Anita M. Flynn - unknown
    We want to build tiny gnat-sized robots, a millimeter or two in diameter. They will be cheap, disposable, totally self-contained autonomous agents able to do useful things in the world. This paper consists of two parts. The first describes why we want to build them. The second is a technical outline of how to go about it. Gnat robots are going to change the world.