AI and Society (1):263-271 (2020)
Abstract
Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on virtue theory. While the virtuous artificial moral agent has various strengths, this paper argues that a rule-based utilitarian approach (in contrast to a strict act-utilitarian approach) is superior because it can capture the most important features of the virtue-theoretic approach while realizing additional significant benefits. Specifically, a 2-level utilitarian artificial moral agent incorporating both established moral rules and a utility calculator is especially well-suited for machine ethics.
Similar books and articles
Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92-109.
Out of Character: On the Creation of Virtuous Machines. [REVIEW] Ryan Tonkens - 2012 - Ethics and Information Technology 14 (2):137-149.
Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency. Ioan Muntean & Don Howard - 2017 - In Thomas Powers (ed.), Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics. Springer.
A Minimalist Model of the Artificial Autonomous Moral Agent (AAMA). Ioan Muntean & Don Howard - 2016 - In SSS-16 Symposium Technical Reports. Association for the Advancement of Artificial Intelligence (AAAI).
A Case for Machine Ethics in Modeling Human-Level Intelligent Agents. Robert James M. Boyles - 2018 - Kritike 12 (1):182-200.
A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights From the Science of Heroism in Humans. Travis J. Wiltshire - 2015 - Minds and Machines 25 (1):57-71.
Artificial Moral Agents Are Infeasible with Foreseeable Technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
Ethics and Consciousness in Artificial Agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
The Ethics of Designing Artificial Agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):112-121.
Manufacturing Morality: A General Theory of Moral Agency Grounding Computational Implementations: The ACTWith Model. Jeffrey White - 2013 - In Floares (ed.), Computational Intelligence. Nova Publications. pp. 1-65.
Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches. [REVIEW] Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
Citations of this work
Expanding Nallur's Landscape of Machine Implemented Ethics.William A. Bauer - 2020 - Science and Engineering Ethics 26 (5):2401-2410.
Moral Control and Ownership in AI Systems.Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
A Neo-Aristotelian Perspective on the Need for Artificial Moral Agents. Alejo José G. Sison & Dulce M. Redín - forthcoming - AI and Society:1-19.
AI and Society: A Virtue Ethics Approach. Mirko Farina, Petr Zhdanov, Artur Karimov & Andrea Lavazza - forthcoming - AI and Society:1-14.
Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments.Ahmed Izzidien - 2022 - AI and Society 37 (1):299-318.