Varieties of Artificial Moral Agency and the New Control Problem
Humana.Mente - Journal of Philosophical Studies 15 (42):225-256 (2022)
Abstract
This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) ‘Inhuman AMAs’ programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) ‘Better-Human AMAs’ programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) ‘Human-Like AMAs’ programmed to understand and apply moral values in broadly the same way that we do, with a human-like moral psychology. Sections 2–4 then argue that each type of AMA generates unique control and alignment problems that have not been fully appreciated. Section 2 argues that Inhuman AMAs are likely to behave in inhumane ways that pose serious existential risks. Section 3 then contends that Better-Human AMAs run a serious risk of magnifying some sources of human moral error by reducing or eliminating others. Section 4 then argues that Human-Like AMAs would not only likely reproduce human moral failures, but also plausibly be highly intelligent, conscious beings with interests and wills of their own who should therefore be entitled to similar moral rights and freedoms as us. This generates what I call the New Control Problem: ensuring that humans and Human-Like AMAs exert a morally appropriate amount of control over each other. Finally, Section 5 argues that resolving the New Control Problem would, at a minimum, plausibly require ensuring what Hume and Rawls term ‘circumstances of justice’ between humans and Human-Like AMAs. But, I argue, there are grounds for thinking this will be profoundly difficult to achieve. I thus conclude on a skeptical note.
Different approaches to developing ‘safe, ethical AI’ generate subtly different control and alignment problems that we do not currently know how to adequately resolve, and which may or may not be ultimately surmountable.
Similar books and articles
Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
Artificial Agency and Moral Agency: Conceptualizing the Relationship and its Ethical Implications on Moral Identity Formation. Simona Tiribelli - 2022 - Scienza E Filosofia 27:54-68.
A Case for Machine Ethics in Modeling Human-Level Intelligent Agents. Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - forthcoming - In Silja Vöneky, Philipp Kellmeyer, Oliver Müller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - forthcoming - In Carissa Véliz (ed.), Oxford Handbook of Digital Ethics.
A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
Manufacturing Morality: A general theory of moral agency grounding computational implementations: the ACTWith model. Jeffrey White - 2013 - In Floares (ed.), Computational Intelligence. Nova Publications. pp. 1-65.
Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents. Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
The Problem of Moral Agency in Artificial Intelligence. Riya Manna & Rajakishore Nath - 2021 - 2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW).
The Human Side of Artificial Intelligence. Matthew A. Butkus - 2020 - Science and Engineering Ethics 26 (5):2427-2437.
The Morality of Artificial Friends in Ishiguro’s Klara and the Sun. Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? [REVIEW] Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University.
ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics. Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
Analytics
Added to PP
2022-12-27
Downloads
6 (#1,103,812)
6 months
6 (#131,995)