The Puzzle of Evaluating Moral Cognition in Artificial Agents

Cognitive Science 47 (8): e13315 (2023)

Abstract

In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties such as “intention,” which may have no natural analog in artificial agents, it may prove difficult to design a “like-for-like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the complexity of this question by discussing examples from reinforcement learning and generative AI, and we examine how the puzzle of evaluating artificial agents' moral cognition remains open for further investigation within cognitive science.

Links

PhilArchive




Similar books and articles

The Human Side of Artificial Intelligence. Matthew A. Butkus - 2020 - Science and Engineering Ethics 26 (5): 2427-2437.
Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
The Ethics of Designing Artificial Agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3): 112-121.
Artificial Moral Agents Are Infeasible with Foreseeable Technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3): 197-206.

Analytics

Added to PP: 2023-08-18
Downloads: 33 (#485,230); past 6 months: 19 (#135,612)
