Machine agency and representation

AI and Society 39 (1):345-352 (2024)

Abstract

Theories of action tend to require agents to have mental representations. A common trope in discussions of artificial intelligence (AI) is that AI systems lack such representations and so cannot be agents. Properly understood, there may be something to the requirement, but the trope is badly misguided. Here we provide an account of representation for AI that is sufficient to underwrite attributions of ownership, action, and responsibility to these systems. Existing accounts of mental representation tend to be too demanding and unparsimonious. We offer instead a minimalist account of representation that ascribes only those features necessary for explaining action, trimming the “extra” features found in existing accounts (e.g., representation as a “mental” phenomenon). On our account, ‘representation’ is whatever it is that, for example, the thermostat is doing with the thermometer: the thermostat is disposed to act as long as the thermometer reading lies outside a given range of parameters. Our account allows us to offer a new perspective on the ‘responsibility gap’, a problem raised by the actions of sophisticated machines: because nobody has enough control over the machine’s actions to be able to assume responsibility, conventional approaches to responsibility ascription are inappropriate. We argue that there is a distinction between finding responsible and holding responsible, and that, in order to resolve the responsibility gap, we must first clarify the conceptual question of which agent is in fact responsible.
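
The thermostat example admits a toy illustration. Below is a minimal, hypothetical sketch (in Python; not taken from the paper) of the thin, dispositional sense of ‘representation’ the abstract describes: the system carries only as much state as it needs in order to be disposed to act whenever the thermometer reading falls outside a given range. All names (Thermostat, low, high, heater_on, update) are illustrative assumptions, not the authors’ formalism.

```python
# Hypothetical sketch of the abstract's thermostat example: "representation" in the
# minimalist sense is just whatever state mediates the disposition to act when the
# thermometer reading lies outside a target range. All names are illustrative.

from dataclasses import dataclass


@dataclass
class Thermostat:
    low: float             # lower bound of the target range, e.g. 19.0 (degrees C)
    high: float            # upper bound of the target range, e.g. 22.0 (degrees C)
    heater_on: bool = False

    def update(self, reading: float) -> None:
        """Act iff the thermometer reading falls outside the target range."""
        if reading < self.low:
            self.heater_on = True    # too cold: the disposition to heat is triggered
        elif reading > self.high:
            self.heater_on = False   # too warm: disposed to stop heating
        # within range: the disposition is not triggered, so no action is taken


# Usage: the only "representation" here is the state that underwrites the disposition.
t = Thermostat(low=19.0, high=22.0)
t.update(17.5)
print(t.heater_on)  # True: the reading fell below the range, so the system acts
```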

Similar books and articles

Is Explainable AI Responsible AI? Isaac Taylor - forthcoming - AI and Society.
Reflection and Responsibility. Pamela Hieronymi - 2014 - Philosophy and Public Affairs 42 (1):3-41.
Action and Agency in Artificial Intelligence: A Philosophical Critique. Justin Nnaemeka Onyeukaziri - 2023 - Philosophia: International Journal of Philosophy (Philippine e-journal) 24 (1):73-90.
Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.

Analytics

Added to PP
2024-02-17

Downloads
92 (#197,733)

6 months
60 (#91,159)

Author Profiles

Beba Cibralic
Cambridge University
James Mattingly
Georgetown University

Citations of this work

No citations found.

References found in this work

Minds, brains, and programs. John Searle - 1980 - Behavioral and Brain Sciences 3 (3):417-457.
Intention. G. E. M. Anscombe - 1957 - Proceedings of the Aristotelian Society 57:321-332.
Killer robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2021 - Philosophy and Technology 34 (3):589-607.
Society-in-the-loop: programming the algorithmic social contract. Iyad Rahwan - 2018 - Ethics and Information Technology 20 (1):5-14.

View all 15 references