Moral Zombies: Why Algorithms Are Not Moral Agents
AI and Society 1-11 (forthcoming)

Authors
Carissa Véliz
Oxford University
Abstract
In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent of sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.
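To make the abstract's point about algorithmic ‘values’ concrete, the following minimal sketch (not from the paper; all names, labels, and weights are hypothetical) shows how a system can rank options against a list of numerically weighted value labels while there is nothing it is like for it to hold those values.

# Hypothetical sketch: an algorithm's 'values' as weighted items on a list.
# Nothing here feels anything; the weights are just numbers used for ranking.

from typing import Dict

# 'Values' represented purely as labels with numeric weights (assumed example).
VALUE_WEIGHTS: Dict[str, float] = {
    "avoid_harm": 0.9,
    "honesty": 0.7,
    "efficiency": 0.4,
}

def score_option(option_features: Dict[str, float]) -> float:
    """Score an option by summing weight * degree for each listed 'value'.

    The function manipulates numbers attached to value labels; it has no
    experiential grasp of what harm or honesty is.
    """
    return sum(
        VALUE_WEIGHTS.get(value, 0.0) * degree
        for value, degree in option_features.items()
    )

# Two candidate actions described only as feature scores (hypothetical data).
options = {
    "action_a": {"avoid_harm": 1.0, "efficiency": 0.2},
    "action_b": {"avoid_harm": 0.1, "efficiency": 1.0},
}

best = max(options, key=lambda name: score_option(options[name]))
print(best)  # selects the highest-scoring option, without valuing anything

On the abstract's view, this kind of weighted bookkeeping is the most that ‘valuing’ can amount to for a non-sentient system, which is why it falls short of acting for moral reasons.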
Keywords: Algorithms; Reasons-responsiveness; Moral agency; Moral responsibility; Autonomy; Consciousness; Sentience; Zombies; Accountability; Artificial Intelligence; Autonomous systems
DOI 10.1007/s00146-021-01189-x



Similar books and articles

Equal Rights for Zombies?: Phenomenal Consciousness and Responsible Agency. Alex Madva - 2019 - Journal of Consciousness Studies 26 (5-6): 117-40.
Ethics and Consciousness in Artificial Agents. Steve Torrance - 2008 - AI and Society 22 (4): 495-521.
Algorithms, Agency, and Respect for Persons. Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3): 547-572.
How Autonomy Alone Debunks Corporate Moral Agency. David Rönnegard - 2013 - Business and Professional Ethics Journal 32 (1-2): 77-107.
