Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers

AI and Society: 1-6 (forthcoming)

Abstract

When asked about humanity’s future relationship with computers, Marvin Minsky famously replied “If we’re lucky, they might decide to keep us as pets”. A number of eminent authorities continue to argue that there is a real danger that “super-intelligent” machines will enslave—perhaps even destroy—humanity. One might think that it would swiftly follow that we should abandon the pursuit of AI. Instead, most of those who purport to be concerned about the existential threat posed by AI default to worrying about what they call the “Friendly AI problem”. Roughly speaking, this is the question of how we might ensure that the AI that will develop from the first AI that we create will remain sympathetic to humanity and continue to serve, or at least take account of, our interests. In this paper I draw on the “neo-republican” philosophy of Philip Pettit to argue that solving the Friendly AI problem would not change the fact that the advent of super-intelligent AI would be disastrous for humanity by virtue of rendering us the slaves of machines. A key insight of the republican tradition is that freedom requires equality of a certain sort, which is clearly lacking between pets and their owners. Benevolence is not enough. As long as AI has the power to interfere in humanity’s choices, and the capacity to do so without reference to our interests, it will dominate us and thereby render us unfree. The pets of kind owners are still pets, which is not a status that humanity should embrace. If we really think that there is a risk that research on AI will lead to the emergence of a super-intelligence, then we need to think again about the wisdom of researching AI at all.

Links

PhilArchive





Similar books and articles

AI and consciousness. Sam S. Rakover - forthcoming - AI and Society: 1-2.
Call for papers. [author unknown] - 2018 - AI and Society 33 (3): 457-458.
Call for papers. [author unknown] - 2018 - AI and Society 33 (3): 453-455.
The inside out mirror. Sue Pearson - 2021 - AI and Society 36 (3): 1069-1070.
Is LaMDA sentient? Max Griffiths - forthcoming - AI and Society: 1-2.
A Literature of Working Life. R. Ennals - 2002 - AI and Society 16 (1-2): 168-170.
What is a Turing test for emotional AI? Manh-Tung Ho - forthcoming - AI and Society: 1-2.
The Turing test is a joke. Attay Kremer - 2024 - AI and Society 39 (1): 399-401.
A Look into Modern Working Life. Lena Skiöld - 2002 - AI and Society 16 (1-2): 166-167.
The Creative Landscapes Column: Epidemic. [REVIEW] Bob Muller - 2002 - AI and Society 16 (1-2): 130-137.
Editorial: Beyond regulatory ethics. Satinder P. Gill - 2023 - AI and Society 38 (2): 437-438.


Author's Profile

Robert Sparrow
Monash University

References found in this work

Artificial Intelligence, Values, and Alignment. Iason Gabriel - 2020 - Minds and Machines 30 (3): 411-437.
On the People’s Terms. Philip Pettit - 2012 - Political Theory 44 (5): 697-706.
