Science and Engineering Ethics 1-13 (forthcoming)

Wolfhart Totschnig
Universidad Diego Portales
In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the capacity to “give oneself the law,” to decide by oneself what one’s goal or principle of action will be. The predominant view in the literature on the long-term prospects and risks of artificial intelligence is that an artificial agent cannot exhibit such autonomy because it cannot rationally change its own final goal, since changing the final goal is counterproductive with respect to that goal and hence undesirable. The aim of this paper is to challenge this view by showing that it is based on questionable assumptions about the nature of goals and values. I argue that a general AI may very well come to modify its final goal in the course of developing its understanding of the world. This has important implications for how we are to assess the long-term prospects and risks of artificial intelligence.
Keywords: artificial intelligence, autonomy, normativity, goals
DOI 10.1007/s11948-020-00243-z


Similar books and articles

Theoretical Foundations for the Responsibility of Autonomous Agents.Jaap Hage - 2017 - Artificial Intelligence and Law 25 (3):255-271.
The Moral Magic of Consent.Larry Alexander - 1996 - Legal Theory 2 (3):165-174.
Authority and Voice in Autonomous Agency.Paul Benson - 2005 - In Joel Anderson & John Christman (eds.), Autonomy and the Challenges to Liberalism: New Essays. Cambridge University Press. pp. 101-126.
Autonomy, Consent, and the “Nonideal” Case.Hallvard Lillehammer - 2020 - Journal of Medicine and Philosophy 45 (3):297-311.
Language Evolution in Apes and Autonomous Agents.Angelo Cangelosi - 2002 - Behavioral and Brain Sciences 25 (5):622-623.
Still Autonomous After All.Andrew Knoll - 2018 - Minds and Machines 28 (1):7-27.
Trust and Resilient Autonomous Driving Systems.Adam Henschke - 2020 - Ethics and Information Technology 22 (1):81-92.