Abstract
This study concerns the sociotechnical bases of human autonomy. Drawing on recent literature on AI ethics, on philosophical accounts of the dimensions of autonomy, and on independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder it. What emerges is a philosophically motivated picture of autonomy and of the normative requirements personal autonomy poses in the context of algorithmic systems. Various aspects of sociotechnical systems, ranging from consent to data collection and processing, through computational tasks and interface design, to institutional and societal considerations, must be accounted for in order to gain a full picture of the potential effects of AI systems on human autonomy. It is clear how human agents can hinder each other’s autonomy, for example via coercion or manipulation, and how they can respect it. AI systems, too, can promote or hinder human autonomy, but can they literally respect or disrespect a person’s autonomy? We argue for a philosophical view on which AI systems, while not moral agents or bearers of duties, and thus unable literally to respect or disrespect, are nevertheless governed by so-called “ought-to-be norms.” This explains the normativity at stake with AI systems: the responsible people (designers, users, etc.) bear duties and ought-to-do norms that correspond to these ought-to-be norms.
Keywords: autonomy; respect; artificial intelligence; ought to be; sociotechnical



Similar books and articles

Fully Autonomous AI. Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5): 2473-2485.
Whose Life is It Anyway? A Study in Respect for Autonomy. M. Norden - 1995 - Journal of Medical Ethics 21 (3): 179-183.
Authenticity and Autonomy in Deep-Brain Stimulation. Alistair Wardrope - 2014 - Journal of Medical Ethics 40 (8): 563-566.
Establishing the Rules for Building Trustworthy AI. Luciano Floridi - 2019 - Nature Machine Intelligence 1: 261-262.

Added to PP index: 2021-11-01