Rage Against the Authority Machines: How to Design Artificial Moral Advisors for Moral Enhancement

AI and Society:1-12 (forthcoming)

Abstract

This paper aims to clear up the epistemology of learning morality from Artificial Moral Advisors (AMAs). We start with a brief consideration of what counts as moral enhancement and consider the risk of deskilling raised by machines that offer moral advice. We then shift focus to the epistemology of moral advice and show when and under what conditions moral advice can lead to enhancement. We argue that people’s motivational dispositions are enhanced by inspiring people to act morally, instead of merely telling them how to act. Drawing upon these insights, we claim that if AMAs are to genuinely enhance people morally, they should be designed as inspiration and not authority machines. In the final section, we evaluate existing AMA models to shed light on which holds the most promise for helping to make users better moral agents.



Author Profiles

Ethan Landes
University of Kent
Radu Uszkai
University of Bucharest

