Morality First?

AI and Society (forthcoming): 1-13

Abstract

The Morality First strategy for developing AI systems that can represent and respond to human values aims first to develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, according to some widely accepted philosophical views about value, these strategies are positively distorting. The natural alternative, according to which no domain of value comes "first", introduces a new set of challenges and highlights an important but otherwise obscured problem for e-AI developers.

Author's Profile

Nathaniel Sharadin
University of Hong Kong
