Abstract: The field of artificial general intelligence (AGI) safety is growing quickly. However, the nature of human values, with which future AGI should be aligned, remains underdefined. AGI safety researchers have proposed different theories about the nature of human values, and these theories contradict one another. This article surveys what AGI safety researchers had written about the nature of human values up to the beginning of 2019, covering 21 authors, some of whom advance several theories. We propose a classification method in which each theory is rated on its level of complexity, its position on a behaviorist-internalist scale, and its degree of generality versus human-specificity. We suggest that the multiplicity of well-supported theories indicates that the nature of human values is difficult to define and that some meta-level theory is needed.
Keywords: artificial intelligence; human values; AI safety
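
As a rough illustration only (the paper itself contains no code), the three-axis classification described in the abstract could be encoded as a simple record type, with one entry per surveyed theory. All field names and the numeric 0.0-1.0 scales below are hypothetical assumptions for the sketch, not taken from the paper.

from dataclasses import dataclass

@dataclass
class ValueTheory:
    """A theory of human values, scored on the three proposed axes.

    The 0.0-1.0 encoding is an assumption; the paper does not
    specify a numeric scale.
    """
    author: str
    complexity: float                # 0.0 = simple, 1.0 = highly complex
    behaviorist_internalist: float   # 0.0 = behaviorist, 1.0 = internalist
    generality_humanity: float       # 0.0 = fully general, 1.0 = human-specific

# Hypothetical entries standing in for two of the 21 surveyed authors.
theories = [
    ValueTheory("Author A", complexity=0.2,
                behaviorist_internalist=0.9, generality_humanity=0.7),
    ValueTheory("Author B", complexity=0.8,
                behaviorist_internalist=0.1, generality_humanity=0.3),
]

for theory in theories:
    print(theory)

A scoring like this would let disagreements between theories be located on a specific axis, which is the point of the classification: two theories may be equally well supported yet sit at opposite ends of, say, the behaviorist-internalist scale.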