Is superintelligence necessarily moral?

Analysis (forthcoming)

Abstract

Numerous authors have expressed concern that advanced artificial intelligence (AI) poses an existential risk to humanity. These authors argue that we might build AI which is vastly intellectually superior to humans (a ‘superintelligence’), and which optimizes for goals that strike us as morally bad, or even irrational. Thus, this argument assumes that a superintelligence might have morally bad goals. However, according to some views, a superintelligence necessarily has morally adequate goals. This might be the case either because abilities for moral reasoning and intelligence mutually depend on each other, or because moral realism and moral internalism are true. I argue that the former argument misconstrues the view that intelligence and goals are independent, and that the latter argument misunderstands the implications of moral internalism. Moreover, the current state of AI research provides additional reasons to think that a superintelligence could have bad goals.

Links

PhilArchive


Similar books and articles

Superintelligence as superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. New York, USA: Oxford University Press. pp. 322-337.
Superintelligence as Moral Philosopher. J. Corabi - 2017 - Journal of Consciousness Studies 24 (5-6):128-149.

Analytics

Added to PP
2024-05-21

Author's Profile

Leonard Dung
Universität Erlangen-Nürnberg

Citations of this work

No citations found.
