Language Agents Reduce the Risk of Existential Catastrophe

AI and Society 1-11 (2023)

Abstract

Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make and update plans to pursue their desires given their beliefs. We argue that the rise of language agents significantly reduces the probability of an existential catastrophe due to loss of control over an AGI. This is because the probability of such an existential catastrophe is proportional to the difficulty of aligning AGI systems, and language agents significantly reduce that difficulty. In particular, language agents help to resolve three important issues related to aligning AIs: reward misspecification, goal misgeneralization, and uninterpretability.




Author Profiles

Simon Goldstein
University of Hong Kong
Cameron Domenico Kirk-Giannini
Rutgers University - Newark

Citations of this work

Language Agents and Malevolent Design. Inchul Yum - 2024 - Philosophy and Technology 37 (104): 1-19.
Is Alignment Unsafe? Cameron Domenico Kirk-Giannini - 2024 - Philosophy and Technology 37 (110): 1-4.
AI Takeover and Human Disempowerment. Adam Bales - forthcoming - Philosophical Quarterly.

