Reward tampering problems and solutions in reinforcement learning: a causal influence diagram perspective

Synthese 198 (Suppl 27):6435-6467 (2021)
  Copy   BIBTEX

Abstract

Can humans get arbitrarily capable reinforcement learning agents to do their bidding? Or will sufficiently capable RL agents always find ways to bypass their intended objectives by shortcutting their reward signal? This question impacts how far RL can be scaled, and whether alternative paradigms must be developed in order to build safe artificial general intelligence. In this paper, we study when an RL agent has an instrumental goal to tamper with its reward process, and describe design principles that prevent instrumental goals for two different types of reward tampering. Combined, the design principles can prevent reward tampering from being an instrumental goal. The analysis benefits from causal influence diagrams to provide intuitive yet precise formalizations.

Other Versions

No versions found

Links

PhilArchive



    Upload a copy of this work     Papers currently archived: 97,405

External links

Setup an account with your affiliations in order to access resources via your University's proxy server

Through your library

Similar books and articles

強化学習エージェントへの階層化意志決定法の導入―追跡問題を例に―.輿石 尚宏 謙吾 片山 - 2004 - Transactions of the Japanese Society for Artificial Intelligence 19:279-291.
罰を回避する合理的政策の学習.坪井 創吾 宮崎 和光 - 2001 - Transactions of the Japanese Society for Artificial Intelligence 16 (2):185-192.
Profit Sharing 法における強化関数に関する一考察.Tatsumi Shoji Uemura Wataru - 2004 - Transactions of the Japanese Society for Artificial Intelligence 19:197-203.

Analytics

Added to PP
2021-05-21

Downloads
24 (#756,067)

6 months
11 (#468,950)

Historical graph of downloads
How can I increase my downloads?

Author Profiles

Citations of this work

Predicting and Preferring.Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.

Add more citations