Reward tampering problems and solutions in reinforcement learning: a causal influence diagram perspective

From MaRDI portal
Publication:6182771

DOI10.1007/S11229-021-03141-4zbMATH Open1529.68309arXiv1908.04734OpenAlexW3165436200MaRDI QIDQ6182771FDOQ6182771


Authors: Tom Everitt, Marcus Hutter, Ramana Kumar, Victoria Krakovna Edit this on Wikidata


Publication date: 26 January 2024

Published in: Synthese (Search for Journal in Brave)

Abstract: Can humans get arbitrarily capable reinforcement learning (RL) agents to do their bidding? Or will sufficiently capable RL agents always find ways to bypass their intended objectives by shortcutting their reward signal? This question impacts how far RL can be scaled, and whether alternative paradigms must be developed in order to build safe artificial general intelligence. In this paper, we study when an RL agent has an instrumental goal to tamper with its reward process, and describe design principles that prevent instrumental goals for two different types of reward tampering (reward function tampering and RF-input tampering). Combined, the design principles can prevent both types of reward tampering from being instrumental goals. The analysis benefits from causal influence diagrams to provide intuitive yet precise formalizations.


Full work available at URL: https://arxiv.org/abs/1908.04734




Recommendations




Cites Work


Cited In (1)





This page was built for publication: Reward tampering problems and solutions in reinforcement learning: a causal influence diagram perspective

Report a bug (only for logged in users!)Click here to report a bug for this page (MaRDI item Q6182771)