Beyond Conservatism in Offline Reinforcement Learning: The Importance of Effective Representations
Kevin Li
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2022-193
August 11, 2022
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-193.pdf
Standard off-policy reinforcement learning (RL) methods based on temporal difference (TD) learning generally fail to learn good policies when applied to static offline datasets. Conventionally, this is attributed to distribution shift, where the Bellman backup queries high-value out-of-distribution (OOD) actions for the next time step, which then leads to systematic overestimation. However, this explanation is incomplete, as conservative offline RL methods that directly address overestimation still suffer from stability problems in practice. This suggests that although OOD actions may account for part of the challenge, the difficulties with TD learning in the offline setting are also deeply connected to other aspects such as the quality of representations of learned function approximators. In this work, we show that merely imposing pessimism is not sufficient for good performance in deep RL, and demonstrate empirically that regularizing representations actually accounts for a large part of the improvement observed in modern offline RL methods. Building on this insight, we show how using a simple improved Bellman backup estimator — without changing any other aspect of conservative offline RL algorithms — can achieve more effective representations and better performance across a variety of offline RL problems.
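As background for the terminology in the abstract, the mechanism it describes can be written out explicitly. The symbols below (penalty weight \alpha, sampling distribution \mu, target parameters \bar\theta, dataset \mathcal{D}) are notation introduced here for illustration; this is a sketch of the general conservative objective family the abstract refers to, not the specific improved backup estimator proposed in the report. In standard TD learning, the Q-function Q_\theta is regressed toward the Bellman backup target

    y(s, a) = r(s, a) + \gamma \, \mathbb{E}_{a' \sim \pi(\cdot \mid s')} \big[ Q_{\bar\theta}(s', a') \big],

where \bar\theta denotes a delayed target network. With a static offline dataset, the expectation over a' can query actions the data never covers, which is the source of the OOD overestimation described above. Conservative methods such as CQL address this by adding a pessimism penalty of the general form

    \min_\theta \; \alpha \Big( \mathbb{E}_{s \sim \mathcal{D},\, a \sim \mu(\cdot \mid s)} \big[ Q_\theta(s, a) \big] - \mathbb{E}_{(s, a) \sim \mathcal{D}} \big[ Q_\theta(s, a) \big] \Big) + \mathbb{E}_{(s, a, s') \sim \mathcal{D}} \big[ \big( Q_\theta(s, a) - y(s, a) \big)^2 \big],

which pushes Q-values down on actions sampled from \mu while keeping them anchored on dataset actions. The report's argument is that this kind of pessimism alone does not resolve the instability of offline TD learning; the quality of the learned representations matters as well.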
Advisor: Ken Goldberg
BibTeX citation:
@mastersthesis{Li:EECS-2022-193,
    Author = {Li, Kevin},
    Title = {Beyond Conservatism in Offline Reinforcement Learning: The Importance of Effective Representations},
    School = {EECS Department, University of California, Berkeley},
    Year = {2022},
    Month = {Aug},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-193.html},
    Number = {UCB/EECS-2022-193}
}
EndNote citation:
%0 Thesis
%A Li, Kevin
%T Beyond Conservatism in Offline Reinforcement Learning: The Importance of Effective Representations
%I EECS Department, University of California, Berkeley
%D 2022
%8 August 11
%@ UCB/EECS-2022-193
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-193.html
%F Li:EECS-2022-193