IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies

Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, Kuba Grudzien and Sergey Levine

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2023-62

May 2, 2023

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-62.pdf

Effective offline RL methods require properly handling out-of-distribution actions. Implicit Q-learning (IQL) addresses this by training a Q-function using only dataset actions through a modified Bellman backup. However, it is unclear which policy actually attains the values represented by this implicitly trained Q-function. In this paper, we reinterpret IQL as an actor-critic method by generalizing the critic objective and connecting it to a behavior-regularized implicit actor. This generalization shows how the induced actor balances reward maximization and divergence from the behavior policy, with the specific loss choice determining the nature of this tradeoff. Notably, this actor can exhibit complex and multimodal characteristics, suggesting issues with the conditional Gaussian actor fit with advantage weighted regression (AWR) used in prior methods. Instead, we propose drawing samples from a diffusion-parameterized behavior policy and using weights computed from the critic to importance sample our intended policy. We introduce Implicit Diffusion Q-learning (IDQL), which combines our general IQL critic with this policy extraction method. IDQL maintains the ease of implementation of IQL while outperforming prior offline RL methods and demonstrating robustness to hyperparameters.
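
To make the policy-extraction step described in the abstract concrete, the following sketch shows one way the sampling-and-reweighting idea could look in code. This is an illustrative Python sketch, not the report's implementation: sample_behavior_actions, q_fn, and the softmax weighting with a temperature are hypothetical names and choices introduced here for concreteness.

    import numpy as np

    def extract_action(state, sample_behavior_actions, q_fn,
                       num_samples=32, temperature=1.0):
        # Hypothetical IDQL-style policy extraction at evaluation time.
        # sample_behavior_actions(state, n): returns an (n, action_dim) array of
        #   actions drawn from a learned (e.g., diffusion-parameterized) behavior policy.
        # q_fn(state, actions): returns an (n,) array of critic values, one per action.

        # 1. Draw candidate actions from the behavior policy.
        actions = sample_behavior_actions(state, num_samples)

        # 2. Score each candidate with the learned critic.
        q_values = q_fn(state, actions)

        # 3. Turn critic scores into importance weights; a softmax over Q-values is
        #    one simple choice, and temperature -> 0 approaches greedy selection
        #    over the sampled candidates.
        logits = q_values / temperature
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()

        # 4. Resample a single action according to the weights.
        idx = np.random.choice(num_samples, p=weights)
        return actions[idx]

In this sketch the critic-derived weights reweight the behavior distribution toward higher-value actions, mirroring the importance-sampling role described in the abstract.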

Advisor: Sergey Levine


BibTeX citation:

@mastersthesis{Hansen-Estruch:EECS-2023-62,
    Author= {Hansen-Estruch, Philippe and Kostrikov, Ilya and Janner, Michael and Grudzien, Kuba and Levine, Sergey},
    Title= {IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies},
    School= {EECS Department, University of California, Berkeley},
    Year= {2023},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-62.html},
    Number= {UCB/EECS-2023-62},
    Abstract= {
Effective offline RL methods require properly handling out-of-distribution actions. Implicit Q-learning (IQL) addresses this by training a Q-function using only dataset actions through a modified Bellman backup. However, it is unclear which policy actually attains the values represented by this implicitly trained Q-function. In this paper, we reinterpret IQL as an actor-critic method by generalizing the critic objective and connecting it to a behavior-regularized implicit actor. This generalization shows how the induced actor balances reward maximization and divergence from the behavior policy, with the specific loss choice determining the nature of this tradeoff. Notably, this actor can exhibit complex and multimodal characteristics, suggesting issues with the conditional Gaussian actor fit with advantage weighted regression (AWR) used in prior methods. Instead, we propose drawing samples from a diffusion-parameterized behavior policy and using weights computed from the critic to importance sample our intended policy. We introduce Implicit Diffusion Q-learning (IDQL), which combines our general IQL critic with this policy extraction method. IDQL maintains the ease of implementation of IQL while outperforming prior offline RL methods and demonstrating robustness to hyperparameters.},
}

EndNote citation:

%0 Thesis
%A Hansen-Estruch, Philippe 
%A Kostrikov, Ilya 
%A Janner, Michael 
%A Grudzien, Kuba 
%A Levine, Sergey 
%T IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 2
%@ UCB/EECS-2023-62
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-62.html
%F Hansen-Estruch:EECS-2023-62