Acquiring Diverse Robot Skills via Maximum Entropy Deep Reinforcement Learning

Tuomas Haarnoja

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2018-176
December 14, 2018

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-176.pdf

In this thesis, we study how the maximum entropy framework can provide efficient deep reinforcement learning (deep RL) algorithms that solve tasks consistently and sample efficiently. This framework has several intriguing properties. First, the optimal policies are stochastic, improving exploration and preventing convergence to local optima, particularly when the objective is multimodal. Second, the entropy term provides regularization, resulting in more consistent and robust learning when compared to deterministic methods. Third, maximum entropy policies are composable, that is, two or more policies can be combined, and the resulting policy can be shown to be approximately optimal for the sum of the constituent task rewards. Fourth, the view of maximum entropy RL as probabilistic inference provides a foundation for building hierarchical policies that can solve complex, sparse-reward tasks. In the first part, we will devise new algorithms based on this framework, starting from soft Q-learning, which learns expressive energy-based policies, continuing to soft actor-critic, which provides the simplicity and convenience of actor-critic methods, and ending with an automatic temperature adjustment scheme that practically eliminates the need for hyperparameter tuning, a crucial feature for real-world applications, where tuning can be prohibitively expensive. In the second part, we will discuss extensions enabled by the inherent stochasticity of maximum entropy policies, including compositionality and hierarchical learning. We will demonstrate the effectiveness of the proposed algorithms on both simulated and real-world robotic manipulation and locomotion tasks.
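For reference, the maximum entropy objective the abstract refers to is standardly written as follows (this is the common formulation from the soft Q-learning and soft actor-critic papers, not text from the thesis): the expected return is augmented with the policy's entropy, weighted by a temperature alpha, and composing policies amounts to adding their soft Q-functions:

\[
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\!\left[ r(s_t, a_t) + \alpha \, \mathcal{H}\!\left(\pi(\cdot \mid s_t)\right) \right],
\qquad
\pi_{\Sigma}(a \mid s) \propto \exp\!\left( \frac{1}{\alpha} \sum_i Q_i^{*}(s, a) \right).
\]

The automatic temperature adjustment mentioned in the abstract can be understood as dual gradient descent on alpha against a target entropy. Below is a minimal PyTorch-style sketch of that idea, assuming a heuristic target of -|A|; the names (log_alpha, update_temperature) and constants are illustrative, not the thesis's exact implementation:

    import torch

    action_dim = 6                       # illustrative, e.g. a 6-DoF arm
    target_entropy = -float(action_dim)  # common heuristic target: -|A|
    log_alpha = torch.zeros(1, requires_grad=True)  # optimize log(alpha) so alpha stays positive
    alpha_opt = torch.optim.Adam([log_alpha], lr=3e-4)

    def update_temperature(log_probs: torch.Tensor) -> float:
        """One dual gradient step on alpha: alpha grows when the policy's
        entropy (the mean of -log_probs) falls below target_entropy, and
        shrinks when the policy is more random than the target."""
        alpha_loss = -(log_alpha.exp() * (log_probs + target_entropy).detach()).mean()
        alpha_opt.zero_grad()
        alpha_loss.backward()
        alpha_opt.step()
        return log_alpha.exp().item()

    # Usage: pass log pi(a|s) for a sampled batch of actions.
    alpha = update_temperature(torch.randn(256))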

Advisors: Pieter Abbeel and Sergey Levine


BibTeX citation:

@phdthesis{Haarnoja:EECS-2018-176,
    Author = {Haarnoja, Tuomas},
    Title = {Acquiring Diverse Robot Skills via Maximum Entropy Deep Reinforcement Learning},
    School = {EECS Department, University of California, Berkeley},
    Year = {2018},
    Month = {Dec},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-176.html},
    Number = {UCB/EECS-2018-176},
    Abstract = {In this thesis, we study how the maximum entropy framework can provide efficient deep reinforcement learning (deep RL) algorithms that solve tasks consistently and sample efficiently. This framework has several intriguing properties. First, the optimal policies are stochastic, improving exploration and preventing convergence to local optima, particularly when the objective is multimodal. Second, the entropy term provides regularization, resulting in more consistent and robust learning when compared to deterministic methods. Third, maximum entropy policies are composable, that is, two or more policies can be combined, and the resulting policy can be shown to be approximately optimal for the sum of the constituent task rewards. Fourth, the view of maximum entropy RL as probabilistic inference provides a foundation for building hierarchical policies that can solve complex, sparse-reward tasks. In the first part, we will devise new algorithms based on this framework, starting from soft Q-learning, which learns expressive energy-based policies, continuing to soft actor-critic, which provides the simplicity and convenience of actor-critic methods, and ending with an automatic temperature adjustment scheme that practically eliminates the need for hyperparameter tuning, a crucial feature for real-world applications, where tuning can be prohibitively expensive. In the second part, we will discuss extensions enabled by the inherent stochasticity of maximum entropy policies, including compositionality and hierarchical learning. We will demonstrate the effectiveness of the proposed algorithms on both simulated and real-world robotic manipulation and locomotion tasks.}
}

EndNote citation:

%0 Thesis
%A Haarnoja, Tuomas
%T Acquiring Diverse Robot Skills via Maximum Entropy Deep Reinforcement Learning
%I EECS Department, University of California, Berkeley
%D 2018
%8 December 14
%@ UCB/EECS-2018-176
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-176.html
%F Haarnoja:EECS-2018-176