DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies

Soroush Nasiriany

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-151

August 13, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-151.pdf

Reinforcement learning is focused on the problem of learning a near-optimal policy for a given task. But can we use reinforcement learning to instead learn general-purpose policies that can perform a wide range of different tasks, resulting in flexible and reusable skills? Contextual policies provide this capability in principle, but the representation of the context determines the degree of generalization and expressivity. Categorical contexts preclude generalization to entirely new tasks. Goal-conditioned policies may enable some generalization, but cannot capture all tasks that might be desired. In this paper, we propose goal distributions as a general and broadly applicable task representation suitable for contextual policies. Goal distributions are general in the sense that they can represent any state-based reward function when equipped with an appropriate distribution class, while the particular choice of distribution class allows us to trade off expressivity and learnability. We develop an off-policy algorithm called distribution-conditioned reinforcement learning (DisCo RL) to efficiently learn these policies. We evaluate DisCo RL on a variety of robot manipulation tasks and find that it significantly outperforms prior methods on tasks that require generalization to new goal distributions.
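To make the idea of conditioning on a goal distribution concrete, the sketch below illustrates one simple instantiation: the goal distribution is taken to be a Gaussian over states, and the reward for a state is its log-density under that distribution (up to a constant). This is an illustrative assumption, not the report's exact formulation; the function name disco_reward and the parameterization are hypothetical.

import numpy as np

def disco_reward(state, goal_mean, goal_cov):
    """Reward as the (unnormalized) log-density of `state` under a Gaussian
    goal distribution N(goal_mean, goal_cov).

    Illustrative sketch only: the actual DisCo RL reward and distribution
    parameterization may differ from this simplified version.
    """
    diff = state - goal_mean
    cov_inv = np.linalg.inv(goal_cov)
    # Quadratic term of the Gaussian log-density. The normalizing constant is
    # dropped because it does not depend on the state, so it does not change
    # which policy is optimal for a fixed goal distribution.
    return -0.5 * diff @ cov_inv @ diff

# Example: a 2D state where the goal distribution constrains only the first
# state dimension; a large variance marks the second dimension as "don't care".
state = np.array([1.0, 3.0])
goal_mean = np.array([0.0, 0.0])
goal_cov = np.diag([0.1, 100.0])
print(disco_reward(state, goal_mean, goal_cov))

Under this view, a distribution-conditioned policy or Q-function would receive the flattened parameters of the goal distribution (here, the mean and covariance) alongside the state, so that varying those parameters specifies different tasks without retraining.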

Advisor: Sergey Levine


BibTeX citation:

@mastersthesis{Nasiriany:EECS-2020-151,
    Author= {Nasiriany, Soroush},
    Title= {DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {Aug},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-151.html},
    Number= {UCB/EECS-2020-151},
    Abstract= {Reinforcement learning is focused on the problem of learning a near-optimal policy for a given task. But can we use reinforcement learning to instead learn general-purpose policies that can perform a wide range of different tasks, resulting in flexible and reusable skills? Contextual policies provide this capability in principle, but the representation of the context determines the degree of generalization and expressivity. Categorical contexts preclude generalization to entirely new tasks. Goal-conditioned policies may enable some generalization, but cannot capture all tasks that might be desired. In this paper, we propose goal distributions as a general and broadly applicable task representation suitable for contextual policies. Goal distributions are general in the sense that they can represent any state-based reward function when equipped with an appropriate distribution class, while the particular choice of distribution class allows us to trade off expressivity and learnability. We develop an off-policy algorithm called distribution-conditioned reinforcement learning (DisCo RL) to efficiently learn these policies. We evaluate DisCo RL on a variety of robot manipulation tasks and find that it significantly outperforms prior methods on tasks that require generalization to new goal distributions.},
}

EndNote citation:

%0 Thesis
%A Nasiriany, Soroush 
%T DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies
%I EECS Department, University of California, Berkeley
%D 2020
%8 August 13
%@ UCB/EECS-2020-151
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-151.html
%F Nasiriany:EECS-2020-151