Benjamin Kha

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2019-54

May 17, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-54.pdf

Inverse reinforcement learning (Ng & Russell, 2000) is the setting in which an agent infers a reward function from expert demonstrations. Meta-learning is the problem in which an agent is trained on a collection of different but related environments or tasks and must learn to adapt quickly to new tasks. Meta inverse reinforcement learning (meta IRL) is therefore the setting in which an agent infers reward functions that generalize across multiple tasks. However, the rewards learned by current meta IRL algorithms appear highly susceptible to overfitting on the training tasks, and during finetuning they are sometimes unable to adapt quickly to the test environment.
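For background, one common way to formalize the IRL problem (the maximum-entropy formulation that AIRL-style methods build on; the notation here is illustrative rather than taken from the report) is:

% Maximum-entropy IRL: choose reward parameters \theta so that the expert
% demonstrations \mathcal{D} are likely under the induced trajectory distribution.
\max_{\theta} \; \mathbb{E}_{\tau \sim \mathcal{D}}\big[\log p_{\theta}(\tau)\big],
\qquad
p_{\theta}(\tau) \propto \exp\Big(\sum_{t} r_{\theta}(s_t, a_t)\Big)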

In this paper, we contribute a general framework for approaching meta IRL by jointly meta-learning both policies and reward networks. We first show that applying this modification with a gradient-based approach improves upon an existing meta IRL algorithm, Meta-AIRL (Gleave & Habryka, 2018). We also propose an alternative method based on contextual RNN meta-learners. We evaluate our algorithms against a single-task baseline and the original Meta-AIRL algorithm on a collection of continuous control tasks, and we conclude with suggestions for future research.
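To make the joint meta-learning idea concrete, below is a minimal first-order sketch (in the style of Reptile) in which both the policy and reward networks are treated as meta-parameters. This is an illustration under stated assumptions (PyTorch, toy network sizes, a placeholder surrogate loss, and synthetic data standing in for demonstrations), not the report's implementation: in the actual algorithm the reward network would be trained with the AIRL discriminator objective and the policy with a reinforcement learning update.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, out_dim))

# Meta-parameters: both networks are meta-learned jointly.
policy_net = mlp(4, 2)      # state -> action logits (dimensions are illustrative)
reward_net = mlp(4 + 2, 1)  # (state, action) -> scalar reward

def surrogate_loss(policy, reward, states, actions):
    # Placeholder for the per-task objectives: in AIRL the reward net is trained
    # with a discriminator loss and the policy with a policy-gradient loss.
    r = reward(torch.cat([states, actions], dim=-1))
    logp = torch.log_softmax(policy(states), dim=-1)
    chosen = logp.gather(1, actions.argmax(dim=-1, keepdim=True))
    return -(r.detach() * chosen).mean() + (r ** 2).mean()

inner_lr, meta_lr, inner_steps = 1e-2, 1e-1, 5

for meta_iter in range(100):
    # Sample a training task; random data stands in for demonstrations/rollouts.
    states = torch.randn(32, 4)
    actions = F.one_hot(torch.randint(0, 2, (32,)), 2).float()

    # Inner loop: adapt copies of both networks to the sampled task.
    fast_policy, fast_reward = copy.deepcopy(policy_net), copy.deepcopy(reward_net)
    inner_opt = torch.optim.SGD(
        list(fast_policy.parameters()) + list(fast_reward.parameters()), lr=inner_lr)
    for _ in range(inner_steps):
        inner_opt.zero_grad()
        surrogate_loss(fast_policy, fast_reward, states, actions).backward()
        inner_opt.step()

    # Outer (Reptile-style) update: move meta-parameters toward the adapted parameters.
    for net, fast in ((policy_net, fast_policy), (reward_net, fast_reward)):
        for p, fp in zip(net.parameters(), fast.parameters()):
            p.data.add_(meta_lr * (fp.data - p.data))

A MAML-style variant would instead backpropagate the post-adaptation loss through the inner updates; the structural point is the same either way: both the policy and the reward network are adapted per task, and both receive a meta-update.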

Advisor: Stuart J. Russell


BibTeX citation:

@mastersthesis{Kha:EECS-2019-54,
    Author= {Kha, Benjamin},
    Title= {Policy Transfer Algorithms for Meta Inverse Reinforcement Learning},
    School= {EECS Department, University of California, Berkeley},
    Year= {2019},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-54.html},
    Number= {UCB/EECS-2019-54},
    Abstract= {Inverse reinforcement learning (Ng & Russell, 2000) is the setting in which an agent infers a reward function from expert demonstrations. Meta-learning is the problem in which an agent is trained on a collection of different but related environments or tasks and must learn to adapt quickly to new tasks. Meta inverse reinforcement learning (meta IRL) is therefore the setting in which an agent infers reward functions that generalize across multiple tasks. However, the rewards learned by current meta IRL algorithms appear highly susceptible to overfitting on the training tasks, and during finetuning they are sometimes unable to adapt quickly to the test environment.

In this paper, we contribute a general framework for approaching meta IRL by jointly meta-learning both policies and reward networks. We first show that applying this modification with a gradient-based approach improves upon an existing meta IRL algorithm, Meta-AIRL (Gleave & Habryka, 2018). We also propose an alternative method based on contextual RNN meta-learners. We evaluate our algorithms against a single-task baseline and the original Meta-AIRL algorithm on a collection of continuous control tasks, and we conclude with suggestions for future research.},
}

EndNote citation:

%0 Thesis
%A Kha, Benjamin 
%T Policy Transfer Algorithms for Meta Inverse Reinforcement Learning
%I EECS Department, University of California, Berkeley
%D 2019
%8 May 17
%@ UCB/EECS-2019-54
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-54.html
%F Kha:EECS-2019-54