Katie Luo

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2019-81

May 17, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-81.pdf

Inverse reinforcement learning holds the promise of automated reward acquisition from demonstrations, but the learned rewards generally cannot transfer between tasks. We propose a framework that learns a language-aware reward function in a multi-task setting.

This work presents Goal-Induced Inverse Reinforcement Learning (GIIRL), an IRL framework that learns a transferable reward function and performs well compared to imitation-learning algorithms. By learning rewards in the IRL framework, our algorithm obtains a more generalizable reward function that can solve different tasks by changing only the goal specification. Indeed, this work shows that the learned reward function changes to match the task at hand and can be toggled by the given goal instruction, mapping to the true, underlying reward function that the instruction intends. This work also shows that the learned reward is shaped, allowing for easier learning by reinforcement learning agents. Furthermore, by training the policy and reward models jointly, we efficiently obtain a policy that performs on par with other imitation-learning policies. GIIRL shows comparable, if not better, results than behavioral cloning.
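The central claim above, a single reward function that retargets when the goal instruction changes, can be pictured with a small goal-conditioned reward network. The sketch below is a minimal illustration assuming a PyTorch-style setup; the class name, dimensions, and architecture are assumptions for exposition, not the report's actual implementation.

    # Minimal sketch of a goal-conditioned reward model (illustrative only;
    # names, dimensions, and architecture are assumptions, not from the report).
    import torch
    import torch.nn as nn

    class GoalConditionedReward(nn.Module):
        """Reward r(s, g): scores a state against a language goal instruction."""

        def __init__(self, state_dim: int, goal_vocab: int, embed_dim: int = 64):
            super().__init__()
            # Embed the goal instruction (a bag of token ids) into a vector.
            self.goal_embed = nn.EmbeddingBag(goal_vocab, embed_dim)
            # Score the concatenated (state, goal) representation.
            self.net = nn.Sequential(
                nn.Linear(state_dim + embed_dim, 128),
                nn.ReLU(),
                nn.Linear(128, 1),
            )

        def forward(self, state: torch.Tensor, goal_tokens: torch.Tensor) -> torch.Tensor:
            g = self.goal_embed(goal_tokens)                  # (B, embed_dim)
            return self.net(torch.cat([state, g], dim=-1))    # (B, 1) reward estimate

    # Usage: the same network, fed a different instruction, yields a different
    # reward landscape, which is the transfer property the abstract describes.
    reward_fn = GoalConditionedReward(state_dim=10, goal_vocab=500)
    states = torch.randn(4, 10)
    goal = torch.randint(0, 500, (4, 6))   # toy tokenized instruction
    print(reward_fn(states, goal).shape)   # torch.Size([4, 1])

Under this factorization, transferring to a new task amounts to feeding a new instruction; the state pathway and scoring head stay fixed.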

Advisor: Sergey Levine


BibTeX citation:

@mastersthesis{Luo:EECS-2019-81,
    Author= {Luo, Katie},
    Title= {Goal-Induced Inverse Reinforcement Learning},
    School= {EECS Department, University of California, Berkeley},
    Year= {2019},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-81.html},
    Number= {UCB/EECS-2019-81},
    Abstract= {Inverse reinforcement learning holds the promise of automated reward acquisition from demonstrations, but the learned rewards generally cannot transfer between tasks. We propose a framework that learns a language-aware reward function in a multi-task setting.

This work presents Goal-Induced Inverse Reinforcement Learning (GIIRL), an IRL framework that learns a transferable reward function and performs well compared to imitation-learning algorithms. By learning rewards in the IRL framework, our algorithm obtains a more generalizable reward function that can solve different tasks by changing only the goal specification. Indeed, this work shows that the learned reward function changes to match the task at hand and can be toggled by the given goal instruction, mapping to the true, underlying reward function that the instruction intends. This work also shows that the learned reward is shaped, allowing for easier learning by reinforcement learning agents. Furthermore, by training the policy and reward models jointly, we efficiently obtain a policy that performs on par with other imitation-learning policies. GIIRL shows comparable, if not better, results than behavioral cloning.},
}

EndNote citation:

%0 Thesis
%A Luo, Katie 
%T Goal-Induced Inverse Reinforcement Learning
%I EECS Department, University of California, Berkeley
%D 2019
%8 May 17
%@ UCB/EECS-2019-81
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-81.html
%F Luo:EECS-2019-81