Self-Supervision for Reinforcement Learning

Parsa Mahmoudieh

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2017-51

May 11, 2017

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-51.pdf

Reinforcement learning optimizes policies for expected cumulative reward. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, making it a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pretraining and joint optimization improve the data efficiency and policy returns of end-to-end reinforcement learning.
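The core idea described in the abstract, adding self-supervised auxiliary losses alongside the reinforcement-learning objective, can be sketched as follows. This is an illustrative example in PyTorch, not the report's implementation: it pairs a REINFORCE-style policy-gradient loss with an inverse-dynamics auxiliary loss on a shared encoder. The network sizes, the choice of inverse dynamics as the auxiliary task, and the aux_weight coefficient are assumptions made for the sketch.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """Feature encoder shared by the policy head and the auxiliary head."""
    def __init__(self, obs_dim, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim), nn.ReLU())

    def forward(self, obs):
        return self.net(obs)

class Agent(nn.Module):
    def __init__(self, obs_dim, n_actions, feat_dim=64):
        super().__init__()
        self.encoder = SharedEncoder(obs_dim, feat_dim)
        self.policy = nn.Linear(feat_dim, n_actions)        # action logits
        self.inverse = nn.Linear(2 * feat_dim, n_actions)   # predicts a_t from (s_t, s_{t+1})

    def losses(self, obs, next_obs, actions, returns, aux_weight=0.1):
        feat, next_feat = self.encoder(obs), self.encoder(next_obs)

        # REINFORCE-style policy-gradient surrogate (no baseline, for brevity);
        # this term depends on reward through the sampled returns.
        logp = F.log_softmax(self.policy(feat), dim=-1)
        pg_loss = -(returns * logp.gather(1, actions.unsqueeze(1)).squeeze(1)).mean()

        # Self-supervised inverse-dynamics loss: classify the taken action from
        # consecutive state features. This supervision is available at every
        # transition, with or without reward.
        inv_logits = self.inverse(torch.cat([feat, next_feat], dim=-1))
        aux_loss = F.cross_entropy(inv_logits, actions)

        # Joint optimization: reward-driven and self-supervised terms share the encoder.
        return pg_loss + aux_weight * aux_loss

The same auxiliary term can also be minimized on its own before any reward is seen, which corresponds to the self-supervised pretraining setting mentioned in the abstract.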

Advisor: Trevor Darrell


BibTeX citation:

@techreport{Mahmoudieh:EECS-2017-51,
    Author = {Mahmoudieh, Parsa},
    Title = {Self-Supervision for Reinforcement Learning},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2017},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-51.html},
    Number = {UCB/EECS-2017-51},
    Abstract = {Reinforcement learning optimizes policies for expected cumulative reward. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, making it a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pretraining and joint optimization improve the data efficiency and policy returns of end-to-end reinforcement learning.},
}

EndNote citation:

%0 Report
%A Mahmoudieh, Parsa 
%T Self-Supervision for Reinforcement Learning
%I EECS Department, University of California, Berkeley
%D 2017
%8 May 11
%@ UCB/EECS-2017-51
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-51.html
%F Mahmoudieh:EECS-2017-51