JD Co-Reyes

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2021-178

August 10, 2021

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-178.pdf

Building general-purpose RL algorithms that can efficiently solve a wide variety of problems will require encoding the right structure and representations into our models. A key component of our generalization capability is our ability to develop an internal model of the world that can be used for robust prediction and efficient planning. In this thesis, we discuss work that leverages representation learning to learn better predictive models of physical scenes and to enable an agent to generalize to new tasks by planning with the learned model under the framework of model-based RL. We cover two kinds of abstraction that can enable good generalization: state abstraction in the form of object-level representations, and temporal abstraction in the form of skill representations for hierarchical RL. By incorporating these abstractions into our models, we can achieve efficient learning and combinatorial generalization on long-horizon, multi-stage problems. We also discuss the role of meta-learning in automatically learning the right structure for general RL algorithms. By leveraging large-scale evolutionary computation, we can meta-learn general-purpose RL algorithms that achieve better sample efficiency and final performance across a wide variety of tasks. Finally, we cover how these internal models can be used to compute the RL objective itself and to train general RL agents with complex behavior without having to design the reward function.
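To make the model-based RL framework the abstract refers to concrete, here is a minimal, hypothetical sketch (not code from the thesis): a toy learned dynamics model paired with a random-shooting planner that selects actions by simulating candidate action sequences under the model. All names (DynamicsModel, plan_random_shooting) are invented for illustration.

```python
import numpy as np

class DynamicsModel:
    """Toy learned model: predicts next state and reward from (state, action).

    A linear map stands in for the learned latent dynamics network; the
    reward model here is a placeholder, not the thesis's formulation.
    """
    def __init__(self, state_dim, action_dim):
        self.W = np.zeros((state_dim, state_dim + action_dim))

    def predict(self, state, action):
        next_state = self.W @ np.concatenate([state, action])
        reward = -np.linalg.norm(next_state)  # placeholder reward
        return next_state, reward

def plan_random_shooting(model, state, action_dim, horizon=10, n_candidates=100):
    """Return the first action of the best random action sequence under the model."""
    best_return, best_action = -np.inf, None
    for _ in range(n_candidates):
        actions = np.random.uniform(-1, 1, size=(horizon, action_dim))
        s, total = state.copy(), 0.0
        for a in actions:  # roll the sequence forward in the learned model
            s, r = model.predict(s, a)
            total += r
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

# Example usage: plan one action for a toy 4-D state, 2-D action problem.
model = DynamicsModel(state_dim=4, action_dim=2)
action = plan_random_shooting(model, np.zeros(4), action_dim=2)
```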

Advisor: Sergey Levine


BibTeX citation:

@phdthesis{Co-Reyes:EECS-2021-178,
    Author= {Co-Reyes, JD},
    Title= {Building Reinforcement Learning Algorithms that Generalize: From Latent Dynamics Models to Meta-Learning},
    School= {EECS Department, University of California, Berkeley},
    Year= {2021},
    Month= {Aug},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-178.html},
    Number= {UCB/EECS-2021-178},
    Abstract= {Building general-purpose RL algorithms that can efficiently solve a wide variety of problems will require encoding the right structure and representations into our models. A key component of our generalization capability is our ability to develop an internal model of the world that can be used for robust prediction and efficient planning. In this thesis, we discuss work that leverages representation learning to learn better predictive models of physical scenes and to enable an agent to generalize to new tasks by planning with the learned model under the framework of model-based RL. We cover two kinds of abstraction that can enable good generalization: state abstraction in the form of object-level representations, and temporal abstraction in the form of skill representations for hierarchical RL. By incorporating these abstractions into our models, we can achieve efficient learning and combinatorial generalization on long-horizon, multi-stage problems. We also discuss the role of meta-learning in automatically learning the right structure for general RL algorithms. By leveraging large-scale evolutionary computation, we can meta-learn general-purpose RL algorithms that achieve better sample efficiency and final performance across a wide variety of tasks. Finally, we cover how these internal models can be used to compute the RL objective itself and to train general RL agents with complex behavior without having to design the reward function.},
}

EndNote citation:

%0 Thesis
%A Co-Reyes, JD 
%T Building Reinforcement Learning Algorithms that Generalize: From Latent Dynamics Models to Meta-Learning
%I EECS Department, University of California, Berkeley
%D 2021
%8 August 10
%@ UCB/EECS-2021-178
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-178.html
%F Co-Reyes:EECS-2021-178