Learning and Analyzing Representations for Meta-Learning and Control

Kate Rakelly

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2020-224
December 18, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-224.pdf

While artificial learning agents have demonstrated impressive capabilities, these successes are typically realized in narrowly defined problems and require large amounts of labeled data. Our agents struggle to leverage what they already know to generalize to new inputs and acquire new skills quickly, abilities quite natural to humans. To learn and leverage the structure present in the world, we study data-driven abstractions of states and tasks. We begin with unsupervised state representation learning, in which the goal is to learn a compact state representation that discards irrelevant information but preserves the information needed to learn the optimal policy. Surprisingly, we find that several commonly used objectives are not guaranteed to produce sufficient representations, and demonstrate that our theoretical findings are reflected empirically in simple visual RL domains. Next, we turn to learning abstractions of tasks, a problem typically studied as meta-learning. Meta-learning endows artificial agents with this capability by leveraging a set of related training tasks to learn an adaptation mechanism that can acquire new skills from little supervision. We adopt an inference perspective that casts meta-learning as learning probabilistic task representations, framing the problem of learning to learn as learning to infer hidden task variables from experience. Leveraging this viewpoint, we propose meta-learning algorithms for diverse applications: image segmentation, state-based robotic control, and robotic control from sensory observations. We find that an inference approach to these problems constitutes an efficient and practical choice, while also revealing deeper connections between meta-learning and other concepts in statistical learning.

Advisor: Sergey Levine


BibTeX citation:

@phdthesis{Rakelly:EECS-2020-224,
    Author = {Rakelly, Kate},
    Title = {Learning and Analyzing Representations for Meta-Learning and Control},
    School = {EECS Department, University of California, Berkeley},
    Year = {2020},
    Month = {Dec},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-224.html},
    Number = {UCB/EECS-2020-224},
    Abstract = {While artificial learning agents have demonstrated impressive capabilities, these successes are typically realized in narrowly defined problems and require large amounts of labeled data. Our agents struggle to leverage what they already know to generalize to new inputs and acquire new skills quickly, abilities quite natural to humans. To learn and leverage the structure present in the world, we study data-driven abstractions of states and tasks. We begin with unsupervised state representation learning, in which the goal is to learn a compact state representation that discards irrelevant information but preserves the information needed to learn the optimal policy. Surprisingly, we find that several commonly used objectives are not guaranteed to produce sufficient representations, and demonstrate that our theoretical findings are reflected empirically in simple visual RL domains.
Next, we turn to learning abstractions of tasks, a problem typically studied as meta-learning. Meta-learning endows artificial agents with this capability by leveraging a set of related training tasks to learn an adaptation mechanism that can acquire new skills from little supervision. We adopt an inference perspective that casts meta-learning as learning probabilistic task representations, framing the problem of learning to learn as learning to infer hidden task variables from experience. Leveraging this viewpoint, we propose meta-learning algorithms for diverse applications: image segmentation, state-based robotic control, and robotic control from sensory observations. We find that an inference approach to these problems constitutes an efficient and practical choice, while also revealing deeper connections between meta-learning and other concepts in statistical learning.}
}

EndNote citation:

%0 Thesis
%A Rakelly, Kate
%T Learning and Analyzing Representations for Meta-Learning and Control
%I EECS Department, University of California, Berkeley
%D 2020
%8 December 18
%@ UCB/EECS-2020-224
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-224.html
%F Rakelly:EECS-2020-224