Model-based Deep Reinforcement Learning for Robotic Systems

Anusha Nagabandi

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-158

August 13, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-158.pdf

Deep learning has shown promising results in robotics, but we are still far from having intelligent systems that can operate in the unstructured settings of the real world, where disturbances, variations, and unobserved factors lead to a dynamic environment. The premise of the work in this thesis is that model-based deep RL provides an efficient and effective framework for making sense of the world, thus allowing for reasoning and adaptation capabilities that are necessary for successful operation in the dynamic settings of the world.

We first build up a model-based deep RL framework and demonstrate that it can indeed allow for efficient skill acquisition, as well as the ability to repurpose models to solve a variety of tasks. We then scale up these approaches to enable locomotion with a 6-DoF legged robot on varying terrains in the real world, as well as dexterous manipulation with a 24-DoF anthropomorphic hand in the real world. Next, we focus on the inevitable mismatch between an agent's training conditions and the test conditions in which it may actually be deployed, thus illuminating the need for adaptive systems. Inspired by the ability of humans and animals to adapt quickly in the face of unexpected changes, we present a meta-learning algorithm within this model-based RL framework to enable online adaptation of large, high-capacity models using only small amounts of data from the new task. We demonstrate these fast adaptation capabilities in both simulation and the real world, with experiments such as a 6-legged robot adapting online to an unexpected payload or to the sudden loss of a leg. We then further extend the capabilities of our robotic systems by enabling the agents to reason directly from raw image observations. Bridging the benefits of representation learning techniques with the adaptation capabilities of meta-RL, we present a unified framework for effective meta-RL from images. With robotic arms in the real world that learn peg insertion and Ethernet cable insertion to varying targets, we show the fast acquisition of new skills directly from raw image observations. Finally, we conclude by discussing the key limitations of our existing approaches and present promising directions for future work in the area of model-based deep RL for robotic systems.
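As a rough illustration of the core loop the abstract refers to (learn a dynamics model from transitions, plan through it with sampling-based MPC, and adapt the model online from small amounts of recent data), the sketch below uses PyTorch with placeholder names (DynamicsModel, mpc_action, adapt_online), illustrative sizes, and a generic reward_fn. It is not the thesis code, only a minimal rendering of the idea under those assumptions.

```python
# Minimal sketch (assumed names and sizes, not the thesis implementation):
# learn a neural dynamics model from observed transitions, plan with it via
# random-shooting MPC, and adapt it online from a small batch of recent data.
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Predicts the next state as the current state plus a learned delta."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return state + self.net(torch.cat([state, action], dim=-1))

def train_step(model, optimizer, states, actions, next_states):
    """One supervised update on a batch of observed (s, a, s') transitions."""
    pred = model(states, actions)
    loss = ((pred - next_states) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def mpc_action(model, state, reward_fn, action_dim,
               horizon=10, n_candidates=1000, low=-1.0, high=1.0):
    """Random-shooting MPC: sample action sequences, roll them out through the
    learned model, and return the first action of the highest-reward sequence."""
    with torch.no_grad():
        actions = torch.empty(n_candidates, horizon, action_dim).uniform_(low, high)
        states = state.expand(n_candidates, -1).clone()
        total_reward = torch.zeros(n_candidates)
        for t in range(horizon):
            next_states = model(states, actions[:, t])
            total_reward += reward_fn(states, actions[:, t], next_states)
            states = next_states
        return actions[torch.argmax(total_reward), 0]

def adapt_online(model, recent_transitions, lr=1e-3, steps=5):
    """Simplified stand-in for the meta-learned online adaptation: a few
    gradient steps using only the most recent transitions from the new situation."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    states, actions, next_states = recent_transitions
    for _ in range(steps):
        train_step(model, optimizer, states, actions, next_states)
```

Because random-shooting MPC replans at every time step, imperfect model predictions are corrected by feedback from the environment rather than compounding over a long open-loop rollout, which is part of what makes the framework sample-efficient and repurposable across tasks by simply swapping the reward function.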

Advisors: Ronald S. Fearing and Sergey Levine


BibTeX citation:

@phdthesis{Nagabandi:EECS-2020-158,
    Author= {Nagabandi, Anusha},
    Title= {Model-based Deep Reinforcement Learning for Robotic Systems},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {Aug},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-158.html},
    Number= {UCB/EECS-2020-158},
    Abstract= {Deep learning has shown promising results in robotics, but we are still far from having intelligent systems that can operate in the unstructured settings of the real world, where disturbances, variations, and unobserved factors lead to a dynamic environment. The premise of the work in this thesis is that model-based deep RL provides an efficient and effective framework for making sense of the world, thus allowing for reasoning and adaptation capabilities that are necessary for successful operation in the dynamic settings of the world.

We first build up a model-based deep RL framework and demonstrate that it can indeed allow for efficient skill acquisition, as well as the ability to repurpose models to solve a variety of tasks. We then scale up these approaches to enable locomotion with a 6-DoF legged robot on varying terrains in the real world, as well as dexterous manipulation with a 24-DoF anthropomorphic hand in the real world. Next, we focus on the inevitable mismatch between an agent's training conditions and the test conditions in which it may actually be deployed, thus illuminating the need for adaptive systems. Inspired by the ability of humans and animals to adapt quickly in the face of unexpected changes, we present a meta-learning algorithm within this model-based RL framework to enable online adaptation of large, high-capacity models using only small amounts of data from the new task. We demonstrate these fast adaptation capabilities in both simulation and the real world, with experiments such as a 6-legged robot adapting online to an unexpected payload or to the sudden loss of a leg. We then further extend the capabilities of our robotic systems by enabling the agents to reason directly from raw image observations. Bridging the benefits of representation learning techniques with the adaptation capabilities of meta-RL, we present a unified framework for effective meta-RL from images. With robotic arms in the real world that learn peg insertion and Ethernet cable insertion to varying targets, we show the fast acquisition of new skills directly from raw image observations. Finally, we conclude by discussing the key limitations of our existing approaches and present promising directions for future work in the area of model-based deep RL for robotic systems.},
}

EndNote citation:

%0 Thesis
%A Nagabandi, Anusha 
%T Model-based Deep Reinforcement Learning for Robotic Systems
%I EECS Department, University of California, Berkeley
%D 2020
%8 August 13
%@ UCB/EECS-2020-158
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-158.html
%F Nagabandi:EECS-2020-158