Learning, Planning, and Acting with Models

Thanard Kurutach

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2021-39
May 11, 2021

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-39.pdf

While the classical approach to planning and control has enabled robots to achieve various challenging control tasks, it requires domain experts to specify transition dynamics and to infer hand-designed symbolic states from raw observations. Therefore, bringing such methods into diverse, unstructured environments remains a grand challenge. Recent successes in computer vision and natural language processing have shed light on how robot learning could be pivotal in tackling such complexity. However, deploying learning-based systems poses many challenges, such as (1) data efficiency -- how to minimize the amount of training data required, (2) generalization -- how to handle tasks that the robots are not explicitly trained on, and (3) long-horizon tasks -- how to simplify the optimization complexity when presented with temporally extended tasks.

In this thesis, we present learning-and-planning methods that utilize deep neural networks to model environments in different forms in order to facilitate planning and acting. We validate the efficacy of our methods in data efficiency, generalization, and long-horizon tasks on simulated locomotion benchmarks, navigation tasks, and real robot manipulation tasks.

Advisors: Stuart J. Russell and Pieter Abbeel


BibTeX citation:

@phdthesis{Kurutach:EECS-2021-39,
    Author = {Kurutach, Thanard},
    Title = {Learning, Planning, and Acting with Models},
    School = {EECS Department, University of California, Berkeley},
    Year = {2021},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-39.html},
    Number = {UCB/EECS-2021-39},
    Abstract = {While the classical approach to planning and control has enabled robots to achieve various challenging control tasks, it requires domain experts to specify transition dynamics and to infer hand-designed symbolic states from raw observations. Therefore, bringing such methods into diverse, unstructured environments remains a grand challenge. Recent successes in computer vision and natural language processing have shed light on how robot learning could be pivotal in tackling such complexity. However, deploying learning-based systems poses many challenges, such as (1) data efficiency -- how to minimize the amount of training data required, (2) generalization -- how to handle tasks that the robots are not explicitly trained on, and (3) long-horizon tasks -- how to simplify the optimization complexity when presented with temporally extended tasks.

In this thesis, we present learning-and-planning methods that utilize deep neural networks to model environments in different forms in order to facilitate planning and acting. We validate the efficacy of our methods in data efficiency, generalization, and long-horizon tasks on simulated locomotion benchmarks, navigation tasks, and real robot manipulation tasks.}
}

EndNote citation:

%0 Thesis
%A Kurutach, Thanard
%T Learning, Planning, and Acting with Models
%I EECS Department, University of California, Berkeley
%D 2021
%8 May 11
%@ UCB/EECS-2021-39
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-39.html
%F Kurutach:EECS-2021-39