Mobile Robot Learning

Gregory Kahn

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2020-203
December 16, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-203.pdf

In order to create mobile robots that can autonomously navigate real-world environments, we need generalizable perception and control systems that can reason about the outcomes of navigational decisions. Learning-based methods, in which the robot learns to navigate by observing the outcomes of its navigational decisions in the real world, offer considerable promise for obtaining these intelligent navigation systems. However, many challenges impede mobile robots from autonomously learning to act in the real world, in particular (1) sample efficiency---how can the robot learn using a limited amount of data? (2) supervision---how do we tell the robot what to do? and (3) safety---how do we ensure the robot and its environment are not damaged or destroyed during learning?

In this thesis, we will present deep reinforcement learning methods for addressing these real-world mobile robot learning challenges. At the core of these methods is a predictive model, which takes as input the robot's current sensor observations and predicts future navigational outcomes; this predictive model can then be used for planning and control. We will show how this framework can address the challenges of sample efficiency, supervision, and safety to enable ground and aerial robots to navigate complex indoor and outdoor environments.
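The predict-then-plan loop described above can be sketched as follows. This is a minimal illustration only: the thesis learns the predictive model with a deep neural network, whereas the toy linear dynamics, the goal-distance reward, and the random-shooting planner below are assumptions chosen to keep the example self-contained.

```python
import math
import random

def predict_outcomes(obs, action_sequence):
    """Stand-in for the learned predictive model: roll the current
    observation forward under a candidate action sequence.  (The thesis
    uses a learned neural network; this linear dynamics model is only
    an illustrative assumption.)"""
    state = list(obs)
    outcomes = []
    for action in action_sequence:
        state = [s + 0.1 * a for s, a in zip(state, action)]
        outcomes.append(state)
    return outcomes

def plan(obs, goal, horizon=10, num_candidates=1000, seed=0):
    """Random-shooting model-predictive control: sample candidate action
    sequences, score each by the model's predicted outcome, and return
    the first action of the best sequence (the robot then replans at
    the next time step)."""
    rng = random.Random(seed)
    best_action, best_score = None, -math.inf
    for _ in range(num_candidates):
        seq = [[rng.uniform(-1.0, 1.0) for _ in obs] for _ in range(horizon)]
        final = predict_outcomes(obs, seq)[-1]
        # Assumed reward: negative distance of the predicted endpoint to the goal.
        score = -math.dist(final, goal)
        if score > best_score:
            best_score, best_action = score, seq[0]
    return best_action
```

Replanning at every step in this fashion lets the robot correct for prediction errors online, which is one reason model-based approaches of this kind can be sample-efficient: the model is reused for many planning queries rather than a policy being trained from scratch.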

Advisors: Pieter Abbeel and Sergey Levine


BibTeX citation:

@phdthesis{Kahn:EECS-2020-203,
    Author = {Kahn, Gregory},
    Title = {Mobile Robot Learning},
    School = {EECS Department, University of California, Berkeley},
    Year = {2020},
    Month = {Dec},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-203.html},
    Number = {UCB/EECS-2020-203},
    Abstract = {In order to create mobile robots that can autonomously navigate real-world environments, we need generalizable perception and control systems that can reason about the outcomes of navigational decisions. Learning-based methods, in which the robot learns to navigate by observing the outcomes of its navigational decisions in the real world, offer considerable promise for obtaining these intelligent navigation systems. However, many challenges impede mobile robots from autonomously learning to act in the real world, in particular (1) sample efficiency---how can the robot learn using a limited amount of data? (2) supervision---how do we tell the robot what to do? and (3) safety---how do we ensure the robot and its environment are not damaged or destroyed during learning?

In this thesis, we will present deep reinforcement learning methods for addressing these real-world mobile robot learning challenges. At the core of these methods is a predictive model, which takes as input the robot's current sensor observations and predicts future navigational outcomes; this predictive model can then be used for planning and control. We will show how this framework can address the challenges of sample efficiency, supervision, and safety to enable ground and aerial robots to navigate complex indoor and outdoor environments.}
}

EndNote citation:

%0 Thesis
%A Kahn, Gregory
%T Mobile Robot Learning
%I EECS Department, University of California, Berkeley
%D 2020
%8 December 16
%@ UCB/EECS-2020-203
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-203.html
%F Kahn:EECS-2020-203