Varun Tolani

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2018-69

May 17, 2018

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-69.pdf

We introduce an autonomous navigation framework for ground-based mobile robots that incorporates a known dynamics model into training, allows for planning in unknown, partially observable environments, and solves the full navigation problem of goal-directed, collision-avoidant movement on a robot with complex, non-linear dynamics. We leverage visual semantics through a trained policy that, given a desired goal location and a first-person image of the environment, predicts a low-frequency guiding control, or waypoint. We use the waypoint produced by our policy, along with robust feedback controllers and known dynamics models, to generate high-frequency control outputs. Our approach allows visual semantics to be learned during training while providing a simple methodology for incorporating robust dynamics models into training. Our experiments demonstrate that our method is able to reason about the statistics of the visual world, allowing for effective planning in unknown spaces. Additionally, we demonstrate that our formulation is robust to the particulars of low-level control, achieving performance over twice that of a comparable end-to-end learning method.
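
The abstract describes a two-level control structure: a learned policy maps a first-person image and a goal to a waypoint at low frequency, and a feedback controller built on the known dynamics model tracks that waypoint at high frequency. The Python sketch below is only an illustration of that structure under assumed details; the policy stub, the unicycle dynamics, the proportional tracking controller, the gains, and the loop rates are placeholders and not the report's actual implementation.

import numpy as np

def predict_waypoint(image, goal_xy):
    # Stand-in for the trained policy: maps (image, relative goal) to a
    # waypoint (x, y, theta) in the robot frame. A real system would use
    # a CNN here; this stub simply steps toward the goal.
    direction = goal_xy / (np.linalg.norm(goal_xy) + 1e-6)
    return np.array([1.5 * direction[0], 1.5 * direction[1],
                     np.arctan2(direction[1], direction[0])])

def unicycle_step(state, control, dt):
    # Assumed known dynamics model: state = (x, y, theta), control = (v, omega).
    x, y, theta = state
    v, omega = control
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + omega * dt])

def feedback_control(state, waypoint, k_v=0.8, k_w=1.5):
    # Simple proportional tracking toward the waypoint; a stand-in for the
    # robust model-based feedback controller described in the abstract.
    dx, dy = waypoint[0] - state[0], waypoint[1] - state[1]
    heading_err = np.arctan2(dy, dx) - state[2]
    v = k_v * np.hypot(dx, dy)
    omega = k_w * np.arctan2(np.sin(heading_err), np.cos(heading_err))
    return np.array([v, omega])

# Low-frequency planning loop wrapping a high-frequency control loop
# (example rates only).
state = np.zeros(3)
goal = np.array([5.0, 2.0])
for _ in range(10):                      # planning steps (~1 Hz)
    image = None                         # placeholder for the camera frame
    waypoint = predict_waypoint(image, goal - state[:2])
    waypoint[:2] += state[:2]            # express the waypoint in world frame
    for _ in range(100):                 # control steps (~100 Hz)
        u = feedback_control(state, waypoint)
        state = unicycle_step(state, u, dt=0.01)
print("final state:", state)
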


BibTeX citation:

@mastersthesis{Tolani:EECS-2018-69,
    Author= {Tolani, Varun},
    Title= {Visual Model Predictive Control},
    School= {EECS Department, University of California, Berkeley},
    Year= {2018},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-69.html},
    Number= {UCB/EECS-2018-69},
    Abstract= {We introduce an autonomous navigation framework for ground-based mobile robots that incorporates a known dynamics model into training, allows for planning in unknown, partially observable environments, and solves the full navigation problem of goal-directed, collision-avoidant movement on a robot with complex, non-linear dynamics. We leverage visual semantics through a trained policy that, given a desired goal location and a first-person image of the environment, predicts a low-frequency guiding control, or waypoint. We use the waypoint produced by our policy, along with robust feedback controllers and known dynamics models, to generate high-frequency control outputs. Our approach allows visual semantics to be learned during training while providing a simple methodology for incorporating robust dynamics models into training. Our experiments demonstrate that our method is able to reason about the statistics of the visual world, allowing for effective planning in unknown spaces. Additionally, we demonstrate that our formulation is robust to the particulars of low-level control, achieving performance over twice that of a comparable end-to-end learning method.},
}

EndNote citation:

%0 Thesis
%A Tolani, Varun 
%T Visual Model Predictive Control
%I EECS Department, University of California, Berkeley
%D 2018
%8 May 17
%@ UCB/EECS-2018-69
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-69.html
%F Tolani:EECS-2018-69