Goal-Driven Dynamics Learning via Bayesian Optimization

Teddey Xiao

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2017-62
May 11, 2017

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-62.pdf

Robotic systems are becoming increasingly complex and commonly operate in poorly understood environments where it is extremely challenging to model or learn their true dynamics. It can therefore be desirable to take a task-specific approach, wherein the focus is on explicitly learning the dynamics model that achieves the best control performance for the task at hand, rather than learning the true system dynamics. In this work, we use Bayesian optimization in an active learning framework: a locally linear dynamics model is learned with the intent of maximizing control performance, and is then used in conjunction with optimal control schemes to efficiently design a controller for the given task. The model is updated iteratively, based on the performance observed in experiments on the physical system, until the desired performance is achieved. We demonstrate the efficacy of the proposed approach through simulations and real experiments on a quadrotor testbed.
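To make the idea concrete, the loop described in the abstract can be sketched in a toy one-dimensional setting: Bayesian optimization proposes a parameter of a linear dynamics *model*, an LQR controller is synthesized from that candidate model, and the candidate is scored by the closed-loop cost observed on the "true" (here, nonlinear and mismatched) system. This is only an illustrative sketch, not the report's implementation; the plant, the scalar parameterization, the RBF/EI choices, and all function names (`lqr_gain`, `true_cost`, `goal_driven_bo`) are assumptions made for this example.

```python
import math
import numpy as np

def lqr_gain(a, b, q=1.0, r=0.1, iters=200):
    # Scalar infinite-horizon discrete LQR for the *model* x' = a x + b u:
    # iterate the Riccati recursion to a fixed point, then form the gain.
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

def true_cost(k, x0=1.0, steps=50, dt=0.1):
    # Closed-loop quadratic cost of u = -k x on the unknown "true" plant,
    # which here has a mismatched input gain and a nonlinear drag term.
    x, cost = x0, 0.0
    for _ in range(steps):
        u = -k * x
        cost += x * x + 0.1 * u * u
        x = x + dt * (0.7 * u - 0.5 * x * abs(x))
    return cost

def rbf(t1, t2, ell=0.3):
    d = t1[:, None] - t2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(T, y, Tq, noise=1e-4):
    # Standard GP regression posterior with a unit-amplitude RBF prior.
    K = rbf(T, T) + noise * np.eye(len(T))
    Ks = rbf(Tq, T)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    # EI acquisition for minimization of the observed closed-loop cost.
    out = np.empty_like(mu)
    for i in range(len(mu)):
        z = (best - mu[i]) / sigma[i]
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        out[i] = (best - mu[i]) * cdf + sigma[i] * pdf
    return out

def goal_driven_bo(n_init=3, n_iters=10, seed=0):
    # BO loop over the model's input gain theta: each candidate model is
    # judged by the control performance it induces, not by prediction fit.
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.1, 2.0, 200)
    T = rng.uniform(0.1, 2.0, n_init)
    y = np.array([true_cost(lqr_gain(1.0, t)) for t in T])
    for _ in range(n_iters):
        ym, ys = y.mean(), y.std() + 1e-9          # normalize targets
        mu, sigma = gp_posterior(T, (y - ym) / ys, grid)
        acq = expected_improvement(mu, sigma, (y.min() - ym) / ys)
        t_next = grid[np.argmax(acq)]
        T = np.append(T, t_next)
        y = np.append(y, true_cost(lqr_gain(1.0, t_next)))
    i = int(np.argmin(y))
    return T[i], y[i]
```

Note that the parameter selected this way need not match the plant's physical input gain: the GP models the map from model parameter to closed-loop cost, so the optimizer favors whichever model yields the best controller, which is the task-specific viewpoint the abstract describes.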

Advisor: Claire Tomlin


BibTeX citation:

@mastersthesis{Xiao:EECS-2017-62,
    Author = {Xiao, Teddey},
    Title = {Goal-Driven Dynamics Learning via Bayesian Optimization},
    School = {EECS Department, University of California, Berkeley},
    Year = {2017},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-62.html},
    Number = {UCB/EECS-2017-62},
    Abstract = {Robotic systems are becoming increasingly complex and commonly act in poorly understood environments where it is extremely challenging to model or learn their true dynamics. Therefore, it might be desirable to take a task-specific approach, wherein the focus is on explicitly learning the dynamics model which achieves the best control performance for the task at hand, rather than learning the true system dynamics. In this work, we use Bayesian optimization in an active learning framework where a locally linear dynamics model is learned with the intent of maximizing the control performance, and then used in conjunction with optimal control schemes to efficiently design a controller for a given task. This model is updated directly in an iterative manner based on the performance observed in experiments on the physical system until a desired performance is achieved. We demonstrate the efficacy of the proposed approach through simulations and real experiments on a quadrotor testbed.}
}

EndNote citation:

%0 Thesis
%A Xiao, Teddey
%T Goal-Driven Dynamics Learning via Bayesian Optimization
%I EECS Department, University of California, Berkeley
%D 2017
%8 May 11
%@ UCB/EECS-2017-62
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-62.html
%F Xiao:EECS-2017-62