A Learning-Based Approach to Safety for Uncertain Robotic Systems

THIS REPORT HAS BEEN WITHDRAWN

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2018-40
May 10, 2018

Robotic systems are becoming more pervasive and have the potential to significantly improve human lives. However, for these benefits to be realized, it is critical that the safe operation of these systems be guaranteed. Reachability analysis has proven to be an effective tool for providing safety certificates for dynamical systems, given a model of the system. A major challenge in assuring safety is that systems often have uncertainty, arising from complex physical interactions that are hard to model or from a lack of knowledge of the behavior of external agents on which safety may depend.

This thesis uses Hamilton-Jacobi (HJ) reachability analysis to robustly guarantee safety for systems with uncertainty. In the presence of uncertainty, there must be a balance between conservativeness, as it pertains to safety, and performance, as it pertains to other system objectives; we account for this trade-off through reachability analysis as well. In addition, this thesis explores methods for updating the analysis as more data is collected from the robotic system, which ultimately allows for improved performance. We refer to this as learning-based reachability analysis. The thesis concludes with a new HJ reachability formulation that enhances the learning-based analysis. The ideas presented throughout the thesis are demonstrated on a variety of examples.
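As a minimal illustration of the kind of computation HJ reachability involves, the following hypothetical sketch (not code from the thesis, and using an assumed toy setup) runs a discrete-time, grid-based value iteration for a double integrator that must stay inside a corridor. The converged value function is positive exactly on the states from which some admissible control can avoid the failure set, which is the safety certificate reachability analysis provides.

```python
import numpy as np

# Hypothetical toy example: double integrator x1' = x2, x2' = u, |u| <= 1.
# Failure set: |x1| > 1 (leaving the corridor). l(x) = 1 - |x1| is a
# signed "distance" that is negative on the failure set. The update
#   V <- min(l, max_u V(next state))
# is a discrete-time analogue of the HJ safety value iteration.

n = 81
x1 = np.linspace(-1.5, 1.5, n)   # position grid
x2 = np.linspace(-2.0, 2.0, n)   # velocity grid
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
dt = 0.05
l = 1.0 - np.abs(X1)             # negative exactly on the failure set
V = l.copy()

def lookup(V, q1, q2):
    """Nearest-neighbor grid lookup; clipping treats leaving the
    computational domain as failure, which is fine for this demo."""
    i = np.clip(np.rint((q1 - x1[0]) / (x1[1] - x1[0])).astype(int), 0, n - 1)
    j = np.clip(np.rint((q2 - x2[0]) / (x2[1] - x2[0])).astype(int), 0, n - 1)
    return V[i, j]

for _ in range(400):
    # The controller maximizes the future value (tries to stay safe).
    best = np.full_like(V, -np.inf)
    for u in (-1.0, 0.0, 1.0):
        best = np.maximum(best, lookup(V, X1 + X2 * dt, X2 + u * dt))
    V_new = np.minimum(l, best)  # safety = worst value along the trajectory
    if np.allclose(V_new, V):    # fixed point reached
        break
    V = V_new

# {x : V(x) > 0} is the safe set: at rest in the middle of the corridor
# is safe; near the wall with high velocity toward it is not, since even
# full braking (|u| <= 1) cannot stop in time.
```

Note the intuition the example encodes: a state near the wall at high speed is unsafe even though it has not yet failed, because no admissible control can prevent failure. This is exactly the distinction between the failure set and its backward reachable set.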

Advisor: Claire Tomlin

Author Comments: I submitted the author's name as an editor instead of as an author. I have since fixed the error and uploaded another tech report.