Robot Planning and Execution with Unreliable Models

Ellis Ratner

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2023-243

December 1, 2023

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-243.pdf

Many modern robotic systems rely on models to complete their tasks. A model captures the aspects of the environment relevant to the robot and enables it to predict the outcome of any action it may take. With such a model, the robot can plan and execute an efficient behavior to complete a given task.

Unfortunately, no model is perfect. Many real-world phenomena that a robot may encounter, from contact forces such as friction to unpredictable human behavior, are exceedingly difficult to model precisely. Instead, we typically make simplifying assumptions that make it easier to build a model of the environment for the robot to use in planning and execution. These assumptions, however, invariably break down somewhere. Since the robot relies on such simplified models for planning and execution, its performance can suffer as a result of these inaccuracies. We refer to such models as unreliable: the robot's planning and execution systems cannot rely on the model's predictions to guarantee that the robot completes its task in the real world with good performance. It is therefore critical to design the robot's planning and execution systems to be as robust as possible to unreliable models.

To that end, we focus on increasing the robustness of two key aspects of a model-based planning and execution system: first, how to choose a model that best captures the robot's current environment; and, second, how to plan with that model despite the possibility that it is unreliable. In this dissertation, we introduce three new algorithms for increasing the robustness of a robot's planning and execution system to unreliable models of the real world. First, we introduce an approach that enables the robot to choose the best model for its current situation online, from a larger set of possible models. Even this best model, however, may not be perfectly reliable. To address this, we complement our approach with the ability to reason about where the robot should and should not rely on the model, which we describe in the second part of this dissertation. This lets the robot leverage the model where it is accurate, while avoiding regions where the model may lead its plan astray. More specifically, we present two implementations of this idea in the second part of the dissertation, the main difference being the type of model that each approach assumes: the first focuses on deterministic (certain) models, whereas the second focuses on probabilistic (uncertain) models.
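
The online model-selection idea can be illustrated with a minimal, generic sketch: maintain a window of recent one-step prediction errors for each candidate model and prefer the model with the lowest average recent error. This is only an illustrative paradigm under simplifying assumptions (scalar states, known transitions), not the dissertation's actual algorithm; all names here are hypothetical.

```python
from collections import deque

class OnlineModelSelector:
    """Generic online model selection by tracking recent prediction error."""

    def __init__(self, models, window=20):
        # models: dict mapping name -> callable(state, action) -> predicted next state
        self.models = models
        self.errors = {name: deque(maxlen=window) for name in models}

    def observe(self, state, action, next_state):
        # Score every candidate model against the observed transition.
        for name, model in self.models.items():
            predicted = model(state, action)
            self.errors[name].append(abs(predicted - next_state))

    def best_model(self):
        # Return the name of the model with the lowest mean recent error.
        def mean_error(name):
            errs = self.errors[name]
            return sum(errs) / len(errs) if errs else float("inf")
        return min(self.models, key=mean_error)
```

For example, given two candidate 1-D dynamics models and transitions generated by the true dynamics, the selector converges on the accurate model after a few observations.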

Throughout this dissertation, we evaluate the effectiveness of our approaches through experiments both in simulation and on several real robots, including a 7-degree-of-freedom robot arm, a non-holonomic ground robot, and a free-flying space robot that operates in zero gravity. Overall, we show how our approaches improve the robot's ability to plan and execute behaviors with greater robustness to unreliable models. Finally, we summarize several preliminary ideas and future research directions toward a unified framework for model-based planning and execution with unreliable models, bringing together all of the key contributions in this dissertation.

Advisors: Claire Tomlin and Anca Dragan


BibTeX citation:

@phdthesis{Ratner:EECS-2023-243,
    Author= {Ratner, Ellis},
    Title= {Robot Planning and Execution with Unreliable Models},
    School= {EECS Department, University of California, Berkeley},
    Year= {2023},
    Month= {Dec},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-243.html},
    Number= {UCB/EECS-2023-243},
    Abstract= {Many modern robotic systems rely on models to complete their tasks. A model captures all aspects of the environment relevant to the robot, and enables the robot to predict the result of any action it may take. Such a model enables the robot to plan and execute an efficient behavior to complete a given task.

Unfortunately, however, no model is perfect. Many real-world phenomena that a robot may encounter --- from contact forces such as friction, to unpredictable human behavior --- are exceedingly difficult to model precisely. Instead, we typically make certain simplifying assumptions, which make it easier to build a model of the environment for the robot to use in planning and execution. These assumptions, however, invariably break down somewhere. Since the robot relies on such simplified models for planning and execution, its performance can suffer as a result of these inaccuracies. We refer to such models as being <em>unreliable</em>, since the robot's planning and execution systems are unable to rely on the model's predictions to produce a behavior to ensure that the robot completes its task in the real-world with a good level of performance. It is therefore critical to design the robot's planning and execution systems to be as robust as possible to unreliable models. 

To that end, we focus on increasing the robustness of two key aspects of a model-based planning and execution system: first, how to choose a model that best captures the robot's current environment; and, second, how to plan with that model, despite the possibility that it is unreliable. In this dissertation, we introduce three new algorithms towards increasing the robustness of a robot's planning and execution system, to unreliable models of the real-world. First, we introduce an approach to enable the robot to choose the best model for its current situation <em>online</em>, from a larger set of possible models. However, even this best model may not be perfectly reliable. To address this, we complement our approach with the ability to reason about where the robot should and should not rely on the model, which we describe in the second part of this dissertation. In doing so, the robot is able to leverage the model where it is accurate, but avoid regions where the model may lead the robot's plan astray. More specifically, we present two implementations of this idea in the second part of this dissertation, the main difference being in the <em>type</em> of model that each approach assumes --- the first focuses on deterministic (or, certain) models, whereas the second focuses on probabilistic (or, uncertain) models. 

Throughout this dissertation, to evaluate the effectiveness of our approaches, we present a set of experiments in simulation, as well as on various real-robots. These robots include a 7 degree-of-freedom robot arm, a non-holonomic ground robot, and a free-flying space robot that operates in zero-gravity. Overall, we show how our approaches improve the robot's ability to plan and execute behaviors with greater robustness to unreliable models. Finally, we summarize several preliminary ideas and future research directions around building a unified framework for model-based planning and execution with unreliable models, bringing together all of the key contributions in this dissertation.},
}

EndNote citation:

%0 Thesis
%A Ratner, Ellis 
%T Robot Planning and Execution with Unreliable Models
%I EECS Department, University of California, Berkeley
%D 2023
%8 December 1
%@ UCB/EECS-2023-243
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-243.html
%F Ratner:EECS-2023-243