Optimizing for Robot Transparency

Sandy Huang

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2019-115
August 16, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-115.pdf

As robots become more capable and commonplace, it becomes increasingly important that they are transparent to humans. People need accurate mental models of a robot so that they can anticipate what it will do, know when and where not to rely on it, and understand why it failed. This helps engineers ensure the safety and robustness of the robot systems they develop, and enables human end-users to interact with robots more safely and seamlessly.

This thesis introduces a framework for producing robot behavior that increases transparency. Our key insight is that a robot's actions do not just influence the physical world; they also inevitably influence a human observer's mental model of the robot. We attempt to model the latter---how humans might make inferences about a robot's objectives, policy, and capabilities from observations of its behavior---so that we can then present examples of robot behavior that optimally bring the human's understanding closer to the true robot model. In this way, our framework casts transparency as an optimization problem.
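To make this concrete, consider a simple instantiation of the objective-communication case: the observer holds a prior over a finite set of candidate objectives, models the robot as noisily (Boltzmann) rational under each candidate, and updates beliefs by Bayes' rule after watching a demonstration. The Python sketch below then selects the demonstration trajectory that maximizes the observer's posterior belief in the robot's true objective. The specific observer model, function names, and numbers here are illustrative assumptions, not the thesis's exact formulation.

import numpy as np

def trajectory_likelihoods(rewards, beta=1.0):
    # P(trajectory | objective): softmax over trajectory rewards,
    # i.e., a Boltzmann-rational demonstrator model (assumed).
    z = np.exp(beta * (rewards - rewards.max()))
    return z / z.sum()

def observer_posterior(shown, reward_matrix, prior, beta=1.0):
    # Observer's Bayesian update after seeing trajectory `shown`.
    # reward_matrix[i, j]: reward of trajectory j under candidate objective i.
    likelihood = np.array([trajectory_likelihoods(row, beta)[shown]
                           for row in reward_matrix])
    posterior = likelihood * prior
    return posterior / posterior.sum()

def most_transparent_trajectory(true_obj, reward_matrix, prior, beta=1.0):
    # Transparency as optimization: choose the demonstration that
    # maximizes the observer's posterior belief in the true objective.
    scores = [observer_posterior(j, reward_matrix, prior, beta)[true_obj]
              for j in range(reward_matrix.shape[1])]
    return int(np.argmax(scores))

# Example: 3 candidate objectives, 4 candidate demonstrations; the robot's
# true objective is index 0 and the observer starts from a uniform prior.
rewards = np.array([[1.0, 0.2, 0.5, 0.9],
                    [0.1, 1.0, 0.4, 0.8],
                    [0.3, 0.2, 1.0, 0.7]])
print(most_transparent_trajectory(0, rewards, np.ones(3) / 3))  # -> 0

In this toy example the optimizer picks trajectory 0, whose reward is distinctively high under the true objective, rather than trajectory 3, which scores well under every candidate objective and therefore reveals little about which one the robot actually has.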

Part I introduces our framework of optimizing for robot transparency and applies it in three ways: communicating a robot's objectives, the situations it can handle, and the reasons it is incapable of performing a task. Part II investigates how transparency is useful not just for safe and seamless interaction, but also for learning: when humans teach a robot, giving them transparency into what it has learned so far makes it easier to select informative teaching examples.

Advisors: Pieter Abbeel and Anca Dragan


BibTeX citation:

@phdthesis{Huang:EECS-2019-115,
    Author = {Huang, Sandy},
    Title = {Optimizing for Robot Transparency},
    School = {EECS Department, University of California, Berkeley},
    Year = {2019},
    Month = {Aug},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-115.html},
    Number = {UCB/EECS-2019-115},
    Abstract = {As robots become more capable and commonplace, it becomes increasingly important that they are transparent to humans. People need accurate mental models of a robot so that they can anticipate what it will do, know when and where not to rely on it, and understand why it failed. This helps engineers ensure the safety and robustness of the robot systems they develop, and enables human end-users to interact with robots more safely and seamlessly.

This thesis introduces a framework for producing robot behavior that increases transparency. Our key insight is that a robot's actions do not just influence the physical world; they also inevitably influence a human observer's mental model of the robot. We attempt to model the latter---how humans might make inferences about a robot's objectives, policy, and capabilities from observations of its behavior---so that we can then present examples of robot behavior that optimally bring the human's understanding closer to the true robot model. In this way, our framework casts transparency as an optimization problem.

Part I introduces our framework of optimizing for robot transparency and applies it in three ways: communicating a robot's objectives, the situations it can handle, and the reasons it is incapable of performing a task. Part II investigates how transparency is useful not just for safe and seamless interaction, but also for learning: when humans teach a robot, giving them transparency into what it has learned so far makes it easier to select informative teaching examples.}
}

EndNote citation:

%0 Thesis
%A Huang, Sandy
%T Optimizing for Robot Transparency
%I EECS Department, University of California, Berkeley
%D 2019
%8 August 16
%@ UCB/EECS-2019-115
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-115.html
%F Huang:EECS-2019-115