David McPherson

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2022-74

May 12, 2022

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-74.pdf

Every good collaboration is built on solid mutual understanding. Without understanding their machines' behavior, human operators cannot plan around them, yet increasing automation is distancing operators from that active understanding. This dissertation applies cognitive science to build automation that strengthens human understanding. The need for transparency is most urgent in safety-supervision tasks: humans' environmental awareness and expansive understanding of safety can save robots from unforeseen edge cases, but only if those humans can also reason through the robot's ongoing activity. Actions can be optimized to evidence safety or to clearly anticipate faults, enabling supervisors to develop appropriate, evidence-based trust. This work explores how observing action allows humans and robots to construct better working models of each other.

Research on assured autonomy focuses on how machines can autonomously guarantee safety, yet there will always remain a modeling gap that human collaborators must help fill: that is why, after decades of autopilot experience and improvement, we still require two human pilots to validate ongoing safe operation. This thesis contends that safe robotics must work to inform these safety collaborators: choices do not merely complete objectives, they are also evidence that other agents ultimately judge. Characterizing how agents judge empowers our machines to choose actions that win correct judgements.

First, we show how to learn humans' safety concerns from data despite noisy dynamics and demonstrations. Having learned humans' concerns, we characterize how they perceive and forecast danger. Building on cognitive science, we present a model of human safety forecasting structured by reachability analysis. This structure makes learning data-efficient enough to capture each supervisor's idiosyncratic ways of thinking from small datasets, letting designers fit their intelligent systems to each user like a glove to a hand. We then build on these models of human safety judgement to support that judgement through machine choices: once each supervisor's unique alarm set is learned, respecting that safe set lets robot teams decrease supervisory false positives. Extending this approach to anticipate safety concerns ahead of the decision point, we optimize motion as evidence with which to reject the null hypothesis of danger.

The approaches in this dissertation contribute a mathematical lens for further inquiries into human risk-taking, safety negotiation, and technology learning. By employing the formalisms of intelligent safety to sketch human safety behavior, we imbue machines with a "theory of mind" that is essential to fluent collaboration in our societal systems.
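To make the reachability structure invoked above concrete, here is a minimal sketch (not code from the dissertation; the grid, dynamics, and all names are illustrative assumptions) of a backward-reachability fixed-point computation: a gridded double integrator approaching a wall, where a state is marked unsafe once no control can keep its successor out of the unsafe set.

```python
import numpy as np

# Toy setup: a 1-D cart nearing a wall at x = 0.
# State grid: position x in {0,...,20}, velocity v in {-6,...,6} (negative = toward wall).
xs = np.arange(0, 21)
vs = np.arange(-6, 7)
doomed = np.zeros((len(xs), len(vs)), dtype=bool)
doomed[0, vs < 0] = True  # at the wall and still moving toward it: failure states

def step(x, v, u):
    """One step of the gridded double integrator under acceleration u."""
    v2 = int(np.clip(v + u, vs[0], vs[-1]))
    x2 = int(np.clip(x + v2, xs[0], xs[-1]))
    return x2, v2

# Grow the backward reachable tube to a fixed point: a state joins the set
# when NO control in {-1, 0, +1} can keep the next state out of it.
while True:
    prev = doomed.copy()
    for i, x in enumerate(xs):
        for j, v in enumerate(vs):
            succs = [step(x, v, u) for u in (-1, 0, 1)]
            doomed[i, j] = prev[i, j] or all(prev[x2, v2 + 6] for x2, v2 in succs)
    if np.array_equal(prev, doomed):
        break

print(f"{doomed.sum()} of {doomed.size} grid states are unavoidably unsafe")
```

In the dissertation's framing, a set like this stands in for the region a supervisor forecasts as dangerous; the sketch only illustrates the fixed-point computation that gives such forecasts their structure.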

Advisor: S. Shankar Sastry


BibTeX citation:

@phdthesis{McPherson:EECS-2022-74,
    Author= {McPherson, David},
    Title= {Towards Mutual Understanding between Mortals and Machines in Motion for Safety},
    School= {EECS Department, University of California, Berkeley},
    Year= {2022},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-74.html},
    Number= {UCB/EECS-2022-74},
    Abstract= {Every good collaboration is built on solid mutual understanding. Without understanding their machines' behavior, human operators cannot plan around them, yet increasing automation is distancing operators from that active understanding. This dissertation applies cognitive science to build automation that strengthens human understanding. The need for transparency is most urgent in safety-supervision tasks: humans' environmental awareness and expansive understanding of safety can save robots from unforeseen edge cases, but only if those humans can also reason through the robot's ongoing activity. Actions can be optimized to evidence safety or to clearly anticipate faults, enabling supervisors to develop appropriate, evidence-based trust. This work explores how observing action allows humans and robots to construct better working models of each other. Research on assured autonomy focuses on how machines can autonomously guarantee safety, yet there will always remain a modeling gap that human collaborators must help fill: that is why, after decades of autopilot experience and improvement, we still require two human pilots to validate ongoing safe operation. This thesis contends that safe robotics must work to inform these safety collaborators: choices do not merely complete objectives, they are also evidence that other agents ultimately judge. Characterizing how agents judge empowers our machines to choose actions that win correct judgements. First, we show how to learn humans' safety concerns from data despite noisy dynamics and demonstrations. Having learned humans' concerns, we characterize how they perceive and forecast danger. Building on cognitive science, we present a model of human safety forecasting structured by reachability analysis. This structure makes learning data-efficient enough to capture each supervisor's idiosyncratic ways of thinking from small datasets, letting designers fit their intelligent systems to each user like a glove to a hand. We then build on these models of human safety judgement to support that judgement through machine choices: once each supervisor's unique alarm set is learned, respecting that safe set lets robot teams decrease supervisory false positives. Extending this approach to anticipate safety concerns ahead of the decision point, we optimize motion as evidence with which to reject the null hypothesis of danger. The approaches in this dissertation contribute a mathematical lens for further inquiries into human risk-taking, safety negotiation, and technology learning. By employing the formalisms of intelligent safety to sketch human safety behavior, we imbue machines with a ``theory of mind'' that is essential to fluent collaboration in our societal systems.},
}

EndNote citation:

%0 Thesis
%A McPherson, David 
%T Towards Mutual Understanding between Mortals and Machines in Motion for Safety
%I EECS Department, University of California, Berkeley
%D 2022
%8 May 12
%@ UCB/EECS-2022-74
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-74.html
%F McPherson:EECS-2022-74