Modeling Supervisor Safe Sets for Improving Collaboration in Human-Robot Teams
Dexter Scobee
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2018-55
May 11, 2018
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-55.pdf
When a human supervisor collaborates with a team of robots, the human's attention is divided, and cognitive resources are at a premium. We aim to optimize the distribution of these resources and the flow of attention. To this end, we propose the model of an idealized supervisor to describe human behavior. Such a supervisor employs a potentially inaccurate internal model of the robots' dynamics to judge safety. We represent these safety judgments by constructing a "safe set" from this internal model using reachability theory. When a robot leaves this safe set, the idealized supervisor will intervene to assist, regardless of whether or not the robot remains objectively safe. False positives, where a human supervisor incorrectly judges a robot to be in danger, needlessly consume supervisor attention. In this work, we propose a method that decreases false positives by learning the supervisor's safe set and using that information to govern robot behavior. We prove that robots behaving according to our approach will reduce the occurrence of false positives for our idealized supervisor model. Furthermore, we empirically validate our approach with a user study that demonstrates a significant (p = 0.0328) reduction in false positives for our method compared to a baseline safety controller.
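As a rough illustration of the abstract's core idea (a minimal sketch, not the report's implementation), the Python code below approximates a safe set by grid-based fixed-point iteration for an assumed 1-D double integrator. The grid, time step, control set, and the accel_gain parameter modeling the supervisor's weaker internal model are all hypothetical choices standing in for the continuous reachability computation the abstract refers to.

import numpy as np

DT = 0.1                            # discretization time step (assumed)
POS = np.linspace(-1.0, 1.0, 41)    # position grid; state must stay in [-1, 1]
VEL = np.linspace(-1.0, 1.0, 41)    # velocity grid
CONTROLS = [-1.0, 0.0, 1.0]         # admissible accelerations (assumed)

def compute_safe_set(accel_gain):
    """Fixed-point iteration: keep only states from which some control
    stays within the position/velocity bounds and reaches a safe state.
    accel_gain scales the model's believed control authority."""
    safe = np.ones((POS.size, VEL.size), dtype=bool)
    while True:
        new_safe = np.zeros_like(safe)
        for i, p in enumerate(POS):
            for j, v in enumerate(VEL):
                for u in CONTROLS:
                    p2 = p + DT * v
                    v2 = v + DT * accel_gain * u
                    if abs(p2) > 1.0 or abs(v2) > 1.0:
                        continue                      # this control exits the bounds
                    i2 = np.abs(POS - p2).argmin()    # snap successor to grid
                    j2 = np.abs(VEL - v2).argmin()
                    if safe[i2, j2]:
                        new_safe[i, j] = True
                        break
        if np.array_equal(new_safe, safe):
            return safe                               # converged (set shrinks monotonically)
        safe = new_safe

true_safe = compute_safe_set(accel_gain=1.0)      # robot's actual dynamics
believed_safe = compute_safe_set(accel_gain=0.5)  # supervisor's conservative internal model

# A "false positive" is a state the robot can actually recover from, but which
# the supervisor's weaker internal model judges unsafe.
false_positives = true_safe & ~believed_safe
print(f"states judged unsafe only by the supervisor: {false_positives.sum()}")

Because the supervisor's internal model under-estimates the robot's control authority, its safe set is smaller; the states counted at the end are exactly the false positives the proposed method aims to reduce.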
Advisor: S. Shankar Sastry
BibTeX citation:
@mastersthesis{Scobee:EECS-2018-55,
    Author = {Scobee, Dexter},
    Title = {Modeling Supervisor Safe Sets for Improving Collaboration in Human-Robot Teams},
    School = {EECS Department, University of California, Berkeley},
    Year = {2018},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-55.html},
    Number = {UCB/EECS-2018-55},
    Abstract = {When a human supervisor collaborates with a team of robots, the human's attention is divided, and cognitive resources are at a premium. We aim to optimize the distribution of these resources and the flow of attention. To this end, we propose the model of an idealized supervisor to describe human behavior. Such a supervisor employs a potentially inaccurate internal model of the robots' dynamics to judge safety. We represent these safety judgments by constructing a \emph{safe set} from this internal model using reachability theory. When a robot leaves this safe set, the idealized supervisor will intervene to assist, regardless of whether or not the robot remains objectively safe. False positives, where a human supervisor incorrectly judges a robot to be in danger, needlessly consume supervisor attention. In this work, we propose a method that decreases false positives by learning the supervisor's safe set and using that information to govern robot behavior. We prove that robots behaving according to our approach will reduce the occurrence of false positives for our idealized supervisor model. Furthermore, we empirically validate our approach with a user study that demonstrates a significant (\emph{p} = 0.0328) reduction in false positives for our method compared to a baseline safety controller.}
}
EndNote citation:
%0 Thesis
%A Scobee, Dexter
%T Modeling Supervisor Safe Sets for Improving Collaboration in Human-Robot Teams
%I EECS Department, University of California, Berkeley
%D 2018
%8 May 11
%@ UCB/EECS-2018-55
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-55.html
%F Scobee:EECS-2018-55