Alina Trinh

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-89

May 10, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-89.pdf

In this thesis we examine the problem of demonstration sufficiency: how can an agent self-assess whether or not it has received enough demonstrations from an expert to ensure a desired level of performance? To address this problem, we propose a novel self-assessment approach based on Bayesian inverse reinforcement learning and value-at-risk, enabling learning-from-demonstration ("LfD") agents to compute high-confidence bounds on their performance and use these bounds to determine when they have received a sufficient number of demonstrations. We propose and evaluate two definitions of sufficiency: (1) normalized expected value difference, which measures regret with respect to the human's unobserved reward function, and (2) percent improvement over a baseline policy. We demonstrate how to formulate high-confidence bounds on both of these metrics. We evaluate our approach in simulation in both discrete and continuous state-space domains and illustrate the feasibility of developing a robotic system that can accurately evaluate demonstration sufficiency. We also show how the agent can use active learning to request demonstrations from specific states, reducing the number of demonstrations needed while still maintaining high confidence in its policy. Finally, via a user study, we show that our approach successfully enables agents to accomplish tasks at users' desired performance levels, without needing too many or perfectly optimal demonstrations.
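The self-assessment idea described above can be illustrated with a small sketch: given posterior samples of the expert's reward function from Bayesian IRL, compute the normalized expected value difference (EVD) of the learned policy under each sample, then take the α-quantile of those regrets as a value-at-risk bound. All names here (and the mocked posterior values, which stand in for real policy-evaluation results) are hypothetical, not the thesis's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_evd(v_opt, v_pi):
    """Regret of the learned policy, normalized by the optimal value."""
    return (v_opt - v_pi) / np.abs(v_opt)

# Mock posterior: value of the optimal policy and of the learned policy
# under each of N reward functions sampled by Bayesian IRL (placeholders
# for real policy-evaluation results on each sampled reward).
n_samples = 1000
v_opt = rng.uniform(8.0, 10.0, size=n_samples)
v_pi = v_opt - rng.exponential(0.3, size=n_samples)  # learned policy is a bit worse

evd = normalized_evd(v_opt, v_pi)

# 95%-confidence value-at-risk bound: with probability >= 0.95 over the
# posterior, the true normalized EVD lies below this value.
alpha = 0.95
var_bound = np.quantile(evd, alpha)

# Sufficiency test: stop requesting demonstrations once the bound
# falls below the user's desired performance tolerance epsilon.
epsilon = 0.1
sufficient = var_bound < epsilon
```

Under this scheme, each new demonstration tightens the posterior, which in turn shrinks the value-at-risk bound until the sufficiency test passes.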

Advisor: Stuart J. Russell


BibTeX citation:

@mastersthesis{Trinh:EECS-2024-89,
    Author= {Trinh, Alina},
    Title= {Autonomous Assessment of Demonstration Sufficiency},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-89.html},
    Number= {UCB/EECS-2024-89},
    Abstract= {In this thesis we examine the problem of demonstration sufficiency: how can an agent self-assess whether or not it has received enough demonstrations from an expert to ensure a desired level of performance? To address this problem, we propose a novel self-assessment approach based on Bayesian inverse reinforcement learning and value-at-risk, enabling learning-from-demonstration ("LfD") agents to compute high-confidence bounds on their performance and use these bounds to determine when they have received a sufficient number of demonstrations. We propose and evaluate two definitions of sufficiency: (1) normalized expected value difference, which measures regret with respect to the human's unobserved reward function, and (2) percent improvement over a baseline policy. We demonstrate how to formulate high-confidence bounds on both of these metrics. We evaluate our approach in simulation in both discrete and continuous state-space domains and illustrate the feasibility of developing a robotic system that can accurately evaluate demonstration sufficiency. We also show how the agent can utilize active learning in asking for demonstrations from specific states which results in fewer demos needed for the agent to still maintain high confidence in its policy. Finally, via a user study, we show that our approach successfully enables agents to accomplish tasks at users' desired performance levels, without needing too many or perfectly optimal demonstrations.},
}

EndNote citation:

%0 Thesis
%A Trinh, Alina 
%T Autonomous Assessment of Demonstration Sufficiency
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 10
%@ UCB/EECS-2024-89
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-89.html
%F Trinh:EECS-2024-89