Arjun Sripathy and Andreea Bobu and Zhongyu Li and Koushil Sreenath and Daniel Brown and Anca Dragan

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2022-45

May 9, 2022

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-45.pdf

Our goal is to enable robots to perform functional tasks in emotive ways, whether in response to their users’ emotional states or to express their own confidence levels. Prior work has proposed learning an independent cost function from user feedback for each target emotion, so that the robot may optimize each alongside task- and environment-specific objectives for any situation it encounters. However, this approach is inefficient when modeling multiple emotions and cannot generalize to new ones. In this work, we leverage the fact that emotions are not independent of each other: they are related through a latent space of Valence-Arousal-Dominance (VAD). Our key idea is to use user labels to learn a model of how trajectories map onto VAD. Considering the distance between a trajectory’s mapping and a target VAD allows this single model to represent cost functions for all emotions. As a result, (1) all user feedback can contribute to learning about every emotion; (2) the robot can generate trajectories for any emotion in the space instead of only a few predefined ones; and (3) the robot can respond emotively to user-generated natural language by mapping it to a target VAD. We introduce a method that interactively learns to map trajectories to this latent space and test it in simulation and in a user study. In experiments, we use a simple vacuum robot as well as the Cassie biped.
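The core mechanism in the abstract can be illustrated with a minimal sketch: a single learned model maps trajectory features to a point in VAD space, and the emotive cost for any target emotion is the distance between that point and the target VAD. All function names, the linear model, and the feature dimensions below are illustrative assumptions, not the report's actual implementation.

```python
import numpy as np

def vad_model(traj_features, weights):
    """Toy linear stand-in for the learned trajectory -> VAD mapping.

    Squashes the output into [-1, 1]^3 so each coordinate can be read
    as a valence, arousal, or dominance value. (Illustrative only.)
    """
    return np.tanh(traj_features @ weights)

def emotive_cost(traj_features, weights, target_vad):
    """Cost for ANY target emotion: distance from predicted to target VAD.

    Because the target is just a point in the shared latent space, one
    model yields a cost function for every emotion, including new ones.
    """
    predicted = vad_model(traj_features, weights)
    return np.linalg.norm(predicted - target_vad)

# Example: score a random trajectory against an "excited"-like target
# (high valence, high arousal, neutral dominance). Values are made up.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))   # 4 trajectory features -> 3 VAD dims
features = rng.normal(size=4)
cost = emotive_cost(features, weights, target_vad=np.array([0.8, 0.9, 0.0]))
```

In a full system, this cost would be optimized jointly with the task and environment objectives; here it only shows how one model covers the whole emotion space.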

Advisor: Anca Dragan


BibTeX citation:

@mastersthesis{Sripathy:EECS-2022-45,
    Author= {Sripathy, Arjun and Bobu, Andreea and Li, Zhongyu and Sreenath, Koushil and Brown, Daniel and Dragan, Anca},
    Title= {Teaching Robots to Span the Space of Functional Expressive Motion},
    School= {EECS Department, University of California, Berkeley},
    Year= {2022},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-45.html},
    Number= {UCB/EECS-2022-45},
    Abstract= {Our goal is to enable robots to perform functional tasks in emotive ways, whether in response to their users’ emotional states or to express their own confidence levels. Prior work has proposed learning an independent cost function from user feedback for each target emotion, so that the robot may optimize each alongside task- and environment-specific objectives for any situation it encounters. However, this approach is inefficient when modeling multiple emotions and cannot generalize to new ones. In this work, we leverage the fact that emotions are not independent of each other: they are related through a latent space of Valence-Arousal-Dominance (VAD). Our key idea is to use user labels to learn a model of how trajectories map onto VAD. Considering the distance between a trajectory’s mapping and a target VAD allows this single model to represent cost functions for all emotions. As a result, (1) all user feedback can contribute to learning about every emotion; (2) the robot can generate trajectories for any emotion in the space instead of only a few predefined ones; and (3) the robot can respond emotively to user-generated natural language by mapping it to a target VAD. We introduce a method that interactively learns to map trajectories to this latent space and test it in simulation and in a user study. In experiments, we use a simple vacuum robot as well as the Cassie biped.},
}

EndNote citation:

%0 Thesis
%A Sripathy, Arjun 
%A Bobu, Andreea 
%A Li, Zhongyu 
%A Sreenath, Koushil 
%A Brown, Daniel 
%A Dragan, Anca 
%T Teaching Robots to Span the Space of Functional Expressive Motion
%I EECS Department, University of California, Berkeley
%D 2022
%8 May 9
%@ UCB/EECS-2022-45
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-45.html
%F Sripathy:EECS-2022-45