Building Assistive Sensorimotor Interfaces through Human-in-the-Loop Machine Learning
Siddharth Reddy
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2022-7
April 22, 2022
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-7.pdf
One of the outstanding challenges in the field of human-computer interaction is building assistive interfaces that help users with the perception and control of complex systems, such as cars, quadcopters, and prosthetic limbs. In this thesis, we propose machine learning algorithms for automatically designing personalized, adaptive interfaces that improve users' performance on sequential decision-making tasks. First, we present work that uses theory of mind to model irrational user behavior as rational with respect to incorrect internal beliefs about how the world works, and leverages this assumption to assist users by modifying their observations and actions. Second, we present work that uses model-free reinforcement learning from human feedback to fine-tune user actions, with minimal assumptions about user behavior. We demonstrate the effectiveness of our methods through experiments with human participants, in which users play the Lunar Lander video game, perform simulated navigation tasks, and land a quadcopter.
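As an informal illustration of the second approach (not code from the report itself), the sketch below shows, in Python, how an interface might fine-tune a user's discrete actions purely from sparse human feedback, with no model of the user or of the environment dynamics. The state and action counts, the bandit-style update rule, and the simulated "user" are illustrative assumptions only, not the algorithm developed in the thesis.

# Hypothetical sketch: an assistive interface that remaps a user's discrete
# action using a value table learned only from sparse feedback, without any
# model of the user or the environment.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 3
ALPHA, EPSILON = 0.1, 0.2          # learning rate, exploration rate

# Q[s, a_user, a_interface]: estimated feedback for remapping the user's
# action a_user to a_interface in state s.
Q = np.zeros((N_STATES, N_ACTIONS, N_ACTIONS))

def interface_action(state, user_action):
    """Pick the action the interface actually executes."""
    if rng.random() < EPSILON:                     # occasional exploration
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state, user_action]))  # greedy remapping

def update(state, user_action, executed_action, feedback):
    """One bandit-style update from a sparse feedback signal in {-1, +1}."""
    q = Q[state, user_action, executed_action]
    Q[state, user_action, executed_action] = q + ALPHA * (feedback - q)

# Toy simulation standing in for a human: the "user" intends action
# (state % N_ACTIONS) but presses a noisy button, and gives +1 feedback
# only when the executed action matches their intent.
for step in range(5000):
    state = int(rng.integers(N_STATES))
    intent = state % N_ACTIONS
    user_action = intent if rng.random() < 0.6 else int(rng.integers(N_ACTIONS))
    executed = interface_action(state, user_action)
    feedback = 1 if executed == intent else -1
    update(state, user_action, executed, feedback)

print("Learned remapping for state 4, user action 0:",
      int(np.argmax(Q[4, 0])))

In the thesis experiments this kind of feedback-driven fine-tuning is applied to far richer settings (Lunar Lander, simulated navigation, and quadcopter landing) with function approximation rather than a small table; the sketch only conveys the interaction loop.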
Advisors: Anca Dragan and Sergey Levine
BibTeX citation:
@phdthesis{Reddy:EECS-2022-7,
    Author = {Reddy, Siddharth},
    Title = {Building Assistive Sensorimotor Interfaces through Human-in-the-Loop Machine Learning},
    School = {EECS Department, University of California, Berkeley},
    Year = {2022},
    Month = {Apr},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-7.html},
    Number = {UCB/EECS-2022-7},
    Abstract = {One of the outstanding challenges in the field of human-computer interaction is building assistive interfaces that help users with the perception and control of complex systems, such as cars, quadcopters, and prosthetic limbs. In this thesis, we propose machine learning algorithms for automatically designing personalized, adaptive interfaces that improve users' performance on sequential decision-making tasks. First, we present work that uses theory of mind to model irrational user behavior as rational with respect to incorrect internal beliefs about how the world works, and leverages this assumption to assist users by modifying their observations and actions. Second, we present work that uses model-free reinforcement learning from human feedback to fine-tune user actions, with minimal assumptions about user behavior. We demonstrate the effectiveness of our methods through experiments with human participants, in which users play the Lunar Lander video game, perform simulated navigation tasks, and land a quadcopter.}
}
EndNote citation:
%0 Thesis
%A Reddy, Siddharth
%T Building Assistive Sensorimotor Interfaces through Human-in-the-Loop Machine Learning
%I EECS Department, University of California, Berkeley
%D 2022
%8 April 22
%@ UCB/EECS-2022-7
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-7.html
%F Reddy:EECS-2022-7