Similarity-Based Representation Learning
Yi Liu and Andreea Bobu and Anca Dragan
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2023-78
May 9, 2023
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-78.pdf
When robots optimize their behavior in an environment, they need to learn both a representation of what matters in the task -- the task “features” -- and how to combine these features into a single objective. The ability to learn meaningful representations from raw observations is crucial for efficient and effective reward and policy learning. This paper introduces a novel approach to representation learning, termed Similarity-Based Representation Learning (SIRL), motivated by the need to incorporate human feedback that generalizes across multiple tasks and multiple users. SIRL queries the human about which two of three presented trajectories are most similar to each other. Using feedback obtained in this manner, we train a model via contrastive learning with a triplet loss. This approach yields robust representations that encode meaningful aspects of the environment while discarding irrelevant information. We showcase the efficacy of the proposed SIRL framework through experiments on various benchmark tasks. Our results indicate that SIRL learns representations that improve performance in both preference-based reward learning and policy learning from human demonstrations. Moreover, we demonstrate the scalability and transferability of the learned representations, highlighting the potential of SIRL as a versatile and efficient tool for reinforcement learning in complex environments.
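To make the training signal concrete: each similarity query over three trajectories yields a triplet, where the pair the human judges more similar becomes the (anchor, positive) pair and the remaining trajectory is the negative. The sketch below is a minimal, illustrative PyTorch version of such a triplet-loss update, not the authors' code; the encoder architecture, layer sizes, and names (TrajectoryEncoder, sirl_update) are assumptions made for exposition, and trajectories are assumed to be flattened into fixed-length vectors.

    import torch
    import torch.nn as nn

    class TrajectoryEncoder(nn.Module):
        """Maps a (flattened) trajectory to a low-dimensional feature embedding.
        Architecture and sizes are illustrative assumptions, not the report's."""
        def __init__(self, traj_dim: int, embed_dim: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(traj_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, embed_dim),
            )

        def forward(self, traj: torch.Tensor) -> torch.Tensor:
            return self.net(traj)

    def sirl_update(encoder, optimizer, anchor, positive, negative, margin=1.0):
        """One contrastive update from one similarity query: the human judged
        (anchor, positive) as the more similar pair; the third trajectory is
        the negative. The triplet loss pulls the similar pair together in
        embedding space and pushes the negative at least `margin` away."""
        loss_fn = nn.TripletMarginLoss(margin=margin)
        loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Example usage on random stand-in data (batch of 8 triplets):
    encoder = TrajectoryEncoder(traj_dim=100)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    a, p, n = (torch.randn(8, 100) for _ in range(3))
    sirl_update(encoder, opt, a, p, n)

Under this reading, the trained encoder's embedding would then serve as the feature representation on top of which a reward model or policy is learned, which is what allows the same representation to be reused across tasks and users.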
Advisor: Anca Dragan
BibTeX citation:
@mastersthesis{Liu:EECS-2023-78,
    Author = {Liu, Yi and Bobu, Andreea and Dragan, Anca},
    Title = {Similarity-Based Representation Learning},
    School = {EECS Department, University of California, Berkeley},
    Year = {2023},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-78.html},
    Number = {UCB/EECS-2023-78},
    Abstract = {When robots optimize their behavior in an environment, they need to both learn a representation for what matters in the task -- the task “features” -- as well as how to combine these features into a single objective. The ability to learn meaningful representations from raw observations is crucial for efficient and effective reward and policy learning. This paper introduces a novel approach to representation learning, termed Similarity-Based Representation Learning (SIRL), motivated by the need to incorporate human feedback that generalizes to multiple tasks and multiple users. SIRL operates by querying the human on which two of three shown trajectories are more similar. By obtaining human feedback in this manner, we train a model using contrastive learning with triplet loss. This approach allows for the learning of robust representations that encode meaningful aspects of the environment, while discarding irrelevant information. We showcase the efficacy of our proposed SIRL framework through experiments on various benchmark tasks. Our results indicate that SIRL effectively learns representations that lead to improved performance in both preference-based reward learning and policy learning from human demonstrations. Moreover, we demonstrate the scalability and transferability of the learned representations, highlighting the potential of SIRL as a versatile and efficient tool for reinforcement learning in complex environments.},
}
EndNote citation:
%0 Thesis
%A Liu, Yi
%A Bobu, Andreea
%A Dragan, Anca
%T Similarity-Based Representation Learning
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 9
%@ UCB/EECS-2023-78
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-78.html
%F Liu:EECS-2023-78