Closing the Domain Gap for Data-Efficient Robotic Learning
Sarah Young
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2020-62
May 26, 2020
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-62.pdf
Object manipulation has long been an important task in robotics. Among the various methods for learning such tasks, imitation learning has been immensely successful in allowing agents to learn from human demonstrations. A key challenge in learning these tasks is the domain gap: collecting large-scale kinesthetic data of robots performing real-world tasks is tedious, and while third-person data is much easier to obtain, it introduces a domain gap between the demonstrator and the robot. In this work, we present a method that simplifies data collection while eliminating the domain gap present in third-person demonstrations. We use a single universal grasping tool that can attach to a variety of robots and perform a variety of tasks, and we present a trash-bot setup with which we can easily collect first-person demonstration videos. Our goal is to learn to reach, push, and place objects in diverse environments. To train these tasks, we collected demonstration data with our grasping tool across a variety of environments and objects. We then learn a policy that outputs the appropriate actions for completing each task.
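The final step the abstract describes, learning a policy that maps first-person observations to actions, is the standard behavior-cloning setup. Below is a minimal sketch of that idea, assuming PyTorch, 84x84 RGB observations, and a 4-dimensional action space; the architecture, names, and hyperparameters are illustrative assumptions, not the report's actual implementation.

import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Map an 84x84 first-person RGB frame to a continuous action."""
    def __init__(self, action_dim=4):  # action_dim=4 is an assumed placeholder
        super().__init__()
        self.encoder = nn.Sequential(  # small conv encoder over the camera frame
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 256), nn.ReLU(),  # 7x7 spatial map for 84x84 input
            nn.Linear(256, action_dim),
        )

    def forward(self, obs):
        return self.head(self.encoder(obs))

def bc_update(policy, optimizer, obs_batch, action_batch):
    """One supervised step: regress the demonstrated action from the observation."""
    loss = nn.functional.mse_loss(policy(obs_batch), action_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

policy = BCPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
obs = torch.randn(16, 3, 84, 84)  # stand-in for a batch of demonstration frames
actions = torch.randn(16, 4)      # stand-in for the recorded tool actions
print(bc_update(policy, optimizer, obs, actions))

In this setup the demonstration videos supply the (observation, action) pairs, and the domain gap is closed by construction: the tool seen at training time is the same tool the robot carries at test time.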
Advisor: Pieter Abbeel
BibTeX citation:
@mastersthesis{Young:EECS-2020-62,
    Author = {Young, Sarah},
    Title = {Closing the Domain Gap for Data-Efficient Robotic Learning},
    School = {EECS Department, University of California, Berkeley},
    Year = {2020},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-62.html},
    Number = {UCB/EECS-2020-62}
}
EndNote citation:
%0 Thesis
%A Young, Sarah
%T Closing the Domain Gap for Data-Efficient Robotic Learning
%I EECS Department, University of California, Berkeley
%D 2020
%8 May 26
%@ UCB/EECS-2020-62
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-62.html
%F Young:EECS-2020-62