Acquiring Motor Skills Through Motion Imitation and Reinforcement Learning
Xue Bin Peng
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2021-267
December 20, 2021
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-267.pdf
Humans are capable of performing awe-inspiring feats of agility by drawing from a vast repertoire of diverse and sophisticated motor skills. This dynamism is in sharp contrast to the narrowly specialized and rigid behaviors commonly exhibited by artificial agents in both simulated and real-world domains. How can we create agents that are able to replicate the agility, versatility, and diversity of human motor behaviors? Manually constructing controllers for such motor skills often involves a lengthy and labor-intensive development process, which needs to be repeated for each skill. Reinforcement learning has the potential to automate much of this development process, but designing reward functions that elicit the desired behaviors from a learning algorithm can itself involve a laborious and skill-specific tuning process. In this thesis, we present motion imitation techniques that enable agents to learn large repertoires of highly dynamic and athletic behaviors by mimicking demonstrations. Instead of designing controllers or reward functions for each skill of interest, the agent need only be provided with a few example motion clips of the desired skill, and our framework can then synthesize a controller that closely replicates the target behavior.
We begin by presenting a motion imitation framework that enables simulated agents to imitate complex behaviors from reference motion clips, ranging from common locomotion skills such as walking and running, to more athletic behaviors such as acrobatics and martial arts. The agents learn to produce robust and life-like behaviors that are nearly indistinguishable in appearance from motions recorded from real-life actors. We then develop models that can reuse and compose skills learned through motion imitation to tackle challenging downstream tasks. In addition to developing controllers for simulated agents, our approach can also synthesize controllers for robots operating in the real world. We demonstrate the effectiveness of our approach by developing controllers for a large variety of agile locomotion skills for bipedal and quadrupedal robots.
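At a high level, motion-imitation methods of this kind train a policy with a reward that measures how closely the simulated character tracks the reference motion clip at each time step. The sketch below is a simplified, hypothetical version of such a tracking reward; the function name, weights, and error terms are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def imitation_reward(agent_pose, ref_pose, agent_vel, ref_vel,
                     w_pose=0.7, w_vel=0.3, k_pose=2.0, k_vel=0.1):
    """Illustrative tracking reward for motion imitation.

    Each term exponentiates the negative squared error between the
    agent's joint state and the reference motion's state at the same
    time step, so the reward is 1.0 for a perfect match and decays
    toward 0 as the agent drifts from the clip. The weights and
    scales here are placeholder values, not tuned constants.
    """
    pose_err = np.sum((agent_pose - ref_pose) ** 2)   # joint-angle error
    vel_err = np.sum((agent_vel - ref_vel) ** 2)      # joint-velocity error
    r_pose = np.exp(-k_pose * pose_err)
    r_vel = np.exp(-k_vel * vel_err)
    return w_pose * r_pose + w_vel * r_vel
```

Because the reward depends only on the reference clip, the same learning pipeline can be reused for any skill for which example motion data is available, which is the property the abstract emphasizes: no per-skill controller or reward engineering.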
Advisors: Pieter Abbeel and Sergey Levine
BibTeX citation:
@phdthesis{Peng:EECS-2021-267,
    Author = {Peng, Xue Bin},
    Title = {Acquiring Motor Skills Through Motion Imitation and Reinforcement Learning},
    School = {EECS Department, University of California, Berkeley},
    Year = {2021},
    Month = {Dec},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-267.html},
    Number = {UCB/EECS-2021-267}
}
EndNote citation:
%0 Thesis
%A Peng, Xue Bin
%T Acquiring Motor Skills Through Motion Imitation and Reinforcement Learning
%I EECS Department, University of California, Berkeley
%D 2021
%8 December 20
%@ UCB/EECS-2021-267
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-267.html
%F Peng:EECS-2021-267