End-to-End Robotic Reinforcement Learning without Reward Engineering

Avi Singh, Larry Yang, Kristian Hartikainen, Chelsea Finn, and Sergey Levine

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2019-40

May 14, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-40.pdf

The combination of deep neural network models and reinforcement learning algorithms can make it possible to learn policies for robotic behaviors that directly read in raw sensory inputs, such as camera images, effectively subsuming both estimation and control into one model. However, real-world applications of reinforcement learning must specify the goal of the task by means of a manually programmed reward function, which in practice requires either designing the very same perception pipeline that end-to-end reinforcement learning promises to avoid, or else instrumenting the environment with additional sensors to determine if the task has been performed successfully. In this paper, we propose an approach for removing the need for manual engineering of reward specifications by enabling a robot to learn from a modest number of examples of successful outcomes, followed by actively solicited queries, where the robot shows the user a state and asks for a label to determine whether that state represents successful completion of the task. While requesting labels for every single state would amount to asking the user to manually provide the reward signal, our method requires labels for only a tiny fraction of the states seen during training, making it an efficient and practical approach for learning skills without manually engineered rewards. We evaluate our method on real-world robotic manipulation tasks where the observations consist of images viewed by the robot's camera. In our experiments, our method effectively learns to arrange objects, place books, and drape cloth, directly from images and without any manually specified reward functions, and with only 1-4 hours of interaction with the real world.
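To make the idea in the abstract concrete, below is a minimal, hypothetical Python sketch of classifier-based reward learning with active queries: a binary success classifier, fit on user-provided success examples plus a small number of actively queried labels, stands in for a hand-engineered reward. The scikit-learn stand-in model and all names here are illustrative assumptions, not the authors' implementation, which learns from camera images jointly with an off-policy RL algorithm.

```python
# Hypothetical sketch (not the authors' code): a success classifier
# replaces a hand-engineered reward, and the user labels only a few
# actively selected states rather than every state seen in training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: feature vectors for the user's up-front examples of
# successful outcomes, and unlabeled states visited during training.
success_examples = rng.normal(loc=1.0, size=(20, 8))
visited_states = rng.normal(loc=0.0, size=(500, 8))

# Initialize the classifier with success examples as positives and,
# pessimistically, all visited states as negatives.
X = np.vstack([success_examples, visited_states])
y = np.concatenate([np.ones(len(success_examples)),
                    np.zeros(len(visited_states))])
clf = LogisticRegression().fit(X, y)

def reward(state):
    """Log-probability of success serves as the RL reward signal."""
    p = clf.predict_proba(state.reshape(1, -1))[0, 1]
    return float(np.log(max(p, 1e-6)))

def states_to_query(states, k=3):
    """Active querying: return indices of the k states the classifier
    is most confident are successes, to show the user for labeling."""
    probs = clf.predict_proba(states)[:, 1]
    return np.argsort(probs)[-k:]

# Only these few states are shown to the user for a success/failure
# label; the classifier is then refit with the new labels, so only a
# tiny fraction of visited states is ever labeled by hand.
for i in states_to_query(visited_states):
    p = clf.predict_proba(visited_states[i:i + 1])[0, 1]
    print(f"query state {i}: P(success) = {p:.2f}")
```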

Advisor: Sergey Levine


BibTeX citation:

@mastersthesis{Singh:EECS-2019-40,
    author = {Singh, Avi and Yang, Larry and Hartikainen, Kristian and Finn, Chelsea and Levine, Sergey},
    title = {End-to-End Robotic Reinforcement Learning without Reward Engineering},
    school = {EECS Department, University of California, Berkeley},
    year = {2019},
    month = {May},
    url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-40.html},
    number = {UCB/EECS-2019-40},
    abstract = {The combination of deep neural network models and reinforcement learning algorithms can make it possible to learn policies for robotic behaviors that directly read in raw sensory inputs, such as camera images, effectively subsuming both estimation and control into one model. However, real-world applications of reinforcement learning must specify the goal of the task by means of a manually programmed reward function, which in practice requires either designing the very same perception pipeline that end-to-end reinforcement learning promises to avoid, or else instrumenting the environment with additional sensors to determine if the task has been performed successfully. In this paper, we propose an approach for removing the need for manual engineering of reward specifications by enabling a robot to learn from a modest number of examples of successful outcomes, followed by actively solicited queries, where the robot shows the user a state and asks for a label to determine whether that state represents successful completion of the task. While requesting labels for every single state would amount to asking the user to manually provide the reward signal, our method requires labels for only a tiny fraction of the states seen during training, making it an efficient and practical approach for learning skills without manually engineered rewards. We evaluate our method on real-world robotic manipulation tasks where the observations consist of images viewed by the robot's camera. In our experiments, our method effectively learns to arrange objects, place books, and drape cloth, directly from images and without any manually specified reward functions, and with only 1-4 hours of interaction with the real world.},
}

EndNote citation:

%0 Thesis
%A Singh, Avi 
%A Yang, Larry 
%A Hartikainen, Kristian 
%A Finn, Chelsea 
%A Levine, Sergey 
%T End-to-End Robotic Reinforcement Learning without Reward Engineering
%I EECS Department, University of California, Berkeley
%D 2019
%8 May 14
%@ UCB/EECS-2019-40
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-40.html
%F Singh:EECS-2019-40