Joshua Tobin

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2019-104

June 23, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-104.pdf

Modern deep learning techniques are data-hungry, which presents a problem in robotics because real-world robotic data is difficult to collect. Simulated data is cheap and scalable, but jumping the “reality gap” to use simulated data for real-world tasks is challenging. In this thesis, we discuss using synthetic data to learn visual models that allow robots to perform manipulation tasks in the real world. We begin by discussing domain randomization, a technique for bridging the reality gap by massively randomizing the visual properties of the simulator. We demonstrate that, using domain randomization, synthetic data alone can be used to train a deep neural network to localize objects accurately enough for a robot to grasp them in the real world. The remainder of the thesis discusses extensions of this approach to a broader range of objects and scenes. First, inspired by the success of domain randomization for visual data, we introduce a data generation pipeline that creates millions of unrealistic, procedurally generated random objects, removing the assumption that 3D models of the objects are available at training time. Second, we reformulate the problem from pose prediction to grasp prediction and introduce a generative model architecture that learns a distribution over grasps, allowing our models to handle pose ambiguity and grasp a wide range of objects with a single neural network. Third, we introduce an attention mechanism for 3D data. We demonstrate that this attention mechanism can be used to perform higher-fidelity neural rendering, and that models learned this way can be fine-tuned to perform accurate pose estimation when the camera intrinsics are unknown at training time. We conclude by surveying recent applications and extensions of domain randomization in the literature and suggesting several promising directions for research in sim-to-real transfer for robotics.
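
The central mechanism of domain randomization is easy to state: instead of tuning the simulator toward photorealism, sample every visual nuisance parameter from a wide distribution on each rendered frame, so that the real world looks like just another variation to the trained model. The following is a minimal NumPy sketch of such a sampling loop; the render callable, parameter names, and ranges are illustrative assumptions, not the thesis's actual configuration.

import numpy as np

rng = np.random.default_rng(0)

MAX_LIGHTS = 4

def sample_scene_params():
    """Draw one randomized scene configuration.

    Every nuisance parameter (flat colors standing in for textures, lighting,
    camera pose, pixel noise) is sampled from a broad distribution, so a model
    trained across many samples must rely on geometry rather than appearance.
    """
    return {
        "object_colors": rng.uniform(0.0, 1.0, size=(10, 3)),  # flat RGB per mesh
        "table_color": rng.uniform(0.0, 1.0, size=3),
        "n_lights": int(rng.integers(1, MAX_LIGHTS + 1)),       # first n_lights are used
        "light_positions": rng.uniform(-2.0, 2.0, size=(MAX_LIGHTS, 3)),
        "light_intensity": rng.uniform(0.1, 3.0),
        # Camera extrinsics jittered around a nominal viewpoint.
        "camera_position": np.array([0.0, -1.0, 1.0]) + rng.normal(0.0, 0.1, size=3),
        "camera_fov_deg": rng.uniform(40.0, 70.0),
        "pixel_noise_std": rng.uniform(0.0, 0.05),
    }

def generate_dataset(render, n_images):
    """Render n_images, each under an independently randomized scene.

    `render` is assumed to map a parameter dict to (image, object_pose);
    the pose is read directly from simulator state and used as the label.
    """
    images, poses = [], []
    for _ in range(n_images):
        params = sample_scene_params()
        image, pose = render(params)
        # Per-pixel Gaussian noise is one more randomized nuisance factor.
        image = image + rng.normal(0.0, params["pixel_noise_std"], image.shape)
        images.append(image)
        poses.append(pose)
    return np.stack(images), np.stack(poses)

Because the pose label is read directly from simulator state, every image is perfectly annotated at no labeling cost; the practical difficulty shifts to choosing randomization ranges broad enough to cover real-world appearance.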

Advisor: Pieter Abbeel


BibTeX citation:

@phdthesis{Tobin:EECS-2019-104,
    Author = {Tobin, Joshua},
    Title = {Real-World Robotic Perception and Control Using Synthetic Data},
    School = {EECS Department, University of California, Berkeley},
    Year = {2019},
    Month = {Jun},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-104.html},
    Number = {UCB/EECS-2019-104},
    Abstract = {Modern deep learning techniques are data-hungry, which presents a problem in robotics because real-world robotic data is difficult to collect. Simulated data is cheap and scalable, but jumping the ``reality gap'' to use simulated data for real-world tasks is challenging. In this thesis, we discuss using synthetic data to learn visual models that allow robots to perform manipulation tasks in the real world. We begin by discussing domain randomization, a technique for bridging the reality gap by massively randomizing the visual properties of the simulator. We demonstrate that, using domain randomization, synthetic data alone can be used to train a deep neural network to localize objects accurately enough for a robot to grasp them in the real world. The remainder of the thesis discusses extensions of this approach to a broader range of objects and scenes. First, inspired by the success of domain randomization for visual data, we introduce a data generation pipeline that creates millions of unrealistic, procedurally generated random objects, removing the assumption that 3D models of the objects are available at training time. Second, we reformulate the problem from pose prediction to grasp prediction and introduce a generative model architecture that learns a distribution over grasps, allowing our models to handle pose ambiguity and grasp a wide range of objects with a single neural network. Third, we introduce an attention mechanism for 3D data. We demonstrate that this attention mechanism can be used to perform higher-fidelity neural rendering, and that models learned this way can be fine-tuned to perform accurate pose estimation when the camera intrinsics are unknown at training time. We conclude by surveying recent applications and extensions of domain randomization in the literature and suggesting several promising directions for research in sim-to-real transfer for robotics.},
}

EndNote citation:

%0 Thesis
%A Tobin, Joshua 
%T Real-World Robotic Perception and Control Using Synthetic Data
%I EECS Department, University of California, Berkeley
%D 2019
%8 June 23
%@ UCB/EECS-2019-104
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-104.html
%F Tobin:EECS-2019-104