Robotic Fabric Manipulation with Deep Imitation Learning and Reinforcement Learning in Simulation

Ryan Hoque

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-72

May 27, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-72.pdf

While work on robotic manipulation of rigid objects has come a long way, the manipulation of deformable objects remains an open problem in robotics. Highly deformable structures such as rope, fabric, and bags are difficult to model and reason about due to their infinite-dimensional state spaces and complex dynamics. Meanwhile, techniques for deep learning, reinforcement learning, depth sensing, and simulation continue to advance and offer powerful tools for representing these complex objects. We study the problem of fabric manipulation, which has applications in the home (laundry, dressing assistance, bed-making, etc.) as well as in industry (surgical gauze handling, textile and upholstery manufacturing, carbon fiber molding, etc.).

First, we study the problem of fabric smoothing, i.e., finding the sequence of pick-and-place actions that maximizes coverage of the underlying plane. We train a policy on RGB and depth (RGBD) image observations with imitation learning from an algorithmic supervisor that has access to the fabric state in simulation. To evaluate our approach, we transfer the policy to a physical da Vinci Research Kit (dVRK) surgical robot and achieve 95% final coverage on simple starting configurations and 83% final coverage on highly crumpled starting states.
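To make the imitation-learning setup concrete, below is a minimal sketch, assuming a DAgger-style loop: the learner's CNN policy chooses the actions, while the state-aware supervisor labels every visited observation, and the policy is retrained on the aggregated dataset. ToyFabricEnv, supervisor_action, and all network sizes here are hypothetical placeholders for illustration, not the report's actual implementation.

import torch
import torch.nn as nn

class PolicyCNN(nn.Module):
    """Map a 4-channel RGBD observation to a pick-and-place action (x, y, dx, dy)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 11 * 11, 256), nn.ReLU(),  # 11x11 feature map for 56x56 input
            nn.Linear(256, 4), nn.Tanh(),             # actions normalized to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

class ToyFabricEnv:
    """Toy stand-in for the fabric simulator; observations are RGBD tensors."""
    def reset(self):
        self.state = torch.rand(4, 56, 56)
        return self.state

    def step(self, action):
        self.state = torch.clamp(self.state + 0.01 * action.mean(), 0.0, 1.0)
        return self.state

def supervisor_action(state):
    """Toy supervisor label; the real supervisor reads fabric state to pull corners."""
    return torch.tensor([0.5, 0.5, -0.1, -0.1])

def dagger(env, epochs=5, rollouts=4, steps=10):
    policy = PolicyCNN()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    data = []                                      # aggregated (obs, label) pairs
    for _ in range(epochs):
        for _ in range(rollouts):                  # collect states under the *learner*
            obs = env.reset()
            for _ in range(steps):
                data.append((obs, supervisor_action(env.state)))
                with torch.no_grad():
                    act = policy(obs.unsqueeze(0))[0]
                obs = env.step(act)
        for obs, label in data:                    # behavior cloning on all data so far
            loss = nn.functional.mse_loss(policy(obs.unsqueeze(0))[0], label)
            opt.zero_grad(); loss.backward(); opt.step()
    return policy

policy = dagger(ToyFabricEnv())

The key design choice is that training states are collected under the learner's own action distribution rather than the supervisor's, which mitigates the compounding-error problem of plain behavior cloning.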

Next, we generalize the problem from smoothing to goal-conditioned fabric manipulation, in which we wish to learn a policy that manipulates the fabric toward an arbitrary goal image. To do this, we propose a technique we call VisuoSpatial Foresight (VSF), which decouples learning a visual dynamics model on simulated RGBD data from planning over that model. In our experiments, we demonstrate that a single learned policy can both rival the imitation learning agent on the smoothing task and manipulate toward images of singly folded and doubly folded fabric configurations without any task demonstrations at training or test time.
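The planning half of VSF can be summarized with a short sketch: sample candidate pick-and-place sequences, roll them through the learned dynamics model, score the predicted RGBD frames against the goal image, and refit the sampling distribution to the best candidates, i.e., the cross-entropy method (CEM). The toy_dynamics stand-in and all parameters below are placeholders for illustration, not the report's learned video prediction model.

import numpy as np

def cem_plan(dynamics, obs, goal, horizon=5, pop=200, elites=20, iters=3):
    """Return the first action of the best sampled pick-and-place sequence."""
    mu = np.zeros((horizon, 4))                    # action: (x, y, dx, dy)
    sigma = 0.5 * np.ones((horizon, 4))
    for _ in range(iters):
        seqs = np.clip(np.random.normal(mu, sigma, (pop, horizon, 4)), -1.0, 1.0)
        costs = np.empty(pop)
        for i, seq in enumerate(seqs):
            o = obs
            for a in seq:                          # roll out the learned model
                o = dynamics(o, a)
            costs[i] = np.linalg.norm(o - goal)    # pixel distance to the goal image
        elite = seqs[np.argsort(costs)[:elites]]   # refit to the best sequences
        mu, sigma = elite.mean(axis=0), elite.std(axis=0)
    return mu[0]

# Toy usage with a stand-in dynamics model and 56x56 RGBD "images".
toy_dynamics = lambda o, a: np.clip(o + 0.05 * a.mean(), 0.0, 1.0)
obs, goal = np.zeros((56, 56, 4)), np.ones((56, 56, 4))
print(cem_plan(toy_dynamics, obs, goal))

Because only the first action is executed before replanning, this behaves as model-predictive control, and the same trained model can serve smoothing, single-folding, and double-folding goals by swapping the goal image.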

Advisors: Ken Goldberg and Pieter Abbeel


BibTeX citation:

@mastersthesis{Hoque:EECS-2020-72,
    Author= {Hoque, Ryan},
    Editor= {Goldberg, Ken and Abbeel, Pieter},
    Title= {Robotic Fabric Manipulation with Deep Imitation Learning and Reinforcement Learning in Simulation},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-72.html},
    Number= {UCB/EECS-2020-72},
    Abstract= {While work on robotic manipulation of rigid objects has come a long way, the manipulation of deformable objects remains an open problem in robotics. Highly deformable structures such as rope, fabric, and bags are difficult to model and reason about due to their infinite-dimensional state spaces and complex dynamics. Meanwhile, techniques for deep learning, reinforcement learning, depth sensing, and simulation continue to advance and offer powerful tools for representing these complex objects. We study the problem of fabric manipulation, which has applications in the home (laundry, dressing assistance, bed-making, etc.) as well as in industry (surgical gauze handling, textile and upholstery manufacturing, carbon fiber molding, etc.).

First, we study the problem of fabric smoothing, i.e., finding the sequence of pick-and-place actions that maximizes coverage of the underlying plane. We train a policy on RGB and depth (RGBD) image observations with imitation learning from an algorithmic supervisor that has access to the fabric state in simulation. To evaluate our approach, we transfer the policy to a physical da Vinci Research Kit (dVRK) surgical robot and achieve 95% final coverage on simple starting configurations and 83% final coverage on highly crumpled starting states.

Next, we generalize the problem from smoothing to goal-conditioned fabric manipulation, in which we wish to learn a policy that manipulates the fabric toward an arbitrary goal image. To do this, we propose a technique we call VisuoSpatial Foresight (VSF), which decouples learning a visual dynamics model on simulated RGBD data from planning over that model. In our experiments, we demonstrate that a single learned policy can both rival the imitation learning agent on the smoothing task and manipulate toward images of singly folded and doubly folded fabric configurations without any task demonstrations at training or test time.},
}

EndNote citation:

%0 Thesis
%A Hoque, Ryan 
%E Goldberg, Ken 
%E Abbeel, Pieter 
%T Robotic Fabric Manipulation with Deep Imitation Learning and Reinforcement Learning in Simulation
%I EECS Department, University of California, Berkeley
%D 2020
%8 May 27
%@ UCB/EECS-2020-72
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-72.html
%F Hoque:EECS-2020-72