Learning to Open Deformable Bags with a Bimanual Robot

Lawrence Chen

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2025-111
May 16, 2025

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-111.pdf

Deformable bag manipulation is a useful capability for robotic systems with applications such as grocery automation, packaging, recycling, and household assistance. However, bags are uniquely challenging due to their 3D structure, complex and unstable deformations, and difficult visual properties such as translucency and specularity. This thesis investigates the problem of autonomous robotic bagging—opening a bag from an unstructured initial state and inserting objects using bimanual manipulation.

The thesis presents two main systems. First, AutoBag introduces a self-supervised learning pipeline in which a dual-arm robot learns to detect semantic features of plastic bags, such as handles and rims, by training on UV-labeled data. The system uses these models at test time to manipulate and open plastic bags for object insertion. In experiments, a YuMi robot using AutoBag achieves a success rate of 16/30 insertions across various bag configurations.

Second, SLIP-Bagging builds on the insight that opening bags often requires isolating the top layer. This system proposes SLIP (Singulating Layers using Interactive Perception), an algorithm that uses iterative visual feedback to separate the top layer of a bag using standard grippers and RGB cameras. Applied to the bagging task, SLIP-Bagging significantly improves success rates and generality across plastic and fabric bags, achieving 67% to 81% success across varied bag types. Experiments also demonstrate that SLIP generalizes to other tasks such as singulating layers of folded cloth or garments.

Together, these systems demonstrate the viability of general-purpose bimanual robotic bagging using only RGB (or RGBD) perception, parallel-jaw grippers, and data-driven learning. This thesis contributes new algorithms, evaluation metrics, and empirical results toward the broader goal of deformable object manipulation in unstructured environments.

\"Edit"; ?>


BibTeX citation:

@techreport{Chen:EECS-2025-111,
    Author = {Chen, Lawrence},
    Title = {Learning to Open Deformable Bags with a Bimanual Robot},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2025},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-111.html},
    Number = {UCB/EECS-2025-111},
    Abstract = {Deformable bag manipulation is a useful capability for robotic systems with applications such as grocery automation, packaging, recycling, and household assistance. However, bags are uniquely challenging due to their 3D structure, complex and unstable deformations, and difficult visual properties such as translucency and specularity. This thesis investigates the problem of autonomous robotic bagging—opening a bag from an unstructured initial state and inserting objects using bimanual manipulation.
The thesis presents two main systems. First, AutoBag introduces a self-supervised learning pipeline in which a dual-arm robot learns to detect semantic features of plastic bags, such as handles and rims, by training on UV-labeled data. The system uses these models at test time to manipulate and open plastic bags for object insertion. In experiments, a YuMi robot using AutoBag achieves a success rate of 16/30 insertions across various bag configurations.
Second, SLIP-Bagging builds on the insight that opening bags often requires isolating the top layer. This system proposes SLIP (Singulating Layers using Interactive Perception), an algorithm that uses iterative visual feedback to separate the top layer of a bag using standard grippers and RGB cameras. Applied to the bagging task, SLIP-Bagging significantly improves success rates and generality across plastic and fabric bags, achieving 67% to 81% success across varied bag types. Experiments also demonstrate that SLIP generalizes to other tasks such as singulating layers of folded cloth or garments.
Together, these systems demonstrate the viability of general-purpose bimanual robotic bagging using only RGB (or RGBD) perception, parallel-jaw grippers, and data-driven learning. This thesis contributes new algorithms, evaluation metrics, and empirical results toward the broader goal of deformable object manipulation in unstructured environments.}
}

EndNote citation:

%0 Report
%A Chen, Lawrence
%T Learning to Open Deformable Bags with a Bimanual Robot
%I EECS Department, University of California, Berkeley
%D 2025
%8 May 16
%@ UCB/EECS-2025-111
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-111.html
%F Chen:EECS-2025-111