Compositionality and Modularity for Robot Learning

Coline Devin

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-207

December 17, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-207.pdf

Humans are remarkably proficient at decomposing and recombining concepts they have learned. In contrast, while deep learning-based methods have been shown to fit large datasets and outperform humans at some tasks, they often fail when presented with conditions even slightly outside the distribution they were trained on. In particular, machine learning models fail at compositional generalization, where the model must predict how concepts fit together without having seen that exact combination during training. This thesis proposes several learning-based methods that take advantage of the compositional structure of tasks and shows that they perform better than black-box models when presented with novel compositions of previously seen subparts. The first type of method directly decomposes neural networks into separate modules that are trained jointly in varied combinations. The second type of method learns representations of tasks and objects that obey arithmetic properties, such that task representations can be summed or subtracted to indicate their composition or decomposition. We show results in diverse domains including games, simulated environments, and real robots.
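The second type of method can be illustrated with a minimal sketch. All names and vectors below are hypothetical, assumed for illustration only, and are not the thesis's actual model: each skill and each object is assigned an embedding, a composed task is represented by the sum of its parts, and subtracting a known skill embedding recovers the object embedding, so a never-seen skill-object pair can still be represented and decoded.

```python
import numpy as np

# Hypothetical setup (assumption, not the thesis's model): random fixed
# embeddings for two skills and two objects. In practice these would be
# learned so that the arithmetic structure holds.
rng = np.random.default_rng(0)
dim = 8
skills = {name: rng.normal(size=dim) for name in ["push", "grasp"]}
objects = {name: rng.normal(size=dim) for name in ["cube", "mug"]}

def compose(skill, obj):
    """Represent the task (skill, object) as the sum of the two embeddings."""
    return skills[skill] + objects[obj]

def decompose(task_vec, skill):
    """Subtract a known skill embedding to recover the object embedding."""
    return task_vec - skills[skill]

def nearest_object(vec):
    """Decode an object embedding by nearest neighbor over known objects."""
    return min(objects, key=lambda name: np.linalg.norm(objects[name] - vec))

# A combination never seen as a pair ("grasp" + "mug") can still be
# composed, and decomposition recovers the object exactly.
task = compose("grasp", "mug")
recovered = nearest_object(decompose(task, "grasp"))
```

Here the arithmetic is exact by construction; the point of the thesis's learned representations is to make learned embeddings approximately satisfy this additive structure so that novel compositions generalize.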

Advisors: Trevor Darrell, Pieter Abbeel, and Sergey Levine


BibTeX citation:

@phdthesis{Devin:EECS-2020-207,
    Author= {Devin, Coline},
    Title= {Compositionality and Modularity for Robot Learning},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {Dec},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-207.html},
    Number= {UCB/EECS-2020-207},
    Abstract= {Humans are remarkably proficient at decomposing and recombining
concepts they have learned. In contrast, while deep learning-based
methods have been shown to fit large datasets and outperform humans
at some tasks, they often fail when presented with conditions even
just slightly outside of the distribution they were trained on. In
particular, machine learning models fail at compositional
generalization, where the model would need to predict how concepts fit
together without having seen that exact combination during training.
This thesis proposes several learning-based methods that take
advantage of the compositional structure of tasks and shows how they
perform better than black-box models when presented with novel
compositions of previously seen subparts. The first type of method is
to directly decompose neural networks into separate modules that are
trained jointly in varied combinations. The second type of method is
to learn representations of tasks and objects that obey arithmetic
properties such that task representations can be summed or subtracted
to indicate their composition or decomposition. We show results in
diverse domains including games, simulated environments, and real
robots.},
}

EndNote citation:

%0 Thesis
%A Devin, Coline 
%T Compositionality and Modularity for Robot Learning
%I EECS Department, University of California, Berkeley
%D 2020
%8 December 17
%@ UCB/EECS-2020-207
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-207.html
%F Devin:EECS-2020-207