Xiangyu Yue

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2022-213

August 15, 2022

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-213.pdf

Deep neural networks have achieved great success in learning representations on a given dataset. However, in many cases the learned representations are dataset-dependent and cannot be transferred to datasets with different distributions, even for the same task. Handling this domain shift is crucial for improving the generalization capability of models. Domain adaptation offers a potential solution, allowing us to transfer networks from a source domain with abundant labels to target domains with only limited or no labels.

In this dissertation, I present several ways to learn transferable representations under different scenarios: 1) when the source domain has only limited labels, even a single label per class; 2) when there are multiple labeled source domains; and 3) when there are multiple unseen, unlabeled target domains. These approaches are general across data modalities (e.g., vision and language) and can be readily combined to address related domain transfer settings (e.g., adapting from multiple sources with limited labels), enabling models to generalize beyond the source domains. Many of these works transfer knowledge from simulation data to real-world data in order to alleviate the need for expensive manual annotation. Finally, I present our pioneering work on building a LiDAR point cloud simulator, which has enabled a large body of subsequent work on domain adaptation for LiDAR point cloud segmentation.
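The dissertation's own methods are beyond the scope of this abstract, but the core idea of unsupervised domain adaptation (aligning source-domain features with an unlabeled target domain so a source-trained model transfers) can be illustrated with a classic baseline. The sketch below implements CORAL-style correlation alignment in NumPy; this is a standard technique chosen for illustration, not the specific approach of this thesis, and the function names and the added mean shift are my own illustrative choices.

```python
import numpy as np

def _matrix_power(m, p):
    """Symmetric matrix power via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 1e-12, None)  # guard against tiny negative eigenvalues
    return (vecs * vals ** p) @ vecs.T

def coral_align(source, target, eps=1e-5):
    """Recolor source features so their second-order statistics
    (and, in this sketch, their mean) match the unlabeled target domain.

    source, target: (n_samples, n_features) feature matrices.
    """
    d = source.shape[1]
    cov_s = np.cov(source, rowvar=False) + eps * np.eye(d)
    cov_t = np.cov(target, rowvar=False) + eps * np.eye(d)
    centered = source - source.mean(axis=0)
    # whiten with the source covariance, then recolor with the target's
    aligned = centered @ _matrix_power(cov_s, -0.5) @ _matrix_power(cov_t, 0.5)
    return aligned + target.mean(axis=0)
```

A classifier trained on the aligned source features (using the source labels, which are unchanged) typically transfers better to the target domain, since the two feature distributions now share first- and second-order statistics.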

Advisor: Alberto L. Sangiovanni-Vincentelli


BibTeX citation:

@phdthesis{Yue:EECS-2022-213,
    Author= {Yue, Xiangyu},
    Title= {Learning Transferable Representations across Domains},
    School= {EECS Department, University of California, Berkeley},
    Year= {2022},
    Month= {Aug},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-213.html},
    Number= {UCB/EECS-2022-213},
    Abstract= {Deep neural networks have achieved great success in learning representations on a given dataset. However, in many cases the learned representations are dataset-dependent and cannot be transferred to datasets with different distributions, even for the same task. Handling this domain shift is crucial for improving the generalization capability of models. Domain adaptation offers a potential solution, allowing us to transfer networks from a source domain with abundant labels to target domains with only limited or no labels.

In this dissertation, I present several ways to learn transferable representations under different scenarios: 1) when the source domain has only limited labels, even a single label per class; 2) when there are multiple labeled source domains; and 3) when there are multiple unseen, unlabeled target domains. These approaches are general across data modalities (e.g., vision and language) and can be readily combined to address related domain transfer settings (e.g., adapting from multiple sources with limited labels), enabling models to generalize beyond the source domains. Many of these works transfer knowledge from simulation data to real-world data in order to alleviate the need for expensive manual annotation. Finally, I present our pioneering work on building a LiDAR point cloud simulator, which has enabled a large body of subsequent work on domain adaptation for LiDAR point cloud segmentation.},
}

EndNote citation:

%0 Thesis
%A Yue, Xiangyu 
%T Learning Transferable Representations across Domains
%I EECS Department, University of California, Berkeley
%D 2022
%8 August 15
%@ UCB/EECS-2022-213
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-213.html
%F Yue:EECS-2022-213