Reliable Representation Learning: Theory and Practice

Yaodong Yu

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-75

May 10, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-75.pdf

Machine learning models trained on vast amounts of data have achieved remarkable success across a wide range of applications. However, they also pose new challenges and risks when deployed in real-world, high-stakes domains. Decisions made by deep learning models are often difficult to interpret, their underlying mechanisms remain poorly understood, and large-scale foundation models can memorize and leak private personal information. Because deep learning models operate as black boxes, it is challenging to understand, let alone resolve, the various failures of current machine learning systems.

In this dissertation, we present research toward building reliable machine learning systems through the lens of representation learning. The first part focuses on transparent representation learning. We first propose a principled and effective objective function, called coding rate reduction, for measuring the goodness of learned representations, and present a white-box approach to understanding transformer models. We then show how to derive a family of mathematically interpretable, transformer-like deep network architectures by maximizing the information gain of the learned representations. The second part focuses on privacy-preserving representation learning. We first investigate the effectiveness of representations learned with federated optimization methods, and present an approach for overcoming data heterogeneity when training deep, non-convex models in the federated setting. We then describe our work on training the first set of vision foundation models with rigorous differential privacy guarantees, demonstrating the promise of high-utility differentially private representation learning.
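For readers unfamiliar with the objective named above, the coding rate reduction (MCR^2) criterion from the associated line of work measures how much the coding rate of all features exceeds the average coding rate within each group. A sketch of the published formulation in LaTeX (here Z is a d x n matrix of n learned d-dimensional features, Pi = {Pi_j} are diagonal membership matrices for k groups, and epsilon is a prescribed distortion; the notation follows the papers, not this page):

    \Delta R(Z, \Pi, \epsilon)
      = \underbrace{\frac{1}{2}\log\det\Big(I + \frac{d}{n\epsilon^{2}}\, Z Z^{\top}\Big)}_{R(Z,\epsilon):\ \text{rate of all features}}
      - \underbrace{\sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_{j})}{2n}\,\log\det\Big(I + \frac{d}{\operatorname{tr}(\Pi_{j})\,\epsilon^{2}}\, Z \Pi_{j} Z^{\top}\Big)}_{R_{c}(Z,\epsilon\,\mid\,\Pi):\ \text{average rate within groups}}

Maximizing Delta R expands the volume spanned by all features while compressing the features within each group; the white-box, transformer-like architectures in the first part are derived by unrolling optimization steps on objectives of this kind.

For the privacy-preserving part, the standard mechanism behind differential privacy guarantees for deep models is DP-SGD: clip each per-sample gradient, then add calibrated Gaussian noise before the update. A minimal sketch in Python/NumPy, with illustrative names (clip_norm, noise_multiplier); this shows the generic recipe, not the dissertation's specific training pipeline:

    import numpy as np

    def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
        """One DP-SGD update: per-sample clipping plus Gaussian noise (generic recipe)."""
        clipped = []
        for g in per_sample_grads:
            # Rescale each sample's gradient so its l2 norm is at most clip_norm.
            scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
            clipped.append(g * scale)
        # Noise standard deviation is calibrated to the clipping norm.
        noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
        noisy_mean = (np.sum(clipped, axis=0) + noise) / len(per_sample_grads)
        return params - lr * noisy_mean

The resulting privacy loss (epsilon, delta) is then computed from noise_multiplier, the sampling rate, and the number of steps via a standard privacy accountant.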

Advisors: Michael Jordan and Yi Ma


BibTeX citation:

@phdthesis{Yu:EECS-2024-75,
    Author= {Yu, Yaodong},
    Title= {Reliable Representation Learning: Theory and Practice},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-75.html},
    Number= {UCB/EECS-2024-75},
    Abstract= {Machine learning models trained on vast amounts of data have achieved remarkable success across a wide range of applications. However, they also pose new challenges and risks when deployed in real-world, high-stakes domains. Decisions made by deep learning models are often difficult to interpret, their underlying mechanisms remain poorly understood, and large-scale foundation models can memorize and leak private personal information. Because deep learning models operate as black boxes, it is challenging to understand, let alone resolve, the various failures of current machine learning systems.

In this dissertation, we present research toward building reliable machine learning systems through the lens of representation learning. The first part focuses on transparent representation learning. We first propose a principled and effective objective function, called coding rate reduction, for measuring the goodness of learned representations, and present a white-box approach to understanding transformer models. We then show how to derive a family of mathematically interpretable, transformer-like deep network architectures by maximizing the information gain of the learned representations. The second part focuses on privacy-preserving representation learning. We first investigate the effectiveness of representations learned with federated optimization methods, and present an approach for overcoming data heterogeneity when training deep, non-convex models in the federated setting. We then describe our work on training the first set of vision foundation models with rigorous differential privacy guarantees, demonstrating the promise of high-utility differentially private representation learning.},
}

EndNote citation:

%0 Thesis
%A Yu, Yaodong 
%T Reliable Representation Learning: Theory and Practice
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 10
%@ UCB/EECS-2024-75
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-75.html
%F Yu:EECS-2024-75