Alexander Wei

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-46

May 4, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-46.pdf

In this dissertation, we present several forays into the complexity that characterizes modern machine learning, with a focus on the interplay between learning processes, incentives, and high-dimensional models. We aim to uncover new principles that address the challenges that arise at the frontiers of this rapidly advancing field. This work is structured into two parts, each exploring a different facet of these complexities.

In Part I, we examine complexity arising from strategic and adversarial environments. We present two studies. The first explores learning and decision-making in a matching market, where a platform hopes to learn a market equilibrium amidst uncertainty about user preferences. The second investigates the robustness of safety-trained large language models to adversarial "jailbreak" attacks. We identify and exploit failure modes of safety training and discuss the implications of these findings for language model safety going forward.

In Part II, we study complexity arising from the high-dimensional models that are by now ubiquitous in machine learning. We start by investigating what mathematical foundations lead to an accurate predictive theory of high-dimensional generalization, and identify models based on random matrix theory as a promising candidate. We then delve further into the theoretical underpinnings of random matrix theory for high-dimensional linear regression to shed light on phenomena such as double descent, benign overfitting, and scaling laws.
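For readers unfamiliar with double descent, the following is a minimal illustrative sketch, not taken from the dissertation: it fits minimum-norm least squares with a growing number of features on synthetic Gaussian data (the dimensions, noise level, and seed are hypothetical choices for the demo) and prints the test error at each model size.

import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d_total, noise = 40, 2000, 120, 0.5

# Ground-truth linear signal spread across all d_total coordinates.
w_star = rng.normal(size=d_total) / np.sqrt(d_total)
X_train = rng.normal(size=(n_train, d_total))
X_test = rng.normal(size=(n_test, d_total))
y_train = X_train @ w_star + noise * rng.normal(size=n_train)
y_test = X_test @ w_star + noise * rng.normal(size=n_test)

for d in (10, 20, 35, 40, 45, 60, 120):
    # Minimum-norm least-squares fit using only the first d features.
    w_hat = np.linalg.pinv(X_train[:, :d]) @ y_train
    test_mse = np.mean((X_test[:, :d] @ w_hat - y_test) ** 2)
    print(f"d = {d:3d}   test MSE = {test_mse:.3f}")

In this kind of setup, the test error is expected to rise sharply as the number of features d approaches the number of training points (here 40) and then fall again past the interpolation threshold, giving the characteristic double-descent curve that random-matrix-based analyses of linear regression aim to explain.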

Advisors: Michael Jordan, Jacob Steinhardt, and Nika Haghtalab


BibTeX citation:

@phdthesis{Wei:EECS-2024-46,
    Author= {Wei, Alexander},
    Title= {Learning and Decision-Making in Complex Environments},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-46.html},
    Number= {UCB/EECS-2024-46},
    Abstract= {In this dissertation, we present several forays into the complexity that characterizes modern machine learning, with a focus on the interplay between learning processes, incentives, and high-dimensional models. We aim to uncover new principles that address the challenges that arise at the frontiers of this rapidly advancing field. This work is structured into two parts, each exploring a different facet of these complexities.

In Part I, we examine complexity arising from strategic and adversarial environments. We present two studies. The first explores learning and decision-making in a matching market, where a platform hopes to learn a market equilibrium amidst uncertainty about user preferences. The second investigates the robustness of safety-trained large language models to adversarial ``jailbreak'' attacks. We identify and exploit failure modes of safety training and discuss the implications of these findings for language model safety going forward.

In Part II, we study complexity arising from the high-dimensional models that are by now ubiquitous in machine learning. We start by investigating what mathematical foundations lead to an accurate predictive theory of high-dimensional generalization, and identify models based on random matrix theory as a promising candidate. We then delve further into the theoretical underpinnings of random matrix theory for high-dimensional linear regression to shed light on phenomena such as double descent, benign overfitting, and scaling laws.},
}

EndNote citation:

%0 Thesis
%A Wei, Alexander 
%T Learning and Decision-Making in Complex Environments
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 4
%@ UCB/EECS-2024-46
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-46.html
%F Wei:EECS-2024-46