Tianyi Lin

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2023-71

May 8, 2023

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-71.pdf

A textbook property of optimization algorithms is their ability to solve problems under generic regularity conditions. Two examples are the simplex method and the gradient descent (GD) method. However, the performance of these fundamental, general-purpose optimization algorithms is often unsatisfactory; they can run slowly and may return suboptimal solutions in generic settings. In my view, this is the price of their generality: the generic algorithms are an achievement, but for many problems the gains from leveraging special structure can be huge. A basic question then arises: how can we harness problem-specific structure within our algorithms to obtain fast, practical algorithms with strong performance guarantees? As more structured data-driven decision-making models emerge, this question has become increasingly pressing and relevant to practitioners.

For example, GD is known to get stuck at suboptimal saddle points in nonconvex optimization. Nonetheless, a line of recent work has shown that random initialization or perturbation changes the dynamics of GD and makes it provably converge to a global optimal solution. In addition, both Markov decision process (MDP) and discrete optimal transport (OT) problems can be formulated as large-scale linear programs. Rather than using generic LP algorithms, policy iteration and Sinkhorn iteration exploit the special structure of MDPs and OT, respectively, and thus perform better in practice. Adapting algorithms to problem-specific structure is generally referred to as structure-driven algorithm design.
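As an illustration of this kind of structure exploitation, the Sinkhorn iteration for entropic-regularized OT alternately rescales the rows and columns of a Gibbs kernel so that the transport plan's marginals match the prescribed distributions — far cheaper per step than a generic LP solver. A minimal sketch (the cost matrix, marginals, and regularization value below are illustrative, not from the thesis):

```python
import numpy as np

def sinkhorn(C, r, c, reg=0.1, n_iter=200):
    """Sinkhorn iteration for entropic-regularized optimal transport.

    C: cost matrix; r, c: source/target marginals (each summing to 1).
    Returns an approximate transport plan P with row sums r, column sums c.
    """
    K = np.exp(-C / reg)                 # Gibbs kernel from the cost matrix
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)                # rescale to match column marginals
        u = r / (K @ v)                  # rescale to match row marginals
    return u[:, None] * K * v[None, :]   # diag(u) K diag(v)

# Toy example: two points each, low cost on the diagonal.
C = np.array([[0.0, 1.0], [1.0, 0.0]])
r = np.array([0.5, 0.5])
c = np.array([0.5, 0.5])
P = sinkhorn(C, r, c)
```

Each iteration costs only a matrix-vector product, which is why the method scales to problems where solving the underlying LP directly would be impractical.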

Although this line of research, studied extensively for over 70 years, has enjoyed widespread success, the success stories of machine learning have introduced new formulations ripe for deep theoretical analysis and remarkable practical impact. My research pushes this frontier by identifying special structure in reliable machine learning (minimax optimization) and multi-agent machine learning (high-order optimization and beyond) and designing optimal algorithms for computing appropriately defined optimal solutions; it also addresses other structured problems, such as efficient entropic-regularized optimal transport, gradient-free nonsmooth nonconvex optimization, and adaptive and doubly optimal learning in games.

Advisors: Michael Jordan


BibTeX citation:

@phdthesis{Lin:EECS-2023-71,
    Author= {Lin, Tianyi},
    Title= {Structure-Driven Algorithm Design in Optimization and Machine Learning},
    School= {EECS Department, University of California, Berkeley},
    Year= {2023},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-71.html},
    Number= {UCB/EECS-2023-71},
    Abstract= {A textbook property of optimization algorithms is their ability to solve problems under generic regularity conditions. Two examples are the simplex method and the gradient descent (GD) method. However, the performance of these fundamental, general-purpose optimization algorithms is often unsatisfactory; they can run slowly and may return suboptimal solutions in generic settings. In my view, this is the price of their generality: the generic algorithms are an achievement, but for many problems the gains from leveraging special structure can be huge. A basic question then arises: how can we harness problem-specific structure within our algorithms to obtain fast, practical algorithms with strong performance guarantees? As more structured data-driven decision-making models emerge, this question has become increasingly pressing and relevant to practitioners.

For example, GD is known to get stuck at suboptimal saddle points in nonconvex optimization. Nonetheless, a line of recent work has shown that random initialization or perturbation changes the dynamics of GD and makes it provably converge to a global optimal solution. In addition, both Markov decision process (MDP) and discrete optimal transport (OT) problems can be formulated as large-scale linear programs. Rather than using generic LP algorithms, policy iteration and Sinkhorn iteration exploit the special structure of MDPs and OT, respectively, and thus perform better in practice. Adapting algorithms to problem-specific structure is generally referred to as structure-driven algorithm design.

Although this line of research, studied extensively for over 70 years, has enjoyed widespread success, the success stories of machine learning have introduced new formulations ripe for deep theoretical analysis and remarkable practical impact. My research pushes this frontier by identifying special structure in reliable machine learning (minimax optimization) and multi-agent machine learning (high-order optimization and beyond) and designing optimal algorithms for computing appropriately defined optimal solutions; it also addresses other structured problems, such as efficient entropic-regularized optimal transport, gradient-free nonsmooth nonconvex optimization, and adaptive and doubly optimal learning in games.},
}

EndNote citation:

%0 Thesis
%A Lin, Tianyi 
%T Structure-Driven Algorithm Design in Optimization and Machine Learning
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 8
%@ UCB/EECS-2023-71
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-71.html
%F Lin:EECS-2023-71