System-Aware Optimization for Machine Learning at Scale

Virginia Smith

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2017-140

August 9, 2017

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-140.pdf

New computing systems have emerged in response to the increasing size and complexity of modern datasets. For best performance, machine learning methods must be designed to closely align with the underlying properties of these systems.

In this thesis, we illustrate the impact of system-aware machine learning through the lens of optimization, a crucial component in formulating and solving most machine learning problems. Classically, the performance of an optimization method is measured in terms of accuracy (i.e., does it realize the correct machine learning model?) and convergence rate (after how many iterations?). In modern computing regimes, however, it becomes critical to additionally consider a number of systems-related aspects for best overall performance. These aspects can range from low-level details, such as data structures or machine specifications, to higher-level concepts, such as the tradeoff between communication and computation.
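As a concrete illustration of this tradeoff (a hypothetical back-of-envelope cost model with made-up timings, not figures from the thesis): when communication dominates the per-round cost, a method that does more local computation per round but communicates less can win on wall-clock time, even if it performs more total computation.

# Hypothetical cost model: total wall-clock time = rounds * (compute + communication).
# All timings below are invented for illustration.
t_comm = 0.50   # seconds per synchronization round (network-bound)
t_comp = 0.05   # seconds of local computation per round

# Method A: fast per-iteration convergence, one communication per iteration.
rounds_a = 1000
time_a = rounds_a * (t_comp + t_comm)

# Method B: 3x the local computation per round, but 10x fewer rounds.
rounds_b = 100
time_b = rounds_b * (3 * t_comp + t_comm)

print(f"A: {time_a:.0f}s, B: {time_b:.0f}s")   # A: 550s, B: 65s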

We propose a general optimization framework for machine learning, CoCoA, that gives careful consideration to systems parameters, often incorporating them directly into the method and theory. We illustrate the impact of CoCoA in two popular distributed regimes: the traditional cluster-computing environment, and the increasingly common setting of on-device (federated) learning. Our results indicate that by marrying systems-level parameters and optimization techniques, we can achieve orders-of-magnitude speedups for solving modern machine learning problems at scale. We corroborate these empirical results by providing theoretical guarantees that expose systems parameters to give further insight into empirical performance.
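For a rough feel of the pattern CoCoA builds on, the sketch below simulates generic local-update-then-aggregate distributed optimization in Python. This is a simplified model-averaging scheme, not CoCoA's actual dual subproblem formulation, and all names and parameters (distributed_least_squares, local_steps, n_workers, etc.) are illustrative assumptions rather than the thesis's implementation.

import numpy as np

def distributed_least_squares(X, y, n_workers=4, rounds=50,
                              local_steps=10, lr=0.1):
    """Simulate K workers that each hold a shard of (X, y) and run
    `local_steps` gradient steps between communication rounds."""
    n, d = X.shape
    shards = np.array_split(np.arange(n), n_workers)
    w = np.zeros(d)
    for _ in range(rounds):                      # one communication round
        local_models = []
        for idx in shards:                       # parallel across machines in practice
            Xi, yi, wi = X[idx], y[idx], w.copy()
            for _ in range(local_steps):         # local computation, no communication
                grad = Xi.T @ (Xi @ wi - yi) / len(idx)
                wi -= lr * grad
            local_models.append(wi)
        w = np.mean(local_models, axis=0)        # single aggregation step per round
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 10))
    w_true = rng.standard_normal(10)
    y = X @ w_true + 0.01 * rng.standard_normal(1000)
    w = distributed_least_squares(X, y)
    print("parameter error:", np.linalg.norm(w - w_true))

Raising local_steps shifts work from communication rounds to local computation; CoCoA makes an analogous knob (the accuracy to which each local subproblem is solved) explicit in both the method and its convergence guarantees.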

Advisors: David E. Culler and Michael Jordan


BibTeX citation:

@phdthesis{Smith:EECS-2017-140,
    Author= {Smith, Virginia},
    Title= {System-Aware Optimization for Machine Learning at Scale},
    School= {EECS Department, University of California, Berkeley},
    Year= {2017},
    Month= {Aug},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-140.html},
    Number= {UCB/EECS-2017-140},
    Abstract= {New computing systems have emerged in response to the increasing size and complexity of modern datasets. For best performance, machine learning methods must be designed to closely align with the underlying properties of these systems.

In this thesis, we illustrate the impact of system-aware machine learning through the lens of optimization, a crucial component in formulating and solving most machine learning problems. Classically, the performance of an optimization method is measured in terms of accuracy (i.e., does it realize the correct machine learning model?) and convergence rate (after how many iterations?). In modern computing regimes, however, it becomes critical to additionally consider a number of systems-related aspects for best overall performance. These aspects can range from low-level details, such as data structures or machine specifications, to higher-level concepts, such as the tradeoff between communication and computation.

We propose a general optimization framework for machine learning, CoCoA, that gives careful consideration to systems parameters, often incorporating them directly into the method and theory. We illustrate the impact of CoCoA in two popular distributed regimes: the traditional cluster-computing environment, and the increasingly common setting of on-device (federated) learning. Our results indicate that by marrying systems-level parameters and optimization techniques, we can achieve orders-of-magnitude speedups for solving modern machine learning problems at scale. We corroborate these empirical results by providing theoretical guarantees that expose systems parameters to give further insight into empirical performance.},
}

EndNote citation:

%0 Thesis
%A Smith, Virginia 
%T System-Aware Optimization for Machine Learning at Scale
%I EECS Department, University of California, Berkeley
%D 2017
%8 August 9
%@ UCB/EECS-2017-140
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-140.html
%F Smith:EECS-2017-140