Efficient Parallel Computing for Machine Learning at Scale

Arissa Wongpanich

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2020-225
December 18, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-225.pdf

Recent years have seen rapid advances in both machine learning and high performance computing. Although computing power has steadily increased and become more widely available, many widely used machine learning techniques fail to take full advantage of the parallelism offered by large-scale computing clusters. Exploring techniques for scaling machine learning algorithms on distributed and high performance systems can help reduce training time and increase the accessibility of machine learning research. To this end, this thesis investigates methods for scaling up deep learning on a range of distributed systems, from clusters of Intel Xeon Phi processors to Tensor Processing Unit (TPU) pods, using a variety of optimization techniques. Training machine learning models and fully utilizing the compute available on such distributed systems requires overcoming challenges at both the algorithmic and systems levels. This thesis evaluates and presents scaling methods for distributed systems that can be used to address these challenges and, more broadly, to bridge the gap between high performance computing and machine learning.
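As an illustrative sketch of the kind of device-level scaling the abstract refers to, the JAX example below shows generic synchronous data-parallel training: each device computes gradients on its own shard of the batch, and an all-reduce (pmean) averages the gradients so every replica applies the same update. This is a minimal sketch under standard assumptions, not the thesis's implementation; the toy linear model, learning rate, and array shapes are placeholders.

```python
import functools

import jax
import jax.numpy as jnp


def loss_fn(params, x, y):
    # Toy linear model with squared-error loss, used purely for illustration.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)


@functools.partial(jax.pmap, axis_name="devices")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # All-reduce: average gradients across devices so every replica applies
    # the same update (synchronous data parallelism).
    grads = jax.lax.pmean(grads, axis_name="devices")
    return jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)


n_dev = jax.local_device_count()
key = jax.random.PRNGKey(0)

# Replicate the parameters on every device and shard the batch so each device
# sees its own slice: [devices, per-device batch, features].
params = jax.device_put_replicated(
    {"w": jnp.zeros((4, 1)), "b": jnp.zeros((1,))}, jax.local_devices()
)
x = jax.random.normal(key, (n_dev, 8, 4))
y = jnp.sum(x, axis=-1, keepdims=True)

params = train_step(params, x, y)  # one synchronous data-parallel step
```

The same pattern scales from a single multi-core host to a TPU pod, where the gradient all-reduce is carried out over the pod interconnect; the systems- and algorithm-level challenges the thesis addresses arise when pushing this pattern to very large batch sizes and device counts.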

Advisor: James Demmel


BibTeX citation:

@mastersthesis{Wongpanich:EECS-2020-225,
    Author = {Wongpanich, Arissa},
    Title = {Efficient Parallel Computing for Machine Learning at Scale},
    School = {EECS Department, University of California, Berkeley},
    Year = {2020},
    Month = {Dec},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-225.html},
    Number = {UCB/EECS-2020-225},
    Abstract = {Recent years have seen rapid advances in both machine learning and high performance computing. Although computing power has steadily increased and become more widely available, many widely used machine learning techniques fail to take full advantage of the parallelism offered by large-scale computing clusters. Exploring techniques for scaling machine learning algorithms on distributed and high performance systems can help reduce training time and increase the accessibility of machine learning research. To this end, this thesis investigates methods for scaling up deep learning on a range of distributed systems, from clusters of Intel Xeon Phi processors to Tensor Processing Unit (TPU) pods, using a variety of optimization techniques. Training machine learning models and fully utilizing the compute available on such distributed systems requires overcoming challenges at both the algorithmic and systems levels. This thesis evaluates and presents scaling methods for distributed systems that can be used to address these challenges and, more broadly, to bridge the gap between high performance computing and machine learning.}
}

EndNote citation:

%0 Thesis
%A Wongpanich, Arissa
%T Efficient Parallel Computing for Machine Learning at Scale
%I EECS Department, University of California, Berkeley
%D 2020
%8 December 18
%@ UCB/EECS-2020-225
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-225.html
%F Wongpanich:EECS-2020-225