Huseyin Elibol and Michael Jordan and Ion Stoica

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2022-115

May 13, 2022

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-115.pdf

Scientists increasingly rely on Python tools to perform scalable distributed-memory array operations using rich, NumPy-like expressions. Existing solutions achieve suboptimal performance on numerical operations and machine learning model training because they rely on the dynamic scheduling provided by task-based distributed systems. This can lead to performance problems that are difficult to address without in-depth knowledge of the underlying distributed system. In particular, generalized linear models are difficult to scale because they rely on element-wise array operations and basic linear algebra.

In this thesis, I present these problems in terms of scalable linear algebra and automatic parallelization of Python. The solutions presented seamlessly scale the NumPy API and generalized linear models on task-based distributed systems. Our overall solution is presented in three primary parts: (1) an approach to parallelizing generalized linear models (GLMs) using blocked matrix operations; (2) NumS, an open-source library that implements these ideas for the NumPy API and is optimized for the distributed system Ray; and (3) formal syntax and semantics for automatic parallelization of basic Python and linear algebra operations.
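The blocked approach in part (1) rests on the fact that GLM gradients decompose over row blocks of the data. The sketch below is illustrative only (plain NumPy, hypothetical function name; it is not the implementation from the thesis): the logistic-regression gradient is assembled from independent per-block terms, which is what makes it a natural fit for task-parallel execution.

    import numpy as np

    def blocked_logistic_gradient(X_blocks, y_blocks, theta):
        # Each row block contributes an independent partial gradient, so in a
        # distributed setting each loop iteration could run as a separate task;
        # here the loop simply runs locally.
        grad = np.zeros_like(theta)
        for X_i, y_i in zip(X_blocks, y_blocks):
            p_i = 1.0 / (1.0 + np.exp(-X_i @ theta))  # predictions for this block
            grad += X_i.T @ (p_i - y_i)               # this block's partial gradient
        return grad

    # Toy data: a 1,000 x 10 design matrix split into 4 row blocks.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 10))
    y = (rng.random(1000) < 0.5).astype(float)
    g = blocked_logistic_gradient(np.array_split(X, 4), np.array_split(y, 4), np.zeros(10))

The same decomposition applies to other GLMs, whose gradients are likewise sums of per-sample, and hence per-block, terms.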

Our primary contribution is NumS, a modular Python-based distributed numerical array library optimized for Ray. Load Simulated Hierarchical Scheduling (LSHS), the scheduler developed for NumS, attains communication lower bounds on some common numerical operations. Our empirical study shows that LSHS enhances performance on Ray by decreasing network load by 2×, memory usage by 4×, and execution time by 10× on the logistic regression problem. Compared to related solutions, LSHS achieves up to a 2× speedup on logistic regression over Dask ML and Spark’s MLlib on a terabyte of data.
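To make the NumPy-like interface concrete, here is a hedged usage sketch; the module path nums.numpy, the random-array constructors, and the .get() materialization call are assumptions drawn from NumS's public documentation and may differ across versions. This is not code from the thesis.

    import nums.numpy as nps  # NumPy-like API backed by Ray (module path assumed)

    X = nps.random.rand(10**6, 64)          # block-partitioned array, NumPy-style call (assumed)
    w = nps.random.rand(64)
    logits = X @ w                          # blocked matrix-vector product scheduled on Ray
    probs = 1.0 / (1.0 + nps.exp(-logits))  # element-wise ops execute block-wise
    print(probs[:5].get())                  # .get() pulls a small slice back as a local array (assumed)

Expressions like these are exactly the element-wise and basic linear-algebra operations whose task placement LSHS is designed to optimize.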

Advisors: Michael Jordan and Ion Stoica


BibTeX citation:

@phdthesis{Elibol:EECS-2022-115,
    Author= {Elibol, Huseyin and Jordan, Michael and Stoica, Ion},
    Title= {NumS: Scalable Array Programming for the Cloud},
    School= {EECS Department, University of California, Berkeley},
    Year= {2022},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-115.html},
    Number= {UCB/EECS-2022-115},
    Abstract= {Scientists increasingly rely on Python tools to perform scalable distributed-memory array operations using rich, NumPy-like expressions. Existing solutions achieve suboptimal performance on numerical operations and machine learning model training because they rely on the dynamic scheduling provided by task-based distributed systems. This can lead to performance problems that are difficult to address without in-depth knowledge of the underlying distributed system. In particular, generalized linear models are difficult to scale because they rely on element-wise array operations and basic linear algebra.

In this thesis, I present these problems in terms of scalable linear algebra and automatic parallelization of Python. The solutions presented seamlessly scale the NumPy API and generalized linear models on task-based distributed systems. Our overall solution is presented in three primary parts: (1) an approach to parallelizing generalized linear models (GLMs) using blocked matrix operations; (2) NumS, an open-source library that implements these ideas for the NumPy API and is optimized for the distributed system Ray; and (3) formal syntax and semantics for automatic parallelization of basic Python and linear algebra operations.

Our primary contribution is NumS, a modular Python-based distributed numerical array library optimized for Ray. Load Simulated Hierarchical Scheduling (LSHS), the scheduler developed for NumS, attains communication lower bounds on some common numerical operations. Our empirical study shows that LSHS enhances performance on Ray by decreasing network load by 2×, memory usage by 4×, and execution time by 10× on the logistic regression problem. Compared to related solutions, LSHS achieves up to a 2× speedup on logistic regression over Dask ML and Spark’s MLlib on a terabyte of data.},
}

EndNote citation:

%0 Thesis
%A Elibol, Huseyin 
%A Jordan, Michael 
%A Stoica, Ion 
%T NumS: Scalable Array Programming for the Cloud
%I EECS Department, University of California, Berkeley
%D 2022
%8 May 13
%@ UCB/EECS-2022-115
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-115.html
%F Elibol:EECS-2022-115