System Design for Large Scale Machine Learning
Shivaram Venkataraman
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2017-219
December 15, 2017
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-219.pdf
The last decade has seen two main trends in large-scale computing: on the one hand, the growth of cloud computing, where a number of big data applications are deployed on shared clusters of machines; on the other hand, a deluge of machine learning algorithms used for applications ranging from image classification and machine translation to graph processing and scientific analysis on large datasets. In light of these trends, a number of challenges arise in terms of how we program, deploy, and achieve high performance for large-scale machine learning applications. In this dissertation, we study the execution properties of machine learning applications and, based on these properties, present the design and implementation of systems that address these challenges. We first identify how choosing the appropriate hardware can affect application performance, and describe Ernest, an efficient performance prediction scheme that uses experiment design to minimize the cost and time taken to build performance models. We then design scheduling mechanisms that improve performance in two ways: first, by reducing data access time through data-aware scheduling that accounts for locality, and second, by using scalable scheduling techniques that reduce coordination overheads.
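The abstract only names Ernest's general approach of building performance models from a small set of training runs chosen via experiment design; it does not give the model itself. The sketch below is a rough illustration, not the dissertation's method: it fits a hypothetical running-time model with non-negative least squares from a few small, cheap training runs and extrapolates to a larger cluster. The feature terms, the training measurements, and the target configuration are all assumptions made for this example, and the experiment-design step that selects which configurations to measure is omitted.

# Illustrative sketch only: a simple performance model fit from a few
# small training runs, then used to predict time at a larger scale.
# The model form and numbers are assumptions, not from the report.
import numpy as np
from scipy.optimize import nnls

def features(scale, machines):
    # Hypothetical terms: fixed cost, per-machine share of the work,
    # log-scaling coordination cost, and per-machine overhead.
    return np.array([1.0, scale / machines, np.log(machines), machines])

# (fraction of the dataset, number of machines, measured time in seconds)
# -- made-up measurements from small, inexpensive configurations.
training_runs = [
    (0.05, 2, 40.0),
    (0.10, 2, 75.0),
    (0.10, 4, 42.0),
    (0.20, 4, 80.0),
    (0.20, 8, 46.0),
]

X = np.array([features(s, m) for s, m, _ in training_runs])
y = np.array([t for _, _, t in training_runs])
coeffs, _ = nnls(X, y)  # non-negative least squares fit

# Predict running time for the full dataset on a larger cluster.
predicted = features(1.0, 32) @ coeffs
print(f"Predicted time on 32 machines: {predicted:.1f} s")

In this sketch, measurements are collected only at small data fractions and machine counts, which keeps model building cheap; choosing those points well is the role the abstract attributes to experiment design.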
Advisors: Michael Franklin and Ion Stoica
BibTeX citation:
@phdthesis{Venkataraman:EECS-2017-219,
    Author = {Venkataraman, Shivaram},
    Title = {System Design for Large Scale Machine Learning},
    School = {EECS Department, University of California, Berkeley},
    Year = {2017},
    Month = {Dec},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-219.html},
    Number = {UCB/EECS-2017-219},
    Abstract = {The last decade has seen two main trends in large-scale computing: on the one hand, the growth of cloud computing, where a number of big data applications are deployed on shared clusters of machines; on the other hand, a deluge of machine learning algorithms used for applications ranging from image classification and machine translation to graph processing and scientific analysis on large datasets. In light of these trends, a number of challenges arise in terms of how we program, deploy, and achieve high performance for large-scale machine learning applications. In this dissertation, we study the execution properties of machine learning applications and, based on these properties, present the design and implementation of systems that address these challenges. We first identify how choosing the appropriate hardware can affect application performance, and describe Ernest, an efficient performance prediction scheme that uses experiment design to minimize the cost and time taken to build performance models. We then design scheduling mechanisms that improve performance in two ways: first, by reducing data access time through data-aware scheduling that accounts for locality, and second, by using scalable scheduling techniques that reduce coordination overheads.},
}
EndNote citation:
%0 Thesis
%A Venkataraman, Shivaram
%T System Design for Large Scale Machine Learning
%I EECS Department, University of California, Berkeley
%D 2017
%8 December 15
%@ UCB/EECS-2017-219
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-219.html
%F Venkataraman:EECS-2017-219