Jiaqi Xie

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2017-59

May 11, 2017

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-59.pdf

This capstone project focuses on developing a cost-effective, energy-efficient, and computationally powerful distributed machine learning library (BIDMach) to keep pace with and lead the current big data trend. We are collaborating with our industry partner OpenChai, a start-up aiming to put distributed BIDMach on its mobile-GPU-based hardware and leverage all of its advantages. Ultimately, our goal is to bring machine learning to small companies and individual customers, and thereby benefit the general public.

Our capstone team works on making BIDMach run on a cluster of machines to increase overall computing power. As a foundation, we made significant improvements to BIDMach's communication framework, which are discussed in my paper. The capstone report of my teammate Aleks Kamko covers our core technical accomplishment: parallel versions of machine learning algorithms. Quanlai Li discusses our other achievements in his report, such as an improved update rule (EASGD) and the measurement of network communication bandwidth.

Chapter 1 of this paper (Technical Contribution) covers the motivation, design, implementation, results, and discussion of the communication framework, which is the core underlying our parallel models. On the business side, OpenChai's integrated product aims to solve three main problems with current mainstream machine learning solutions: low computational power, wasted energy, and data privacy. These are discussed further in Chapter 2 (Engineering Leadership).

Advisor: John F. Canny


BibTeX citation:

@mastersthesis{Xie:EECS-2017-59,
    Author= {Xie, Jiaqi},
    Title= {Scaling Up Deep Learning on Clusters},
    School= {EECS Department, University of California, Berkeley},
    Year= {2017},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-59.html},
    Number= {UCB/EECS-2017-59},
    Abstract= {This capstone project focuses on developing a cost-effective, energy-efficient, and computationally powerful distributed machine learning library (BIDMach) to keep pace with and lead the current big data trend. We are collaborating with our industry partner OpenChai, a start-up aiming to put distributed BIDMach on its mobile-GPU-based hardware and leverage all of its advantages. Ultimately, our goal is to bring machine learning to small companies and individual customers, and thereby benefit the general public.

Our capstone team works on making BIDMach run on a cluster of machines to increase overall computing power. As a foundation, we made significant improvements to BIDMach's communication framework, which are discussed in my paper. The capstone report of my teammate Aleks Kamko covers our core technical accomplishment: parallel versions of machine learning algorithms. Quanlai Li discusses our other achievements in his report, such as an improved update rule (EASGD) and the measurement of network communication bandwidth.

Chapter 1 of this paper (Technical Contribution) covers the motivation, design, implementation, results, and discussion of the communication framework, which is the core underlying our parallel models. On the business side, OpenChai's integrated product aims to solve three main problems with current mainstream machine learning solutions: low computational power, wasted energy, and data privacy. These are discussed further in Chapter 2 (Engineering Leadership).},
}

EndNote citation:

%0 Thesis
%A Xie, Jiaqi
%T Scaling Up Deep Learning on Clusters
%I EECS Department, University of California, Berkeley
%D 2017
%8 May 11
%@ UCB/EECS-2017-59
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-59.html
%F Xie:EECS-2017-59