Ray: A Distributed Execution Engine for the Machine Learning Ecosystem

Philipp Moritz

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2019-124
August 16, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-124.pdf

In recent years, growing data volumes and more sophisticated computational procedures have greatly increased the demand for computational power. Machine learning and artificial intelligence applications, for example, are notorious for their computational requirements. At the same time, Moore's law is ending and processor speeds are stalling. As a result, distributed computing has become ubiquitous. While the cloud makes distributed hardware infrastructure widely accessible and therefore offers the potential of horizontal scale, developing distributed algorithms and applications remains surprisingly hard. This is due to the inherent complexity of concurrent algorithms, the engineering challenges of communicating between many machines, requirements such as fault tolerance and straggler mitigation that arise at large scale, and the lack of a general-purpose distributed execution engine that can support a wide variety of applications.

In this thesis, we study the requirements for a general-purpose distributed computation model and present a solution that is easy to use yet expressive and resilient to faults. At its core, our model takes familiar concepts from serial programming, namely functions and classes, and generalizes them to the distributed world, thereby unifying stateless and stateful distributed computation. This model not only supports many machine learning workloads like training or serving, but is also a good fit for cross-cutting machine learning applications like reinforcement learning and for data processing applications like streaming or graph processing. We implement this computational model as an open-source system called Ray, which matches or exceeds the performance of specialized systems in many application domains, while also offering horizontal scalability and strong fault tolerance properties.
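To make the model concrete, the following sketch (not taken from the report itself; it uses Ray's public Python API as commonly documented) shows how an ordinary function becomes a distributed task and an ordinary class becomes a distributed actor, the stateless and stateful halves of the model described above:

    import ray

    ray.init()  # start Ray on the local machine

    # A remote function (task): stateless distributed computation.
    @ray.remote
    def square(x):
        return x * x

    # A remote class (actor): stateful distributed computation.
    @ray.remote
    class Counter:
        def __init__(self):
            self.value = 0

        def increment(self):
            self.value += 1
            return self.value

    # Invoking .remote() returns futures immediately; ray.get() blocks
    # until the results are available.
    futures = [square.remote(i) for i in range(4)]
    print(ray.get(futures))  # [0, 1, 4, 9]

    counter = Counter.remote()
    print(ray.get(counter.increment.remote()))  # 1

Because both tasks and actor method calls return futures, stateless and stateful computations compose freely in the same program, which is the unification the abstract refers to.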

Advisors: Michael Jordan and Ion Stoica


BibTeX citation:

@phdthesis{Moritz:EECS-2019-124,
    Author = {Moritz, Philipp},
    Title = {Ray: A Distributed Execution Engine for the Machine Learning Ecosystem},
    School = {EECS Department, University of California, Berkeley},
    Year = {2019},
    Month = {Aug},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-124.html},
    Number = {UCB/EECS-2019-124},
    Abstract = {In recent years, growing data volumes and more sophisticated computational procedures have greatly increased the demand for computational power. Machine learning and artificial intelligence applications, for example, are notorious for their computational requirements. At the same time, Moore's law is ending and processor speeds are stalling. As a result, distributed computing has become ubiquitous. While the cloud makes distributed hardware infrastructure widely accessible and therefore offers the potential of horizontal scale, developing distributed algorithms and applications remains surprisingly hard. This is due to the inherent complexity of concurrent algorithms, the engineering challenges of communicating between many machines, requirements such as fault tolerance and straggler mitigation that arise at large scale, and the lack of a general-purpose distributed execution engine that can support a wide variety of applications.

In this thesis, we study the requirements for a general-purpose distributed computation model and present a solution that is easy to use yet expressive and resilient to faults. At its core, our model takes familiar concepts from serial programming, namely functions and classes, and generalizes them to the distributed world, thereby unifying stateless and stateful distributed computation. This model not only supports many machine learning workloads like training or serving, but is also a good fit for cross-cutting machine learning applications like reinforcement learning and for data processing applications like streaming or graph processing. We implement this computational model as an open-source system called Ray, which matches or exceeds the performance of specialized systems in many application domains, while also offering horizontal scalability and strong fault tolerance properties.}
}

EndNote citation:

%0 Thesis
%A Moritz, Philipp
%T Ray: A Distributed Execution Engine for the Machine Learning Ecosystem
%I EECS Department, University of California, Berkeley
%D 2019
%8 August 16
%@ UCB/EECS-2019-124
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-124.html
%F Moritz:EECS-2019-124