The Design and Implementation of Low-Latency Prediction Serving Systems

Daniel Crankshaw

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2019-171
December 16, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-171.pdf

Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and cost-efficient predictions under heavy query load. These applications employ a variety of machine learning frameworks and models, often composing several models within the same application. However, most machine learning frameworks and systems are optimized for model training and not deployment.

In this thesis, I discuss three prediction serving systems designed to meet the needs of modern interactive machine learning applications. The key idea in this work is to utilize a decoupled, layered design that interposes systems on top of training frameworks to build low-latency, scalable serving systems. Velox introduced this decoupled architecture to enable fast online learning and model personalization in response to feedback. Clipper generalized this system architecture to be framework-agnostic and introduced a set of optimizations to reduce and bound prediction latency and improve prediction throughput, accuracy, and robustness without modifying the underlying machine learning frameworks. And InferLine provisions and manages the individual stages of prediction pipelines to minimize cost while meeting end-to-end tail latency constraints.

Advisors: Michael Franklin and Joseph Gonzalez


BibTeX citation:

@phdthesis{Crankshaw:EECS-2019-171,
    Author = {Crankshaw, Daniel},
    Title = {The Design and Implementation of Low-Latency Prediction Serving Systems},
    School = {EECS Department, University of California, Berkeley},
    Year = {2019},
    Month = {Dec},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-171.html},
    Number = {UCB/EECS-2019-171},
    Abstract = {Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and cost-efficient predictions under heavy query load. These applications employ a variety of machine learning frameworks and models, often composing several models within the same application. However, most machine learning frameworks and systems are optimized for model training and not deployment.

In this thesis, I discuss three prediction serving systems designed to meet the needs of modern interactive machine learning applications. The key idea in this work is to utilize a decoupled, layered design that interposes systems on top of training frameworks to build low-latency, scalable serving systems. Velox introduced this decoupled architecture to enable fast online learning and model personalization in response to feedback. Clipper generalized this system architecture to be framework-agnostic and introduced a set of optimizations to reduce and bound prediction latency and improve prediction throughput, accuracy, and robustness without modifying the underlying machine learning frameworks. And InferLine provisions and manages the individual stages of prediction pipelines to minimize cost while meeting end-to-end tail latency constraints.}
}

EndNote citation:

%0 Thesis
%A Crankshaw, Daniel
%T The Design and Implementation of Low-Latency Prediction Serving Systems
%I EECS Department, University of California, Berkeley
%D 2019
%8 December 16
%@ UCB/EECS-2019-171
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-171.html
%F Crankshaw:EECS-2019-171