Corey Zumar

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2018-76

May 18, 2018

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-76.pdf

Model composition in the form of prediction pipelines is an emerging pattern in the design of machine learning applications that offers the opportunity to substantially simplify development, improve accuracy, and reduce cost. However, in low-latency settings spanning multiple machine learning frameworks with varying resource requirements, prediction pipelines are challenging and expensive to provision and execute.

In this paper we address the challenges of allocating resources and efficiently and reliably executing prediction pipelines spanning multiple machine learning models and frameworks. We exploit the reproducible performance characteristics of individual models and monotonic performance scaling of prediction workloads to decompose the resource allocation and performance tuning problem along model boundaries. Consequently, we are able to estimate and optimize end-to-end system performance.
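To make the decomposition concrete, here is a minimal sketch in Python (with entirely hypothetical model names, profiles, and numbers; this is not InferLine's actual code or API) of the core idea: estimate end-to-end latency of a linear pipeline as the sum of independently profiled per-model latencies, then search per-model configurations for the cheapest combination that meets a latency goal.

import itertools

# Hypothetical offline profiles: (config name, p99 latency in ms, cost in $/hr)
# for each candidate configuration of each model. The approach assumes these
# measurements are reproducible and that latency scales monotonically with load.
PROFILES = {
    "preprocess": [("cpu-batch1", 5.0, 0.10), ("cpu-batch8", 12.0, 0.10)],
    "resnet152":  [("gpu-batch1", 30.0, 0.90), ("gpu-batch8", 55.0, 0.90),
                   ("cpu-batch1", 200.0, 0.10)],
    "classifier": [("cpu-batch1", 3.0, 0.10)],
}
PIPELINE = ["preprocess", "resnet152", "classifier"]  # a linear pipeline
LATENCY_SLO_MS = 100.0  # end-to-end latency goal

def end_to_end_latency_ms(choice):
    # Decomposition along model boundaries: pipeline latency is estimated
    # as the sum of the independently profiled per-model latencies.
    return sum(latency for _, latency, _ in choice)

def cheapest_feasible_config():
    # Search the cross product of per-model configurations for the
    # lowest-cost combination whose estimated latency meets the SLO.
    best_cost, best_choice = float("inf"), None
    for choice in itertools.product(*(PROFILES[m] for m in PIPELINE)):
        if end_to_end_latency_ms(choice) > LATENCY_SLO_MS:
            continue
        cost = sum(c for _, _, c in choice)
        if cost < best_cost:
            best_cost, best_choice = cost, choice
    return best_cost, best_choice

if __name__ == "__main__":
    cost, choice = cheapest_feasible_config()
    print(f"${cost:.2f}/hr:", [name for name, _, _ in choice])

A real planner would also account for throughput targets (replication factors) and queuing delay, but even this toy version shows why per-model profiles make end-to-end performance estimation tractable.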

Our proposed system---InferLine---leverages these insights and instantiates a general-purpose framework for serving prediction pipelines. We demonstrate that InferLine is able to configure and execute prediction pipelines across a wide range of throughput and latency goals and achieve over a 6x reduction in cost compared to a hand-tuned and horizontally scaled single-process pipeline.

Advisor: Joseph Gonzalez


BibTeX citation:

@mastersthesis{Zumar:EECS-2018-76,
    Author= {Zumar, Corey},
    Title= {InferLine: ML Inference Pipeline Composition Framework},
    School= {EECS Department, University of California, Berkeley},
    Year= {2018},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-76.html},
    Number= {UCB/EECS-2018-76},
    Abstract= {Model composition in the form of prediction pipelines is an emerging pattern in the design of machine learning applications that offers the opportunity to substantially simplify development, improve accuracy, and reduce cost. However, in low-latency settings spanning multiple machine learning frameworks with varying resource requirements, prediction pipelines are challenging and expensive to provision and execute.

In this paper we address the challenges of allocating resources and efficiently and reliably executing prediction pipelines spanning multiple machine learning models and frameworks. We exploit the reproducible performance characteristics of individual models and monotonic performance scaling of prediction workloads to decompose the resource allocation and performance tuning problem along model boundaries. Consequently, we are able to estimate and optimize end-to-end system performance.

Our proposed system---InferLine---leverages these insights and instantiates a general-purpose framework for serving prediction pipelines. We demonstrate that InferLine is able to configure and execute prediction pipelines across a wide range of throughput and latency goals and achieve over a 6x reduction in cost compared to a hand-tuned and horizontally scaled single-process pipeline.},
}

EndNote citation:

%0 Thesis
%A Zumar, Corey 
%T InferLine: ML Inference Pipeline Composition Framework
%I EECS Department, University of California, Berkeley
%D 2018
%8 May 18
%@ UCB/EECS-2018-76
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-76.html
%F Zumar:EECS-2018-76