Towards Understanding Cloud Performance Tradeoffs Using Statistical Workload Analysis and Replay

Yanpei Chen, Archana Sulochana Ganapathi, Rean Griffith and Randy H. Katz

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2010-81
May 15, 2010

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-81.pdf

Cloud computing has given rise to a variety of distributed applications that rely on the ability to harness commodity resources for large-scale computations. The inherent performance variability in these applications’ workloads, coupled with the system’s heterogeneity, renders ineffective heuristics-based design decisions such as system configuration, application partitioning and placement, and job scheduling. Furthermore, the cloud operator’s objective of maximizing utilization conflicts with cloud application developers’ goal of minimizing latency, necessitating systematic approaches to trading off these competing objectives. One important cloud application that highlights these tradeoffs is MapReduce. In this paper, we demonstrate a systematic approach to reasoning about cloud performance tradeoffs using a tool we developed called Statistical Workload Analysis and Replay for MapReduce (SWARM). We use SWARM to generate realistic workloads to examine latency-utilization tradeoffs in MapReduce. SWARM enables us to infer that batched and multi-tenant execution effectively balance the tradeoff between latency and cluster utilization, a key insight for cloud operators.
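
To give a flavor of the "statistical analysis and replay" idea described above, the sketch below resamples synthetic MapReduce jobs from empirical distributions summarizing a production trace, then lays them out on an arrival timeline for replay. This is a minimal illustration only, not the paper's actual SWARM implementation; the trace values, field names, and sampling scheme are hypothetical.

    # Hypothetical sketch: sample synthetic jobs from empirical
    # distributions of a trace, then build a replay timeline.
    import random

    # Assumed trace summaries (illustrative, not from the paper):
    # observed inter-arrival times (seconds) and map input sizes (GB).
    observed_interarrivals = [5, 12, 3, 45, 8, 20, 7, 60, 15, 9]
    observed_input_sizes_gb = [0.5, 2.0, 64.0, 1.0, 8.0, 0.25, 128.0, 4.0]

    def sample_workload(num_jobs, seed=42):
        """Draw a synthetic workload by resampling the empirical distributions."""
        rng = random.Random(seed)
        t = 0.0
        jobs = []
        for job_id in range(num_jobs):
            t += rng.choice(observed_interarrivals)     # next arrival time
            size = rng.choice(observed_input_sizes_gb)  # job input size
            jobs.append({"id": job_id, "arrival_s": t, "input_gb": size})
        return jobs

    # Replaying this list against a cluster (or simulator) at the sampled
    # arrival times would exercise latency-utilization tradeoffs under a
    # statistically representative load rather than a hand-picked one.
    for job in sample_workload(5):
        print(job)
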


BibTeX citation:

@techreport{Chen:EECS-2010-81,
    Author = {Chen, Yanpei and Ganapathi, Archana Sulochana and Griffith, Rean and Katz, Randy H.},
    Title = {Towards Understanding Cloud Performance Tradeoffs Using Statistical Workload Analysis and Replay},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2010},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-81.html},
    Number = {UCB/EECS-2010-81},
    Abstract = {Cloud computing has given rise to a variety of distributed applications that rely on the ability to harness commodity resources for large-scale computations. The inherent performance variability in these applications’ workloads, coupled with the system’s heterogeneity, renders ineffective heuristics-based design decisions such as system configuration, application partitioning and placement, and job scheduling. Furthermore, the cloud operator’s objective of maximizing utilization conflicts with cloud application developers’ goal of minimizing latency, necessitating systematic approaches to trading off these competing objectives. One important cloud application that highlights these tradeoffs is MapReduce.
In this paper, we demonstrate a systematic approach to reasoning about cloud performance tradeoffs using a tool we developed called Statistical Workload Analysis and Replay for MapReduce (SWARM). We use SWARM to generate realistic workloads to examine latency-utilization tradeoffs in MapReduce. SWARM enables us to infer that batched and multi-tenant execution effectively balance the tradeoff between latency and cluster utilization, a key insight for cloud operators.}
}

EndNote citation:

%0 Report
%A Chen, Yanpei
%A Ganapathi, Archana Sulochana
%A Griffith, Rean
%A Katz, Randy H.
%T Towards Understanding Cloud Performance Tradeoffs Using Statistical Workload Analysis and Replay
%I EECS Department, University of California, Berkeley
%D 2010
%8 May 15
%@ UCB/EECS-2010-81
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-81.html
%F Chen:EECS-2010-81