Rashmi Vinayak

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2016-155

September 14, 2016

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-155.pdf

Big-data systems enable storage and analysis of massive amounts of data, and are fueling the data revolution that is impacting almost all walks of human endeavor today. The foundation of any big-data system is a large-scale, distributed data storage system. These storage systems are typically built out of inexpensive and unreliable commodity components, which, in conjunction with numerous other operational glitches, make unavailability events the norm rather than the exception.

To ensure data durability and service reliability, data needs to be stored redundantly. While the traditional approach to this objective is to store multiple replicas of the data, today's unprecedented data growth rates mandate more efficient alternatives. Coding theory, and erasure coding in particular, offers a compelling alternative by making optimal use of storage space. For this reason, many data-center-scale distributed storage systems are beginning to deploy erasure coding instead of replication. This paradigm shift has opened up exciting new challenges and opportunities on both the theoretical and the system-design fronts. Broadly, this thesis addresses some of these challenges and opportunities through contributions in the following two areas:

(1) Resource-efficient distributed storage codes and systems: Although traditional erasure codes optimize the usage of storage space, they significantly increase the consumption of other important cluster resources, such as network bandwidth, input/output operations on the storage devices (I/O), and computing resources (CPU). This thesis considers the problem of constructing codes, and of designing and building storage systems, that reduce the usage of I/O, network, and CPU resources without compromising storage efficiency; the first sketch following these two areas gives a back-of-the-envelope sense of these costs.

(2) New avenues for erasure coding in big-data systems: In big-data systems, the usage of erasure codes has largely been limited to disk-based storage systems, and, even there, primarily to achieving space-efficient fault tolerance, in other words, to durably storing "cold" (less frequently accessed) data. This thesis takes a step forward in exploring new avenues for erasure coding, in particular for "hot" (more frequently accessed) data, by showing how erasure coding can be employed to improve load balancing and to reduce both median and tail latencies in data-intensive cluster caches; the second sketch below illustrates the intuition.
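
To make the resource trade-off in area (1) concrete, the following back-of-the-envelope sketch in Python (ours, not the thesis's) compares a (k = 10, r = 4) Reed-Solomon code with 3-way replication; the code parameters and the 256 MB block size are assumptions chosen purely for illustration.

    def replication_overhead(copies: int) -> float:
        """Storage consumed per byte of user data under n-way replication."""
        return float(copies)

    def rs_overhead(k: int, r: int) -> float:
        """Storage consumed per byte of user data under a (k, r) Reed-Solomon code."""
        return (k + r) / k

    def rs_repair_read_mb(k: int, block_mb: int) -> int:
        """Data read to rebuild one lost block with a conventional RS decoder:
        any k surviving blocks must be fetched, so repair traffic is k full blocks."""
        return k * block_mb

    if __name__ == "__main__":
        k, r, block_mb = 10, 4, 256  # assumed parameters, for illustration only
        print(f"3-way replication overhead : {replication_overhead(3):.1f}x")
        print(f"RS({k},{r}) storage overhead  : {rs_overhead(k, r):.1f}x")
        print(f"RS({k},{r}) repair traffic    : {rs_repair_read_mb(k, block_mb)} MB read "
              f"to recover one {block_mb} MB block (vs {block_mb} MB under replication)")

With these assumed parameters, the code cuts storage overhead from 3x to 1.4x, but a conventional decoder must read ten times the size of the lost block to repair it; reducing such repair costs without giving up storage efficiency is the subject of area (1).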
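
For area (2), the following toy simulation (again ours, under an assumed latency model of fixed overhead plus transfer time plus exponential queueing noise) illustrates why caching an object as k + r coded chunks and using whichever k responses arrive first can reduce both median and tail read latencies relative to fetching the whole object from a single cache server.

    import random

    def fetch_ms(size_mb: float) -> float:
        """One server fetch: 1 ms overhead + transfer at ~100 MB/s + queueing noise."""
        return 1.0 + size_mb / 0.1 + random.expovariate(1.0 / 5.0)

    def replicated_read(object_mb: float) -> float:
        """Baseline: read the whole object from a single cache server."""
        return fetch_ms(object_mb)

    def coded_read(object_mb: float, k: int, r: int) -> float:
        """Coded read: issue k + r parallel chunk fetches, finish when the fastest k arrive."""
        chunk_mb = object_mb / k
        latencies = sorted(fetch_ms(chunk_mb) for _ in range(k + r))
        return latencies[k - 1]

    if __name__ == "__main__":
        random.seed(0)
        trials, object_mb, k, r = 50_000, 10.0, 10, 4
        rep = sorted(replicated_read(object_mb) for _ in range(trials))
        cod = sorted(coded_read(object_mb, k, r) for _ in range(trials))

        def pct(xs, q):
            return xs[int(q * (len(xs) - 1))]

        for name, q in (("median", 0.50), ("p99   ", 0.99)):
            print(f"{name}: single-server read {pct(rep, q):6.1f} ms | "
                  f"coded read {pct(cod, q):6.1f} ms")

Under this toy model the coded read benefits twice: the transfer is spread across many servers, and the r extra chunk requests let the read complete without waiting for stragglers. The specific numbers depend entirely on the assumed model; the thesis evaluates the idea in real-world cluster-cache settings.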

An overarching goal of this thesis is to bridge theory and practice. Towards this goal, we present new code constructions and techniques that possess attractive theoretical guarantees. We also design and build systems that employ the proposed codes and techniques. In evaluations performed in real-world settings, these systems exhibit significant benefits over the state of the art, and they are also slated to be part of the next release of Apache Hadoop.

Advisor: Kannan Ramchandran


BibTeX citation:

@phdthesis{Vinayak:EECS-2016-155,
    Author= {Vinayak, Rashmi},
    Title= {Erasure Coding for Big-data Systems: Theory and Practice},
    School= {EECS Department, University of California, Berkeley},
    Year= {2016},
    Month= {Sep},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-155.html},
    Number= {UCB/EECS-2016-155},
}

EndNote citation:

%0 Thesis
%A Vinayak, Rashmi 
%T Erasure Coding for Big-data Systems: Theory and Practice
%I EECS Department, University of California, Berkeley
%D 2016
%8 September 14
%@ UCB/EECS-2016-155
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-155.html
%F Vinayak:EECS-2016-155