Speeding up Crowds for Low-latency Data Labeling
Daniel Haas
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2016-10
March 10, 2016
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-10.pdf
Data labeling is a necessary but often slow process that impedes the development of interactive systems for modern data analysis. Despite rising demand for manual data labeling, there is a surprising lack of work addressing its high and unpredictable latency. In this paper, we introduce CLAMShell, a system that speeds up crowds in order to achieve consistently low-latency data labeling. We offer a taxonomy of the sources of labeling latency and study several large crowdsourced labeling deployments to understand their empirical latency profiles. Driven by these insights, we comprehensively tackle each source of latency, both by developing novel techniques such as straggler mitigation and pool maintenance and by optimizing existing methods such as crowd retainer pools and active learning. We evaluate CLAMShell in simulation and on live workers on Amazon's Mechanical Turk, demonstrating that our techniques can provide an order of magnitude speedup and variance reduction over existing crowdsourced labeling strategies.
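The intuition behind the straggler mitigation technique named in the abstract can be illustrated with a small simulation: assign each labeling task to several workers at once and accept the fastest response, so no single slow worker delays the task. The sketch below is a minimal illustration, not CLAMShell's actual implementation; the log-normal latency distribution, its parameters, and all function names are assumptions made for the example.

import random

def simulate_straggler_mitigation(num_tasks=100, redundancy=3, seed=0):
    """Compare mean labeling latency with and without straggler mitigation.

    Each task's per-worker completion time is drawn from a hypothetical
    log-normal distribution (an illustrative assumption). With straggler
    mitigation, the task is assigned to `redundancy` workers simultaneously
    and the first completed label is accepted.
    """
    rng = random.Random(seed)
    single, mitigated = [], []
    for _ in range(num_tasks):
        draws = [rng.lognormvariate(2.0, 0.75) for _ in range(redundancy)]
        single.append(draws[0])        # one worker per task: full latency
        mitigated.append(min(draws))   # redundant assignment: fastest wins
    return sum(single) / num_tasks, sum(mitigated) / num_tasks

if __name__ == "__main__":
    base, fast = simulate_straggler_mitigation()
    print(f"mean latency, single assignment:        {base:.2f}s")
    print(f"mean latency, straggler mitigation (3x): {fast:.2f}s")

Because the minimum of several independent draws has a much lighter tail than a single draw, redundant assignment reduces both mean and variance of task latency, trading extra worker cost for predictability.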
Advisor: Michael Franklin
BibTeX citation:
@mastersthesis{Haas:EECS-2016-10,
    Author = {Haas, Daniel},
    Title = {Speeding up Crowds for Low-latency Data Labeling},
    School = {EECS Department, University of California, Berkeley},
    Year = {2016},
    Month = {Mar},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-10.html},
    Number = {UCB/EECS-2016-10},
    Abstract = {Data labeling is a necessary but often slow process that impedes the development of interactive systems for modern data analysis. Despite rising demand for manual data labeling, there is a surprising lack of work addressing its high and unpredictable latency. In this paper, we introduce CLAMShell, a system that speeds up crowds in order to achieve consistently low-latency data labeling. We offer a taxonomy of the sources of labeling latency and study several large crowdsourced labeling deployments to understand their empirical latency profiles. Driven by these insights, we comprehensively tackle each source of latency, both by developing novel techniques such as straggler mitigation and pool maintenance and by optimizing existing methods such as crowd retainer pools and active learning. We evaluate CLAMShell in simulation and on live workers on Amazon's Mechanical Turk, demonstrating that our techniques can provide an order of magnitude speedup and variance reduction over existing crowdsourced labeling strategies.}
}
EndNote citation:
%0 Thesis
%A Haas, Daniel
%T Speeding up Crowds for Low-latency Data Labeling
%I EECS Department, University of California, Berkeley
%D 2016
%8 March 10
%@ UCB/EECS-2016-10
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-10.html
%F Haas:EECS-2016-10