FRPA: A Framework for Recursive Parallel Algorithms
David Eliahu, Omer Spillinger, Armando Fox, and James Demmel
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2015-28
May 1, 2015
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-28.pdf
Recursion continues to play an important role in high-performance computing. However, parallelizing recursive algorithms while achieving high performance is nontrivial and can result in complex, hard-to-maintain code. In particular, assigning processors to subproblems is complicated by recent observations that communication costs often dominate computation costs. Previous work demonstrates that carefully choosing which divide-and-conquer steps to execute in parallel (breadth-first steps) and which to execute sequentially (depth-first steps) can result in significant performance gains over naive scheduling. Our Framework for Recursive Parallel Algorithms (FRPA) allows for the separation of an algorithm's implementation from its parallelization. The programmer must simply define how to split a problem, solve the base case, and merge solved subproblems; FRPA handles parallelizing the code and tuning the recursive parallelization strategy, enabling algorithms to achieve high performance. To demonstrate FRPA's performance capabilities, we present a detailed analysis of two algorithms: Strassen-Winograd and Communication-Optimal Parallel Recursive Rectangular Matrix Multiplication (CARMA). Our single-precision CARMA implementation is fewer than 80 lines of code and achieves a speedup of up to 11X over Intel's Math Kernel Library (MKL) matrix multiplication routine on "skinny" matrices. Our double-precision Strassen-Winograd implementation, at just 150 lines of code, is up to 45% faster than MKL for large square matrix multiplications. To show FRPA's generality and simplicity, we implement six additional algorithms: mergesort, quicksort, TRSM, SYRK, Cholesky decomposition, and Delaunay triangulation. FRPA is implemented in C++, runs in shared-memory environments, uses Intel's Cilk Plus for task-based parallelism, and leverages OpenTuner to tune the parallelization strategy.
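To make the programming model described above concrete, the following C++ sketch (not FRPA's actual API; the function and parameter names are illustrative) expresses mergesort as a split, a base case, and a merge, with a string of 'B' and 'D' characters choosing, level by level, whether the two subproblems are solved in parallel (a breadth-first step) or sequentially (a depth-first step). FRPA itself parallelizes with Cilk Plus and tunes this choice with OpenTuner; the sketch uses std::async only to stay self-contained.

    // Hypothetical sketch, not FRPA's interface: mergesort written as
    // split / base case / merge, with one 'B' (parallel) or 'D' (sequential)
    // character per recursion level controlling the parallelization strategy.
    #include <algorithm>
    #include <cstddef>
    #include <future>
    #include <string>
    #include <vector>

    // Sort v[lo, hi). 'steps' holds one 'B' or 'D' per recursion level;
    // levels deeper than the string default to 'D'.
    void merge_sort(std::vector<int>& v, std::size_t lo, std::size_t hi,
                    const std::string& steps, std::size_t depth = 0) {
        if (hi - lo <= 1) return;                    // base case: already sorted

        std::size_t mid = lo + (hi - lo) / 2;        // split into two halves
        bool parallel = depth < steps.size() && steps[depth] == 'B';

        if (parallel) {
            // Breadth-first step: solve both subproblems concurrently
            // (the halves touch disjoint ranges, so this is race-free).
            auto left = std::async(std::launch::async, [&] {
                merge_sort(v, lo, mid, steps, depth + 1);
            });
            merge_sort(v, mid, hi, steps, depth + 1);
            left.wait();
        } else {
            // Depth-first step: solve the subproblems one after the other.
            merge_sort(v, lo, mid, steps, depth + 1);
            merge_sort(v, mid, hi, steps, depth + 1);
        }

        // Merge the two sorted halves.
        std::inplace_merge(v.begin() + lo, v.begin() + mid, v.begin() + hi);
    }

    int main() {
        std::vector<int> v = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
        merge_sort(v, 0, v.size(), "BBD");   // two parallel levels, then sequential
        return std::is_sorted(v.begin(), v.end()) ? 0 : 1;
    }

In FRPA the programmer supplies only the analogous split, merge, and base-case routines, while the framework generates the recursion and searches over such breadth-first/depth-first sequences automatically.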
Advisor: Armando Fox
BibTeX citation:
@mastersthesis{Eliahu:EECS-2015-28,
  author = {Eliahu, David and Spillinger, Omer and Fox, Armando and Demmel, James},
  title  = {FRPA: A Framework for Recursive Parallel Algorithms},
  school = {EECS Department, University of California, Berkeley},
  year   = {2015},
  month  = {May},
  url    = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-28.html},
  number = {UCB/EECS-2015-28}
}
EndNote citation:
%0 Thesis
%A Eliahu, David
%A Spillinger, Omer
%A Fox, Armando
%A Demmel, James
%T FRPA: A Framework for Recursive Parallel Algorithms
%I EECS Department, University of California, Berkeley
%D 2015
%8 May 1
%@ UCB/EECS-2015-28
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-28.html
%F Eliahu:EECS-2015-28