Avoiding communication in primal and dual block coordinate descent methods
Aditya Devarakonda
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2016-196
December 12, 2016
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-196.pdf
Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular for analyzing large machine learning datasets. However, existing implementations communicate at every iteration, and on modern data center and supercomputing architectures this communication often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by reorganizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations leads to primal and dual block coordinate descent methods for the regularized least-squares problem that communicate only every s iterations, where s is a tuning parameter, instead of every iteration. We derive communication-avoiding variants of the primal and dual block coordinate descent methods that reduce the number of synchronizations by a factor of s on distributed-memory parallel machines without altering the convergence rate. Our communication-avoiding algorithms attain modeled strong scaling speedups of 14x and 165x on a modern supercomputer using MPI and Apache Spark, respectively, and modeled weak scaling speedups of 12x and 396x on the same machine.
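For context, the following is a minimal NumPy sketch (not the code from the report) of primal block coordinate descent applied to the regularized least-squares objective 0.5*||Ax - b||^2 + 0.5*lam*||x||^2; the function name and parameters are illustrative only. Each iteration solves one block subproblem exactly. In a distributed-memory setting, forming the block quantities A_J^T r and A_J^T A_J requires a reduction (e.g. an allreduce) every iteration; this per-iteration communication is what the report's s-step variants batch into a single round every s iterations.

import numpy as np

def block_coordinate_descent(A, b, lam, block_size, num_iters, seed=0):
    """Randomized block coordinate descent for ridge-regularized least squares."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = -b.copy()                        # residual r = A x - b, with x = 0 initially
    for _ in range(num_iters):
        J = rng.choice(n, size=block_size, replace=False)   # pick a block of coordinates
        AJ = A[:, J]
        # Exact block update: solve (A_J^T A_J + lam I) delta = -(A_J^T r + lam x_J)
        G = AJ.T @ AJ + lam * np.eye(block_size)
        g = AJ.T @ r + lam * x[J]
        delta = np.linalg.solve(G, -g)
        x[J] += delta
        r += AJ @ delta                  # keep the residual consistent with the new x
    return x

if __name__ == "__main__":
    # Small illustrative run: compare against the closed-form ridge solution.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    b = rng.standard_normal(200)
    x = block_coordinate_descent(A, b, lam=0.1, block_size=5, num_iters=200)
    x_exact = np.linalg.solve(A.T @ A + 0.1 * np.eye(50), A.T @ b)
    print("distance to exact ridge solution:", np.linalg.norm(x - x_exact))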
Advisors: James Demmel and Michael William Mahoney
BibTeX citation:
@mastersthesis{Devarakonda:EECS-2016-196,
    Author = {Devarakonda, Aditya},
    Title = {Avoiding communication in primal and dual block coordinate descent methods},
    School = {EECS Department, University of California, Berkeley},
    Year = {2016},
    Month = {Dec},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-196.html},
    Number = {UCB/EECS-2016-196},
    Abstract = {Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular in analyzing large machine learning datasets. However, existing implementations communicate at every iteration which, on modern data center and supercomputing architectures, often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by re-organizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations can lead to primal and dual block coordinate descent methods that only communicate every s iterations--where s is a tuning parameter--instead of every iteration for the regularized least-squares problem. We derive communication-avoiding variants of the primal and dual block coordinate descent methods which reduce the number of synchronizations by a factor of s on distributed-memory parallel machines without altering the convergence rate. Our communication-avoiding algorithms attain modeled strong scaling speedups of 14x and 165x on a modern supercomputer using MPI and Apache Spark, respectively. Our algorithms attain modeled weak scaling speedups of 12x and 396x on the same machine using MPI and Apache Spark, respectively.},
}
EndNote citation:
%0 Thesis
%A Devarakonda, Aditya
%T Avoiding communication in primal and dual block coordinate descent methods
%I EECS Department, University of California, Berkeley
%D 2016
%8 December 12
%@ UCB/EECS-2016-196
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-196.html
%F Devarakonda:EECS-2016-196