Avoiding communication in primal and dual block coordinate descent methods

Aditya Devarakonda, Kimon Fountoulakis, James Demmel and Michael W. Mahoney

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2016-197

December 12, 2016

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-197.pdf

Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular in analyzing large machine learning datasets. However, existing implementations communicate at every iteration, and on modern data center and supercomputing architectures this communication often dominates the cost of the floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by reorganizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations to the regularized least-squares problem leads to primal and dual block coordinate descent methods that communicate only every s iterations, where s is a tuning parameter, instead of at every iteration. We derive communication-avoiding variants of the primal and dual block coordinate descent methods that reduce the number of synchronizations by a factor of s on distributed-memory parallel machines without altering the convergence rate. Our communication-avoiding algorithms attain modeled strong scaling speedups of 14x and 165x on a modern supercomputer using MPI and Apache Spark, respectively, and modeled weak scaling speedups of 12x and 396x on the same machine.
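
To make the s-step idea in the abstract concrete, the following is a minimal NumPy sketch of a communication-avoiding block coordinate descent round for ridge regression, min_x 0.5*||Ax - y||^2 + 0.5*lam*||x||^2. It is an illustration under simplifying assumptions, not the authors' implementation: blocks are sampled without replacement within each round, block subproblems are solved exactly, and the single Gram/gradient product per round stands in for the one allreduce that would replace s per-iteration allreduces when A is partitioned by rows across processes. Problem sizes, the block size blk, lam, and s are arbitrary illustrative values.

import numpy as np

rng = np.random.default_rng(0)
m, n, blk, lam, s = 200, 60, 5, 1e-2, 4          # illustrative sizes and parameters
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)
blocks = [np.arange(i, i + blk) for i in range(0, n, blk)]

def objective(x):
    r = y - A @ x
    return 0.5 * r @ r + 0.5 * lam * x @ x

def ca_bcd_round(x, chosen):
    """One s-step round: one 'communication' (Gram/gradient product) serves s block updates."""
    x = x.copy()
    r0 = y - A @ x                                           # residual at the start of the round
    Y = np.concatenate([A[:, b] for b in chosen], axis=1)    # m x (s*blk) column block
    G = Y.T @ Y          # in a distributed run: formed with ONE allreduce ...
    g = Y.T @ r0         # ... together with the block gradients at the round's start
    d = np.zeros(Y.shape[1])                                 # cumulative updates this round
    for j, b in enumerate(chosen):
        rows = slice(j * blk, (j + 1) * blk)
        # A_b^T r_current recovered locally from G, g, and d: no further communication
        grad_b = g[rows] - G[rows, :] @ d
        # exact block solve: (A_b^T A_b + lam*I) delta = A_b^T r_current - lam * x_b
        delta = np.linalg.solve(G[rows, rows] + lam * np.eye(blk), grad_b - lam * x[b])
        d[rows] += delta
        x[b] += delta
    return x

x = np.zeros(n)
print(f"initial objective {objective(x):.6f}")
for it in range(20):
    chosen = [blocks[i] for i in rng.choice(len(blocks), size=s, replace=False)]
    x = ca_bcd_round(x, chosen)
    print(f"round {it:2d}  objective {objective(x):.6f}")

Because the inner loop recovers each block gradient exactly from the round's Gram matrix and initial residual, the iterates match those of s ordinary block updates applied to the same block sequence, which is why this reorganization reduces synchronizations by a factor of s without altering the convergence rate.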


BibTeX citation:

@techreport{Devarakonda:EECS-2016-197,
    Author= {Devarakonda, Aditya and Fountoulakis, Kimon and Demmel, James and Mahoney, Michael W.},
    Title= {Avoiding communication in primal and dual block coordinate descent methods},
    Year= {2016},
    Month= {Dec},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-197.html},
    Number= {UCB/EECS-2016-197},
    Abstract= {Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular in analyzing large machine learning datasets. However, existing implementations communicate at every iteration which, on modern data center and supercomputing architectures, often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by re-organizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations can lead to primal and dual block coordinate descent methods that only communicate every s iterations--where s is a tuning parameter--instead of every iteration for the regularized least-squares problem. We derive communication-avoiding variants of the primal and dual block coordinate descent methods which reduce the number of synchronizations by a factor of s on distributed-memory parallel machines without altering the convergence rate. Our communication-avoiding algorithms attain modeled strong scaling speedups of 14x and 165x on a modern supercomputer using MPI and Apache Spark, respectively. Our algorithms attain modeled weak scaling speedups of 12x and 396x on the same machine using MPI and Apache Spark, respectively.},
}

EndNote citation:

%0 Report
%A Devarakonda, Aditya 
%A Fountoulakis, Kimon 
%A Demmel, James 
%A Mahoney, Michael W. 
%T Avoiding communication in primal and dual block coordinate descent methods
%I EECS Department, University of California, Berkeley
%D 2016
%8 December 12
%@ UCB/EECS-2016-197
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-197.html
%F Devarakonda:EECS-2016-197