Communication-optimal parallel and sequential QR and LU factorizations

James Demmel, Laura Grigori, Mark Frederick Hoemmen and Julien Langou

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2008-89
August 4, 2008

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-89.pdf

We present parallel and sequential dense QR factorization algorithms that are both optimal (up to polylogarithmic factors) in the amount of communication they perform, and just as stable as Householder QR. Our first algorithm, Tall Skinny QR (TSQR), factors $m \times n$ matrices in a one-dimensional (1-D) block cyclic row layout, and is optimized for $m \gg n$. Our second algorithm, CAQR (Communication-Avoiding QR), factors general rectangular matrices distributed in a two-dimensional block cyclic layout. It invokes TSQR for each block column factorization.
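As a rough illustration of the TSQR idea, a single-level (flat) reduction can be sketched in NumPy: factor each row block independently, stack the small R factors, and factor the stack once more. This is only a sketch; the report's TSQR combines the blockwise R factors along a reduction tree, and the helper name `tsqr` and the block size below are illustrative choices, not the paper's interface.

```python
import numpy as np

def tsqr(A, block_rows=4):
    """Flat one-level TSQR sketch: QR each row block, then QR the stacked Rs."""
    m, n = A.shape
    Rs = []
    # Factor each row block independently; in parallel TSQR these QRs
    # would run on different processors with no communication.
    for i in range(0, m, block_rows):
        _, R = np.linalg.qr(A[i:i + block_rows])
        Rs.append(R)
    # Combine step: one QR of the stacked small R factors yields the
    # final R of the whole matrix.
    _, R = np.linalg.qr(np.vstack(Rs))
    return R

A = np.random.default_rng(0).standard_normal((16, 3))
R_tsqr = tsqr(A)
R_ref = np.linalg.qr(A)[1]
# R is unique up to the signs of its rows (for full column rank),
# so compare absolute values.
assert np.allclose(np.abs(R_tsqr), np.abs(R_ref))
```

The point of the blocking is that each row block is touched once and only the $n \times n$ R factors move between blocks, which is what drives the communication savings for $m \gg n$.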

The new algorithms are superior in both theory and practice. We have extended known lower bounds on communication for sequential and parallel matrix multiplication to provide latency lower bounds, and show that these bounds apply to the LU and QR decompositions. We show not only that our QR algorithms attain these lower bounds (up to polylogarithmic factors), but also that existing LAPACK and ScaLAPACK algorithms perform asymptotically more communication. We also point out recent LU algorithms in the literature that attain at least some of these lower bounds.
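For orientation, the lower bounds in question take the following standard form for an $O(n^3)$ factorization; this is a sketch of the familiar shapes only, with $M$ the fast-memory size, $P$ the processor count, $W$ the number of words moved, and $S$ the number of messages (assumptions: minimal per-processor memory $M = O(n^2/P)$ in the parallel case, and each message carrying at most $M$ words, so latency bounds follow as $S = W/M$):

```latex
% Sequential, fast memory of size M:
W_{\mathrm{seq}} = \Omega\!\left( \frac{n^3}{\sqrt{M}} \right),
\qquad
S_{\mathrm{seq}} = \Omega\!\left( \frac{n^3}{M^{3/2}} \right).

% Parallel, P processors, each with M = O(n^2 / P) words of memory:
W_{\mathrm{par}} = \Omega\!\left( \frac{n^2}{\sqrt{P}} \right),
\qquad
S_{\mathrm{par}} = \Omega\!\left( \sqrt{P} \right).
```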

Both TSQR and CAQR have asymptotically lower latency cost in the parallel case, and asymptotically lower latency and bandwidth costs in the sequential case. In practice, we have implemented parallel TSQR on several machines, with speedups of up to $6.7\times$ on 16 processors of a Pentium III cluster, and up to $4\times$ on 32 processors of a BlueGene/L. We have also implemented sequential TSQR on a laptop for matrices that do not fit in DRAM, so that disk plays the role of slow memory. Our out-of-DRAM implementation ran as little as $2\times$ slower than the runtime predicted for a hypothetical machine with infinite DRAM.

We have also modeled the performance of our parallel CAQR algorithm, yielding predicted speedups over ScaLAPACK's PDGEQRF of up to $9.7\times$ on an IBM Power5, up to $22.9\times$ on a model Petascale machine, and up to $5.3\times$ on a model of the Grid.

Author Comments: Current version available on arXiv at http://arxiv.org/pdf/0809.0101. Replaces EECS-2008-89 and EECS-2008-74.


BibTeX citation:

@techreport{Demmel:EECS-2008-89,
    Author = {Demmel, James and Grigori, Laura and Hoemmen, Mark Frederick and Langou, Julien},
    Title = {Communication-optimal parallel and sequential QR and LU factorizations},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2008},
    Month = {Aug},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-89.html},
    Number = {UCB/EECS-2008-89},
    Note = {Current version available in the ArXiv at http://arxiv.org/pdf/0809.0101

Replaces EECS-2008-89 and EECS-2008-74.},
    Abstract = {  We present parallel and sequential dense QR factorization algorithms
  that are both \emph{optimal} (up to polylogarithmic factors) in the
  amount of communication they perform, and just as \emph{stable} as
  Householder QR.  Our first algorithm, Tall Skinny QR (TSQR), factors
  $m \times n$ matrices in a one-dimensional (1-D) block cyclic row
  layout, and is optimized for $m \gg n$.  Our second algorithm, CAQR
  (Communication-Avoiding QR), factors general rectangular matrices
  distributed in a two-dimensional block cyclic layout.  It invokes
  TSQR for each block column factorization.

  The new algorithms are superior in both theory and practice.  We
  have extended known lower bounds on communication for sequential and
  parallel matrix multiplication to provide latency lower bounds, and
  show that these bounds apply to the LU and QR decompositions.  We
  show not only that our QR algorithms attain these lower bounds (up to
  polylogarithmic factors), but also that existing LAPACK and ScaLAPACK
  algorithms perform asymptotically more communication.  We also point
  out recent LU algorithms in the literature that attain at least some
  of these lower bounds.

  Both TSQR and CAQR have asymptotically lower latency cost in the
  parallel case, and asymptotically lower latency and bandwidth costs
  in the sequential case.  In practice, we have implemented parallel
  TSQR on several machines, with speedups of up to $6.7\times$ on 16
  processors of a Pentium III cluster, and up to $4\times$ on 32
  processors of a BlueGene/L.  We have also implemented sequential
  TSQR on a laptop for matrices that do not fit in DRAM, so that disk
  plays the role of slow memory.  Our out-of-DRAM implementation ran
  as little as $2\times$ slower than the runtime predicted for a
  hypothetical machine with infinite DRAM.

  We have also modeled the performance of our parallel CAQR algorithm,
  yielding predicted speedups over ScaLAPACK's \lstinline!PDGEQRF! of
  up to $9.7\times$ on an IBM Power5, up to $22.9\times$ on a model
  Petascale machine, and up to $5.3\times$ on a model of the Grid.}
}

EndNote citation:

%0 Report
%A Demmel, James
%A Grigori, Laura
%A Hoemmen, Mark Frederick
%A Langou, Julien
%T Communication-optimal parallel and sequential QR and LU factorizations
%I EECS Department, University of California, Berkeley
%D 2008
%8 August 4
%@ UCB/EECS-2008-89
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-89.html
%F Demmel:EECS-2008-89