Avoiding Communication in Successive Band Reduction

Grey Ballard, James Demmel, and Nicholas Knight

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2013-131

July 11, 2013

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-131.pdf

The running time of an algorithm depends on both arithmetic and communication (i.e., data movement) costs, and the relative costs of communication are growing over time. In this work, we present sequential and parallel algorithms for tridiagonalizing a symmetric band matrix that asymptotically reduce communication compared to previous approaches.

The tridiagonalization of a symmetric band matrix is a key kernel in solving the symmetric eigenvalue problem for both full and band matrices. In order to preserve sparsity, tridiagonalization routines use annihilate-and-chase procedures that previously have suffered from poor data locality. We improve data locality by reorganizing the computation and obtain asymptotic improvements. We consider the cases of computing eigenvalues only and of computing eigenvalues and all eigenvectors.
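To make the annihilate-and-chase idea concrete, below is a minimal sketch of an unblocked, Schwarz-style band reduction in Python/NumPy: each sweep annihilates the outermost diagonal of the band with Givens rotations, and each annihilation creates a bulge outside the band that must be chased down and off the matrix; repeating the sweep for successively smaller bandwidths leaves a tridiagonal matrix. The code works on a dense array for clarity, the function names are ours, and it makes no attempt at the blocking or communication-avoiding reorganization that is the subject of this report.

import numpy as np

def givens(a, b):
    # Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0].
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def sym_rotate(A, p, q, c, s):
    # Similarity transform A <- G A G^T, where G is a Givens rotation
    # in the (p, q) plane; only rows and columns p and q change.
    G = np.array([[c, s], [-s, c]])
    A[[p, q], :] = G @ A[[p, q], :]
    A[:, [p, q]] = A[:, [p, q]] @ G.T

def reduce_bandwidth_by_one(A, b):
    # One sweep: zero the diagonal at distance b from the main diagonal.
    # Annihilating A[j+b, j] creates a bulge b rows further down, which
    # is chased until it falls off the bottom of the matrix.
    n = A.shape[0]
    for j in range(n - b):
        k, i = j, j + b            # A[i, k] is the entry to annihilate
        while i < n:
            c, s = givens(A[i - 1, k], A[i, k])
            sym_rotate(A, i - 1, i, c, s)
            k, i = i - 1, i + b    # position of the bulge just created

def band_tridiagonalize(A, b):
    # Reduce a symmetric matrix of bandwidth b to tridiagonal form,
    # removing one diagonal per sweep (bandwidth b, b-1, ..., 2).
    T = A.copy()
    for width in range(b, 1, -1):
        reduce_bandwidth_by_one(T, width)
    return T

Because every step is an orthogonal similarity transformation, the spectrum is preserved: for a symmetric band matrix A of bandwidth b, np.linalg.eigvalsh(band_tridiagonalize(A, b)) agrees with np.linalg.eigvalsh(A) up to roundoff.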

BibTeX citation:

@techreport{Ballard:EECS-2013-131,
    Author= {Ballard, Grey and Demmel, James and Knight, Nicholas},
    Title= {Avoiding Communication in Successive Band Reduction},
    Institution= {EECS Department, University of California, Berkeley},
    Year= {2013},
    Month= {Jul},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-131.html},
    Number= {UCB/EECS-2013-131},
    Abstract= {The running time of an algorithm depends on both arithmetic and communication (i.e., data movement) costs, and the relative costs of communication are growing over time.  In this work, we present sequential and parallel algorithms for tridiagonalizing a symmetric band matrix that asymptotically reduce communication compared to previous approaches.

The tridiagonalization of a symmetric band matrix is a key kernel in solving the symmetric eigenvalue problem for both full and band matrices. In order to preserve sparsity, tridiagonalization routines use annihilate-and-chase procedures that previously have suffered from poor data locality. We improve data locality by reorganizing the computation and obtain asymptotic improvements. We consider the cases of computing eigenvalues only and of computing eigenvalues and all eigenvectors.},
}

EndNote citation:

%0 Report
%A Ballard, Grey 
%A Demmel, James 
%A Knight, Nicholas 
%T Avoiding Communication in Successive Band Reduction
%I EECS Department, University of California, Berkeley
%D 2013
%8 July 11
%@ UCB/EECS-2013-131
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-131.html
%F Ballard:EECS-2013-131