LU, QR and Cholesky Factorizations using Vector Capabilities of GPUs

Vasily Volkov and James Demmel

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2008-49

May 13, 2008

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-49.pdf

We present performance results for dense linear algebra using the 8-series NVIDIA GPUs. Our matrix-matrix multiply routine (GEMM) runs 60% faster than the vendor implementation in CUBLAS 1.1 and approaches the peak of hardware capabilities. Our LU, QR and Cholesky factorizations achieve up to 80–90% of the peak GEMM rate. Our parallel LU running on two GPUs achieves up to ~300 Gflop/s. These results are accomplished by challenging the accepted view of the GPU architecture and programming guidelines. We argue that modern GPUs should be viewed as multithreaded multicore vector units. We exploit blocking similarly to vector computers and heterogeneity of the system by computing both on GPU and CPU. This study includes detailed benchmarking of the GPU memory system that reveals sizes and latencies of caches and TLB. We present a couple of algorithmic optimizations aimed at increasing parallelism and regularity in the problem that provide us with slightly higher performance.
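The blocked, GEMM-rich structure the abstract alludes to can be illustrated with a minimal sketch. This is not the report's implementation: the paper performs the narrow panel factorization on the CPU and the large trailing-matrix GEMM update on the GPU, whereas the sketch below runs entirely on the CPU with NumPy and omits pivoting. The function name `blocked_lu` and the block size `nb` are illustrative choices, not names from the report.

```python
import numpy as np

def blocked_lu(A, nb=32):
    """Right-looking blocked LU factorization, in place, without pivoting.

    Illustrative sketch only. In the report's heterogeneous scheme the
    tall-skinny panel step would run on the CPU while the trailing-matrix
    update (a large GEMM, the dominant cost) would run on the GPU.
    """
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        b = min(nb, n - k)
        # Unblocked LU of the panel A[k:n, k:k+b] (CPU-friendly, latency-bound)
        for j in range(k, k + b):
            A[j+1:, j] /= A[j, j]
            A[j+1:, j+1:k+b] -= np.outer(A[j+1:, j], A[j, j+1:k+b])
        # Triangular solve for the row block: U12 = L11^{-1} * A12
        L11 = np.tril(A[k:k+b, k:k+b], -1) + np.eye(b)
        A[k:k+b, k+b:] = np.linalg.solve(L11, A[k:k+b, k+b:])
        # GEMM update of the trailing matrix -- the step that dominates
        # the flop count and determines the achievable fraction of peak
        A[k+b:, k+b:] -= A[k+b:, k:k+b] @ A[k:k+b, k+b:]
    return A
```

Because nearly all flops land in the final rank-`nb` GEMM update, the factorization rate approaches the GEMM rate as the matrix grows, which is why the report quotes its LU, QR and Cholesky results as a fraction of peak GEMM throughput.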


BibTeX citation:

@techreport{Volkov:EECS-2008-49,
    Author= {Volkov, Vasily and Demmel, James},
    Title= {LU, QR and Cholesky Factorizations using Vector Capabilities of GPUs},
    Year= {2008},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-49.html},
    Number= {UCB/EECS-2008-49},
    Abstract= {We present performance results for dense linear algebra using the 8-series NVIDIA GPUs. Our matrix-matrix multiply routine (GEMM) runs 60% faster than the vendor implementation in CUBLAS 1.1 and approaches the peak of hardware capabilities. Our LU, QR and Cholesky factorizations achieve up to 80-90% of the peak GEMM rate. Our parallel LU running on two GPUs achieves up to ~300 Gflop/s. These results are accomplished by challenging the accepted view of the GPU architecture and programming guidelines. We argue that modern GPUs should be viewed as multithreaded multicore vector units. We exploit blocking similarly to vector computers and heterogeneity of the system by computing both on GPU and CPU. This study includes detailed benchmarking of the GPU memory system that reveals sizes and latencies of caches and TLB. We present a couple of algorithmic optimizations aimed at increasing parallelism and regularity in the problem that provide us with slightly higher performance.},
}

EndNote citation:

%0 Report
%A Volkov, Vasily 
%A Demmel, James 
%T LU, QR and Cholesky Factorizations using Vector Capabilities of GPUs
%I EECS Department, University of California, Berkeley
%D 2008
%8 May 13
%@ UCB/EECS-2008-49
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-49.html
%F Volkov:EECS-2008-49