TY - JOUR
T1 - An MPI-CUDA implementation and optimization for parallel Sparse Equations and Least Squares (LSQR)
AU - Huang, He
AU - Wang, Liqiang
AU - Lee, En Jui
AU - Chen, Po
N1 - Funding Information:
The work was supported in part by NSF under Grants 0941735 and NSF CAREER 1054834, and by the Graduate Assistantship of the School of Energy Resources at the University of Wyoming. This research used resources of the Keeneland Computing Facility supported by the NSF under Contract OCI-0910735. Thanks to Galen Arnold of NCSA and Dr. John Dennis of NCAR for their insightful suggestions. Corresponding author. Email address: [email protected]
PY - 2012
Y1 - 2012
N2 - LSQR (Sparse Equations and Least Squares) is a widely used Krylov subspace method for solving large-scale linear systems in seismic tomography. This paper presents a parallel MPI-CUDA implementation of the LSQR solver. At the CUDA level, our contributions include: (1) utilizing CUBLAS and CUSPARSE to compute the major steps in LSQR; (2) optimizing memory copies between host memory and device memory; (3) developing a CUDA kernel that performs the transpose sparse matrix-vector multiplication (SpMV) without transposing the matrix in memory or keeping an additional copy. At the MPI level, our contributions include: (1) decomposing both the matrix and the vectors to increase parallelism; (2) designing a static load balancing strategy. In our experiments, the single-GPU code achieves up to a 17.6× speedup with 15.7 GFlops in single precision and a 15.2× speedup with 12.0 GFlops in double precision, compared with the original serial CPU code. The MPI-GPU code achieves up to a 3.7× speedup with 268 GFlops in single precision and a 3.8× speedup with 223 GFlops in double precision on 135 MPI tasks, compared with the corresponding MPI-CPU code. The MPI-GPU code scales well in both strong and weak scaling tests. In addition, our parallel implementations outperform the LSQR subroutine in the PETSc library.
AB - LSQR (Sparse Equations and Least Squares) is a widely used Krylov subspace method for solving large-scale linear systems in seismic tomography. This paper presents a parallel MPI-CUDA implementation of the LSQR solver. At the CUDA level, our contributions include: (1) utilizing CUBLAS and CUSPARSE to compute the major steps in LSQR; (2) optimizing memory copies between host memory and device memory; (3) developing a CUDA kernel that performs the transpose sparse matrix-vector multiplication (SpMV) without transposing the matrix in memory or keeping an additional copy. At the MPI level, our contributions include: (1) decomposing both the matrix and the vectors to increase parallelism; (2) designing a static load balancing strategy. In our experiments, the single-GPU code achieves up to a 17.6× speedup with 15.7 GFlops in single precision and a 15.2× speedup with 12.0 GFlops in double precision, compared with the original serial CPU code. The MPI-GPU code achieves up to a 3.7× speedup with 268 GFlops in single precision and a 3.8× speedup with 223 GFlops in double precision on 135 MPI tasks, compared with the corresponding MPI-CPU code. The MPI-GPU code scales well in both strong and weak scaling tests. In addition, our parallel implementations outperform the LSQR subroutine in the PETSc library.
UR - http://www.scopus.com/inward/record.url?scp=84868318836&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84868318836&partnerID=8YFLogxK
U2 - 10.1016/j.procs.2012.04.009
DO - 10.1016/j.procs.2012.04.009
M3 - Conference article
AN - SCOPUS:84868318836
SN - 1877-0509
VL - 9
SP - 76
EP - 85
JO - Procedia Computer Science
JF - Procedia Computer Science
T2 - 12th Annual International Conference on Computational Science, ICCS 2012
Y2 - 4 June 2012 through 6 June 2012
ER -
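
The abstract's key CUDA-level contribution is a transpose SpMV kernel that computes y = A^T x directly from the CSR storage of A, without materializing A^T. Below is a minimal CUDA sketch of that general idea, assuming one thread per row that scatters val * x[i] into y with atomicAdd; the kernel name, launch configuration, and toy matrix are illustrative assumptions, not the paper's actual code.

// Hypothetical sketch: transpose SpMV (y = A^T * x) on a CSR matrix
// without building a transposed copy. Each thread scans one row i of A
// and scatters val * x[i] into y[col] with atomicAdd.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void transpose_spmv_csr(int num_rows,
                                   const int   *row_ptr,  // CSR row offsets, length num_rows+1
                                   const int   *col_idx,  // CSR column indices
                                   const float *val,      // CSR nonzero values
                                   const float *x,        // input vector, length num_rows
                                   float       *y)        // output vector (pre-zeroed), length num_cols
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_rows) {
        float xi = x[i];
        // A[i][j] contributes A[i][j] * x[i] to y[j]; rows are read in
        // place, so no transposed copy of A is ever stored.
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            atomicAdd(&y[col_idx[k]], val[k] * xi);
    }
}

int main() {
    // Toy 2x3 matrix A = [[1 0 2], [0 3 0]] in CSR; compute y = A^T x.
    int   h_row_ptr[] = {0, 2, 3};
    int   h_col_idx[] = {0, 2, 1};
    float h_val[]     = {1.f, 2.f, 3.f};
    float h_x[]       = {4.f, 5.f};       // length = num_rows
    float h_y[3]      = {0.f, 0.f, 0.f};  // length = num_cols

    int *d_row_ptr, *d_col_idx; float *d_val, *d_x, *d_y;
    cudaMalloc(&d_row_ptr, sizeof h_row_ptr);
    cudaMalloc(&d_col_idx, sizeof h_col_idx);
    cudaMalloc(&d_val,     sizeof h_val);
    cudaMalloc(&d_x,       sizeof h_x);
    cudaMalloc(&d_y,       sizeof h_y);
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof h_row_ptr, cudaMemcpyHostToDevice);
    cudaMemcpy(d_col_idx, h_col_idx, sizeof h_col_idx, cudaMemcpyHostToDevice);
    cudaMemcpy(d_val,     h_val,     sizeof h_val,     cudaMemcpyHostToDevice);
    cudaMemcpy(d_x,       h_x,       sizeof h_x,       cudaMemcpyHostToDevice);
    cudaMemset(d_y, 0, sizeof h_y);

    transpose_spmv_csr<<<1, 32>>>(2, d_row_ptr, d_col_idx, d_val, d_x, d_y);
    cudaMemcpy(h_y, d_y, sizeof h_y, cudaMemcpyDeviceToHost);
    printf("y = [%g %g %g]\n", h_y[0], h_y[1], h_y[2]);  // expect [4 15 8]

    cudaFree(d_row_ptr); cudaFree(d_col_idx); cudaFree(d_val);
    cudaFree(d_x); cudaFree(d_y);
    return 0;
}

The atomicAdd scatter trades some write contention for avoiding both an explicit transpose and a second copy of the matrix, which matches the memory-saving goal stated in the abstract.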