Abstract
For time-domain linear systems with diagonal sparse matrices, we propose an efficient GPU solving algorithm, called T-GMRES, based on the popular preconditioned generalized minimum residual method (GMRES). The proposed T-GMRES offers three novelties: (1) a new sparse storage format, BRCSD, is presented to alleviate the drawback of the diagonal format (DIA), in which a large number of zeros must be filled in to preserve the diagonal structure when many diagonals lie far from the main diagonal; (2) an efficient sparse matrix-vector multiplication (SpMV) on the GPU is proposed for BRCSD; and (3) a new kernel is suggested for efficiently assembling the BRCSD sparse matrix and the vector on the GPU. The experimental results validate the high efficiency and good performance of the proposed algorithm.
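To make the DIA drawback concrete, the following is a minimal NumPy sketch (not the paper's BRCSD format, whose layout is not given here) of the standard DIA scheme: each nonzero diagonal is stored as a full-length padded row together with its offset from the main diagonal, so a diagonal at offset k wastes |k| explicitly stored zeros.

```python
import numpy as np

def to_dia(A):
    """Convert a dense n x n matrix to DIA storage (offsets, data).

    Each nonzero diagonal becomes one length-n row of `data`, padded
    with zeros; `offsets[j]` is that diagonal's distance from the main
    diagonal (positive = super-diagonal, negative = sub-diagonal).
    """
    n = A.shape[0]
    offsets, data = [], []
    for k in range(-n + 1, n):          # scan every possible diagonal
        diag = np.diagonal(A, offset=k)
        if np.any(diag != 0):
            row = np.zeros(n)
            if k >= 0:
                row[:n - k] = diag      # super-diagonal: padded at the end
            else:
                row[-k:] = diag         # sub-diagonal: padded at the front
            offsets.append(k)
            data.append(row)
    return np.array(offsets), np.array(data)

def dia_spmv(offsets, data, x):
    """Compute y = A @ x directly from DIA storage."""
    n = x.size
    y = np.zeros(n)
    for k, row in zip(offsets, data):
        if k >= 0:
            # y[i] += A[i, i+k] * x[i+k] for i = 0 .. n-k-1
            y[:n - k] += row[:n - k] * x[k:]
        else:
            # y[i] += A[i, i+k] * x[i+k] for i = -k .. n-1
            y[-k:] += row[-k:] * x[:n + k]
    return y
```

Note that `data` always occupies (number of diagonals) x n entries regardless of how far each diagonal sits from the main diagonal; a matrix whose diagonals have large offsets therefore stores mostly padding zeros, which is the inefficiency the abstract's BRCSD format is designed to reduce.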
This research has been supported by the Natural Science Foundation of Zhejiang Province, China (grant LY17F020021), the Natural Science Foundation of Jiangsu Province, China (grant BK20171480), the Open Project Program of the State Key Laboratory of Computer Architecture (grant CARCH201603), and the Qing Lan Project of Nanjing Normal University.
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this paper
Xia, Y., Gao, J., He, G. (2019). A Parallel Solving Algorithm on GPU for the Time-Domain Linear System with Diagonal Sparse Matrices. In: Ren, R., Zheng, C., Zhan, J. (eds) Big Scientific Data Benchmarks, Architecture, and Systems. SDBA 2018. Communications in Computer and Information Science, vol 911. Springer, Singapore. https://doi.org/10.1007/978-981-13-5910-1_7
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-5909-5
Online ISBN: 978-981-13-5910-1