Solving Large Systems of Linear Equations on GPUs

  • Conference paper
  • In: Advances in Computing (CCC 2018)

Abstract

Graphics Processing Units (GPUs) have become increasingly accessible peripheral devices with great computing capacity. Moreover, GPUs can be used not only to accelerate the graphics produced by a computer but also for general-purpose computing. Many researchers use this technique on their personal workstations to accelerate the execution of their programs, and they often find that the amount of memory available on a GPU card is typically smaller than the amount of memory available on the host computer. We are interested in exploring approaches to solving problems under this restriction.

Our main contribution is to devise ways in which portions of the problem can be moved to the memory of the GPU and solved there using its multiprocessing capabilities. We implemented the Jacobi iterative method on a GPU to solve systems of linear equations and report the results obtained, analyzing their performance and accuracy. Our code solves systems of linear equations large enough to exceed the card's memory, but not the host memory. Significant speedups were observed: the execution time taken to solve each system is lower than that obtained with Intel® MKL and Eigen, libraries designed to work on CPUs.
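
To make the general idea concrete, the sketch below shows one way a Jacobi sweep can be organized when the coefficient matrix does not fit in the card's memory: the matrix is split into blocks of rows that are copied to the device one at a time, while the (much smaller) solution and right-hand-side vectors stay resident on the GPU. This is a minimal CUDA sketch of the technique, not the authors' implementation; the names jacobi_block_kernel, chunk_rows and initial_row, the toy problem size, and the synchronous cudaMemcpy transfers are assumptions (a production version would overlap transfers and computation, e.g. with cudaMemcpyAsync and double buffering).

    // Minimal sketch: block-streamed Jacobi iteration in CUDA.
    // The matrix is stored row-major on the host; only `chunk_rows` rows
    // are resident on the device at any time. All names are illustrative.
    #include <algorithm>
    #include <cstdio>
    #include <utility>
    #include <vector>
    #include <cuda_runtime.h>

    // One thread per row of the current chunk. `initial_row` is the global
    // index of the chunk's first row, so local row r is global row
    // (initial_row + r) and its diagonal element lies in that column.
    __global__ void jacobi_block_kernel(const double* A_chunk, const double* b,
                                        const double* x_old, double* x_new,
                                        int n, int rows_in_chunk, int initial_row)
    {
        int local_row = blockIdx.x * blockDim.x + threadIdx.x;
        if (local_row >= rows_in_chunk) return;

        int row = initial_row + local_row;
        double sigma = 0.0;
        for (int j = 0; j < n; ++j)
            if (j != row)
                sigma += A_chunk[(size_t)local_row * n + j] * x_old[j];
        x_new[row] = (b[row] - sigma) / A_chunk[(size_t)local_row * n + row];
    }

    int main()
    {
        const int n = 4096;          // system size (toy value for the sketch)
        const int chunk_rows = 1024; // rows copied to the device per transfer
        const int iterations = 100;

        // Diagonally dominant test system so that Jacobi converges:
        // A has 2n on the diagonal and 1 elsewhere, b = 1.
        std::vector<double> A((size_t)n * n, 1.0), b(n, 1.0), x(n, 0.0);
        for (int i = 0; i < n; ++i) A[(size_t)i * n + i] = 2.0 * n;

        double *d_chunk, *d_b, *d_xold, *d_xnew;
        cudaMalloc(&d_chunk, (size_t)chunk_rows * n * sizeof(double));
        cudaMalloc(&d_b, n * sizeof(double));
        cudaMalloc(&d_xold, n * sizeof(double));
        cudaMalloc(&d_xnew, n * sizeof(double));
        cudaMemcpy(d_b, b.data(), n * sizeof(double), cudaMemcpyHostToDevice);
        cudaMemcpy(d_xold, x.data(), n * sizeof(double), cudaMemcpyHostToDevice);

        for (int k = 0; k < iterations; ++k) {
            for (int initial_row = 0; initial_row < n; initial_row += chunk_rows) {
                int rows = std::min(chunk_rows, n - initial_row);
                // Stream the next block of rows from host to device memory.
                cudaMemcpy(d_chunk, A.data() + (size_t)initial_row * n,
                           (size_t)rows * n * sizeof(double), cudaMemcpyHostToDevice);
                int threads = 256, blocks = (rows + threads - 1) / threads;
                jacobi_block_kernel<<<blocks, threads>>>(d_chunk, d_b, d_xold, d_xnew,
                                                         n, rows, initial_row);
            }
            cudaDeviceSynchronize();
            std::swap(d_xold, d_xnew); // the new iterate feeds the next sweep
        }

        cudaMemcpy(x.data(), d_xold, n * sizeof(double), cudaMemcpyDeviceToHost);
        std::printf("x[0] = %.8f (exact %.8f)\n", x[0], 1.0 / (3.0 * n - 1.0));

        cudaFree(d_chunk); cudaFree(d_b); cudaFree(d_xold); cudaFree(d_xnew);
        return 0;
    }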


Notes

  1. We use initial_row to point to the diagonal element. This variable is calculated with respect to the full matrix, not the chunk.
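
As a purely illustrative reading of this note (the function name and the row-major, row-block layout are our assumptions, not taken from the paper's code), the offset of the diagonal element inside a chunk of rows could be computed as:

    #include <cstddef>

    // Hypothetical illustration: a device buffer holds rows
    // [initial_row, initial_row + rows_in_chunk) of an n x n row-major matrix,
    // so the diagonal element of local row r lies in global column
    // (initial_row + r) of that buffer.
    __host__ __device__ inline size_t diagonal_offset(size_t local_row,
                                                      size_t initial_row,
                                                      size_t n)
    {
        return local_row * n + (initial_row + local_row);
    }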

References

  1. Papakonstantinou, A., Gururaj, K., Stratton, J.A., Chen, D., Cong, J., Hwu, W.-M.W.: FCUDA: enabling efficient compilation of CUDA kernels onto FPGAs. In: 2009 IEEE 7th Symposium on Application Specific Processors (2009)

  2. Donno, D.D., Esposito, A., Tarricone, L., Catarinucci, L.: Introduction to GPU computing and CUDA programming: a case study on FDTD [EM Programmer's Notebook]. IEEE Antennas Propag. Mag. 52(3), 116–122 (2010)

  3. Tomov, S., Nath, R., Ltaief, H., Dongarra, J.: Dense linear algebra solvers for multicore with GPU accelerators. In: Proceedings of the IEEE IPDPS 2010, 19–23 April 2010, Atlanta, GA, pp. 1–8. IEEE Computer Society (2010). https://doi.org/10.1109/IPDPSW.2010.5470941

  4. Dongarra, J., et al.: Accelerating numerical dense linear algebra calculations with GPUs. In: Kindratenko, V. (ed.) Numerical Computations with GPUs, pp. 3–28. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-06548-9_1

  5. Guennebaud, G., Jacob, B., et al.: Eigen v3 (2010). http://eigen.tuxfamily.org

  6. Intel: Developer Reference for Intel® Math Kernel Library 2018 - C (2017). https://software.intel.com/en-us/mkl-developer-reference-c

  7. NVIDIA: CUDA CUBLAS Library, January 2010

  8. Tomov, S., Dongarra, J., Baboulin, M.: Towards dense linear algebra for hybrid GPU accelerated manycore systems. Parallel Comput. 36, 232–240 (2010)

  9. Jaramillo, J.D., Vidal Maciá, A.M., Correa Zabala, F.J.: Métodos directos para la solución de sistemas de ecuaciones lineales simétricos, indefinidos, dispersos y de gran dimensión. Universidad EAFIT (2006)

  10. NVIDIA: CUDA Programming Guide, January 2010

  11. Correa Zabala, F.J.: Métodos Numéricos, 1st edn. Universidad EAFIT, November 2010

  12. NVIDIA: What is GPU-accelerated computing? http://www.nvidia.com/object/what-is-gpu-computing.html

  13. Flynn, M.: Very high-speed computing systems. Proc. IEEE 54, 1901–1909 (1967)

  14. Almasi, G.S., Gottlieb, A.: Highly Parallel Computing. Benjamin-Cummings Publishing Co., Inc., Redwood City (1989)

  15. Eves, H.: Elementary Matrix Theory, Reprinted edn. Dover Publications Inc., Mineola (1980)


Author information

Correspondence to Tomás Felipe Llano-Ríos.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Llano-Ríos, T.F., Ocampo-García, J.D., Yepes-Ríos, J.S., Correa-Zabala, F.J., Trefftz, C. (2018). Solving Large Systems of Linear Equations on GPUs. In: Serrano C., J., Martínez-Santos, J. (eds) Advances in Computing. CCC 2018. Communications in Computer and Information Science, vol 885. Springer, Cham. https://doi.org/10.1007/978-3-319-98998-3_4


  • DOI: https://doi.org/10.1007/978-3-319-98998-3_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-98997-6

  • Online ISBN: 978-3-319-98998-3

  • eBook Packages: Computer Science, Computer Science (R0)
