
CUDA 2D Stencil Computations for the Jacobi Method

  • José María Cecilia
  • José Manuel García
  • Manuel Ujaldón
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7133)

Abstract

We are witnessing the consolidation of the GPU streaming paradigm in parallel computing. This paper explores stencil operations in CUDA to optimize the Jacobi method for solving Laplace's differential equation on GPUs. The code keeps the access pattern constant across a large number of loop iterations, making it representative of a wide class of iterative linear algebra algorithms. Our optimizations focus on data parallelism, thread deployment and the GPU memory hierarchy, whose management is explicit to the CUDA programmer. Experimental results are reported on Nvidia Tesla C870 and C1060 GPUs and compared against a counterpart version optimized on a quad-core Intel CPU. Our set of GPU optimizations yields speed-up factors of 3-4x, and the resulting execution times beat those of the CPU by a wide margin, while also scaling well when moving to a more sophisticated GPU architecture and/or more demanding problem sizes.
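For readers unfamiliar with the technique, the following is a minimal sketch of the kind of 2D Jacobi sweep in CUDA that the abstract describes. It is an illustrative reconstruction, not the authors' code: the names jacobi_step and jacobi, the tile side BLOCK, and the row-major N x N storage layout are all assumptions made here for clarity.

    #include <cuda_runtime.h>

    #define BLOCK 16  /* threads per block side (assumption, not from the paper) */

    /* One Jacobi sweep on an n x n grid: each interior point becomes the
       average of its four neighbours (the 5-point stencil for Laplace's
       equation). Boundary points are skipped; they hold the fixed
       boundary conditions. */
    __global__ void jacobi_step(const float *in, float *out, int n)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;

        if (x > 0 && x < n - 1 && y > 0 && y < n - 1) {
            out[y * n + x] = 0.25f * (in[(y - 1) * n + x] +
                                      in[(y + 1) * n + x] +
                                      in[y * n + x - 1] +
                                      in[y * n + x + 1]);
        }
    }

    /* Host-side iteration: ping-pong the two device buffers between sweeps,
       so every iteration reads and writes with exactly the same access
       pattern, as the abstract notes. After an odd number of sweeps the
       result lives in the buffer that was passed as d_b. */
    void jacobi(float *d_a, float *d_b, int n, int iters)
    {
        dim3 block(BLOCK, BLOCK);
        dim3 grid((n + BLOCK - 1) / BLOCK, (n + BLOCK - 1) / BLOCK);
        for (int i = 0; i < iters; ++i) {
            jacobi_step<<<grid, block>>>(d_a, d_b, n);
            float *tmp = d_a; d_a = d_b; d_b = tmp;
        }
        cudaDeviceSynchronize();
    }

The double-buffered swap avoids a read-after-write hazard within a sweep; the shared-memory tiling and thread-deployment optimizations the paper studies would refine this baseline kernel.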

Keywords

CUDA · GPGPU · Stencil Computation · Parallel Numerical Algorithms

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • José María Cecilia¹
  • José Manuel García¹
  • Manuel Ujaldón²

  1. Computer Engineering and Technology Department, University of Murcia, Spain
  2. Computer Architecture Department, University of Malaga, Spain
