Stepping into Fully GPU Accelerated Biomedical Applications

  • Caroline Mendonca Costa
  • Gundolf Haase
  • Manfred Liebmann
  • Aurel Neic
  • Gernot Plank
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8353)

Abstract

We present ideas and first results on the GPU acceleration of a non-linear solver embedded in the biomedical application code CARP. The linear system solvers were already ported to the GPU in earlier work, so here we concentrate on extending the GPU acceleration to larger portions of the code. The finite element assembly of the stiffness and mass matrices consumes at least 50 % of the CPU time; we therefore investigate this step for the bidomain equations, with a view towards later use in non-linear and/or time-dependent problems. The CUDA code for computing and assembling the matrices is faster by a factor of up to \(90\) compared to a single CPU core. The routines have been integrated into CARP’s main code and are already used to assemble the FE matrices of the bidomain model. Further performance studies are still required for the bidomain-mechanics model.
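For illustration only, the sketch below shows the element-by-element assembly idea in CUDA: one thread per tetrahedral element computes the local P1 stiffness matrix and scatters it into a global matrix with atomic additions. This is not the CARP implementation; the kernel name assembleStiffnessP1, the dense global matrix, and the identity conductivity tensor are simplifying assumptions made for brevity (a production code such as CARP works with sparse storage and the bidomain conductivity tensors).

```cuda
// Hypothetical sketch, not the authors' CARP code. One CUDA thread per
// tetrahedral element computes the local P1 stiffness matrix and scatters
// it into a small dense global matrix via atomicAdd.
// Note: double-precision atomicAdd requires compute capability >= 6.0
// (compile with, e.g., -arch=sm_60).
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__global__ void assembleStiffnessP1(const double *xyz,   // node coordinates, 3 per node
                                    const int    *tets,  // 4 node ids per element
                                    int nElem, int nNode,
                                    double *K)           // dense nNode x nNode global matrix
{
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= nElem) return;

    const int *n = &tets[4 * e];

    // Edge vectors of the tetrahedron relative to node 0.
    double a[3], b[3], c[3];
    for (int d = 0; d < 3; ++d) {
        a[d] = xyz[3*n[1]+d] - xyz[3*n[0]+d];
        b[d] = xyz[3*n[2]+d] - xyz[3*n[0]+d];
        c[d] = xyz[3*n[3]+d] - xyz[3*n[0]+d];
    }
    // det = a . (b x c) = 6 * signed volume
    double det = a[0]*(b[1]*c[2]-b[2]*c[1])
               - a[1]*(b[0]*c[2]-b[2]*c[0])
               + a[2]*(b[0]*c[1]-b[1]*c[0]);
    double vol = fabs(det) / 6.0;

    // Constant gradients of the four barycentric basis functions:
    // grad(l1)=bxc/det, grad(l2)=cxa/det, grad(l3)=axb/det, grad(l0)=-(sum).
    double g[4][3];
    g[1][0] = (b[1]*c[2]-b[2]*c[1])/det; g[1][1] = (b[2]*c[0]-b[0]*c[2])/det; g[1][2] = (b[0]*c[1]-b[1]*c[0])/det;
    g[2][0] = (c[1]*a[2]-c[2]*a[1])/det; g[2][1] = (c[2]*a[0]-c[0]*a[2])/det; g[2][2] = (c[0]*a[1]-c[1]*a[0])/det;
    g[3][0] = (a[1]*b[2]-a[2]*b[1])/det; g[3][1] = (a[2]*b[0]-a[0]*b[2])/det; g[3][2] = (a[0]*b[1]-a[1]*b[0])/det;
    for (int d = 0; d < 3; ++d) g[0][d] = -(g[1][d] + g[2][d] + g[3][d]);

    // K_e(i,j) = vol * grad_i . grad_j  (identity conductivity assumed),
    // scattered into the global matrix with atomic adds.
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            double kij = vol * (g[i][0]*g[j][0] + g[i][1]*g[j][1] + g[i][2]*g[j][2]);
            atomicAdd(&K[n[i]*nNode + n[j]], kij);
        }
}

int main()
{
    // Single reference tetrahedron with unit edge vectors.
    const int nNode = 4, nElem = 1;
    double xyz[12] = {0,0,0, 1,0,0, 0,1,0, 0,0,1};
    int    tets[4] = {0,1,2,3};

    double *dXyz, *dK; int *dTets;
    cudaMalloc(&dXyz,  sizeof(xyz));  cudaMemcpy(dXyz,  xyz,  sizeof(xyz),  cudaMemcpyHostToDevice);
    cudaMalloc(&dTets, sizeof(tets)); cudaMemcpy(dTets, tets, sizeof(tets), cudaMemcpyHostToDevice);
    cudaMalloc(&dK, nNode*nNode*sizeof(double)); cudaMemset(dK, 0, nNode*nNode*sizeof(double));

    assembleStiffnessP1<<<1, 32>>>(dXyz, dTets, nElem, nNode, dK);

    double K[16];
    cudaMemcpy(K, dK, sizeof(K), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 4; ++i)
        printf("%8.4f %8.4f %8.4f %8.4f\n", K[4*i], K[4*i+1], K[4*i+2], K[4*i+3]);

    cudaFree(dXyz); cudaFree(dTets); cudaFree(dK);
    return 0;
}
```

In this toy setup the global and local matrices coincide; for the reference tetrahedron the printed matrix has 0.5 on the first diagonal entry and 1/6 on the others, which serves as a quick correctness check. Atomic scatter is only one possible strategy; colored or partitioned assembly avoids the atomics at the cost of a mesh preprocessing step.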

Keywords

Mass matrix · Memory bandwidth · Matrix entry · Linear solver · Conductivity tensor


Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Caroline Mendonca Costa (2)
  • Gundolf Haase (1)
  • Manfred Liebmann (1)
  • Aurel Neic (1)
  • Gernot Plank (2)
  1. Institute for Mathematics and Scientific Computing, University of Graz, Graz, Austria
  2. Institute of Biophysics, Medical University of Graz, Graz, Austria
