
FleCSPHg: A GPU Accelerated Framework for Physics and Astrophysics Simulations

  • Julien Loiseau
  • François Alin
  • Christophe Jaillet
  • Michaël Krajecki
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 979)

Abstract

This paper presents FleCSPHg, a GPU-accelerated framework dedicated to Smoothed Particle Hydrodynamics (SPH) and gravitation (FMM) computation. Astrophysical simulations, with binary neutron star coalescence as the test case, are used for evaluation. In this context we show the efficiency of the tree data structure in two roles: near-neighbor search for SPH, and the N-body algorithm for the gravitation computation.
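
As a minimal illustration of these two roles (the FleCSPHg source is not reproduced here, and all names below are hypothetical), the sketch shows a single octree serving both purposes: a recursive range query that gathers SPH neighbors within a smoothing length h, and a Barnes-Hut-style gravity walk that accepts distant cells as single pseudo-particles.

    // Hypothetical sketch, not FleCSPH code: one octree, two traversals.
    #include <cmath>
    #include <vector>

    struct Particle { double x[3]; double mass, h; };

    struct Node {
        double center[3], half_width;    // cubic cell geometry
        double com[3], mass;             // center of mass, total mass
        std::vector<Node*> children;     // empty => leaf
        std::vector<Particle*> bodies;   // particles held by a leaf
    };

    static double dist(const double a[3], const double b[3]) {
        double d2 = 0.0;
        for (int k = 0; k < 3; ++k) d2 += (a[k] - b[k]) * (a[k] - b[k]);
        return std::sqrt(d2);
    }

    // SPH near-neighbor search: prune any subtree whose cell cannot
    // intersect the search sphere of radius p.h around p.
    void neighbors(const Node* n, const Particle& p,
                   std::vector<Particle*>& out) {
        if (dist(n->center, p.x) > p.h + n->half_width * std::sqrt(3.0))
            return;
        if (n->children.empty()) {
            for (Particle* q : n->bodies)
                if (dist(q->x, p.x) <= p.h) out.push_back(q);
        } else {
            for (const Node* c : n->children) neighbors(c, p, out);
        }
    }

    // Gravity: if a cell subtends a small angle seen from p (below
    // theta), treat it as one pseudo-particle at its center of mass;
    // otherwise descend. Self-interaction handling is simplified here.
    void gravity(const Node* n, const Particle& p, double acc[3],
                 double theta) {
        double d = dist(n->com, p.x);
        if (d == 0.0) return;  // degenerate case: skip self-interaction
        if (n->children.empty() || 2.0 * n->half_width / d < theta) {
            double f = n->mass / (d * d * d);  // G = 1 units
            for (int k = 0; k < 3; ++k) acc[k] += f * (n->com[k] - p.x[k]);
        } else {
            for (const Node* c : n->children) gravity(c, p, acc, theta);
        }
    }

Both traversals prune or approximate whole subtrees in a single test, which is what makes one shared tree efficient in both conditions.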

FleCSPHg is based on FleCSI and FleCSPH, developed at Los Alamos National Laboratory. This work is a first step toward a multi-physics framework for tree-based methods.

This paper details both the SPH and FMM methods and the simulation we propose. It describes FleCSI and FleCSPH and our strategy for dividing the workload between CPU and GPU. The CPU handles the tree traversal and generates tasks at a specific depth of the tree; these tasks are offloaded to the GPU, and the results are gathered on the CPU at the end of the traversal.
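
A minimal sketch of this CPU/GPU split is given below, under the assumption of a flat body array and one CUDA block per task; the names Body, Task and task_kernel are illustrative, not the FleCSPHg API. The host would emit one task per subtree at the chosen depth, launch the kernel over all tasks, and copy the accelerations back to gather the results.

    // Hypothetical sketch, not the FleCSPHg API: the host cuts the body
    // array into per-subtree tasks; one CUDA block evaluates each task.
    #include <cuda_runtime.h>
    #include <vector>

    struct Body { float x, y, z, mass; };
    struct Task { int first, count; };  // contiguous slice of bodies

    // One block per task, one thread per body in the slice; brute-force
    // softened pairwise gravity inside the slice (G = 1 units).
    __global__ void task_kernel(const Body* bodies, const Task* tasks,
                                float3* acc) {
        Task t = tasks[blockIdx.x];
        if (threadIdx.x >= t.count) return;
        int i = t.first + threadIdx.x;
        float3 a = {0.0f, 0.0f, 0.0f};
        for (int j = t.first; j < t.first + t.count; ++j) {
            if (j == i) continue;
            float dx = bodies[j].x - bodies[i].x;
            float dy = bodies[j].y - bodies[i].y;
            float dz = bodies[j].z - bodies[i].z;
            float r2 = dx * dx + dy * dy + dz * dz + 1e-6f;  // softening
            float f  = bodies[j].mass * rsqrtf(r2 * r2 * r2);
            a.x += f * dx; a.y += f * dy; a.z += f * dz;
        }
        acc[i] = a;
    }

    int main() {
        const int n = 1024, per_task = 256;
        // In FleCSPHg the tasks come from the tree traversal at a fixed
        // depth; here they are built directly for brevity.
        std::vector<Body> h_bodies(n, Body{0.0f, 0.0f, 0.0f, 1.0f});
        std::vector<Task> h_tasks;
        for (int f = 0; f < n; f += per_task)
            h_tasks.push_back(Task{f, per_task});

        Body* d_bodies; Task* d_tasks; float3* d_acc;
        cudaMalloc(&d_bodies, n * sizeof(Body));
        cudaMalloc(&d_tasks, h_tasks.size() * sizeof(Task));
        cudaMalloc(&d_acc, n * sizeof(float3));
        cudaMemcpy(d_bodies, h_bodies.data(), n * sizeof(Body),
                   cudaMemcpyHostToDevice);
        cudaMemcpy(d_tasks, h_tasks.data(), h_tasks.size() * sizeof(Task),
                   cudaMemcpyHostToDevice);

        // Offload: one kernel launch covers every task of the traversal.
        task_kernel<<<(int)h_tasks.size(), per_task>>>(d_bodies, d_tasks,
                                                       d_acc);

        // Gather: results return to the CPU at the end of the traversal.
        std::vector<float3> h_acc(n);
        cudaMemcpy(h_acc.data(), d_acc, n * sizeof(float3),
                   cudaMemcpyDeviceToHost);
        cudaFree(d_bodies); cudaFree(d_tasks); cudaFree(d_acc);
        return 0;
    }

In this sketch the per-task work is regular and brute-force, which suits the GPU, while the irregular tree traversal stays on the CPU, matching the division of labor described above.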

The GPU version is up to 3.5 times faster than the classical CPU version. We also give details on the simulation itself for the binary neutron star coalescence.

Keywords

HPC · Hybrid architectures · Smoothed Particle Hydrodynamics · Simulation

Notes

Acknowledgement

We would like to thank the ROMEO supercomputer center, on which all the tests presented in this paper were performed. This work is part of the FleCSI and FleCSPH development. We would also like to thank Los Alamos National Laboratory and CCS-7 for their contributions to this work.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Julien Loiseau¹
  • François Alin¹
  • Christophe Jaillet¹
  • Michaël Krajecki¹

  1. CReSTIC Laboratory EA3804, University of Reims Champagne-Ardenne, Reims, France
