
Performance and Portability of State-of-Art Molecular Dynamics Software on Modern GPUs

  • Evgeny Kuznetsov
  • Nikolay Kondratyuk
  • Mikhail Logunov
  • Vsevolod Nikolskiy
  • Vladimir Stegailov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12043)

Abstract

Classical molecular dynamics (MD) calculations account for a significant share of the utilization time of high performance computing systems. The efficiency of such calculations rests on the interplay of software and hardware, both of which are nowadays moving toward hybrid GPU-based technologies. Several well-developed GPU-oriented MD packages differ both in their data management capabilities and in performance. In this paper, we present our results on porting the CUDA backend of LAMMPS to ROCm HIP, which shows considerable benefits for AMD GPUs compared with the existing OpenCL backend. We consider the efficiency of solving the same physical models using different software and hardware combinations, and analyze the performance of the LAMMPS, HOOMD, GROMACS and OpenMM MD packages with different GPU backends on modern Nvidia Volta and AMD Vega20 GPUs.
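To give a flavor of what a CUDA-to-HIP port involves, the following minimal sketch (not taken from LAMMPS; the toy kernel, variable names and scaling factor are invented for illustration) shows that CUDA runtime calls map almost one-to-one onto HIP equivalents (cudaMalloc becomes hipMalloc, cudaMemcpy becomes hipMemcpy, and so on), while kernel code and the triple-chevron launch syntax are accepted unchanged by hipcc; tools such as hipify-perl automate most of the renaming.

    // Minimal HIP sketch of a CUDA-style GPU code path (illustrative only).
    // Build for AMD GPUs with: hipcc toy_hip.cpp -o toy_hip
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Toy kernel standing in for a short-range force kernel in an MD code:
    // each thread scales one coordinate.
    __global__ void scale_positions(double* x, double factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        std::vector<double> host(n, 1.0);

        double* dev = nullptr;
        hipMalloc((void**)&dev, n * sizeof(double));          // was cudaMalloc
        hipMemcpy(dev, host.data(), n * sizeof(double),
                  hipMemcpyHostToDevice);                     // was cudaMemcpy

        const int block = 256;
        const int grid  = (n + block - 1) / block;
        // Triple-chevron launch is supported by hipcc;
        // hipLaunchKernelGGL(...) is the fully portable alternative.
        scale_positions<<<grid, block>>>(dev, 0.5, n);
        hipDeviceSynchronize();

        hipMemcpy(host.data(), dev, n * sizeof(double),
                  hipMemcpyDeviceToHost);
        hipFree(dev);

        printf("x[0] = %f\n", host[0]);                       // expect 0.5
        return 0;
    }

HIP also provides an Nvidia path in which hipcc wraps nvcc, so the same translated source can still be built for CUDA devices; this is one reason a HIP backend can serve both vendors from a single code base.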

Keywords

LAMMPS · HOOMD · GROMACS · OpenMM · OpenCL · Nvidia CUDA · AMD ROCm HIP


Acknowledgments

The authors gratefully acknowledge financial support from the President grant NS-5922.2018.8.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. National Research University Higher School of Economics, Moscow, Russia
  2. Joint Institute for High Temperatures of RAS, Moscow, Russia
  3. Moscow Institute of Physics and Technology, Dolgoprudny, Russia
