Efficient Implementation of the Force Calculation in MD Simulations

Supercomputing for Molecular Dynamics Simulations

Part of the book series: SpringerBriefs in Computer Science (BRIEFSCOMPUTER)

Abstract

This chapter describes how the computational kernel of MD simulations, the force calculation between particles, can be mapped to different kinds of hardware with minimal changes to the software. Since ls1 mardyn is based on the so-called linked-cells algorithm, several different facets of this approach are optimized. First, we present a newly developed sliding window traversal of the entire data structure, which enables the seamless integration of new optimizations such as the vectorization of the Lennard-Jones-12-6 potential. Second, we describe and evaluate several variants of mapping this potential to today's SIMD/vector hardware using intrinsics, taking the Intel Xeon processor and the Intel Xeon Phi coprocessor as examples, depending on the functionality offered by the hardware. This is done for single-center as well as for multi-centered rigid-body molecules.
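
To make the kernel concrete: for the Lennard-Jones-12-6 potential U(r) = 4ε((σ/r)^12 − (σ/r)^6), the force that particle j exerts on particle i is F_ij = 24ε/r² (2(σ/r)^12 − (σ/r)^6) (r_i − r_j). The following is a minimal sketch, assuming AVX (4-wide double precision) and a structure-of-arrays particle layout, of how such a force loop over the particles of one cell can be vectorized with intrinsics. All identifiers (lj_cell_kernel, eps24, sig2, rc2) are illustrative assumptions, not taken from ls1 mardyn; remainder handling and the Newton-3 update of the j-particle forces are omitted.

    #include <immintrin.h>

    /* Sketch of an AVX LJ-12-6 kernel: forces of all particles j in one cell
     * acting on a single particle i. Assumptions: n is a multiple of 4,
     * particle i itself is not among the j particles (so r2 > 0), and
     * eps24 = 24*epsilon, sig2 = sigma^2, rc2 = cut-off radius squared. */
    void lj_cell_kernel(double xi, double yi, double zi,
                        const double *xj, const double *yj, const double *zj,
                        double *fxi, double *fyi, double *fzi,
                        int n, double eps24, double sig2, double rc2)
    {
        __m256d pxi = _mm256_set1_pd(xi), pyi = _mm256_set1_pd(yi),
                pzi = _mm256_set1_pd(zi);
        __m256d e24 = _mm256_set1_pd(eps24), s2 = _mm256_set1_pd(sig2),
                cut = _mm256_set1_pd(rc2);
        __m256d afx = _mm256_setzero_pd(), afy = _mm256_setzero_pd(),
                afz = _mm256_setzero_pd();

        for (int j = 0; j < n; j += 4) {
            __m256d dx = _mm256_sub_pd(pxi, _mm256_loadu_pd(xj + j));
            __m256d dy = _mm256_sub_pd(pyi, _mm256_loadu_pd(yj + j));
            __m256d dz = _mm256_sub_pd(pzi, _mm256_loadu_pd(zj + j));
            __m256d r2 = _mm256_add_pd(_mm256_mul_pd(dx, dx),
                         _mm256_add_pd(_mm256_mul_pd(dy, dy),
                                       _mm256_mul_pd(dz, dz)));
            /* branch-free cut-off: mask out lanes with r2 >= rc2 */
            __m256d mask = _mm256_cmp_pd(r2, cut, _CMP_LT_OQ);
            __m256d ir2  = _mm256_div_pd(_mm256_set1_pd(1.0), r2);
            __m256d lj2  = _mm256_mul_pd(s2, ir2);            /* (sigma/r)^2 */
            __m256d lj6  = _mm256_mul_pd(_mm256_mul_pd(lj2, lj2), lj2);
            __m256d lj12 = _mm256_mul_pd(lj6, lj6);
            /* scale = 24*eps/r^2 * (2*(sigma/r)^12 - (sigma/r)^6) */
            __m256d scale = _mm256_mul_pd(_mm256_mul_pd(e24, ir2),
                            _mm256_sub_pd(_mm256_add_pd(lj12, lj12), lj6));
            scale = _mm256_and_pd(scale, mask);
            afx = _mm256_add_pd(afx, _mm256_mul_pd(scale, dx));
            afy = _mm256_add_pd(afy, _mm256_mul_pd(scale, dy));
            afz = _mm256_add_pd(afz, _mm256_mul_pd(scale, dz));
        }
        /* horizontal reduction of the four partial force sums */
        double t[4];
        _mm256_storeu_pd(t, afx); *fxi += t[0] + t[1] + t[2] + t[3];
        _mm256_storeu_pd(t, afy); *fyi += t[0] + t[1] + t[2] + t[3];
        _mm256_storeu_pd(t, afz); *fzi += t[0] + t[1] + t[2] + t[3];
    }

The cut-off is applied by zeroing the force scale with a compare/and pair rather than a branch; such masking is exactly the kind of hardware-dependent detail (AVX offers no masked arithmetic, while the Xeon Phi's 512-bit instruction set does) in which the vectorization variants evaluated in this chapter differ.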


Author information


Correspondence to Alexander Heinecke.

Copyright information

© 2015 The Author(s)

About this chapter

Cite this chapter

Heinecke, A., Eckhardt, W., Horsch, M., Bungartz, H.-J. (2015). Efficient Implementation of the Force Calculation in MD Simulations. In: Supercomputing for Molecular Dynamics Simulations. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-17148-7_4

  • DOI: https://doi.org/10.1007/978-3-319-17148-7_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-17147-0

  • Online ISBN: 978-3-319-17148-7

  • eBook Packages: Computer Science, Computer Science (R0)
