
Benchmarking LAMMPS: Sensitivity to Task Location Under CPU-Based Weak-Scaling

  • Conference paper
High Performance Computing (CARLA 2018)

Abstract

This investigation summarizes a set of executions performed with the molecular dynamics solver LAMMPS, compiled for CPUs, on the supercomputers Stampede at TACC (USA), Helios at IFERC (Japan), and Eagle at PSNC (Poland). A communication-intensive benchmark based on long-range interactions, handled by the Fast Fourier Transform operator, was selected to test its sensitivity to markedly different patterns of task location, and hence to identify the best way to run further simulations for this family of problems. Weak-scaling tests show that the execution time of LAMMPS is closely linked to the cluster topology, as revealed by the varying execution times observed when scaling up to the thousands of MPI tasks involved in the tests. Notably, two of the clusters exhibit time savings (up to 61% within the parallelization range) when the MPI-task mapping follows a concentration pattern over as few nodes as possible. Besides being useful from the user's standpoint, this result may also help to improve cluster throughput by, for instance, adding live-migration decisions to the scheduling policies in cases where communication-intensive behaviour is detected in characterization tests. It likewise points toward a more efficient usage of the cluster from the energy-consumption point of view.
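The two placement patterns contrasted in the abstract correspond, in scheduler terms, to block (packed) versus cyclic (round-robin) distribution of MPI ranks over nodes, which Slurm exposes as `srun --distribution=block` versus `--distribution=cyclic`. The sketch below illustrates the two rank-to-node mappings with hypothetical rank and node counts; it is a minimal illustration of the placement idea, not code from the study.

```python
def block_mapping(n_ranks: int, ranks_per_node: int) -> list[int]:
    """Packed placement: fill each node completely before using the
    next, concentrating tasks over as few nodes as possible."""
    return [rank // ranks_per_node for rank in range(n_ranks)]

def cyclic_mapping(n_ranks: int, n_nodes: int) -> list[int]:
    """Dispersed placement: deal ranks out round-robin across nodes."""
    return [rank % n_nodes for rank in range(n_ranks)]

# Hypothetical example: 8 MPI ranks on nodes with 4 cores each.
print(block_mapping(8, 4))   # [0, 0, 0, 0, 1, 1, 1, 1]
print(cyclic_mapping(8, 2))  # [0, 1, 0, 1, 0, 1, 0, 1]
```

Under block mapping, neighbouring ranks share a node, so part of the all-to-all traffic generated by the parallel FFT stays on fast intra-node paths; cyclic mapping pushes more of that traffic onto the interconnect, which is consistent with the up-to-61% execution-time gap observed on two of the clusters.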



Acknowledgment

This work was partially funded by the Spanish Ministry of Economy, Industry and Competitiveness project CODEC2 (TIN2015-63562-R) with European Regional Development Fund (ERDF) as well as carried out on computing facilities provided by the CYTED Network RICAP (517RT0529) and Poznań Supercomputing and Networking Center. The support of Marcin Pospieszny, system administrator at PSNC, is gratefully acknowledged.

Author information

Correspondence to José A. Moríñigo.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Moríñigo, J.A., García-Muller, P., Rubio-Montero, A.J., Gómez-Iglesias, A., Meyer, N., Mayo-García, R. (2019). Benchmarking LAMMPS: Sensitivity to Task Location Under CPU-Based Weak-Scaling. In: Meneses, E., Castro, H., Barrios Hernández, C., Ramos-Pollan, R. (eds) High Performance Computing. CARLA 2018. Communications in Computer and Information Science, vol 979. Springer, Cham. https://doi.org/10.1007/978-3-030-16205-4_17


  • DOI: https://doi.org/10.1007/978-3-030-16205-4_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-16204-7

  • Online ISBN: 978-3-030-16205-4

  • eBook Packages: Computer Science (R0)
