Benchmarking LAMMPS: Sensitivity to Task Location Under CPU-Based Weak-Scaling

  • José A. Moríñigo
  • Pablo García-Muller
  • Antonio J. Rubio-Montero
  • Antonio Gómez-Iglesias
  • Norbert Meyer
  • Rafael Mayo-García
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 979)

Abstract

This investigation summarizes a set of executions of the molecular dynamics solver LAMMPS, compiled for CPUs, completed on the supercomputers Stampede at TACC (USA), Helios at IFERC (Japan), and Eagle at PSNC (Poland). A communication-intensive benchmark based on long-range interactions computed with the Fast Fourier Transform operator was selected to test the sensitivity of the solver to markedly different task-placement patterns, and thus to identify the best way to run further simulations of this family of problems. Weak-scaling tests show that the execution time of LAMMPS is closely linked to the cluster topology, as revealed by the varying execution times observed when scaling up to thousands of MPI tasks. Notably, two of the clusters exhibit time savings (up to 61% within the parallelization range) when the MPI tasks are mapped following a concentration pattern, i.e. packed onto as few nodes as possible. Besides being useful from the user's standpoint, this result may also help to improve cluster throughput, for instance by adding live-migration decisions to the scheduling policies when communication-intensive behaviour is detected in characterization tests. It likewise points to a more efficient usage of the cluster from the energy-consumption point of view.
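Although the abstract contains no code, the placement sensitivity it describes is easy to inspect before launching a run: printing the node that hosts each MPI rank shows at a glance whether the scheduler has concentrated the tasks on few nodes or scattered them across the machine. The short C/MPI sketch below illustrates this check; the file name placement_check.c and the build/launch commands in the comments are illustrative assumptions and are not taken from the paper or from LAMMPS.

    /*
     * placement_check.c - minimal sketch (an assumption, not part of LAMMPS or
     * of the paper): print the node hosting each MPI rank so that a
     * "concentrated" mapping (few nodes, many ranks each) can be told apart
     * from a scattered one before submitting the actual benchmark.
     *
     * Build:  mpicc -O2 -o placement_check placement_check.c
     * Run:    e.g. srun -n 64 ./placement_check   (or mpirun -np 64 ...)
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char node_name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(node_name, &name_len);

        /* Take turns printing so the rank-to-node map is easy to read
           (best-effort ordering; strict interleaving is not guaranteed by MPI). */
        for (int r = 0; r < size; ++r) {
            if (r == rank)
                printf("rank %4d of %d -> %s\n", rank, size, node_name);
            fflush(stdout);
            MPI_Barrier(MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Comparing this rank-to-node map across submissions (for instance, with different scheduler placement options) makes it possible to attribute differences in LAMMPS execution time to task location rather than to other sources of variability.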

Keywords

Cluster throughput · LAMMPS benchmarking · MPI application performance · Weak scaling

Notes

Acknowledgment

This work was partially funded by the Spanish Ministry of Economy, Industry and Competitiveness through project CODEC2 (TIN2015-63562-R) with European Regional Development Fund (ERDF) support, and was carried out on computing facilities provided by the CYTED Network RICAP (517RT0529) and the Poznań Supercomputing and Networking Center. The support of Marcin Pospieszny, system administrator at PSNC, is gratefully acknowledged.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • José A. Moríñigo (1) (corresponding author)
  • Pablo García-Muller (1)
  • Antonio J. Rubio-Montero (1)
  • Antonio Gómez-Iglesias (2)
  • Norbert Meyer (3)
  • Rafael Mayo-García (1)
  1. Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Madrid, Spain
  2. Oak Ridge National Laboratory, Oak Ridge, USA
  3. Poznań Supercomputing and Networking Center, Poznań, Poland