
Benchmarking Performance: Influence of Task Location on Cluster Throughput

  • Manuel Rodríguez-Pascual
  • José Antonio Moríñigo (corresponding author)
  • Rafael Mayo-García
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 796)

Abstract

A variety of properties characterizes the execution of scientific applications in HPC environments (CPU-, I/O- or memory-bound behaviour, execution time, degree of parallelism, dedicated computational resources, and strong- and weak-scaling behaviour, to cite some). As a consequence, scheduling decisions have a great influence on application performance, making it difficult to exploit HPC resources optimally with cost-effective strategies. In this work the NAS Parallel Benchmarks have been executed in a systematic way on a modern state-of-the-art cluster and on an older one, in order to identify dependencies between the mapping of MPI tasks and the resulting speedup or resource occupation. A full characterization of both clusters with micro-benchmarks has been performed, followed by an examination of how different task-grouping strategies and cluster setups affect job execution time and infrastructure throughput. As a result, criteria for cluster setup emerge, linked to maximizing the performance of individual jobs, maximizing total cluster throughput, or achieving better scheduling. It is expected that this work will be of interest for the design of scheduling policies and useful to HPC administrators.
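To make the kind of experiment described above concrete, the following is a minimal sketch of how a sweep over MPI task-mapping policies for a NAS Parallel Benchmarks kernel could be scripted. It is not the authors' harness: the NPB binary name and path, the rank count, the output file, and the use of Open MPI's --map-by/--bind-to options are assumptions about a typical Open MPI installation and NPB-MPI build, and would need to be adapted to the local cluster.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: sweep Open MPI task-mapping policies for one
NAS Parallel Benchmarks binary and record wall-clock times per run."""

import csv
import subprocess
import time

NPB_BINARY = "./bin/bt.C.16"           # assumed NPB-MPI build: BT kernel, class C, 16 ranks
NUM_RANKS = 16
MAPPINGS = ["core", "socket", "node"]  # placement policies to compare
REPETITIONS = 3                        # repeat runs to smooth out run-to-run noise


def run_once(mapping: str) -> float:
    """Launch the benchmark with a given mapping policy and return elapsed seconds."""
    cmd = [
        "mpirun", "-np", str(NUM_RANKS),
        "--map-by", mapping,           # how ranks are spread over the hardware
        "--bind-to", "core",           # pin ranks to cores to avoid OS migration noise
        NPB_BINARY,
    ]
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start


def main() -> None:
    with open("mapping_sweep.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["mapping", "repetition", "seconds"])
        for mapping in MAPPINGS:
            for rep in range(REPETITIONS):
                elapsed = run_once(mapping)
                writer.writerow([mapping, rep, f"{elapsed:.3f}"])
                print(f"map-by {mapping:<6} rep {rep}: {elapsed:.1f} s")


if __name__ == "__main__":
    main()
```

Comparing the recorded times across the "core", "socket" and "node" policies gives a first, coarse view of how sensitive a given kernel is to task placement, which is the kind of dependency the study quantifies systematically.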

Keywords

MPI application performance · Benchmarking · Cluster throughput · NAS Parallel Benchmarks

Acknowledgment

This work was supported by the COST Action NESUS (IC1305) and partially funded by the Spanish Ministry of Economy and Competitiveness project CODEC2 (TIN2015-63562-R) and the EU H2020 project HPC4E (grant agreement no. 689772).


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Manuel Rodríguez-Pascual (1)
  • José Antonio Moríñigo (1), corresponding author
  • Rafael Mayo-García (1)

  1. Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Madrid, Spain
