On the Performance of MPI-OpenMP on a 12 Nodes Multi-core Cluster

  • Abdelgadir Tageldin Abdelgadir
  • Al-Sakib Khan Pathan
  • Mohiuddin Ahmed
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7017)


With the increasing number of Quad-Core-based clusters and the introduction of compute nodes designed with large memory capacity shared by multiple cores, new scalability problems arise. In this paper, we analyze the overall performance of a cluster built from nodes that each contain two Quad-Core processors. We present benchmark results and report observations made while running benchmark tests on such processors. The complexity of a Quad-Core-based cluster arises from the fact that both local communication among cores and network communication between nodes must be addressed. We highlight the potential of a hybrid MPI-OpenMP approach because of its reduced communication overhead. We conclude that a hybrid MPI-OpenMP solution should be considered for such clusters, since optimizing network communication between nodes is as important as optimizing local communication between processors in a multi-core cluster.


Keywords: MPI-OpenMP, hybrid, Multi-Core Cluster





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Abdelgadir Tageldin Abdelgadir (1)
  • Al-Sakib Khan Pathan (1)
  • Mohiuddin Ahmed (2)
  1. Department of Computer Science, International Islamic University Malaysia, Kuala Lumpur, Malaysia
  2. Department of Computer Network, Jazan University, Saudi Arabia
