Analysis of the Component Architecture Overhead in Open MPI

  • B. Barrett
  • J. M. Squyres
  • A. Lumsdaine
  • R. L. Graham
  • G. Bosilca
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3666)

Abstract

Component architectures provide a useful framework for developing an extensible and maintainable code base upon which large-scale software projects can be built. Component methodologies have only recently been incorporated into applications by the High Performance Computing community, in part because of the perception that component architectures necessarily incur an unacceptable performance penalty. The Open MPI project is creating a new implementation of the Message Passing Interface standard, based on a custom component architecture, the Modular Component Architecture (MCA), to enable straightforward customization of a high-performance MPI implementation. This paper reports on a detailed analysis of the performance overhead in Open MPI introduced by the MCA. We compare the MCA-based implementation of Open MPI with a modified version that bypasses the component infrastructure. The overhead of the MCA is shown to be low, on the order of 1%, for both latency and bandwidth microbenchmarks as well as for the NAS Parallel Benchmark suite.
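
The overhead the paper quantifies comes largely from the run-time indirection that a component architecture introduces: operations are dispatched through structures of function pointers selected when the library starts, rather than being bound at compile or link time. The sketch below illustrates that general pattern in C. It is only a sketch, not Open MPI's actual MCA interface, and every identifier in it (send_module_t, tcp_send, direct_send, and so on) is hypothetical.

    /* Illustrative sketch only -- NOT Open MPI's MCA interface.
     * A "component" exposes its operations through a struct of function
     * pointers chosen at run time; the bypassed build in the paper is
     * closer in spirit to the direct, statically bound call below. */
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        const char *name;
        int (*send)(const void *buf, size_t len);   /* indirect entry point */
    } send_module_t;

    /* Two hypothetical component implementations. */
    static int tcp_send(const void *buf, size_t len)   { (void)buf; return (int)len; }
    static int shmem_send(const void *buf, size_t len) { (void)buf; return (int)len; }

    static send_module_t tcp_module   = { "tcp",   tcp_send   };
    static send_module_t shmem_module = { "shmem", shmem_send };

    /* Component path: the module is picked at run time, so every call goes
     * through a pointer the compiler cannot easily inline. */
    static send_module_t *select_module(const char *hint)
    {
        return strcmp(hint, "shmem") == 0 ? &shmem_module : &tcp_module;
    }

    /* Bypassed path: the implementation is bound statically. */
    static int direct_send(const void *buf, size_t len)
    {
        return tcp_send(buf, len);
    }

    int main(void)
    {
        char payload[64] = { 0 };
        send_module_t *m = select_module("tcp");

        int a = m->send(payload, sizeof payload);      /* component-style call */
        int b = direct_send(payload, sizeof payload);  /* direct call */

        printf("indirect=%d direct=%d via %s\n", a, b, m->name);
        return 0;
    }

The paper's measurement is, in spirit, a comparison between builds that dispatch through a pointer as in m->send above and builds that call the implementation directly; its finding is that this extra indirection costs on the order of 1% across the latency, bandwidth, and NAS Parallel Benchmark measurements.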

Keywords

Message Passing Interface, High Performance Computing, Static Library, Component Architecture, Component Infrastructure
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • B. Barrett (1)
  • J. M. Squyres (1)
  • A. Lumsdaine (1)
  • R. L. Graham (2)
  • G. Bosilca (3)
  1. Open Systems Laboratory, Indiana University
  2. Los Alamos National Laboratory
  3. Innovative Computing Laboratory, University of Tennessee
