Implementing MPI-2 Extended Collective Operations

  • Pedro Silva
  • João Gabriel Silva
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1697)

Abstract

This paper describes a first approach to implementing MPI-2's Extended Collective Operations. We aimed to ascertain the feasibility and effectiveness of such a project based on existing algorithms. The focus is on the intercommunicator operations, since these represent the main extension that the MPI-2 standard makes to the MPI-1 collective operations. We present the algorithms, their key features and some performance results. The implementation was done on top of WMPI and honors the MPICH layering, so the algorithms can be easily incorporated into other MPICH-based MPI implementations.
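To illustrate the kind of operation the paper targets, the following is a minimal sketch (plain Python, no MPI runtime) of the usual two-step scheme for an intercommunicator broadcast: the root sends the buffer to the local leader (rank 0) of the remote group, and that group then redistributes it with an ordinary intracommunicator broadcast. The function names and the list-of-buffers representation are illustrative assumptions, not code from the paper.

```python
def local_bcast(buffers, root):
    """Intracommunicator broadcast within one group.
    A simple copy loop stands in for the MPI-1 tree algorithm."""
    for rank in range(len(buffers)):
        if rank != root:
            buffers[rank] = buffers[root]

def intercomm_bcast(group_a, group_b, root_rank):
    """Intercommunicator broadcast sketch: the root lives in group A and
    every process in group B receives. Other ranks in group A contribute
    nothing (in real MPI-2 they would pass MPI_PROC_NULL as the root
    argument, while the root passes MPI_ROOT)."""
    # Step 1: one remote point-to-point transfer, root -> rank 0 of group B.
    group_b[0] = group_a[root_rank]
    # Step 2: group B completes the operation with a local broadcast.
    local_bcast(group_b, root=0)

# Usage: rank 0 of group A broadcasts to all four ranks of group B.
a = ["payload", None, None]
b = [None, None, None, None]
intercomm_bcast(a, b, root_rank=0)
```

The point of the sketch is that the intercommunicator case reduces to one remote communication plus an existing MPI-1 collective, which is why an implementation layered on MPICH can reuse the intracommunicator algorithms.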

Keywords

Shared Memory · Message Passing Interface · Remote Communication · Message Size · Collective Operation


Bibliography

  1.
  2. M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra. MPI: The Complete Reference, Vol. 1: The MPI Core (2nd Edition). MIT Press, 1998.
  3. W. Gropp, S. Huss-Lederman, A. Lumsdaine, E. Lusk, B. Nitzberg, W. Saphir, and M. Snir. MPI: The Complete Reference, Vol. 2: The MPI Extensions. MIT Press, 1998.
  4.
  5. Local Area Multiprocessor at the C&C Research Laboratories, NEC Europe Ltd. http://www.ccrl-nece.technopark.gmd.de/~maciej/LAMP.html
  6. T. Kielmann, R. Hofman, H. Bal, A. Plaat, and R. Bhoedjang. MagPIe: MPI's Collective Communication Operations for Clustered Wide Area Systems. In Symposium on Principles and Practice of Parallel Programming, Atlanta, GA, May 1999.
  7. J. Bruck, D. Dolev, C.-T. Ho, M.-C. Rosu, and R. Strong. Efficient Message Passing Interface (MPI) for Parallel Computing on Clusters of Workstations. In 7th Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA-95), Santa Barbara, CA, July 1995.
  8. B. Lowekamp and A. Beguelin. ECO: Efficient Collective Operations for Communication on Heterogeneous Networks. In International Parallel Processing Symposium, pages 399–405, Honolulu, HI, 1996.
  9. M. Banikazemi, V. Moorthy, and D. Panda. Efficient Collective Communication on Heterogeneous Networks of Workstations. In International Conference on Parallel Processing, pages 460–467, Minneapolis, MN, August 1998.
  10. J. Marinho and J.G. Silva. WMPI: Message Passing Interface for Win32 Clusters. In Proc. of the 5th European PVM/MPI Users' Group Meeting, pages 113–120, September 1998.
  11.

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Pedro Silva (1)
  • João Gabriel Silva (1)

  1. Dependable Systems Group, Dept. de Engenharia Informática, Universidade de Coimbra, Portugal