MPI Collective Operations over IP Multicast

  • Hsiang Ann Chen
  • Yvette O. Carrasco
  • Amy W. Apon
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1800)

Abstract

Many common implementations of the Message Passing Interface (MPI) implement collective operations over point-to-point operations. This work examines IP multicast as a framework for collective operations. IP multicast is not reliable: if a receiver is not ready when a message is sent via IP multicast, the message is lost. Two techniques for ensuring that a message is not lost due to a slow receiving process are examined. The techniques are implemented and compared experimentally over both a shared and a switched Fast Ethernet. The average performance of collective operations is improved as a function of the number of participating processes and message size for both networks.
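The abstract's core problem can be illustrated with a small sketch: under IP multicast, a datagram that arrives before a receiver has posted its buffer is simply dropped, so the sender must confirm delivery and retransmit. The following is a minimal, self-contained simulation of a sender-initiated reliable-multicast scheme (the class names, round-based timing model, and retransmission loop are illustrative assumptions, not the paper's actual implementation, which works over real Fast Ethernet sockets):

```python
class Receiver:
    """Models a multicast receiver: a datagram is lost if it arrives
    before the receiver has posted its receive buffer."""
    def __init__(self, ready_after):
        self.ready_after = ready_after  # rounds until the receive is posted
        self.delivered = None

    def on_datagram(self, round_no, payload):
        # IP multicast is unreliable: silently drop if not yet ready.
        if round_no >= self.ready_after and self.delivered is None:
            self.delivered = payload
            return True   # ACK
        return False      # lost, no ACK

def reliable_bcast(payload, receivers, max_rounds=16):
    """Sender-initiated scheme: multicast to the group, collect ACKs,
    and retransmit until every receiver has acknowledged."""
    acked = set()
    for r in range(max_rounds):
        for i, rcv in enumerate(receivers):
            if i not in acked and rcv.on_datagram(r, payload):
                acked.add(i)
        if len(acked) == len(receivers):
            return r + 1  # multicast rounds used
    raise RuntimeError("some receivers never became ready")

group = [Receiver(ready_after=k) for k in (0, 2, 5)]
rounds = reliable_bcast("MPI_Bcast payload", group)
print(rounds)   # slowest receiver posts its buffer in round 5 -> 6 rounds
print(all(rcv.delivered == "MPI_Bcast payload" for rcv in group))
```

The sketch shows why a slow receiver drives the cost of the collective: the sender keeps re-multicasting until the last ACK arrives, which is exactly the loss mode the paper's two techniques are designed to avoid.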

Keywords

Binary Tree, Multicast Group, Message Size, Linear Algorithm, Collective Operation
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. D. E. Comer. Internetworking with TCP/IP Vol. I: Principles, Protocols, and Architecture. Prentice Hall, 1995.
  2. T. H. Dunigan and K. A. Hall. PVM and IP Multicast. Technical Report ORNL/TM-13030, Oak Ridge National Laboratory, 1996.
  3. W. Gropp, E. Lusk, N. Doss, and A. Skjellum. A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard. Technical Report Preprint MCS-P567-0296, Argonne National Laboratory, March 1996.
  4. N. Nupairoj and L. M. Ni. Performance Evaluation of Some MPI Implementations on Workstation Clusters. In Proceedings of the 1994 Scalable Parallel Libraries Conference, pages 98–105. IEEE Computer Society Press, October 1994.
  5. P. Pacheco. Parallel Programming with MPI. Morgan Kaufmann, 1997.
  6. The LAM source code. http://www.mpi.nd.edu/lam.
  7.
  8. A. S. Tanenbaum, M. F. Kaashoek, and H. E. Bal. Parallel Programming Using Shared Objects and Broadcasting. Computer, 25(8), 1992.
  9. The Virtual Interface Architecture Standard. http://www.viarch.org.
  10. D. Towsley, J. Kurose, and S. Pingali. A Comparison of Sender-Initiated and Receiver-Initiated Reliable Multicast Protocols. IEEE JSAC, 15(3), April 1997.

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Hsiang Ann Chen (1)
  • Yvette O. Carrasco (1)
  • Amy W. Apon (1)

  1. Computer Science and Computer Engineering, University of Arkansas, Fayetteville, USA
