Point-To-Point Communication Using Migrating Ports

  • Ian T. Foster
  • David R. Kohr, Jr.
  • Robert Olson
  • Steven Tuecke
  • Ming Q. Xu


We describe and evaluate an implementation of a port-based communication model for task-parallel programs. This model permits tasks to communicate without explicit knowledge of the location or identity of their communication partners, which facilitates modular programming and the development of programs that consist of multiple communicating tasks per processing node. We present the protocols used for point-to-point communication and port migration, expressed in terms of portable abstractions for naming, threading, and asynchronous communication provided by the Nexus runtime system. Experimental results on an IBM SP allow us to quantify the performance costs of the functionality associated with port-based communication. We conclude that task-to-task communication via ports can be achieved efficiently even using standard system software, but that tighter integration of threading and communication mechanisms would yield significant further performance gains.
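The abstract's central idea is that senders address a named port rather than a receiver's location, so the receive endpoint can migrate without senders noticing. The following is a minimal illustrative sketch of that idea, not the Nexus API: all names (`PortRegistry`, `create`, `send`, `receive`, `migrate`) are hypothetical, and "migration" is modeled simply by re-binding a port name to a fresh queue and forwarding any pending messages.

```python
import queue
import threading

class PortRegistry:
    """Hypothetical model of location-transparent ports: a registry maps
    each port name to its current receive endpoint (here, a queue)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._endpoints = {}

    def create(self, name):
        with self._lock:
            self._endpoints[name] = queue.Queue()

    def send(self, name, msg):
        # Senders resolve the port name at send time, so they need no
        # knowledge of where the receiving task currently runs.
        with self._lock:
            q = self._endpoints[name]
        q.put(msg)

    def receive(self, name, timeout=1.0):
        with self._lock:
            q = self._endpoints[name]
        return q.get(timeout=timeout)

    def migrate(self, name):
        # Re-bind the port to a new endpoint, as if the receiving task
        # moved to another node; pending messages are forwarded so none
        # are lost across the migration.
        with self._lock:
            old = self._endpoints[name]
            new = queue.Queue()
            while not old.empty():
                new.put(old.get())
            self._endpoints[name] = new

registry = PortRegistry()
registry.create("results")
registry.send("results", "before-migration")
registry.migrate("results")          # receiver "moves"; senders are unaffected
registry.send("results", "after-migration")
print(registry.receive("results"))   # before-migration
print(registry.receive("results"))   # after-migration
```

In the paper's actual setting the registry lookup, forwarding, and endpoint queues correspond to distributed protocols built on Nexus global pointers and remote service requests; this single-process sketch only captures the naming indirection that makes migration transparent to senders.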







Copyright information

© Springer Science+Business Media New York 1996

Authors and Affiliations

  • Ian T. Foster
    • 1
  • David R. Kohr, Jr.
    • 1
  • Robert Olson
    • 1
  • Steven Tuecke
    • 1
  • Ming Q. Xu
    • 1
  1. Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, USA
