Building MPI for Multi-Programming Systems Using Implicit Information

  • Frederick C. Wong
  • Andrea C. Arpaci-Dusseau
  • David E. Culler
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1697)

Abstract

With the growing importance of fast system area networks in the parallel community, it is becoming common for message passing programs to run in multi-programming environments. Competing sequential and parallel jobs can distort the global coordination of communicating processes. In this paper, we describe our implementation of MPI using implicit information for global co-scheduling. Our results show that MPI program performance is, indeed, sensitive to local scheduling variations. Further, the integration of implicit co-scheduling with the MPI runtime system achieves robust performance in a multi-programming environment, without compromising performance in dedicated use.
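
The paper integrates implicit co-scheduling into the MPI runtime's waiting primitives. The core mechanism in that line of work is a two-phase wait: a process spins for a bounded interval on the assumption that its communication partner is currently scheduled, and otherwise yields the processor so competing local jobs can run. The C sketch below illustrates that idea only; spin_block_wait and the SPIN_SEC threshold are illustrative names and values, not details taken from the paper's implementation.

```c
/* Minimal sketch of the two-phase (spin-then-yield) wait that implicit
 * co-scheduling builds on. SPIN_SEC is a hypothetical tuning threshold,
 * not a value taken from the paper. Build with mpicc; run with
 * mpirun -np 2. */
#include <mpi.h>
#include <sched.h>
#include <stdio.h>

#define SPIN_SEC 50e-6  /* hypothetical spin threshold: 50 microseconds */

/* Poll a pending request: spin first on the bet that the sender is
 * currently co-scheduled; if the message is late, yield the CPU to
 * competing local jobs and keep polling at coarser intervals. */
static void spin_block_wait(MPI_Request *req, MPI_Status *status)
{
    int done = 0;
    double start = MPI_Wtime();

    while (!done && MPI_Wtime() - start < SPIN_SEC)   /* phase 1: spin  */
        MPI_Test(req, &done, status);

    while (!done) {                                   /* phase 2: yield */
        sched_yield();
        MPI_Test(req, &done, status);
    }
}

int main(int argc, char **argv)
{
    int rank, msg = 0;
    MPI_Request req;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Irecv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        spin_block_wait(&req, &st);
        printf("received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```

The key design point is that coordination is inferred from message arrival times rather than from an explicit global scheduler: a fast response keeps the process spinning (and scheduled), while a slow one causes it to back off locally.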

Keywords

Execution Time · Message Size · Active Message · Implicit Information · Message Passing Program

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Frederick C. Wong (1)
  • Andrea C. Arpaci-Dusseau (2)
  • David E. Culler (1)
  1. Computer Science Division, University of California, Berkeley
  2. Computer Systems Laboratory, Stanford University, Stanford
