
Performance Evaluation of MPI/MBCF with the NAS Parallel Benchmarks

  • Kenji Morimoto
  • Takashi Matsumoto
  • Kei Hiraki
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1697)

Abstract

MPI/MBCF is a high-performance MPI library targeting clusters of workstations connected by a commodity network. It is implemented with the Memory-Based Communication Facilities (MBCF), which provide software mechanisms for accessing a remote task’s memory space with off-the-shelf network hardware. Of the functions MBCF offers, MPI/MBCF uses Memory-Based FIFO for message buffering and Remote Write for communication without buffering. In this paper, we evaluate the performance of MPI/MBCF on a cluster of workstations with the NAS Parallel Benchmarks, and verify whether a message passing library implemented on the shared memory model achieves higher performance than one implemented on the message passing model.
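The abstract describes two send paths: a direct Remote Write into the receiver's memory when no buffering is needed, and a Memory-Based FIFO when the message must be queued. The sketch below illustrates that selection in plain C under stated assumptions; the names (`mpi_send_sketch`, `remote_write`, `fifo_enqueue`, `mb_fifo_t`) are hypothetical stand-ins, not the actual MBCF or MPI/MBCF API.

```c
/* Illustrative sketch of the two send paths described in the abstract.
   All identifiers here are invented for illustration; the real MBCF
   interface is described in reference 8. */
#include <string.h>

#define FIFO_CAP 8
#define SLOT_LEN 64

/* Stand-in for a receiver-side Memory-Based FIFO. */
typedef struct {
    char slots[FIFO_CAP][SLOT_LEN];
    size_t lens[FIFO_CAP];
    int head, tail, count;
} mb_fifo_t;

/* Unbuffered path: models an MBCF Remote Write, where the sender
   deposits data directly into the destination task's memory. */
static void remote_write(char *dst, const char *src, size_t len) {
    memcpy(dst, src, len);
}

/* Buffered path: no matching receive is known yet, so the message
   is appended to the Memory-Based FIFO for later delivery. */
static int fifo_enqueue(mb_fifo_t *f, const char *src, size_t len) {
    if (f->count == FIFO_CAP || len > SLOT_LEN)
        return -1;                      /* queue full or message too large */
    memcpy(f->slots[f->tail], src, len);
    f->lens[f->tail] = len;
    f->tail = (f->tail + 1) % FIFO_CAP;
    f->count++;
    return 0;
}

/* Send chooses between the two mechanisms: Remote Write when a
   destination buffer is already known, Memory-Based FIFO otherwise.
   Returns 1 for the direct path, 0 for the buffered path, -1 on error. */
static int mpi_send_sketch(char *posted_recv_buf, mb_fifo_t *f,
                           const char *msg, size_t len) {
    if (posted_recv_buf != NULL) {
        remote_write(posted_recv_buf, msg, len);
        return 1;
    }
    return fifo_enqueue(f, msg, len) == 0 ? 0 : -1;
}
```

The point of the design, as the abstract frames it, is that both paths are built on a shared-memory-style primitive (remote memory access) rather than on a message passing transport.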



References

  1. D. Bailey, E. Barszcz, J. Barton, D. Browning, R. Carter, L. Dagum, R. Fatoohi, S. Fineberg, P. Frederickson, T. Lasinski, R. Schreiber, H. Simon, V. Venkatakrishnan, and S. Weeratunga. The NAS parallel benchmarks. Technical Report RNR-94-007, NASA Ames Research Center, March 1994.
  2. D. Bailey, T. Harris, W. Saphir, R. Wijngaart, A. Woo, and M. Yarrow. The NAS parallel benchmarks 2.0. Technical Report NAS-95-020, NASA Ames Research Center, December 1995.
  3. Message Passing Interface Forum. MPI: A message-passing interface standard. http://www.mpi-forum.org/, June 1995.
  4. Message Passing Interface Forum. MPI-2: Extensions to the message-passing interface. http://www.mpi-forum.org/, July 1997.
  5. W. Gropp, E. Lusk, N. Doss, and A. Skjellum. A high-performance, portable implementation of the MPI message-passing interface standard. Parallel Computing, 22(6):789–828, September 1996.
  6. T. Matsumoto, S. Furuso, and K. Hiraki. Resource management methods of the general-purpose massively-parallel operating system: SSS-CORE (in Japanese). In Proc. of 11th Conf. of JSSST, pages 13–16, October 1994.
  7. T. Matsumoto and K. Hiraki. Memory-based communication facilities of the general-purpose massively-parallel operating system: SSS-CORE (in Japanese). In Proc. of 53rd Annual Convention of IPSJ (1), pages 37–38, September 1996.
  8. T. Matsumoto and K. Hiraki. MBCF: A protected and virtualized high-speed user-level memory-based communication facility. In Proc. of Int. Conf. on Supercomputing ’98, pages 259–266, July 1998.
  9. K. Morimoto. Implementing message passing communication with a shared memory communication mechanism. Master’s thesis, Graduate School of University of Tokyo, March 1999.
  10. K. Morimoto, T. Matsumoto, and K. Hiraki. Implementing MPI with the memory-based communication facilities on the SSS-CORE operating system. In V. Alexandrov and J. Dongarra, editors, Recent Advances in Parallel Virtual Machine and Message Passing Interface, volume 1497 of Lecture Notes in Computer Science, pages 223–230. Springer-Verlag, September 1998.

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Kenji Morimoto (1)
  • Takashi Matsumoto (1)
  • Kei Hiraki (1)

  1. Department of Information Science, Faculty of Science, University of Tokyo, Bunkyo Ward, Tokyo, Japan
