Design Alternatives and Performance Trade-Offs for Implementing MPI-2 over InfiniBand

  • Wei Huang
  • Gopalakrishnan Santhanaraman
  • Hyun-Wook Jin
  • Dhabaleswar K. Panda
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3666)

Abstract

MPICH2 provides a layered architecture to achieve both portability and performance. For implementations of MPI-2 over InfiniBand, it gives researchers the flexibility to implement at the RDMA channel, CH3, or ADI3 layer. In this paper we analyze the performance and complexity trade-offs associated with implementations at each of these layers. We describe our designs and implementations, as well as the optimizations possible at each layer. To show the performance impact of these design choices and optimizations, we evaluate our implementations with micro-benchmarks, the HPC Challenge (HPCC) benchmark suite, and the NAS Parallel Benchmarks. Our experiments show that although implementing at the ADI3 layer adds complexity, the benefits achieved through its optimizations justify moving to that layer to extract the best performance.
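
To give a concrete sense of the MPI-2 one-sided operations whose performance such layered implementations must deliver, the following is a minimal, illustrative latency micro-benchmark sketch, not the exact benchmark code used in the paper: rank 0 repeatedly puts a buffer into a window exposed by rank 1 and synchronizes with MPI_Win_fence. The message size and iteration count are arbitrary assumptions chosen for this sketch.

/* Illustrative MPI-2 one-sided micro-benchmark sketch (not the paper's code).
 * Rank 0 issues MPI_Put into rank 1's window; MPI_Win_fence completes and
 * synchronizes each iteration. MSG_SIZE and ITERATIONS are arbitrary. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE   4096
#define ITERATIONS 1000

int main(int argc, char **argv)
{
    int rank, size, i;
    char *buf;
    MPI_Win win;
    double start, end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 processes\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    buf = (char *) malloc(MSG_SIZE);

    /* Every rank exposes its buffer as an RMA window; rank 0 targets rank 1. */
    MPI_Win_create(buf, MSG_SIZE, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    start = MPI_Wtime();
    for (i = 0; i < ITERATIONS; i++) {
        if (rank == 0)
            MPI_Put(buf, MSG_SIZE, MPI_CHAR, 1, 0, MSG_SIZE, MPI_CHAR, win);
        MPI_Win_fence(0, win);  /* complete the put and synchronize all ranks */
    }
    end = MPI_Wtime();

    if (rank == 0)
        printf("avg time per Put+fence: %.2f us\n",
               (end - start) * 1e6 / ITERATIONS);

    MPI_Win_free(&win);
    free(buf);
    MPI_Finalize();
    return 0;
}

Whether MPI_Put above maps to an RDMA write directly (as an ADI3- or CH3-level design can arrange) or is funneled through a generic channel interface is exactly the kind of trade-off the paper examines.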

Keywords

MPI-2 · InfiniBand · RDMA channel · CH3 · ADI3

References

  1. Bailey, D.H., Barszcz, E., Dagum, L., Simon, H.D.: NAS Parallel Benchmark Results. Technical Report 94-006, RNR (1994)
  2. HPC Challenge Benchmark, http://icl.cs.utk.edu/hpcc/
  3. Grabner, R., Mietke, F., Rehm, W.: An MPICH2 Channel Device Implementation over VAPI on InfiniBand. In: Proceedings of the International Parallel and Distributed Processing Symposium (2004)
  4. Huang, W., Santhanaraman, G., Jin, H.W., Panda, D.K.: Scheduling of MPI-2 One Sided Operations over InfiniBand. In: Workshop on Communication Architecture on Clusters (CAC), in conjunction with IPDPS 2005 (April 2005)
  5. InfiniBand Trade Association: InfiniBand Architecture Specification, Release 1.2
  6. Network-Based Computing Laboratory, http://nowlab.cis.ohio-state.edu/
  7. Liu, J., Jiang, W., Jin, H.W., Panda, D.K., Gropp, W., Thakur, R.: High Performance MPI-2 One-Sided Communication over InfiniBand. In: International Symposium on Cluster Computing and the Grid (CCGrid 2004) (April 2004)
  8. Liu, J., Jiang, W., Wyckoff, P., Panda, D.K., Ashton, D., Buntinas, D., Gropp, W., Toonen, B.: Design and Implementation of MPICH2 over InfiniBand with RDMA Support. In: Proceedings of the International Parallel and Distributed Processing Symposium (2004)
  9. Message Passing Interface Forum: MPI-2: A Message Passing Interface Standard. High Performance Computing Applications 12(1–2), 1–299 (1998)
  10.
  11. Santhanaraman, G., Wu, J., Panda, D.K.: Zero-Copy MPI Derived Datatype Communication over InfiniBand. In: EuroPVM/MPI 2004 (September 2004)
  12. Snir, M., Otto, S., Huss-Lederman, S., Walker, D., Dongarra, J.: MPI–The Complete Reference, 2nd edn. Vol. 1: The MPI-1 Core. The MIT Press, Cambridge (1998)
  13. Tezuka, H., O'Carroll, F., Hori, A., Ishikawa, Y.: Pin-down Cache: A Virtual Memory Management Technique for Zero-Copy Communication. In: Proceedings of the 12th International Parallel Processing Symposium (1998)
  14. Wu, J., Wyckoff, P., Panda, D.K.: High Performance Implementation of MPI Datatype Communication over InfiniBand. In: Proceedings of the International Parallel and Distributed Processing Symposium (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Wei Huang (1)
  • Gopalakrishnan Santhanaraman (1)
  • Hyun-Wook Jin (1)
  • Dhabaleswar K. Panda (1)

  1. Department of Computer Science and Engineering, The Ohio State University