Designing a Portable MPI-2 over Modern Interconnects Using uDAPL Interface

  • L. Chai
  • R. Noronha
  • P. Gupta
  • G. Brown
  • D. K. Panda
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3666)

Abstract

In the high performance computing arena, several implementations of MPI-1 and MPI-2 exist for different networks. Some of them support multiple networks, but most require a new communication device to be implemented before they can be deployed on a new interconnect. The emerging uDAPL interface provides a network-independent interface to the native transports of different networks. Designing a portable MPI library over uDAPL may therefore allow users to move quickly from one networking technology to another. In this paper, we redesign the popular MVAPICH2 library to use uDAPL for its communication operations. To the best of our knowledge, this is the first open-source MPI-2 compliant implementation over uDAPL. Evaluation with micro-benchmarks and applications on InfiniBand shows that the uDAPL-based implementation performs comparably to native MVAPICH2. Evaluation with micro-benchmarks on Myrinet and Gigabit Ethernet shows that it delivers performance close to that of the underlying uDAPL library.
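
The portability claim rests on coding the MPI channel against the DAT-defined API rather than a vendor-specific one. As a rough illustration (not code from the paper; the struct and adapter name are hypothetical, and the call names follow the public DAT 1.2 C header dat/udat.h, so exact signatures should be checked against your installation), the per-adapter setup such a design performs looks roughly like this:

    #include <dat/udat.h>

    /* Handles an MPI channel might keep per network adapter. */
    struct ud_channel {
        DAT_IA_HANDLE  ia;
        DAT_EVD_HANDLE async_evd, conn_evd, dto_evd;
        DAT_PZ_HANDLE  pz;
        DAT_EP_HANDLE  ep;
    };

    /* Open one adapter and create an endpoint. ia_name is whatever the
     * provider registers in dat.conf, e.g. "ib0" for an InfiniBand HCA
     * (illustrative). Error handling is abbreviated for brevity. */
    static int ud_channel_open(struct ud_channel *c, const DAT_NAME_PTR ia_name)
    {
        c->async_evd = DAT_HANDLE_NULL;   /* ask dat_ia_open to create it */
        if (dat_ia_open(ia_name, 8, &c->async_evd, &c->ia) != DAT_SUCCESS)
            return -1;

        /* Protection zone: scopes memory registrations to this adapter. */
        dat_pz_create(c->ia, &c->pz);

        /* Separate event dispatchers for connection events and for
         * data-transfer (DTO) completions. */
        dat_evd_create(c->ia, 8,  DAT_HANDLE_NULL, DAT_EVD_CONNECTION_FLAG,
                       &c->conn_evd);
        dat_evd_create(c->ia, 64, DAT_HANDLE_NULL, DAT_EVD_DTO_FLAG,
                       &c->dto_evd);

        /* Endpoint: the connection-oriented channel over which sends,
         * receives, and RDMA operations are posted. */
        dat_ep_create(c->ia, c->pz, c->dto_evd /* recv */,
                      c->dto_evd /* request */, c->conn_evd,
                      NULL /* default attributes */, &c->ep);
        return 0;
    }

Moving to a different interconnect then amounts to selecting a different provider entry in the dat.conf registry; the calls themselves are unchanged, which is the portability argument made above.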

Keywords

MPI-1 · MPI-2 · uDAPL · InfiniBand · Myrinet · Gigabit Ethernet · Cluster

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • L. Chai
  • R. Noronha
  • P. Gupta
  • G. Brown
  • D. K. Panda

  1. Department of Computer Science and Engineering, The Ohio State University, USA
