Implementing MPI-IO Shared File Pointers Without File System Support

  • Robert Latham
  • Robert Ross
  • Rajeev Thakur
  • Brian Toonen

Part of the Lecture Notes in Computer Science book series (LNCS, volume 3666)

Abstract

The ROMIO implementation of the MPI-IO standard provides a portable infrastructure for use on top of any number of different underlying storage targets. These targets vary widely in their capabilities, and in some cases additional effort is needed within ROMIO to support all MPI-IO semantics. The MPI-2 standard defines a class of file access routines that use a shared file pointer. These routines require communication internal to the MPI-IO implementation in order to allow processes to atomically update this shared value. We discuss a technique that leverages MPI-2 one-sided operations and can be used to implement this concept without requiring any features from the underlying file system. We then demonstrate through a simulation that our algorithm adds reasonable overhead for independent accesses and very small overhead for collective accesses.
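
The technique described in the paper targets MPI-2, which offers no atomic read-modify-write primitive, so the authors construct the atomic update from passive-target one-sided operations. As a rough illustration of the core idea only, the sketch below instead uses MPI-3's MPI_Fetch_and_op, a later standardization of exactly this fetch-and-add pattern, to atomically advance a shared file offset hosted in a window on rank 0. The names (fp_mem, shared_fp_win) and access sizes are illustrative and are not taken from ROMIO.

    /* Sketch: emulating a shared file pointer with MPI one-sided operations.
     * Assumes MPI-3 for MPI_Fetch_and_op; the paper builds an equivalent
     * atomic update from MPI-2 primitives alone. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Offset *fp_mem = NULL;        /* shared offset lives on rank 0 */
        MPI_Win shared_fp_win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Rank 0 exposes a single MPI_Offset; other ranks attach no memory. */
        if (rank == 0) {
            MPI_Alloc_mem(sizeof(MPI_Offset), MPI_INFO_NULL, &fp_mem);
            *fp_mem = 0;
        }
        MPI_Win_create(fp_mem, rank == 0 ? sizeof(MPI_Offset) : 0,
                       sizeof(MPI_Offset), MPI_INFO_NULL, MPI_COMM_WORLD,
                       &shared_fp_win);

        /* Each process claims its region of the file by atomically fetching
         * the current shared offset and adding its own access size. */
        MPI_Offset bytes = 100 * (rank + 1);   /* illustrative access size */
        MPI_Offset my_offset;
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, shared_fp_win);
        MPI_Fetch_and_op(&bytes, &my_offset, MPI_OFFSET, 0, 0, MPI_SUM,
                         shared_fp_win);
        MPI_Win_unlock(0, shared_fp_win);

        printf("rank %d: %lld bytes at offset %lld\n",
               rank, (long long)bytes, (long long)my_offset);

        MPI_Win_free(&shared_fp_win);
        if (rank == 0) MPI_Free_mem(fp_mem);
        MPI_Finalize();
        return 0;
    }

This pattern also suggests why the collective case is cheap: one process can perform a single fetch-and-add for the aggregate request size, and the per-process offsets can then be derived locally (for example with MPI_Scan), so the shared value is updated once per collective call rather than once per process.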

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Robert Latham¹
  • Robert Ross¹
  • Rajeev Thakur¹
  • Brian Toonen¹

  1. Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, USA
