Abstract
MPI is the de facto standard for message passing in parallel scientific applications. MPI-IO, part of the MPI-2 specification, defines file I/O operations for MPI applications. Because it acts as a portability layer between the application and the file system, MPI-IO enables performance optimizations for collective file I/O operations. The goal of this study is to optimize collective file I/O operations. Three different algorithms for performing collective I/O operations have been developed, implemented, and evaluated on a PVFS2 file system and over NFS. The results indicate that no single algorithm delivers the highest write bandwidth across all process counts, application settings, and file systems, making a one-size-fits-all solution inefficient.
© 2009 Springer-Verlag Berlin Heidelberg
Chaarawi, M., Chandok, S., Gabriel, E. (2009). Performance Evaluation of Collective Write Algorithms in MPI I/O. In: Allen, G., Nabrzyski, J., Seidel, E., van Albada, G.D., Dongarra, J., Sloot, P.M.A. (eds) Computational Science – ICCS 2009. Lecture Notes in Computer Science, vol 5544. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01970-8_19
Print ISBN: 978-3-642-01969-2
Online ISBN: 978-3-642-01970-8