Abstract
Hybrid parallel programming with MPI for internode communication in conjunction with a shared-memory programming model to manage intranode parallelism has become a dominant approach to scalable parallel programming. While this model provides a great deal of flexibility and performance potential, it saddles programmers with the complexity of utilizing two parallel programming systems in the same application. We introduce an MPI-integrated shared-memory programming model that is incorporated into MPI through a small extension to the one-sided communication interface. We discuss the integration of this interface with the upcoming MPI 3.0 one-sided semantics and describe solutions for providing portable and efficient data sharing, atomic operations, and memory consistency. We describe an implementation of the new interface in the MPICH2 and Open MPI implementations and demonstrate an average performance improvement of 40% in the communication component of a five-point stencil solver.
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Hoefler, T. et al. (2012). Leveraging MPI’s One-Sided Communication Interface for Shared-Memory Programming. In: Träff, J.L., Benkner, S., Dongarra, J.J. (eds) Recent Advances in the Message Passing Interface. EuroMPI 2012. Lecture Notes in Computer Science, vol 7490. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33518-1_18
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-33517-4
Online ISBN: 978-3-642-33518-1