Abstract
Fine-Grain MPI (FG-MPI) supports function-level parallelism while staying within the MPI process model. It provides a runtime that is integrated directly into the MPICH2 middleware and uses light-weight coroutines to implement an MPI-aware scheduler. Our key observation is that having multiple MPI processes per OS process, together with a runtime scheduler, simplifies MPI programming and achieves performance without adding complexity to the program. The performance-related part of the program thus moves out of the program's specification and into the runtime, where it can be tuned with few, if any, changes to the code.
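As a minimal sketch of the programming model described above, the following is an ordinary MPI ring exchange in C. Under FG-MPI, many of its ranks can be co-scheduled as coroutines inside a single OS process, so the decomposition granularity is chosen at launch time rather than baked into the code. The -nfg flag in the usage comment follows the FG-MPI papers; the exact launch syntax is an assumption and may vary by version.

/*
 * ring.c -- a plain MPI ring exchange. Under FG-MPI the same source can
 * run with many coroutine ranks per OS process; granularity is chosen
 * at launch time, not in the code.
 *
 * Hypothetical launch lines (flag per the FG-MPI papers; syntax may vary):
 *   mpiexec -nfg 8 -n 4 ./ring   # 8 coroutine ranks in each of 4 OS processes
 *   mpiexec -n 32 ./ring         # the same program under plain MPICH2
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, token;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;

    if (rank == 0) {
        token = 0;
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("token %d returned to rank 0 of %d ranks\n", token, size);
    } else {
        /* A blocking receive is a natural yield point: an MPI-aware
         * scheduler can switch to another coroutine rank in the same
         * OS process while this one waits. */
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        token++;
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Because the source is unchanged between the two launch lines, tuning the mix of coroutine ranks per OS process is purely a runtime decision, which is the point the abstract makes about moving performance out of the program's specification.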
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Kamal, H., Wagner, A. (2012). An Integrated Runtime Scheduler for MPI. In: Träff, J.L., Benkner, S., Dongarra, J.J. (eds.) Recent Advances in the Message Passing Interface. EuroMPI 2012. Lecture Notes in Computer Science, vol. 7490. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33518-1_22
Print ISBN: 978-3-642-33517-4
Online ISBN: 978-3-642-33518-1