An Integrated Runtime Scheduler for MPI

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7490)

Abstract

Fine-Grain MPI (FG-MPI) supports function-level parallelism while staying within the MPI process model. It provides a runtime that is integrated directly into the MPICH2 middleware and uses lightweight coroutines to implement an MPI-aware scheduler. Our key observation is that running multiple MPI processes per OS process under such a scheduler simplifies MPI programming and achieves performance without adding complexity to the program: performance tuning moves out of the program's specification and into the runtime, where it can be adjusted with few, if any, changes to the code.
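To make the execution model concrete, the following is a minimal sketch of how an FG-MPI program is structured. The boilerplate names (FGmpiexec, FG_ProcessPtr_t, FG_MapPtr_t) and the -nfg launch flag follow the interface published with the FG-MPI distribution; the exact signatures are recalled here as assumptions and should be checked against the release. The body of the program is an ordinary MPI function; the runtime executes each rank as a coroutine and switches among co-located ranks when one blocks on communication.

/* Sketch of an FG-MPI program; API names assumed from the FG-MPI
 * distribution. Each MPI rank runs fgmpi_main() as a lightweight
 * coroutine inside an OS process, scheduled by the MPI-aware
 * scheduler integrated into MPICH2. */
#include <stdio.h>
#include "mpi.h"
#include "fgmpi.h"

/* An ordinary MPI program: a token is passed around a ring. When a
 * rank blocks in MPI_Recv, the scheduler yields its coroutine and
 * runs another rank co-located in the same OS process. */
int fgmpi_main(int argc, char **argv)
{
    int rank, size, token = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0)
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    token++;
    MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("token made %d hops\n", token);
    }

    MPI_Finalize();
    return 0;
}

/* Boilerplate: bind every rank in this OS process to fgmpi_main. */
FG_ProcessPtr_t binding_func(int argc, char **argv, int rank)
{
    return (&fgmpi_main);
}

FG_MapPtr_t map_lookup(int argc, char **argv, char *str)
{
    return (&binding_func);
}

int main(int argc, char *argv[])
{
    FGmpiexec(&argc, &argv, &map_lookup);  /* hand control to the runtime */
    return 0;
}

Launched as, say, mpiexec -nfg 500 -n 2 ./ring, the runtime would interleave 500 coroutine ranks inside each of two OS processes, for 1000 MPI ranks in total; changing -nfg and -n retunes the decomposition without touching the code, which is the tuning-outside-the-program property the abstract describes.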

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kamal, H., Wagner, A. (2012). An Integrated Runtime Scheduler for MPI. In: Träff, J.L., Benkner, S., Dongarra, J.J. (eds) Recent Advances in the Message Passing Interface. EuroMPI 2012. Lecture Notes in Computer Science, vol 7490. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33518-1_22

  • DOI: https://doi.org/10.1007/978-3-642-33518-1_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33517-4

  • Online ISBN: 978-3-642-33518-1
