MPI 3 and Beyond: Why MPI Is Successful and What Challenges It Faces

  • Conference paper
Recent Advances in the Message Passing Interface (EuroMPI 2012)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 7490)

Abstract

The Message Passing Interface (MPI) was developed over eighteen years ago and continues to be the preferred programming model for scientific computing. Contributing to that success was a combination of forward-looking features, precise definition, and judgment based on the experience of developers, vendors, and users. Today, MPI continues to adapt to the changing needs of parallel programming, with MPI-3 introducing enhancements for collective and one-sided communication, multi-threaded programming, support for performance tools, and more. However, MPI faces many challenges as the nature of parallel computing changes more radically than at any time in the history of MPI. This talk will touch on some of the less obvious but important reasons for MPI's success, discuss some of the challenges that MPI faces, and make suggestions for future directions in MPI and parallel programming language research.
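
The abstract only names the MPI-3 additions; as a concrete illustration (not taken from the paper itself), the following minimal C sketch uses one of them, a nonblocking collective (MPI_Ibcast), which lets a broadcast overlap with independent local work. It assumes any MPI-3 implementation; the file name, value, and local-work placeholder are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Root fills the buffer; all other ranks receive it. */
    int data = (rank == 0) ? 42 : 0;
    MPI_Request req;

    /* MPI-3 nonblocking collective: start the broadcast without blocking. */
    MPI_Ibcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);

    /* ... independent local work could overlap with the broadcast here ... */

    /* Complete the collective before using the buffer. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    printf("rank %d received %d\n", rank, data);

    MPI_Finalize();
    return 0;
}

Built and run with an MPI-3 library, e.g. mpicc ibcast_sketch.c -o ibcast_sketch followed by mpiexec -n 4 ./ibcast_sketch (launcher names vary by installation), every rank prints the broadcast value.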

This work was supported in part by the Office of Advanced Scientific Computing Research, Office of Science, U.S. Department of Energy award DE-FG02-08ER25835 and award DE-SC0004131.

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gropp, W. (2012). MPI 3 and Beyond: Why MPI Is Successful and What Challenges It Faces. In: Träff, J.L., Benkner, S., Dongarra, J.J. (eds) Recent Advances in the Message Passing Interface. EuroMPI 2012. Lecture Notes in Computer Science, vol 7490. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33518-1_1

  • DOI: https://doi.org/10.1007/978-3-642-33518-1_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33517-4

  • Online ISBN: 978-3-642-33518-1

  • eBook Packages: Computer Science (R0)
