Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 4757)

Abstract

This talk will trace the origins of MPI from the early message-passing, distributed-memory parallel computers of the 1980s to today's parallel supercomputers. In those early days, parallel computing companies implemented proprietary message-passing libraries to support distributed-memory parallel programs. The market was also highly unstable: parallel computing companies could come and go, and their demise took with it the great deal of effort invested in parallel programs written to their specific call interfaces. By January 1992, Geoffrey Fox and Ken Kennedy had initiated a community effort called Fortran D, a precursor to High Performance Fortran and a high-level data-parallel language that compiled down to a distributed-memory architecture. In the event, HPF proved an over-ambitious goal: what was clearly achievable was a message-passing standard that enabled portability across a variety of distributed-memory message-passing machines. In Europe, there was enthusiasm for the PARMACS libraries; in the US, PVM was gaining adherents for distributed computing. For these reasons, in November 1992 Jack Dongarra and David Walker from the USA, and Rolf Hempel and Tony Hey from Europe, wrote an initial draft of the MPI standard. After a birds-of-a-feather session at the 1992 Supercomputing Conference, Bill Gropp and Rusty Lusk from Argonne volunteered to create an open-source implementation of the emerging MPI standard. This proved crucial in accelerating uptake of the community-based standard, as did support from IBM, Intel and Meiko. Because the MPI standardization process needed to converge to agreement in a little over a year, the final agreed version of MPI contains more communication calls than most users now require. A later standardization process extended the functionality of MPI, resulting in MPI-2.
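
As a minimal illustration of the portable communication interface the standard converged on, the sketch below (written for this rewrite, not taken from the talk) uses only the handful of core calls most programs actually need; it should compile with any MPI implementation's mpicc wrapper.

    /* Minimal MPI point-to-point sketch (illustrative only).
       Build: mpicc ping.c -o ping
       Run:   mpiexec -n 2 ./ping   (needs at least two processes) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int msg = 42;
            /* rank 0 sends one integer, tagged 0, to rank 1 */
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int msg;
            /* rank 1 receives the matching message from rank 0 */
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", msg);
        }

        MPI_Finalize();
        return 0;
    }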

Where are we now? It is clear that MPI provides effective portability for data-parallel, distributed-memory message-passing programs. Moreover, such MPI programs can scale to large numbers of processors. MPI therefore retains its appeal for closely coupled distributed computing, and the rise of HPC clusters as a local resource has made MPI ubiquitous among serious parallel programmers. However, two trends may limit the usefulness of MPI in the future. The first is the rise of the Web, and of Web Services, as a paradigm for distributed, service-oriented computing. In principle, Web Service protocols that expose functionality as a service offer the prospect of building more robust software for distributed systems. The second trend is the move towards multi-core processors, as semiconductor manufacturers find that they can no longer increase clock speeds even as feature sizes continue to shrink. Thus, although Moore's Law, in the sense of ever-shrinking feature sizes, will hold for perhaps a decade or more, the accompanying speed increases that came from rising clock rates will no longer be possible. For this reason, 2-, 4- and 8-core chips, in which the processor is replicated several times on a die, are already becoming commonplace. This means that any significant performance increase will depend entirely on the programmer's ability to exploit parallelism in their applications. The talk will end by reviewing these trends and examining the applicability of MPI in the future.
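
To make the multi-core point concrete, one common pattern is simply to launch one MPI process per core and let each rank work on a slice of the data. The sketch below (again illustrative, not from the talk) sums the integers 0..N-1 across however many processes are started, so the same binary can exploit 2, 4 or 8 cores unchanged, e.g. via mpiexec -n 8 ./sum.

    /* Data-parallel reduction sketch (illustrative only).
       Run with one process per core, e.g. mpiexec -n 8 ./sum */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* each rank sums a strided slice of 0..N-1 */
        const long N = 1000000;
        long local = 0, total = 0;
        for (long i = rank; i < N; i += size)
            local += i;

        /* combine the partial sums on rank 0 */
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %ld using %d processes\n", total, size);

        MPI_Finalize();
        return 0;
    }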

Author information

Tony Hey

Editor information

Franck Cappello, Thomas Herault, Jack Dongarra

Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hey, T. (2007). MPI: Past, Present and Future. In: Cappello, F., Herault, T., Dongarra, J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2007. Lecture Notes in Computer Science, vol 4757. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-75416-9_3

  • DOI: https://doi.org/10.1007/978-3-540-75416-9_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-75415-2

  • Online ISBN: 978-3-540-75416-9

  • eBook Packages: Computer Science, Computer Science (R0)
