Abstract
MPI is a standardized Application Programming Interface (API) that specifies unambiguously the interface (that is, the declarations of functions, procedures, data types, constants, etc.) together with the precise semantics of communication protocols and global computation routines, among others. A parallel program using distributed memory can therefore be run on top of any of the MPI implementations provided by various vendors (such as the prominent Open MPI and MPICH2). Communications can be either synchronous or asynchronous, buffered or unbuffered, and one can define synchronization barriers at which all processes must wait for each other before carrying on with their computations.
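As a minimal sketch of these point-to-point primitives and barriers (an illustrative program, not a listing from the chapter; the tag and payload values are made up), two processes can exchange an integer with the blocking MPI_Send/MPI_Recv pair and then synchronize:

```c
/* Minimal MPI sketch: rank 0 sends an integer to rank 1 with the
   blocking MPI_Send/MPI_Recv pair, then all ranks meet at a barrier.
   Compile with mpicc and run with: mpirun -np 2 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's identifier */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    const int tag = 0;  /* illustrative message tag */
    if (rank == 0) {
        int payload = 42;  /* illustrative value */
        MPI_Send(&payload, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", payload);
    }

    /* Synchronization barrier: every process waits here for the others
       before carrying on with further computations. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```

Note that MPI_Send is blocking: whether it returns as soon as the message is buffered or only once a matching receive is posted depends on the implementation, which is why mismatched send/receive pairs can deadlock.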
Notes
- 1.
Hypertext Markup Language.
- 2.
An example of a binary operator that is not commutative is division, since \(p/q \neq q/p\).
- 3.
See the manual online: https://www.open-mpi.org/doc/v1.4/man3/MPI_Send.3.php.
- 4.
In that case, either a time-out signal can be emitted externally to kill all the processes, or the processes must be killed manually from the shell command line using their process identifiers (PIDs).
- 6.
See manual online at https://www.open-mpi.org/doc/v1.5/man3/MPI_Reduce.3.php.
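To illustrate the reduction routine referred to in note 6 (a hedged sketch under the assumption of an MPI_SUM reduction to rank 0, not the chapter's own listing), MPI_Reduce combines one value per process with a given operator; as note 2 points out, the operator's commutativity matters, because the runtime is free to combine partial results in any order:

```c
/* Sketch of a global reduction: each process contributes one integer
   and rank 0 receives the sum. Compile with mpicc and run with:
   mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank + 1;  /* illustrative per-process contribution */
    int total = 0;

    /* MPI_SUM is commutative and associative, so the implementation may
       combine the partial results in any order; a non-commutative
       operator (like division, note 2) would not be safe here. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

With 4 processes the contributions are 1 + 2 + 3 + 4, so rank 0 reports 10; only the root's output buffer receives the result.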
© 2016 Springer International Publishing Switzerland
Cite this chapter
Nielsen, F. (2016). Introduction to MPI: The Message Passing Interface. In: Introduction to HPC with MPI for Data Science. Undergraduate Topics in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-21903-5_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-21902-8
Online ISBN: 978-3-319-21903-5