Abstract
A significant number of modern multiprocessor computer systems belong to the class of multicomputers. Such systems have a distributed-memory organization: a processor cannot directly access the address space of another processor. Interprocessor communication is therefore implemented by sending and receiving messages between computational nodes over a communication network.
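The send/receive model described above is what the MPI standard codifies. As a minimal illustrative sketch (not taken from the chapter), process 0 might send a greeting to every other process, each of which receives and prints it; the program is compiled with an MPI wrapper compiler such as `mpicc` and launched under `mpirun`:

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    int rank, size;
    char msg[64];

    MPI_Init(&argc, &argv);                    /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* this process's rank       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total number of processes */

    if (rank == 0) {
        /* Node 0 sends one message to every other node. */
        for (int dest = 1; dest < size; dest++) {
            snprintf(msg, sizeof msg, "Hello from node 0 to node %d", dest);
            MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR,
                     dest, 0, MPI_COMM_WORLD);
        }
    } else {
        /* Every other node receives the message addressed to it. */
        MPI_Recv(msg, sizeof msg, MPI_CHAR,
                 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process %d received: %s\n", rank, msg);
    }

    MPI_Finalize();                            /* shut the runtime down     */
    return 0;
}
```

Run as, for example, `mpirun -np 4 ./a.out`; each of the three non-root processes prints the greeting it received. The point of the sketch is that all data exchange happens through explicit message calls rather than shared memory.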
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this chapter
Kurgalin, S., Borzunov, S. (2019). The MPI Technology. In: A Practical Approach to High-Performance Computing. Springer, Cham. https://doi.org/10.1007/978-3-030-27558-7_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-27557-0
Online ISBN: 978-3-030-27558-7
eBook Packages: Computer Science, Computer Science (R0)