Abstract
Many applications in scientific computing are computationally intensive and can therefore benefit from execution on a parallel or distributed platform. The parallelization strategy and the resulting efficiency depend strongly on the characteristics of the target architecture (shared or distributed address space) and on the programming model used for the implementation. For selected problems from scientific computing, we discuss parallelization strategies based on message-passing programming for distributed address space.
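As a minimal illustration of the message-passing model the abstract refers to, the following sketch computes a distributed dot product with MPI: each process works on its private block of the vectors and the partial results are combined by an explicit communication operation. The block size LOCAL_N and the dummy vector contents are illustrative assumptions, not taken from the chapter.

/* Minimal MPI sketch (illustrative, not the chapter's code): each process
 * computes a partial dot product of locally held vector blocks, and
 * MPI_Reduce combines the partial results on process 0. */
#include <stdio.h>
#include <mpi.h>

#define LOCAL_N 1000   /* block size per process; an assumed value */

int main(int argc, char *argv[]) {
    int rank, size;
    double a[LOCAL_N], b[LOCAL_N];
    double local_sum = 0.0, global_sum = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process fills its own block with dummy values; in a real
     * application the blocks would come from a data distribution of
     * the global vectors. */
    for (int i = 0; i < LOCAL_N; i++) {
        a[i] = 1.0;
        b[i] = (double)(rank + 1);
    }

    /* Purely local computation on the process-private address space. */
    for (int i = 0; i < LOCAL_N; i++)
        local_sum += a[i] * b[i];

    /* Explicit communication: combine the partial sums on process 0. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product = %f (over %d processes)\n", global_sum, size);

    MPI_Finalize();
    return 0;
}

Because the address space is distributed, no process can read another process's block directly; all data exchange must go through message-passing operations such as the reduction above, which is the central design constraint the chapter's parallelization strategies address.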
About this chapter
Cite this chapter
Rauber, T., Rünger, G. Parallel Implementation Strategies for Algorithms from Scientific Computing. In: Hergert, W., Däne, M., Ernst, A. (eds) Computational Materials Science. Lecture Notes in Physics, vol 642. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39915-5_13
DOI: https://doi.org/10.1007/978-3-540-39915-5_13
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-21051-1
Online ISBN: 978-3-540-39915-5