
13. Parallel Implementation Strategies for Algorithms from Scientific Computing

  • T. Rauber
  • G. Rünger
Part III Modern Methods of Scientific Computing
Part of the Lecture Notes in Physics book series (LNP, volume 642)

Abstract

Many applications from scientific computing are computationally intensive and can therefore benefit from an implementation on a parallel or distributed platform. The parallelization strategy and the resulting efficiency strongly depend on the characteristics of the target architecture (shared address space or distributed address space) and on the programming model used for the implementation. For selected problems from scientific computing, we discuss parallelization strategies using message-passing programming for distributed address space.
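
As an illustration of the message-passing model for distributed address space referred to above, the following minimal MPI sketch in C (not taken from the chapter itself; vector length and data values are hypothetical) computes a distributed dot product: each process owns a block of two vectors in its own address space and the partial results are combined with the collective operation MPI_Allreduce.

```c
/* Minimal sketch of message-passing parallelism for a distributed
 * address space: each process owns a local block of two vectors and
 * the global dot product is formed with MPI_Allreduce.
 * Compile with mpicc, run with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Hypothetical local block size; the global vector length is
     * n_local * size. */
    const int n_local = 4;
    double x[4], y[4], local_dot = 0.0, global_dot = 0.0;

    /* Fill the local blocks with rank-dependent example data and
     * accumulate the local contribution to the dot product. */
    for (int i = 0; i < n_local; i++) {
        x[i] = 1.0;
        y[i] = (double)(rank * n_local + i);
        local_dot += x[i] * y[i];
    }

    /* Combine the partial results of all processes; every process
     * receives the global sum. */
    MPI_Allreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product over %d processes = %f\n", size, global_dot);

    MPI_Finalize();
    return 0;
}
```

Each process computes only its local contribution and never accesses another process's memory directly; all data exchange goes through explicit communication operations, which is the defining property of the distributed address space model.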

Keywords

Message Passing Interface, Address Space, Iteration Matrix, Library Function, Iteration Vector

Authors and Affiliations

  • T. Rauber, Universität Bayreuth, Fakultät für Mathematik und Physik
  • G. Rünger, Technische Universität Chemnitz, Fakultät für Informatik
