Multiprocessor systems for large numerical applications

  • G. Fritsch
  • J. Volkert
Submitted Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 342)

Abstract

Numerical simulation in physics, chemistry and the engineering sciences, for instance in fluid dynamics, can be grouped into two classes: continuum models and many-body models. The approximative mathematical methods used are numerical grid methods, molecular dynamics, Monte Carlo methods, etc. The more complicated the phenomenon under consideration and the more refined the model, the higher the demand for computational power and storage capacity. Future high-performance computers will be parallel machines in order to satisfy the users of large numerical applications. Appropriate parallel architectures, in particular of the multiple-instruction-multiple-data (MIMD) type, are discussed in view of the mapping requirements and varying subtask structure of the considered numerical applications. Two distributed-memory architectures are presented in more detail: SUPRENUM, a German supercomputer project, and the Erlangen multiprocessor architecture. The SUPRENUM prototype, based on the message-passing communication principle, will consist of 256 processors with a theoretical overall peak performance of 2 GFLOPS. The Erlangen architectural concept is characterized by interprocessor communication via distributed shared memory (DSM) and a functional hierarchy of three levels. This multiprocessor architecture adapts especially well to the mapping requirements of most numerical simulation problems, because DSM architectures efficiently match the local communication needs of the considered problem classes.
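The local-communication property that the abstract attributes to grid methods can be illustrated with a minimal sketch (not from the paper; a generic one-dimensional Jacobi relaxation in Python, with invented function names): when the grid is partitioned into subdomains, each subdomain needs only a single boundary ("halo") value from each immediate neighbour per sweep. This is the pattern that both the SUPRENUM message-passing design and the Erlangen DSM design exploit.

```python
# Illustrative sketch, not code from the paper: a 1-D Jacobi sweep
# partitioned into two subdomains. Only one halo value crosses the
# cut in each direction per sweep -- the local communication pattern
# of grid methods discussed in the abstract.

def jacobi_sweep(u):
    """One global Jacobi sweep on the interior of u (endpoints fixed)."""
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2
                     for i in range(1, len(u) - 1)] + [u[-1]]

def jacobi_sweep_partitioned(u, split):
    """Same sweep, computed on two subdomains that exchange exactly
    one halo value each across the cut at index `split`."""
    left, right = u[:split], u[split:]
    halo_from_right = right[0]   # single value communicated right -> left
    halo_from_left = left[-1]    # single value communicated left -> right
    new_left = [left[0]] + [(left[i - 1] + left[i + 1]) / 2
                            for i in range(1, len(left) - 1)]
    new_left.append((left[-2] + halo_from_right) / 2)   # uses remote halo
    new_right = [(halo_from_left + right[1]) / 2]       # uses remote halo
    new_right += [(right[i - 1] + right[i + 1]) / 2
                  for i in range(1, len(right) - 1)]
    new_right.append(right[-1])
    return new_left + new_right

u = [0.0, 5.0, 1.0, 4.0, 2.0, 3.0]
assert jacobi_sweep(u) == jacobi_sweep_partitioned(u, 3)
```

On a message-passing machine the two halo values would be sent as messages; on a DSM machine each processor would read them from a neighbour's shared memory segment. In both cases the communication volume per sweep is proportional to the subdomain surface, not its volume.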

Keywords

Monte Carlo · Shared Memory · Parallel Machine · User Problem · Partial Differential Equation

Copyright information

© Springer-Verlag Berlin Heidelberg 1989

Authors and Affiliations

  • G. Fritsch¹
  • J. Volkert¹

  1. Institut für Mathematische Maschinen und Datenverarbeitung (Informatik III), Universität Erlangen-Nürnberg, Erlangen, F.R. Germany
