Konzepte der Parallelarbeit (Concepts of Parallelism)

  • Wolfgang K. Giloi
Part of the Springer-Lehrbuch book series (SLB)

Abstract

The most important organizational measure for increasing performance in a computer architecture is the introduction of the highest possible degree of parallelism. If the programs or the data are structured a priori by an operational principle, the kind of parallelism is thereby also predetermined to a certain degree. The fundamental task is then, for a given kind and number of hardware resources, to map the control structures of the operational principle onto the cooperation rules of the hardware structure.
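
The following sketch is purely illustrative and not taken from the chapter; it assumes a shared-memory machine with POSIX threads. It shows the mapping described above in miniature: the control structure of a data-parallel loop is statically partitioned over a fixed number of worker threads (the given hardware resources), and a barrier acts as the cooperation rule by which the workers synchronize. The names NWORKERS, worker, and struct range are assumptions made for this example only.

    /* Illustrative sketch (not from the chapter): map a parallel loop
       onto NWORKERS threads, with a barrier as the cooperation rule. */
    #define _POSIX_C_SOURCE 200112L
    #include <pthread.h>
    #include <stdio.h>

    #define N        1000
    #define NWORKERS 4                    /* assumed number of hardware resources */

    static double a[N], b[N], c[N];
    static pthread_barrier_t barrier;

    struct range { int lo, hi; };         /* iteration block of one worker */

    static void *worker(void *arg)
    {
        struct range *r = arg;
        for (int i = r->lo; i < r->hi; i++)   /* independent iterations */
            c[i] = a[i] + b[i];
        pthread_barrier_wait(&barrier);       /* all workers meet here */
        return NULL;
    }

    int main(void)
    {
        pthread_t    tid[NWORKERS];
        struct range rng[NWORKERS];

        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        pthread_barrier_init(&barrier, NULL, NWORKERS);
        for (int p = 0; p < NWORKERS; p++) {
            rng[p].lo =  p      * N / NWORKERS;   /* static partitioning */
            rng[p].hi = (p + 1) * N / NWORKERS;
            pthread_create(&tid[p], NULL, worker, &rng[p]);
        }
        for (int p = 0; p < NWORKERS; p++)
            pthread_join(tid[p], NULL);

        printf("c[N-1] = %f\n", c[N - 1]);
        return 0;
    }

Retargeting the same loop to a different machine would change only NWORKERS, the partitioning rule, and the synchronization primitive; the control structure of the loop itself remains unchanged.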

Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Wolfgang K. Giloi, GMD and TU Berlin, Berlin
