Konzepte der Parallelarbeit

Chapter in: Rechnerarchitektur

Part of the book series: Springer-Lehrbuch ((SLB))


Abstract

The most important organizational measure for increasing performance in a computer architecture is the introduction of the highest possible degree of parallelism. If the programs or the data are structured a priori by an operational principle, the kind of parallelism is thereby also predetermined to a certain degree. The fundamental task is then to map, for a given type and number of hardware resources, the control structures of the operational principle onto the cooperation rules of the hardware structure.
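The mapping task described in the abstract can be illustrated with a minimal modern sketch (not code from the chapter): the iteration space of a data-parallel loop plays the role of the control structure, and a fixed pool of worker threads stands in for the given number of hardware resources. The function name and partitioning scheme are illustrative assumptions.

```python
# Illustrative sketch: mapping a data-parallel control structure
# onto a fixed number of workers ("hardware resources").
from concurrent.futures import ThreadPoolExecutor

def parallel_sum_of_squares(data, workers=4):
    """Partition the iteration space among `workers` threads and
    combine the partial results."""
    # Split the data into roughly equal contiguous chunks, one per worker.
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker computes a partial sum independently (no shared state),
        # so no further synchronization beyond the final reduction is needed.
        partials = pool.map(lambda part: sum(x * x for x in part), parts)
    return sum(partials)

print(parallel_sum_of_squares(range(10)))  # 285
```

The point of the sketch is the division of labor: the number of workers is fixed by the "hardware" side, and the control structure of the loop is reshaped (partitioned, then reduced) to fit the cooperation rule of independent workers with a final combining step.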




Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Giloi, W.K. (1993). Konzepte der Parallelarbeit. In: Rechnerarchitektur. Springer-Lehrbuch. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-58054-3_5


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-56355-6

  • Online ISBN: 978-3-642-58054-3

