Abstract
In recent years, the field of computer architecture has undergone a remarkable development. In particular, the performance of processors has increased dramatically. This is due on the one hand to the introduction of new concepts, e.g. RISC processors [12, 38], and on the other hand to improvements in technology and the associated increase in clock frequency. Figure 5.1 shows this development for microprocessors. It is apparent from this figure that the growth in complexity has gradually slowed in recent years. In the foreseeable future, further performance gains will therefore only be conceivable through new concepts. One of the most promising concepts, and already one of the most important research areas today, is the further development of parallel processing. This chapter deals primarily with parallelism at the microinstruction, instruction, and block level.
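As a rough illustration of the instruction- and block-level parallelism this chapter studies, the following sketch (hypothetical, with names and structure of my own, not taken from the chapter) groups the instructions of a basic block into parallel steps according to their data dependences; instructions within one step are mutually independent and could issue simultaneously.

```python
def schedule(instructions):
    """Greedy ASAP scheduling of one basic block.

    Each instruction is a pair (dest, sources). An instruction may issue
    in a step only after every producer of one of its sources has issued
    in an earlier step. Anti- and output dependences are ignored in this
    sketch; a real compiler would remove them first by renaming.
    """
    available = {}  # variable -> step in which its value becomes available
    steps = []      # steps[i] = destinations issued together in step i
    for dest, sources in instructions:
        # Earliest legal step: just after the latest-producing source.
        earliest = max((available.get(s, 0) for s in sources), default=0)
        while len(steps) <= earliest:
            steps.append([])
        steps[earliest].append(dest)
        available[dest] = earliest + 1
    return steps

# Basic block: a = ...; b = ...; c = a + b; d = 2 * a; e = c + d
block = [("a", []), ("b", []), ("c", ["a", "b"]), ("d", ["a"]), ("e", ["c", "d"])]
print(schedule(block))  # -> [['a', 'b'], ['c', 'd'], ['e']]
```

Five instructions fit into three parallel steps, an average instruction-level parallelism of 5/3; the limit studies cited in the references (e.g. Wall; Lam and Wilson) measure ratios of exactly this kind on real programs, under far more detailed dependence models.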
References
A.V. Aho, R. Sethi, and J. D. Ullman. Compiler-Bau (Teil 1 + 2). Addison-Wesley, 1988.
G.S. Almasi and A. Gottlieb. Highly Parallel Computing. The Benjamin/Cummings Publishing Company Inc., 1990.
Arvind and D.E. Culler. Managing Resources in a Parallel Machine. In Proceedings of IFIP TC-10 Working Conference on Fifth Generation Computer Architecture, Manchester, England. North-Holland Publishing Company, July 1985.
Arvind, D.E. Culler, and G.K. Maa. Assessing the Benefits of Fine-Grain Parallelism in Dataflow Programs. The International Journal of Supercomputer Applications, 2(3), Nov. 1988.
Arvind and K. Ekanadham. Future Scientific Programming on Parallel Machines. The Journal of Parallel and Distributed Computing, 5(5):460–493, Oct. 1988.
Arvind, S.K. Heller, and R.S. Nikhil. Programming Generality and Parallel Computers. In Proceedings of the Fourth International Symposium on Biological and Artificial Intelligence Systems, pp. 255–286, Trento, Italy, Sept. 1988. ESCOM.
Arvind and R. A. Iannucci. Two Fundamental Issues in Multiprocessing. In Proceedings of DFVLR — Conference 1987 on Parallel Processing in Science and Engineering, Bonn-Bad Godesberg, W. Germany, June 1987.
Arvind, R.S. Nikhil, and K.K. Pingali. I Structures: Data Structures for Parallel Computing. Technical Report Computation Structures Group Memo 269, MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA, Feb. 1987.
T.M. Austin and G.S. Sohi. Dynamic Dependency Analysis of Ordinary Programs. In The 19th Annual International Symposium on Computer Architecture, pp. 342–351, 1992.
T. Ball and J.R. Larus. Branch Prediction for Free. Technical Report 1137, Computer Sciences Department, University of Wisconsin — Madison, 1210 W. Dayton Street, Madison, WI 53706, USA, 1993.
U. Banerjee, R. Eigenmann, A. Nicolau, and D. Padua. Automatic Program Parallelization. Proceedings of the IEEE, Vol. 81, No. 2, pp. 211–243, 1993.
A. Bode. Architektur von RISC-Rechnern. RISC-Architekturen, Reihe Informatik, Vol. 93, 1990.
S.A. Brobst. Instruction Scheduling and Token Storage Requirements in a Dataflow Supercomputer. Master’s thesis, Massachusetts Institute of Technology, Dept. of EECS, 77 Massachusetts Ave, Cambridge, MA, May 1986.
M. Butler, T. Yeh, and Y. Patt. Single Instruction Stream Parallelism Is Greater than Two. In The 18th Annual International Symposium on Computer Architecture, pp. 276–286, 1991.
P.P. Chang, S.A. Mahlke, W.Y. Chen, N.J. Warter, and W.W. Hwu. IMPACT: An Architectural Framework for Multiple-Instruction-Issue Processors. In The 18th Annual International Symposium on Computer Architecture, pp. 266–275, 1991.
IBM Corp. PowerPC and Power2: Technical Aspects of the New IBM RISC System/6000. IBM Corp., 1990.
D. Culler, R. Karp, D. Patterson, et al. LogP: Towards a Realistic Model of Parallel Computation. In Proceedings of the Fourth ACM SIGPLAN Symposium on Principles & Practice of Parallel Programming PPOPP, pp. 1–12, 1993.
D.E. Culler, A. Sah, K.E. Schauser, T. von Eicken, and J. Wawrzynek. Fine-grain Parallelism with Minimal Hardware Support: A Compiler-Controlled Threaded Abstract Machine. In 4th Int. Conference on Architectural Support for Programming Languages and Operating Systems, April 1991.
David E. Culler. Managing Parallelism and Resources in Scientific Dataflow Programs. PhD thesis, MIT Dept. of Electrical Engineering and Computer Science, Cambridge, MA, June 1989. MIT Laboratory for Computer Science, Technical Report, TR446.
R. Cytron. Doacross: Beyond Vectorization for Multiprocessors. In Proceedings of the 1986 International Conference on Parallel Processing, pp. 836–844, Aug. 1986.
R. Cytron. Limited Processor Scheduling of Doacross Loops. In Proceedings of the 1987 International Conference on Parallel Processing, pp. 226–234, Aug. 1987.
R. Cytron and J. Ferrante. What’s in a Name? -or- The Value of Renaming for Parallelism Detection and Storage Allocation. In Proceedings of the 1987 International Conference on Parallel Processing, pp. 19–27, Aug. 1987.
R. Cytron, J. Ferrante, B.K. Rosen, M.N. Wegman, and F.K. Zadeck. Efficiently Computing Static Single Assignment Form and the Control Dependence Graph. ACM Transactions on Programming Languages and Systems, Vol. 13, No. 4, pp. 451–490, 1991.
J.R.B. Davies. Parallel Loop Constructs for Multiprocessors. Master’s thesis, University of Illinois, Urbana-Champaign, May 1981. Rep. No. UIUCDCS-R-81–1070.
J.A. Fisher. Trace scheduling: A technique for global microcode compaction. IEEE Transactions on Computers, C-30(7):478–490, July 1981.
J.A. Fisher and S.M. Freudenberger. Predicting Conditional Branch Directions From Previous Runs of a Program. In Proceedings of the 5th International Conference on Architectural Support for Programming Languages and Operating Systems (ACM SIGPLAN Notices), pp. 85–95, 1992.
P. Gutberlet and W. Rosenstiel. Scheduling Between Basic Blocks in the CADDY Synthesis System. In Proceedings of the European Conference on Design Automation, pp. 496–500, 1992.
J.L. Hennessy and D.A. Patterson. Computer Architecture: A Quantitative Approach. Morgan Kaufmann Publishers Inc., 1990.
S. Hiranandani, K. Kennedy, and C. Tseng. Compiler Optimizations for Fortran D on MIMD-Distributed-Memory Machines. In Proceedings Supercomputing ’91, pp. 86–100, 1991.
R.A. Iannucci. Toward a Dataflow/von Neumann Hybrid Architecture. In Proc. 15th Int. Symp. on Computer Architecture, pp. 131–140, 1988.
W. Karl. Parallele Prozessorarchitekturen: Codegenerierung für superskalare, superpipelined und VLIW-Prozessoren. BI-Wissenschaftsverlag (Reihe Informatik, Vol. 93), 1993.
D.J. Kuck, R.H. Kuhn, D.A. Padua, B. Leasure, and M. Wolfe. Dependence Graphs and Compiler Optimizations. In Proceedings of ACM Symposium on Principles of Programming Languages, Jan. 1981.
M.S. Lam and R.P. Wilson. Limits of Control Flow on Parallelism. In The 19th Annual International Symposium on Computer Architecture, pp. 46–57, 1992.
J. Loeliger, R. Metzger, M. Seligman, and S. Stroud. Pointer Target Tracking — An Empirical Study. In Proceedings Supercomputing ’91, pp. 14–23, 1991.
S. Melvin and Y. Patt. Exploiting Fine-Grained Parallelism Through a Combination of Hardware and Software Techniques. In The 18th Annual International Symposium on Computer Architecture, pp. 287–296, 1991.
S.W. Melvin. Performance Enhancement Through Dynamic Scheduling and Large Execution Atomic Units in Single Instruction Stream Processors. PhD thesis, University of California, Berkeley, 1990.
R.S. Nikhil and Arvind. Can Dataflow Subsume von Neumann Computing? In Proceedings of the 16th Annual International Symposium on Computer Architecture, Jerusalem, Israel, May 1989.
D.A. Patterson. Reduced Instruction Set Computers. Communications of the Association for Computing Machinery, 28(1):9–21, Jan. 1985.
C.D. Polychronopoulos. On Program Restructuring, Scheduling, and Communication for Parallel Processor Systems. PhD thesis, University of Illinois, Urbana-Champaign, Center for Supercomputing Research and Development, Aug. 1986. CSRD Rpt. No. 595, UILU-ENG-86–8006.
C.D. Polychronopoulos, D.J. Kuck, and D.A. Padua. Execution of Parallel Loops on Parallel Processor Systems. In Proceedings of the 1986 International Conference on Parallel Processing, pp. 519–527, 1986.
W. Pugh. The Omega Test: a fast and practical integer programming algorithm for dependence analysis. In Proceedings Supercomputing ’91, pp. 4–13, 1991.
V. Sarkar. Partitioning and Scheduling Parallel Programs for Execution on Multiprocessors. PhD thesis, Stanford University, Computer Systems Lab, Dept. of EE and CS, April 1987. CSL-TR-87–328.
V. Sarkar and J. Hennessy. Compile-time Partitioning and Scheduling of Parallel Programs. In Proceedings of the SIGPLAN ’86 Symposium on Compiler Construction, pp. 17–26, Palo Alto, CA, July 1986. ACM.
K.E. Schauser, D.E. Culler, and T. von Eicken. Compiler-Controlled Multithreading for Lenient Parallel Languages. In Conference on Functional Programming Languages and Computer Architecture, 1991.
S. Tjiang, M. Wolf, M. Lam, K. Pieper, and J. Hennessy. Integrating Scalar Optimization and Parallelization. In Languages and Compilers for Parallel Computing, pp. 137–151, 1991.
L.G. Valiant. A Bridging Model for Parallel Computation. Communications of the ACM, Vol. 33, No. 8, pp. 103–111, 1990.
D.W. Wall. Limits of Instruction-Level Parallelism. In 4th Int. Conference on Architectural Support for Programming Languages and Operating Systems, pp. 176–188, 1991.
J. Wedeck and W. Rosenstiel. Codegenerierung für parallele Rechensysteme. In 31. Internationales wissenschaftliches Kolloquium Ilmenau, Vol. 2, pp. 262–267, 1992.
J. Wedeck and W. Rosenstiel. Parallelism obtainable from sequential programs. Technical Report SFB 358-A2–1/93, SFB 358, Universität Tübingen, 1993.
J. Wedeck and W. Rosenstiel. Untersuchungen zur Bestimmung des Parallelitätsgrades in Befehlsströmen in SISD-Rechnern. Technical Report SFB 358-A2–2/93, SFB 358, Universität Tübingen, 1993.
J. Wedeck and W. Rosenstiel. Compiling C Programs into Threads. In Massively Parallel Processing Applications and Development, 1994.
M.J. Wolfe. Optimizing Supercompilers for Supercomputers. PhD thesis, University of Illinois at Urbana-Champaign, 1982.
M.J. Wolfe and U. Banerjee. Data Dependence and Its Application to Parallel Processing. International Journal of Parallel Processing, 16(2):137–178, April 1987.
© 1995 B. G. Teubner Stuttgart
Rosenstiel, W., Wedeck, J. (1995). Parallelität auf Block- und Instruktionsebene. In: Waldschmidt, K. (eds) Parallelrechner. Leitfäden der Informatik. Vieweg+Teubner Verlag. https://doi.org/10.1007/978-3-322-86771-1_5
DOI: https://doi.org/10.1007/978-3-322-86771-1_5
Publisher Name: Vieweg+Teubner Verlag
Print ISBN: 978-3-519-02135-3
Online ISBN: 978-3-322-86771-1
eBook Packages: Springer Book Archive