Correlation of Algorithms, Software and Hardware of Parallel Computers

Chapter in: Algorithms, Software and Hardware of Parallel Computers

Abstract

In the past, the speed of computers was increased mainly by increasing the speed of their logic elements; in this way the memory cycle time was reduced by two orders of magnitude, and the technological improvements of the last 20 years have increased processor speed by as much as three orders of magnitude. Now that the physical barrier set by the propagation speed of an electric signal has been reached, further gains can come only from better computer organization or from using the computer more effectively. Current technology makes it possible to combine processors into large parallel structures, and with a suitable organization of n processors an n-fold increase in the rate of computation can be achieved. Parallelism in computation has brought new problems, both in the creation of algorithms and programs and in the design of computer architectures. Parallel algorithms and programs are closely connected with the architecture of parallel computers; their design and analysis therefore cannot be considered independently of their implementation and of the architecture of the computer on which they are to run. The history of parallel data processing offers several examples in which a valuable concept in the design of algorithms, programs or computers has had a large impact on the efficiency of computation.
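
The n-fold claim above can be made precise with the usual notions of speedup and efficiency; the following short note is our own illustration and is not taken from the chapter. If T_1 denotes the running time of the best sequential algorithm and T_n the running time of a parallel algorithm on n processors, then

\[
  S_n = \frac{T_1}{T_n}, \qquad E_n = \frac{S_n}{n}, \qquad 1 \le S_n \le n, \quad 0 < E_n \le 1 .
\]

The ideal case S_n = n (E_n = 1) corresponds to the n-fold increase in the rate of computation mentioned above; in practice it is approached only when the work divides evenly among the processors and the cost of communication and synchronization is negligible.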

“It is easy to design computers, but it is hard to know what kind of computer to design...”,

D. J. Kuck [16]


References

  1. Baer, J. L.: A survey of some theoretical aspects of multiprocessing. Comp. Surveys, 5, 1973, 1, 31–80.

  2. Barnes, G. et al.: The ILLIAC IV computer. IEEE Trans. on Computers, C-17, 1968, 746–757.

  3. Batcher, K. E.: Sorting networks and their applications. Spring Joint Comp. Conf., AFIPS Proc., 32. Thompson, Washington, 1968, pp. 307–314.

  4. Chen, S. C. and Kuck, D. J.: Time and parallel processor bounds for linear recurrence systems. IEEE Trans. on Computers, C-24, 1975, 701–717.

  5. Control Data Corporation: STAR-100 Computer Hardware Reference Manual, 1974.

  6. Conway, M. E.: A multiprocessor system design. AFIPS Conf. Proc. 1963, FJCC 24. Spartan Books, Baltimore, 1963, pp. 139–146.

  7. Dijkstra, E. W.: Cooperating sequential processes. In: Programming Languages. F. Genuys (Editor). Academic Press, New York, 1968, pp. 43–112.

  8. Duff, M. J. and Watson, D.: A parallel computer for array processing. Proc. IFIP Congress, North-Holland Publ. Co., Amsterdam, 1975, pp. 94–99.

  9. Enslow, P., Jr. (Editor): Multiprocessors and Parallel Processing. Wiley-Interscience, New York, 1974.

  10. Flanders, P. M. et al.: Efficient high-speed computing with the distributed array processor. In: High Speed Computer and Algorithm Organization. D. J. Kuck, D. H. Lawrie and A. H. Sameh (Editors). Academic Press, New York, 1977, pp. 113–128.

  11. Flynn, M. J.: Toward more efficient computer organizations. Proc. Spring Joint Comp. Conf., AFIPS Press, 1972, pp. 1211–1217.

  12. Gentleman, W. M.: Some complexity results for matrix computations on parallel processors. J. ACM, 25, 1978, 1, 112–115.

  13. Graham, R. L.: Bounds on multiprocessing timing anomalies. SIAM J. Appl. Math., 17, 1969, 2, 416–429.

  14. Ihnat, J. P. et al.: The use of two levels of parallelism to implement an efficient programmable signal processing computer. Sagamore Comp. Conf. on Parallel Processing, Sagamore, 1973, pp. 113–119.

  15. Kuck, D.: ILLIAC IV software and application programming. IEEE Trans. on Computers, C-17, 1968, 8, 758–770.

  16. Kuck, D.: Multioperation machine computational complexity. In: Complexity of Sequential and Parallel Numerical Algorithms. J. F. Traub (Editor). Academic Press, New York, 1973, pp. 17–47.

  17. Kung, H. T.: Synchronized and asynchronous parallel algorithms for multiprocessors. In: Algorithms and Complexity. J. F. Traub (Editor). Academic Press, New York, 1976, pp. 153–200.

  18. Lambiotte, J. J. and Voigt, R. G.: The solution of tridiagonal systems of equations on the CDC STAR-100 computer. ACM Trans. on Math. Software, 1, 1975, 4, 308–329.

  19. Lawrie, D. H. et al.: GLYPNIR – a programming language for ILLIAC IV. Comm. ACM, 18, 1975, 3, 157–164.

  20. Madsen, N. K. et al.: Matrix multiplication by diagonals on a vector parallel processor. Inform. Proc. Lett., 5, 1976, 2, 41–45.

  21. Mirenkov, N. N.: Strukturnoe parallelnoe programmirovanie (Structured parallel programming). Programmirovanie, 3, 1975, 3–14.

  22. Owens, J. L.: The influence of machine organization on algorithms. In: Complexity of Sequential and Parallel Numerical Algorithms. J. F. Traub (Editor). Academic Press, New York, 1973, pp. 111–130.

  23. Raj Reddy, D.: Some numerical problems in artificial intelligence: Implications for complexity and machine architecture. In: Complexity of Sequential and Parallel Numerical Algorithms. J. F. Traub (Editor). Academic Press, New York, 1973, pp. 131–147.

  24. STARAN: System description. A new class of computer. Goodyear Aerospace Corp., Akron, Ohio, 1974.

  25. Stone, H. S.: Parallel processing with the perfect shuffle. IEEE Trans. on Computers, C-20, 1971, 2, 153–161.

  26. Stone, H. S.: Parallel tridiagonal equation solvers. ACM Trans. on Math. Software, 1, 1975, 289–307.

  27. Stone, H. S. (Editor): Introduction to Computer Architecture. Sci. Res. Assoc., Inc., Chicago, 1975.

  28. Stone, H. S.: An efficient parallel algorithm for the solution of a tridiagonal system of equations. J. ACM, 20, 1973, 27–38.

  29. Swan, R. J. et al.: The structure and architecture of Cm*: A modular multiprocessor. Tech. Report, Dep. Comp. Sci., Carnegie-Mellon Univ., Pittsburgh, 1977.

  30. Shakhbazyan, K. V. and Tushkina, T. A.: Obzor metodov sostavleniya raspisanii dlya mnogoprotsessornykh sistem (A survey of scheduling methods for multiprocessor systems). Zap. Nauch. Semin. LOMI, AN SSSR, Leningrad, 5-I, 1975, pp. 229–258.

  31. Thompson, C. D.: Generalized connection networks for parallel processor intercommunication. Tech. Report, Dep. Comp. Sci., Carnegie-Mellon Univ., Pittsburgh, 1977.

  32. Thurber, K. J.: Large Scale Computer Architecture: Parallel and Associative Processors. Hayden Book Co., Rochelle Park, N. J., 1976.

  33. Tutle, P. G.: Implementation of selected eigenvalue algorithms on a vector computer. Tech. Report NPGD-TM-330, Babcock and Wilcox, 1975.

  34. Vairavan, K. and DeMillo, R. A.: On the computational complexity of a generalized scheduling problem. IEEE Trans. on Computers, C-25, 1976, 11, 1067–1073.

  35. Wulf, W. A. and Bell, C. G.: C.mmp – a multi-miniprocessor. AFIPS Conf. Proc. 1972, FJCC 41. AFIPS Press, Montvale, N. J., pp. 765–777.

  36. Stone, H. S.: Parallel processing with the perfect shuffle. IEEE Trans. on Computers, C-20, 1971, 2, 153–161.

  37. Fino, B. J. and Algazi, V. R.: A unified treatment of discrete fast unitary transforms. SIAM J. Computing, 6, 1977, 4, 700–717.

  38. Batcher, K. E.: Sorting networks and their applications. Spring Joint Computer Conf., AFIPS Proc., Vol. 32. Thompson, Washington, D. C., 1968, pp. 307–314.

  39. Brigham, E. O.: The Fast Fourier Transform. Prentice-Hall, Englewood Cliffs, N. J., 1974.

  40. Clos, C.: A study of non-blocking switching networks. Bell Syst. Tech. J., 32, 1953, 406–424.

Copyright information

© 1984 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Mikloško, J., Kotov, V.E. (1984). Correlation of Algorithms, Software and Hardware of Parallel Computers. In: Mikloško, J., Kotov, V.E. (eds) Algorithms, Software and Hardware of Parallel Computers. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-11106-2_12

  • DOI: https://doi.org/10.1007/978-3-662-11106-2_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-11108-6

  • Online ISBN: 978-3-662-11106-2
