Correlation of Algorithms, Software and Hardware of Parallel Computers

  • Jozef Mikloško
  • Vadim Evgenich Kotov

Abstract

In the past, the speed of computers was increased mainly by increasing the speed of their logic elements; as a result, the memory cycle time has decreased by two orders of magnitude, and improvements in technology over the last 20 years have increased processor speed by as much as three orders of magnitude. Today, with the physical limit on the propagation speed of an electric signal having been reached, additional speed can be gained only by improving the organization of the computer or by using it more effectively. Current technology makes it possible to combine processors into large parallel structures, and with a suitable organization of n processors an up to n-fold increase in the rate of computation can be achieved. Parallelism in computation has brought with it new problems, both in the creation of new algorithms and programs and in the design of computer architectures. Parallel algorithms and programs are closely connected with the architecture of parallel computers; their design and analysis therefore cannot be considered independently of their implementation and of the architecture of the computer on which they are to run. Several examples are known from the history of parallel data processing where a valuable concept in the design of algorithms, programs or computers has had a large impact on the efficiency of computation.
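The n-fold speedup mentioned above is easiest to see on a task whose data can be split into independent pieces. The following minimal sketch is not from the chapter; it assumes Python's standard multiprocessing module, and the function names and the sum-of-squares task are illustrative choices only. It divides one computation among n worker processes so that, ignoring scheduling and communication overhead, the elapsed time falls by roughly a factor of n.

```python
# Minimal sketch (illustrative, not from the chapter): dividing one
# computation among n worker processes so that, absent overhead, the
# elapsed time drops by roughly a factor of n.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker processes its own slice of the data independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Split the input into n roughly equal chunks, one per worker.
    chunk_size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(processes=n_workers) as pool:
        # The n partial results are computed concurrently, then combined.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    values = list(range(1_000_000))
    print(parallel_sum_of_squares(values, n_workers=4))
```

In practice the speedup stays below n because of process start-up, data partitioning and the final combining step, which is precisely the interplay of algorithm, software and hardware that the chapter addresses.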




Copyright information

© Springer-Verlag Berlin Heidelberg 1984

Authors and Affiliations

  • Jozef Mikloško
    • 1
  • Vadim Evgenich Kotov
    • 2
  1. Institute of Technical Cybernetics, Slovak Academy of Sciences, Bratislava, Czechoslovakia
  2. Computer Centre, Siberian Branch of the Academy of Sciences of the USSR, Novosibirsk-90, USSR
