Why I like Vector Computers

  • Conference paper
Supercomputer ’89

Part of the book series: Informatik-Fachberichte, volume 211

Abstract

The requirements for supercomputing in the technical sciences, in an industrial R&D environment or in a university environment with a versatile job profile, are specified: 100 GFLOPS sustained performance, 64 Gwords of main memory, flexible data transfer operations (compress, expand, merge, gather, scatter), portable Fortran 8x, and the fastest possible scalar speed. The reasons are then discussed why the “usual” trend to parallelism via MIMD (message passing systems, shared memory systems, hybrid systems) fails to meet the requirements of the users. The proposition of a Continuous Pipe Vector Computer (CPVC) serves to explain, in the form of ten notes, the ideas of how parallelism should be organized so that it is completely transparent to the user. The proposed CPVC minimizes the lost cycles of a supercomputer, so that one gets close to the theoretical peak performance with the most user-friendly architecture.
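To make the abstract's list of data transfer operations concrete, here is a minimal sketch in Fortran 90, the standard that eventually emerged from the Fortran 8x draft cited below; the array names and values are illustrative only, not taken from the paper. Each operation maps onto the compress/expand/merge/gather/scatter hardware support the author asks for:

```fortran
! Illustrative sketch of the five data transfer operations named in the
! abstract, in Fortran 90.  Array names and values are hypothetical.
program data_motion
  implicit none
  integer :: x(6)    = (/ 11, 22, 33, 44, 55, 66 /)
  logical :: mask(6) = (/ .true., .false., .true., .false., .true., .false. /)
  integer :: idx(3)  = (/ 5, 1, 3 /)
  integer :: packed(3), expanded(6), merged(6), gathered(3), y(6)

  packed   = pack(x, mask)           ! compress: (/ 11, 33, 55 /)
  expanded = unpack(packed, mask, 0) ! expand:   (/ 11, 0, 33, 0, 55, 0 /)
  merged   = merge(x, -x, mask)      ! merge:    (/ 11, -22, 33, -44, 55, -66 /)
  gathered = x(idx)                  ! gather via vector subscript: (/ 55, 11, 33 /)
  y = 0
  y(idx)   = gathered                ! scatter via vector subscript:
                                     ! y(5)=55, y(1)=11, y(3)=33
  print *, packed
  print *, expanded
  print *, merged
  print *, gathered, y
end program data_motion
```

Whether such operations run at full vector speed, rather than falling back to scalar loops, is exactly the kind of architectural question the paper addresses.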

With kind permission of the Computing Center, University of Karlsruhe, where this contribution was published as Internal Report No. 35/89, January 1989, together with an appendix “Epilog”.



References

  1. W. Schönauer, Scientific Computing on Vector Computers, North-Holland, Amsterdam, New York, 1987

  2. Proceedings of the “2nd International SUPRENUM Colloquium 1987”, Bonn, Sept. 30 to Oct. 2, 1987, to appear as a special issue of Parallel Computing

  3. I.S. Duff, A survey of supercomputing in Europe, to appear in [2]

  4. W. Schönauer, W. Gentzsch, The Efficient Use of Vector Computers with Emphasis on Computational Fluid Dynamics, Vieweg, Braunschweig/Wiesbaden, 1986

  5. W. Schönauer, E. Schnepf, FIDISOL, a “black box” solver for partial differential equations, to appear in Parallel Computing

  6. W. Schönauer, E. Schnepf, H. Müller, The FIDISOL Program Package, Internal Report No. 27/85, Computing Center, University of Karlsruhe, 1985. This internal report is the documentation for the customers of FIDISOL.

  7. W. Schönauer, E. Schnepf, H. Müller, Designing PDE software for vector computers as a “data flow algorithm”, Computer Physics Communications 37 (1985), pp. 233–237; also published as [8]

  8. W. Schönauer, E. Schnepf, H. Müller, Designing PDE software for vector computers as a “data flow algorithm”, in: I.S. Duff, J.K. Reid (Eds), Vector and Parallel Processors in Computational Science, North-Holland, Amsterdam, New York, 1985, pp. 233–237

  9. M. Metcalf, Fortran 8x — the emerging standard, Computer Physics Communications 45 (1987), pp. 259–268

  10. Fortran, X3J3/S8.104, June 1987, American National Standards Institute. This is the current draft under discussion for Fortran 8x.

  11. K. Hwang, S.M. Jacobs, E.E. Swartzlander (Eds), Proceedings of the 1986 Internat. Conf. on Parallel Processing, IEEE, Washington, D.C., 1986

  12. J.L. Gustafson, S. Hawkinson, K. Scott, The architecture of a homogeneous vector supercomputer, in [11], pp. 649–652

  13. E. Schmidt, Rechnergiganten aus dem Baukasten, VDI nachrichten 16, 18 April 1986, p. 17

  14. J.J. Dongarra (Ed), Experimental Parallel Computing Architectures, North-Holland, Amsterdam, New York, 1987

  15. D. DeGroot (Ed), Proceedings of the 1985 Internat. Conf. on Parallel Processing, IEEE, Washington, D.C., 1985

  16. Proceedings of the Int. Conf. on Vector and Parallel Computing, Loen, Norway, Parallel Computing 5 (1987), pp. 1–263

  17. Proceedings of the 1984 IBM Europe Institute course on Highly Parallel Processing, Parallel Computing 2 (1985), pp. 185–288

  18. M.T. Heath (Ed), Hypercube Multiprocessors 1987, SIAM, Philadelphia, 1987

  19. W. Giloi, The SUPRENUM architecture, to appear in [2]

  20. V.P. Srini, An architectural comparison of dataflow systems, Computer, vol. 19, no. 3 (1986), pp. 68–88

  21. J. Gurd, C. Kirkham, W. Böhm, The Manchester dataflow computing system, in [14], pp. 177–219

  22. P.M. Behr, W.K. Giloi, H. Mühlenbein, SUPRENUM: The German supercomputer architecture — rationale and concepts, in [11], pp. 567–575

  23. Alliant Computer Systems Corporation, Acton, Mass., FX/Series Product Summary, 1985

  24. W.D. Hillis, The Connection Machine, MIT Press, Cambridge, Mass., 1985

  25. G.F. Pfister, W.C. Brantley, D.A. George, L.S. Harvey, W.J. Kleinfelder, K.P. McAuliffe, E.A. Melton, V.A. Norton, J. Weiss, An introduction to the IBM Research Parallel Processor Prototype (RP3), in [14], pp. 123–140

  26. G.C. Fox, Questions and unexpected answers in concurrent computation, in [14], pp. 97–121

  27. Intel, iPSC User’s Guide, Intel, Portland, Oregon, 1985

  28. M.C. Chen, Very-high-level parallel programming in Crystal, in [18], pp. 39–47

  29. W. Williams, Load balancing and hypercubes: A preliminary look, in [18], pp. 108–113

  30. J.H. Saltz, M.C. Chen, Automated problem mapping: The Crystal runtime system, in [18], pp. 130–140

  31. K. Schwan, W. Bo, N. Bauman, P. Sadayappan, F. Ercal, Mapping parallel applications to a hypercube, in [18], pp. 141–151

  32. D.W. Walker, G.C. Fox, A. Ho, G.R. Montry, A comparison of the performance of the Caltech Mark II hypercube and the Elxsi 6400, in [18], pp. 210–219

  33. J.M. Francioni, J.A. Jackson, An implementation of a 2^d-section root finding method for the FPS T-Series hypercube, in [18], pp. 495–500

  34. R.M. Chamberlain, An alternative view of LU factorization with partial pivoting on a hypercube multiprocessor, in [18], pp. 569–575

  35. V.K. Naik, S. Taasan, Performance studies of the multigrid algorithms implemented on hypercube multiprocessor systems, in [18], pp. 720–729

  36. D.J. Kuck, E.S. Davidson, D.H. Lawrie, A.H. Sameh, Parallel supercomputing today and the CEDAR approach, in [14], pp. 1–23

  37. A. Gottlieb, An overview of the NYU Ultracomputer project, in [14], pp. 25–95

  38. M.J. Flynn, Some computer organizations and their effectiveness, IEEE Trans. Comput. C-21 (1972), pp. 948–960

  39. B.L. Buzbee, Applications of MIMD machines, Computer Physics Communications 37 (1985), pp. 1–5; also published as [40]

  40. B.L. Buzbee, Applications of MIMD machines, in: I.S. Duff, J.K. Reid (Eds), Vector and Parallel Processors in Computational Science, North-Holland, Amsterdam, New York, Oxford, Tokyo, 1985, pp. 1–5

  41. R.W. Hockney, (r∞, n1/2, s1/2) measurements on the 2-CPU CRAY X-MP, Parallel Computing, vol. 2, no. 1, March 1985, pp. 1–14

  42. A.K. Dave, The efficient use of the CRAY X-MP multiprocessor vector computer in computational fluid dynamics, in [4], pp. 209–220

  43. R.W. Hockney, C.R. Jesshope, Parallel Computers, Adam Hilger, Bristol, 1981

  44. W. Myers, Getting the cycles out of a supercomputer, Computer, vol. 19 (1986), pp. 89–92

  45. A.H. Karp, Programming for parallelism, Computer, vol. 20 (1987), pp. 43–57

  46. E. Clementi, J. Detrich, Large scale parallel computation on a loosely coupled array of processors, in [14], pp. 141–176

  47. R.W. Hockney, MIMD computing in the USA — 1984, Parallel Computing, vol. 2 (1985), pp. 119–136

  48. U. Haas, Modelling of a program by an artificial benchmark program for vector computers, with discussion of the efficiency of the vectorization, Internal Report No. 31/87, Computing Center, University of Karlsruhe, 1987. Free copies of this internal report can be obtained on request.


Copyright information

© 1989 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Schönauer, W. (1989). Why I like Vector Computers. In: Meuer, H.W. (eds) Supercomputer ’89. Informatik-Fachberichte, vol 211. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-74844-8_10

  • DOI: https://doi.org/10.1007/978-3-642-74844-8_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-51310-0

  • Online ISBN: 978-3-642-74844-8
