PTAH: Introduction to a new parallel architecture for highly numeric processing

  • Franck Cappello
  • Jean-Luc Béchennec
  • Jean-Louis Giavitto
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 605)

Abstract

This paper proposes a new architectural design for high-performance parallel computers: the one-cycle machine. In such a computer, memory access, network access, instruction sequencing, and data computation all take the same duration: one clock cycle. We first identify the efficiency of the communication network as the main critical resource. We show that matching the network performance to the power of the processing elements matters more for global processing effectiveness than raw CPU power itself. Two guidelines are derived from our analysis and lead to the design of PTAH. Two simple examples illustrate the benefits of PTAH for the execution of numeric applications. Finally, some hardware features are proposed for a PTAH implementation capable of reaching the TeraFLOPS range.
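As a back-of-the-envelope illustration of that balance argument, and not a model taken from the paper itself, the short Python sketch below treats each processing element as running at the slower of its own peak rate and the rate at which the network can deliver operands. Every function name and parameter value here is hypothetical.

    # Hypothetical balance model: names and numbers are illustrative,
    # not taken from the PTAH paper.

    def sustained_gflops(peak_gflops_per_pe, words_per_flop,
                         net_words_per_cycle, clock_hz, num_pes):
        """Each PE runs at the slower of its peak rate and the rate at
        which the network can feed it operands (a crude min() model)."""
        # FLOP rate the network can sustain for one PE, in GFLOPS
        net_fed_gflops = (net_words_per_cycle * clock_hz / words_per_flop) / 1e9
        return min(peak_gflops_per_pe, net_fed_gflops) * num_pes

    baseline   = sustained_gflops(0.10, 1.0, 0.25, 40e6, 1024)
    faster_cpu = sustained_gflops(0.20, 1.0, 0.25, 40e6, 1024)  # 2x CPU peak
    faster_net = sustained_gflops(0.10, 1.0, 1.00, 40e6, 1024)  # 4x network
    print(baseline, faster_cpu, faster_net)  # 10.24, 10.24, 40.96 GFLOPS

With these made-up numbers, doubling the per-PE peak leaves the sustained rate unchanged, while quadrupling the network bandwidth quadruples it, which mirrors the abstract's claim that adapting the network to the processing elements matters more than CPU power in itself.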

Keywords

Parallel architecture, Instruction sequencing, Anticipation unit, Connection pattern, Synchronization network

Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Franck Cappello (1)
  • Jean-Luc Béchennec (1)
  • Jean-Louis Giavitto (1)
  1. LRI - UA 410 CNRS, Bâtiment 490, Université de Paris-Sud, Orsay Cedex
