Scalability and programmability of massively parallel processors

  • Keynote Addresses
  • Conference paper
Parallel Processing: CONPAR 94 — VAPP VI (VAPP 1994, CONPAR 1994)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 854)

Abstract

In this talk, we examine massively parallel processing (MPP) systems and their research, development, and application issues. We start with a classification of basic MPP models. Scalability attributes, programming requirements, and underlying hardware/software technologies are assessed. We then evaluate MPP systems currently available from industry as well as those explored in research institutions. Major challenges in the R&D and application of MPPs are identified with key references.
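
One widely used scalability attribute of the kind assessed in the talk is the isoefficiency function of Grama, Gupta, and Kumar [7]. The sketch below is an illustrative restatement of that metric, not a result taken from this paper: W denotes the problem size (serial work), p the processor count, T_o(W, p) the total parallel overhead (communication, idling, redundant work), and E the parallel efficiency.

  % Illustrative LaTeX sketch of the isoefficiency metric from [7]; not from this paper.
  % E: parallel efficiency, W: problem size (serial work, T_1), p: processors,
  % T_o(W, p): total parallel overhead, K = E/(1-E) a constant for fixed E.
  E \;=\; \frac{T_1}{p\,T_p} \;=\; \frac{W}{W + T_o(W,p)},
  \qquad
  W \;=\; \frac{E}{1-E}\,T_o(W,p) \;=\; K\,T_o(W,p)

A machine-algorithm pair is regarded as highly scalable when W needs to grow only slowly with p (ideally linearly) to hold E constant; a rapidly growing isoefficiency function signals poor scalability.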

References

  1. K. Hwang, Advanced Computer Architecture: Parallelism, Scalability, Programmability, McGraw-Hill, Inc., New York, 1993.

  2. V. Lo, “Performance Enhancements for Operating System Implementations of Distributed Shared Memory”, Advances in Computers, Vol. 39, Academic Press, 1994.

  3. P. Stenstrom, T. Joe, and A. Gupta, “Comparative Performance Evaluation of Cache-Coherent NUMA and COMA Architectures”, Proceedings of the 19th Annual International Symposium on Computer Architecture, 1992.

  4. W.J. Dally, “Performance Analysis of k-ary n-Cube Interconnection Networks”, IEEE Trans. on Computers, Vol. 39, No. 6, 1990.

  5. C.E. Leiserson, “Fat-Trees: Universal Networks for Hardware-Efficient Supercomputing”, IEEE Trans. on Computers, 1985.

  6. J. Celuch, “The IBM 9076 SP1 High-Performance Communication Network”, Technical Report, 46KA/Bldg 202, IBM Kingston, N.Y., Feb. 1994.

  7. A.Y. Grama, A. Gupta, and V. Kumar, “Isoefficiency: Measuring the Scalability of Parallel Algorithms and Architectures”, IEEE Parallel & Distributed Technology, August 1993.

  8. D. Lenoski, et al., “The DASH Prototype: Logic Overhead and Performance”, IEEE Trans. on Parallel and Distributed Systems, Jan. 1993.

  9. J. Kuskin, et al., “The Stanford FLASH Multiprocessor”, Proceedings of the 21st Annual International Symposium on Computer Architecture, April 1994.

  10. A. Agarwal, et al., “The MIT Alewife Machine: A Large-Scale Distributed-Memory Multiprocessor”, MIT Lab. for Computer Science, Technical Report TM-454, Cambridge, MA 02139, 1993.

  11. K. Hwang, P.S. Tseng, and D. Kim, “An Orthogonal Multiprocessor for Parallel Scientific Computations”, IEEE Trans. on Computers, Jan. 1989.

  12. R.H. Saavedra, W. Mao, and K. Hwang, “Performance and Optimization of Data Prefetching Strategies in Scalable Multiprocessors”, Journal of Parallel and Distributed Computing, Sept. 1994.

Editor information

Bruno Buchberger, Jens Volkert

Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hwang, K. (1994). Scalability and programmability of massively parallel processors. In: Buchberger, B., Volkert, J. (eds) Parallel Processing: CONPAR 94 — VAPP VI. CONPAR 1994, VAPP 1994. Lecture Notes in Computer Science, vol 854. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58430-7_1

  • DOI: https://doi.org/10.1007/3-540-58430-7_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-58430-8

  • Online ISBN: 978-3-540-48789-0

  • eBook Packages: Springer Book Archive
