Abstract

During the past decades, microprocessor performance has improved dramatically in comparison with the performance of larger parallel systems. From a hardware point of view, this trend has made parallel systems increasingly attractive, since high-performance computers can be built by combining large numbers of microprocessors bought at commodity prices. The design details vary greatly from one computer to another, but most recent computers adopt the MIMD (Multiple Instruction streams, Multiple Data streams) model, in which each processor may perform different computations on different data. Some computers use a shared address space for memory; others require that processors communicate via explicit message sending and receiving. It is even possible to use a network of workstations as a parallel computer system, since such workstations are often readily available. All of these designs are intended for medium-grain or coarse-grain computations, in which processors execute a substantial number of instructions between communications or other interactions with other processors. If the computation grain becomes too small, performance may suffer because communication overhead begins to dominate useful computation.
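
To make the explicit message-passing style concrete, the following minimal sketch (an illustrative example, not code from this chapter) uses the standard MPI interface: process 0 sends an integer to process 1, which receives it. On a shared-address-space machine the same exchange would simply be a write and a subsequent read of a shared variable.

```c
/* Minimal MPI sketch: explicit message sending and receiving.
 * Build with an MPI compiler wrapper (e.g. mpicc) and run with
 * two or more processes (e.g. mpirun -np 2 ./a.out). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value = 0;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    if (size >= 2) {
        if (rank == 0) {
            value = 42;
            /* explicit send: one int to rank 1, message tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* explicit receive: one int from rank 0, message tag 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Each such send/receive pair carries a fixed startup cost, which is why these designs favor medium- or coarse-grain computation: when a processor executes only a few instructions between messages, that startup cost dominates the useful work.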

Copyright information

© 1999 Springer Science+Business Media New York

About this chapter

Cite this chapter

Wu, X. (1999). Introduction. In: Performance Evaluation, Prediction and Visualization of Parallel Systems. The Kluwer International Series on Asian Studies in Computer and Information Science, vol 4. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-5147-8_1

  • DOI: https://doi.org/10.1007/978-1-4615-5147-8_1

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7343-8

  • Online ISBN: 978-1-4615-5147-8

  • eBook Packages: Springer Book Archive
