Abstract
During the past decades, microprocessor performance has improved dramatically in comparison to the performance of larger parallel systems. From a hardware point of view, this trend has made parallel systems increasingly attractive, since high-performance computers can be built by combining large numbers of microprocessors bought at commodity prices. The design details vary greatly from one computer to another, but most recent computers adopt the MIMD (Multiple Instruction streams, Multiple Data streams) model, in which each processor may perform different computations on different data. Some computers use a shared address space for memory; others require that processors communicate via explicit message sending and receiving. It is even possible to use a network of workstations as a parallel computer system, since workstations are widely available. All of these designs are intended for medium-grain or coarse-grain computations, in which processors execute a substantial number of instructions between communications or other interactions with other processors. If the computation grain becomes too small, performance may suffer.
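The two programming styles mentioned above can be sketched on a single multicore machine using Python's standard `multiprocessing` module. This is only an illustrative analogy, not any system described in the chapter: `square_via_message` mimics the explicit send/receive style, while `count_via_shared` mimics the shared-address-space style; both helper names are invented for this sketch.

```python
from multiprocessing import Process, Pipe, Value

def _mp_worker(conn):
    # Message-passing style: explicit receive, compute, explicit send.
    n = conn.recv()
    conn.send(n * n)
    conn.close()

def square_via_message(n):
    # Parent and child interact only through send()/recv() on a pipe;
    # no memory is shared between them.
    parent, child = Pipe()
    p = Process(target=_mp_worker, args=(child,))
    p.start()
    parent.send(n)
    result = parent.recv()
    p.join()
    return result

def _shm_worker(counter):
    # Shared-address-space style: all workers write one common memory
    # location, guarded by a lock because increments are not atomic.
    with counter.get_lock():
        counter.value += 1

def count_via_shared(k):
    counter = Value("i", 0)  # an integer living in shared memory
    workers = [Process(target=_shm_worker, args=(counter,)) for _ in range(k)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value

if __name__ == "__main__":
    print(square_via_message(7))   # worker computes 7*7 and sends it back
    print(count_via_shared(4))     # all four increments are visible
```

The contrast in grain size is also visible here: each interaction (a pipe transfer or a lock acquisition) costs far more than the few instructions of useful work, so real parallel programs must do substantially more computation between such interactions to perform well.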
© 1999 Springer Science+Business Media New York
Cite this chapter
Wu, X. (1999). Introduction. In: Performance Evaluation, Prediction and Visualization of Parallel Systems. The Kluwer International Series on Asian Studies in Computer and Information Science, vol 4. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-5147-8_1
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4613-7343-8
Online ISBN: 978-1-4615-5147-8