To the applications-oriented user, parallel computing is the use of a computer system that contains multiple, replicated arithmetic-logical units (ALUs), programmable to cooperate concurrently on a single task. This definition excludes kinds of concurrency not typically visible to the applications programmer, such as the overlapping of floating-point and integer operations, or the launching of multiple concurrent instructions on “superscalar” microprocessor chips.
Parallel computing exists because, despite quick and steady advances in computing technology, there always exist problems that a single processor cannot solve in an acceptable amount of time. Thus, parallel computing is necessarily high performance computing, and computational efficiency tends to be a prime concern in developing parallel applications.
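The efficiency concern above is conventionally quantified by two measures from the parallel-computing literature (see, e.g., Barr and Hickman, 1993, in the references): speedup, the ratio of serial to parallel running time, and efficiency, the speedup per processor. A minimal sketch in Python; the function names are illustrative, not from the entry:

```python
# Illustrative sketch (names are hypothetical, not from the entry):
# the standard speedup and efficiency measures for a parallel code.

def speedup(t_serial, t_parallel):
    # How many times faster the parallel run is than the serial one.
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    # Fraction of the p processors' aggregate capacity actually exploited;
    # 1.0 would mean perfect (linear) speedup.
    return speedup(t_serial, t_parallel) / p

# A run taking 100 s serially and 20 s on 8 processors:
print(speedup(100.0, 20.0))        # 5.0
print(efficiency(100.0, 20.0, 8))  # 0.625
```

An efficiency well below 1.0, as here, is typical: communication and synchronization overheads keep real codes from achieving linear speedup.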
KINDS OF PARALLEL COMPUTERS
The taxonomy of Flynn (1972) classifies parallel computers as either “SIMD” or “MIMD.” In a SIMD (Single Instruction, Multiple Data)...
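Flynn's distinction can be caricatured in a few lines of Python (an illustrative sketch, not from the entry): a SIMD machine applies one shared instruction stream in lockstep to many data elements, while a MIMD machine lets each processor execute its own instruction stream on its own data.

```python
# Illustrative sketch of Flynn's taxonomy (functions are hypothetical).

# SIMD style: ONE instruction stream, applied in lockstep to many data items.
def simd_step(data, instruction):
    # Every processing element executes the same instruction on its own datum.
    return [instruction(x) for x in data]

# MIMD style: each processor runs its OWN instruction stream on its own data.
def mimd_step(programs, data):
    return [program(x) for program, x in zip(programs, data)]

data = [1, 2, 3, 4]
print(simd_step(data, lambda x: x * x))           # [1, 4, 9, 16]
print(mimd_step([lambda x: x + 1, lambda x: x * 10,
                 lambda x: -x,    lambda x: x ** 3], data))  # [2, 20, -3, 64]
```

In practice, SIMD programming resembles the array operations of High Performance Fortran (Koelbel et al., 1993), while MIMD programming is typified by message-passing systems such as PVM (Geist et al., 1994) and MPI (Snir et al., 1996), all cited below.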
REFERENCES
Barr, R.S. and Hickman, B.L. (1993). “Reporting Computational Experiments with Parallel Algorithms: Issues, Measures, and Experts' Opinions,” ORSA Jl. Computing 5, 2–18.
Bertsekas, D.P. and Tsitsiklis, J. (1989). Parallel and Distributed Computation: Numerical Methods, Prentice-Hall, Englewood Cliffs, New Jersey.
Eckstein, J. (1993). “Large-Scale Parallel Computing, Optimization, and Operations Research: A Survey,” ORSA Computer Science Technical Section Newsletter 14(2), 1, 8–12.
Flynn, M.J. (1972). “Some Computer Organizations and their Effectiveness,” IEEE Transactions Computers C-21, 948–960.
Geist, A., Beguelin, A., Dongarra, J., et al. (1994). PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Network Parallel Computing, MIT Press, Cambridge, Massachusetts.
Kindervater, G.A.P. and Lenstra, J.K. (1988). “Parallel Computing in Combinatorial Optimization,” Annals Operations Research 14, 245–289.
Koelbel, C.H., Loveman, D.B., Schreiber, R.S., et al. (1993). The High Performance Fortran Handbook, MIT Press, Cambridge, Massachusetts.
Kumar, V. and Gupta, A. (1994). “Analyzing Scalability of Parallel Algorithms and Architectures,” Jl. Parallel and Distributed Computing.
Leighton, F.T. (1991). Introduction to Parallel Algorithms and Architectures: Arrays, Trees, and Hypercubes, Morgan Kaufmann, San Mateo, California.
Metcalf, M. and Reid, J. (1990). Fortran 90 Explained, Oxford University Press, Oxford, United Kingdom.
Snir, M., Otto, S.W., Huss-Lederman, S., et al. (1996). MPI: The Complete Reference, MIT Press, Cambridge, Massachusetts.
Zenios, S.A. (1989). “Parallel Numerical Optimization: Current Status and an Annotated Bibliography,” ORSA Jl. Computing 1, 20–43.
Zenios, S.A. (1994). “Parallel and Supercomputing in the Practice of Management Science,” Interfaces, 24, 122–140.
© 2001 Kluwer Academic Publishers
Eckstein, J. (2001). “Parallel Computing.” In: Gass, S.I. and Harris, C.M. (eds.), Encyclopedia of Operations Research and Management Science, Springer, New York. https://doi.org/10.1007/1-4020-0611-X_728