To the applications-oriented user, parallel computing is the use of a computer system that contains multiple, replicated arithmetic-logic units (ALUs), programmable to cooperate concurrently on a single task. This definition excludes other kinds of concurrency not typically visible to the applications programmer, such as the overlapping of floating-point and integer operations, or the launching of multiple concurrent instructions on “superscalar” microprocessor chips.
Parallel computing exists because, despite rapid and steady advances in computing technology, there always exist problems that a single processor cannot solve in an acceptable amount of time. Thus, parallel computing is necessarily high-performance computing, and computational efficiency tends to be a prime concern in developing parallel applications.
KINDS OF PARALLEL COMPUTERS
The taxonomy of Flynn (1972) classifies parallel computers as either “SIMD” or “MIMD.” In a SIMD (Single Instruction, Multiple Data) machine, a single instruction stream drives all processing elements in lockstep, each operating on its own data; in a MIMD (Multiple Instruction, Multiple Data) machine, each processor executes its own independent instruction stream on its own data.
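The SIMD/MIMD distinction can be sketched in ordinary Python (an illustrative analogy, not from the original article — real SIMD and MIMD refer to hardware organization, and the names `task_sum` and `task_max` are invented for this sketch):

```python
from concurrent.futures import ThreadPoolExecutor

# SIMD flavor: one operation applied in lockstep to every element of the data.
# (Real SIMD hardware issues a single instruction to many ALUs at once; a
# vectorized elementwise operation is the closest plain-Python analogue.)
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
simd_result = [x + y for x, y in zip(a, b)]  # same "add" across all elements

# MIMD flavor: independent workers run different instruction streams on
# different data at the same time.
def task_sum(xs):
    return sum(xs)

def task_max(xs):
    return max(xs)

with ThreadPoolExecutor(max_workers=2) as pool:
    f_sum = pool.submit(task_sum, [1, 2, 3])  # one "processor", one program
    f_max = pool.submit(task_max, [7, 5, 9])  # another processor, a different program
    mimd_results = (f_sum.result(), f_max.result())
```

In the SIMD line every data element undergoes the identical operation; in the MIMD block the two workers execute unrelated programs concurrently, which is the mode of operation most message-passing systems (e.g., PVM and MPI, cited below) support.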
- Barr, R.S. and Hickman, B.L. (1993). “Reporting Computational Experiments with Parallel Algorithms: Issues, Measures and Experts' Opinions,” ORSA Jl. Computing 5, 2–18.
- Bertsekas, D.P. and Tsitsiklis, J. (1989). Parallel and Distributed Computation: Numerical Methods, Prentice-Hall, Englewood Cliffs, New Jersey.
- Eckstein, J. (1993). “Large-Scale Parallel Computing, Optimization, and Operations Research: A Survey,” ORSA Computer Science Technical Section Newsletter 14(2), 1, 8–12.
- Flynn, M.J. (1972). “Some Computer Organizations and their Effectiveness,” IEEE Transactions Computers C-21, 948–960.
- Geist, A., Beguelin, A., Dongarra, J., et al. (1994). PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing, MIT Press, Cambridge, Massachusetts.
- Kindervater, G.A.P. and Lenstra, J.K. (1988). “Parallel Computing in Combinatorial Optimization,” Annals Operations Research 14, 245–289.
- Koelbel, C.H., Loveman, D.B., Schreiber, R.S., et al. (1993). The High Performance Fortran Handbook, MIT Press, Cambridge, Massachusetts.
- Kumar, V. and Gupta, A. (1994). “Analyzing Scalability of Parallel Algorithms and Architectures,” Jl. Parallel and Distributed Computing.
- Leighton, F.T. (1991). Introduction to Parallel Algorithms and Architectures: Arrays, Trees, and Hypercubes, Morgan Kaufmann, San Mateo, California.
- Metcalf, M. and Reid, J. (1990). Fortran 90 Explained, Oxford University Press, Oxford, United Kingdom.
- Snir, M., Otto, S.W., Huss-Lederman, S., et al. (1996). MPI: The Complete Reference, MIT Press, Cambridge, Massachusetts.
- Zenios, S.A. (1989). “Parallel Numerical Optimization: Current Status and an Annotated Bibliography,” ORSA Jl. Computing 1, 20–43.
- Zenios, S.A. (1994). “Parallel and Supercomputing in the Practice of Management Science,” Interfaces 24, 122–140.