Encyclopedia of Operations Research and Management Science

2001 Edition
| Editors: Saul I. Gass, Carl M. Harris

Parallel computing

  • Jonathan Eckstein
Reference work entry
DOI: https://doi.org/10.1007/1-4020-0611-X_728

To the applications-oriented user, parallel computing is the use of a computer system that contains multiple, replicated arithmetic-logical units (ALUs), programmable to cooperate concurrently on a single task. This definition excludes kinds of concurrency not typically visible to the applications programmer, such as the overlapping of floating-point and integer operations, or the launching of multiple concurrent instructions on “superscalar” microprocessor chips.

Parallel computing exists because, despite rapid and steady advances in computing technology, there are always problems that a single processor cannot solve in an acceptable amount of time. Thus, parallel computing is necessarily high-performance computing, and computational efficiency tends to be a prime concern in developing parallel applications.

KINDS OF PARALLEL COMPUTERS

The taxonomy of Flynn (1972) classifies parallel computers as either “SIMD” or “MIMD.” In a SIMD (Single Instruction, Multiple Data)...
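Flynn's SIMD/MIMD contrast can be loosely illustrated in software (an analogy of this editor's, not hardware described in the entry): a vectorized NumPy operation is SIMD-flavored, applying a single instruction across many data elements at once, while a pool of threads each executing its own instruction stream on its own data is MIMD-flavored.

```python
# Illustrative analogy only: software stand-ins for Flynn's classes.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# SIMD style: a single operation ("add 10") applied uniformly
# across multiple data elements in one vectorized step.
a = np.arange(8)
b = a + 10

# MIMD style: multiple independent instruction streams, each
# worker thread applying its own control flow to its own datum.
def task(x):
    return x * x if x % 2 == 0 else x + 1

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, range(8)))
```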


References

  1. Barr, R.S. and Hickman, B.L. (1993). “Reporting Computational Experiments with Parallel Algorithms: Issues, Measures and Experts' Opinions,” ORSA Jl. Computing 5, 2–18.
  2. Bertsekas, D.P. and Tsitsiklis, J. (1989). Parallel and Distributed Computation: Numerical Methods, Prentice-Hall, Englewood Cliffs, New Jersey.
  3. Eckstein, J. (1993). “Large-Scale Parallel Computing, Optimization, and Operations Research: A Survey,” ORSA Computer Science Technical Section Newsletter 14(2), 1, 8–12.
  4. Flynn, M.J. (1972). “Some Computer Organizations and their Effectiveness,” IEEE Transactions Computers C-21, 948–960.
  5. Geist, A., Beguelin, A., Dongarra, J., et al. (1994). PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Network Parallel Computing, MIT Press, Cambridge, Massachusetts.
  6. Kindervater, G.A.P. and Lenstra, J.K. (1988). “Parallel Computing in Combinatorial Optimization,” Annals Operations Research 14, 245–289.
  7. Koelbel, C.H., Loveman, D.B., Schreiber, R.S., et al. (1993). The High Performance Fortran Handbook, MIT Press, Cambridge, Massachusetts.
  8. Kumar, V. and Gupta, A. (1994). “Analyzing Scalability of Parallel Algorithms and Architectures,” Jl. Parallel and Distributed Computing.
  9. Leighton, F.T. (1991). Introduction to Parallel Algorithms and Architectures: Arrays, Trees, and Hypercubes, Morgan Kaufmann, San Mateo, California.
  10. Metcalf, M. and Reid, J. (1990). Fortran 90 Explained, Oxford University Press, Oxford, United Kingdom.
  11. Snir, M., Otto, S.W., Huss-Lederman, S., et al. (1996). MPI: The Complete Reference, MIT Press, Cambridge, Massachusetts.
  12. Zenios, S.A. (1989). “Parallel Numerical Optimization: Current Status and an Annotated Bibliography,” ORSA Jl. Computing 1, 20–43.
  13. Zenios, S.A. (1994). “Parallel and Supercomputing in the Practice of Management Science,” Interfaces 24, 122–140.

Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • Jonathan Eckstein
  1. Rutgers University, Piscataway, USA