Optimal Multiprogramming Control for Parallel Computations
Traditionally, jobs on parallel computers are run one at a time, and control of parallelism has so far been guided mainly by the desire to determine the optimal number of processors for the algorithm under consideration. Here we depart from this course and consider the goal of optimizing the performance of the overall parallel system, assuming that more than one job is available for execution. The central issue of this paper is therefore the question of how the available processors of a parallel machine should be distributed among a number of jobs. To obtain guidelines for such multiprogramming control, we characterize each job by its speedup behaviour and its accumulated processor time.
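One way to make the allocation question concrete is a greedy rule: repeatedly give the next processor to the job whose speedup curve gains the most from it. The sketch below is not from the paper; the Amdahl-style speedup functions and serial fractions are hypothetical, chosen only to illustrate how speedup behaviour can drive a multiprogramming control decision.

```python
# Hypothetical sketch: greedy processor distribution among jobs,
# driven by each job's (assumed) speedup curve.

def amdahl_speedup(serial_fraction):
    """Return a speedup function S(n) = 1 / (f + (1 - f)/n) for serial fraction f."""
    return lambda n: 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def greedy_allocation(speedups, total_procs):
    """Distribute total_procs among the jobs, one processor at a time,
    always giving the next processor to the job with the largest
    marginal speedup gain."""
    alloc = [1] * len(speedups)          # every job gets at least one processor
    for _ in range(total_procs - len(speedups)):
        gains = [s(a + 1) - s(a) for s, a in zip(speedups, alloc)]
        best = gains.index(max(gains))   # job with the largest marginal gain
        alloc[best] += 1
    return alloc

# Two hypothetical jobs: one nearly perfectly parallel (f = 0.05),
# one half serial (f = 0.5), sharing a 16-processor machine.
jobs = [amdahl_speedup(0.05), amdahl_speedup(0.5)]
print(greedy_allocation(jobs, 16))
```

Under these assumed curves the mostly serial job receives few processors, since its marginal speedup gain falls off quickly; the rule thus favours overall system throughput over per-job fairness.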
Keywords: Execution Time, Service Time, Parallel Machine, Parallel System, Processor Utilization