Optimal Multiprogramming Control for Parallel Computations

  • Eike Jessen
  • Wolfgang Ertel
  • Christian B. Suttner
Part of the Lecture Notes in Computer Science book series (LNCS, volume 732)

Abstract

Traditionally, jobs on parallel computers are run one at a time, and control of parallelism has so far been guided mainly by the desire to determine the optimal number of processors for the algorithm under consideration. Here, we depart from this course and consider the goal of optimizing the performance of the overall parallel system, assuming that more than one job is available for execution. The central issue of this paper is therefore the question of how the available processors of a parallel machine should be distributed among a number of jobs. In order to obtain guidelines for such multiprogramming control, we characterize each job by its speedup behaviour and its accumulated processor time.
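
The abstract frames processor distribution as an optimization problem over per-job speedup curves. As a purely illustrative sketch (not the authors' method), the following Python snippet shows one standard way to pose such an allocation: each job is characterized by a hypothetical Amdahl-style speedup curve, and processors are handed out one at a time to the job with the largest marginal speedup gain; for concave speedup curves this greedy rule yields an optimal integer allocation. All names and parameters below are assumptions for illustration.

    # Illustrative sketch only: greedy processor allocation among jobs.
    # The Amdahl-style speedup model and every parameter here are
    # hypothetical assumptions, not taken from the paper.

    def amdahl_speedup(serial_fraction, n):
        """Speedup of a job with the given serial fraction on n processors."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

    def allocate(serial_fractions, total_procs):
        """Hand out processors one at a time, always to the job whose
        speedup improves the most; optimal for concave speedup curves."""
        alloc = [1] * len(serial_fractions)  # every job starts with one processor
        for _ in range(total_procs - len(serial_fractions)):
            gains = [amdahl_speedup(f, a + 1) - amdahl_speedup(f, a)
                     for f, a in zip(serial_fractions, alloc)]
            alloc[gains.index(max(gains))] += 1  # best marginal gain wins
        return alloc

    if __name__ == "__main__":
        # Three jobs with different serial fractions, 16 processors in total.
        print(allocate([0.05, 0.20, 0.50], 16))

With these assumed serial fractions, the mostly-parallel job receives the bulk of the processors, which matches the intuition that processors are wasted on jobs whose speedup curves have already flattened.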

Keywords

Execution Time · Service Time · Parallel Machine · Parallel System · Processor Utilization

Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Eike Jessen
  • Wolfgang Ertel
  • Christian B. Suttner

  Institut für Informatik, TU München, München 2, Germany