
Adaptive Scheduling for Master-Worker Applications on the Computational Grid

  • Elisa Heymann
  • Miquel A. Senar
  • Emilio Luque
  • Miron Livny
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1971)

Abstract

We address the problem of how many workers should be allocated for executing a distributed application that follows the master-worker paradigm, and how tasks should be assigned to workers in order to maximize resource efficiency and minimize application execution time. We propose a simple but effective scheduling strategy that measures the execution times of tasks at runtime and uses this information to dynamically adjust the number of workers, achieving a desired efficiency while minimizing the loss of speedup. The scheduling strategy has been implemented using an extended version of MW, a runtime library that allows quick and easy development of master-worker computations on a computational grid. We report on an initial set of experiments conducted on a Condor pool with our extended version of MW to evaluate the effectiveness of the scheduling strategy.
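
The core idea of the abstract (measure task execution times, then grow or shrink the worker pool to hold a target efficiency) can be illustrated with a short sketch. The following is a minimal Python illustration under assumed parameters, not the paper's MW implementation: the 0.8 efficiency target, the proportional shrink rule, and all names (TARGET_EFFICIENCY, adjust_workers, the synthetic task times) are illustrative assumptions.

```python
# Hedged sketch of an efficiency-driven worker-allocation heuristic.
# All constants and function names are illustrative, not the MW API.
import random

TARGET_EFFICIENCY = 0.8        # assumed: desired fraction of worker time doing useful work
MIN_WORKERS, MAX_WORKERS = 1, 64

def observed_efficiency(task_times, n_workers, elapsed):
    """Efficiency = useful work done / capacity consumed by allocated workers."""
    useful_work = sum(task_times)
    return useful_work / (n_workers * elapsed) if elapsed > 0 else 1.0

def adjust_workers(n_workers, efficiency):
    """Grow the pool while it is well utilized; shrink it when workers idle."""
    if efficiency >= TARGET_EFFICIENCY:
        return min(MAX_WORKERS, n_workers + 1)   # speedup headroom remains
    # scale the allocation down in proportion to the efficiency shortfall
    return max(MIN_WORKERS, int(n_workers * efficiency / TARGET_EFFICIENCY))

def run(batches=10, tasks_per_batch=40):
    """Toy driver: each batch dispatches tasks, measures efficiency, re-plans."""
    n_workers = 8
    for i in range(batches):
        task_times = [random.uniform(0.5, 2.0) for _ in range(tasks_per_batch)]
        # crude elapsed-time model: an even partition of the work across
        # workers, plus the largest single task as a straggler tail
        elapsed = sum(task_times) / n_workers + max(task_times)
        eff = observed_efficiency(task_times, n_workers, elapsed)
        n_workers = adjust_workers(n_workers, eff)
        print(f"batch {i}: efficiency={eff:.2f} -> workers={n_workers}")

if __name__ == "__main__":
    run()
```

Under these assumptions the allocation converges toward the largest pool that still keeps the measured efficiency near the target, which mirrors the efficiency/speedup trade-off the abstract describes.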

Keywords

Execution Time; Computational Grid; Scheduling Strategy; Grid Environment; Average Execution Time



Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Elisa Heymann (1)
  • Miquel A. Senar (1)
  • Emilio Luque (1)
  • Miron Livny (2)
  1. Unitat d'Arquitectura d'Ordinadors i Sistemes Operatius, Universitat Autònoma de Barcelona, Barcelona, Spain
  2. Department of Computer Sciences, University of Wisconsin-Madison, Wisconsin, USA
