
An energy-aware scheduling algorithm for budget-constrained scientific workflows based on multi-objective reinforcement learning

  • Yao Qin
  • Hua Wang
  • Shanwen Yi
  • Xiaole Li
  • Linbo Zhai

Abstract

Since scientific workflow scheduling has become a major energy consumer in clouds, much attention has been paid to reducing the energy consumed by workflows. This paper considers a multi-objective workflow scheduling problem under a budget constraint. Most existing work on budget-constrained workflow scheduling cannot always satisfy the budget constraint or guarantee the feasibility of solutions; instead, the success rate is reported experimentally. Only a few methods always produce feasible solutions, and they are overly complicated. It has also become a trend in workflow scheduling to consider more than one objective, yet the choice of objective weights is usually ignored, and inappropriate weights reduce the quality of solutions. In this paper, we propose an energy-aware multi-objective reinforcement learning (EnMORL) algorithm. We design a much simpler method, based on the remaining cheapest budget, to ensure the feasibility of solutions. Reinforcement learning based on the Chebyshev scalarization function is a recent framework that is effective in addressing the weight selection problem, so we design EnMORL on top of it. Our goal is to minimize both the makespan and the energy consumption of the workflow. Finally, we compare EnMORL with two state-of-the-art multi-objective meta-heuristics on four different workflows. The results show that the proposed EnMORL outperforms these existing methods.
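To make the Chebyshev-based framework concrete, the following is a minimal Python sketch of Chebyshev-scalarized action selection for a two-objective (makespan, energy) Q-learning agent. The function names, array shapes, reference-point heuristic, and epsilon-greedy policy are illustrative assumptions for exposition; they are not taken from the paper's implementation.

```python
import numpy as np

def chebyshev_scalarize(q_vec, weights, ref_point):
    """Chebyshev scalarization: weighted Chebyshev distance to the utopian point z*."""
    return np.max(weights * np.abs(q_vec - ref_point))

def select_action(Q, state, weights, ref_point, epsilon=0.1, rng=None):
    """Epsilon-greedy selection on scalarized Q-values.

    Q has shape (n_states, n_actions, n_objectives): one Q-value per objective
    (here both objectives, makespan and energy, are treated as costs to minimize).
    """
    rng = rng or np.random.default_rng()
    n_actions = Q.shape[1]
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))          # explore
    scores = [chebyshev_scalarize(Q[state, a], weights, ref_point)
              for a in range(n_actions)]
    return int(np.argmin(scores))                    # smaller scalarized cost is better

# Toy usage: 4 states, 3 actions, 2 objectives (hypothetical values).
rng = np.random.default_rng(0)
Q = rng.random((4, 3, 2))
weights = np.array([0.5, 0.5])           # trade-off between makespan and energy
ref_point = Q.min(axis=(0, 1)) - 1e-3    # utopian point: just below the best value seen
print(select_action(Q, state=0, weights=weights, ref_point=ref_point, rng=rng))
```

A usage note on the design: unlike a linear weighted sum, the Chebyshev scalarization can reach solutions on non-convex regions of the Pareto front, which is why it is less sensitive to the choice of weights.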

Keywords

Scientific workflows · Cloud computing · Energy saving · Reinforcement learning · Multi-objective optimization · Budget constraint

Notes

Acknowledgements

We would like to thank the anonymous referees for their helpful suggestions to improve this paper. This work was supported in part by the National Natural Science Foundation of China under Grant 61672323, in part by the Fundamental Research Funds of Shandong University under Grant 2017JC043, in part by the Key Research and Development Program of Shandong Province under Grant 2017GGX10122 and Grant 2017GGX10142, and in part by the Natural Science Foundation of Shandong Province under Grant ZR2019MF072.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Computer Science and Technology, Shandong University, Jinan, China
  2. School of Software, Shandong University, Jinan, China
  3. School of Information Science and Engineering, Linyi University, Linyi, China
  4. School of Information Science and Engineering, Shandong Normal University, Jinan, China
