A fair and efficient resource sharing scheme using modified grey wolf optimizer

Abstract

With the rapid upsurge in large-scale computing, resource sharing in heterogeneous distributed systems with conflicting user goals has become a paramount research issue. The resource sharing, or resource allocation, problem amounts to allocating users' tasks across multiple computing resources so that the utility of the resources improves while the quality of service is preserved. In this paper, resource allocation is treated as a bi-objective optimization problem: minimizing the response time and minimizing the utilization imbalance between resources. To optimize both objectives simultaneously, the contributions of this paper are manifold. First, the problem is modeled as an optimization problem that considers both objectives in an integrated manner. Second, the resource allocation problem is formulated as a non-cooperative game. Finally, to derive the game's solution in a distributed manner, a Best Response dynamics based Modified Grey Wolf Optimizer (BR-MGWO) is proposed. To assess the efficacy of BR-MGWO, it is compared with two other approaches, GOS and NCOP, on problem instances with various settings. The experimental results show that BR-MGWO not only yields a lower response time but also reduces the utilization imbalance by 50% and 71% compared to GOS and NCOP, respectively.



References

1. Krauter K, Buyya R, Maheswaran M (2002) A taxonomy and survey of grid resource management systems for distributed computing. Softw Pract Exp 32(2):135–164

2. Ajeena Beegom AS, Rajasree MS (2019) Integer-PSO: a discrete PSO algorithm for task scheduling in cloud computing systems. Evol Intell 12(2):227–239

3. Sun N, Li Y, Ma L, Chen W, Cynthia D (2019) Research on cloud computing in the resource sharing system of university library services. Evol Intell 12(3):377–384

4. Varghese B, Buyya R (2018) Next generation cloud computing: new trends and research directions. Future Gener Comput Syst 79:849–861

5. Buyya R, Yeo CS, Venugopal S, Broberg J, Brandic I (2009) Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility. Future Gener Comput Syst 25(6):599–616

6. Grosu D, Chronopoulos AT (2005) Noncooperative load balancing in distributed systems. J Parallel Distrib Comput 65(9):1022–1034

7. Tripathi R, Vignesh S, Tamarapalli V, Chronopoulos AT, Siar H (2017) Non-cooperative power and latency aware load balancing in distributed data centers. J Parallel Distrib Comput 107:76–86

8. Tiwary M, Puthal D, Sahoo KS, Sahoo B, Yang LT (2018) Response time optimization for cloudlets in mobile edge computing. J Parallel Distrib Comput 119:81–91

9. Xiao Z, Tong Z, Li K, Li K (2017) Learning non-cooperative game for load balancing under self-interested distributed environment. Appl Soft Comput 52:376–386

10. Li K, Liu C, Li K, Zomaya AY (2016) A framework of price bidding configurations for resource usage in cloud computing. IEEE Trans Parallel Distrib Syst 27(8):2168–2181

11. Zhang H, Xiao Y, Bu S, Yu R, Niyato D, Han Z (2020) Distributed resource allocation for data center networks: a hierarchical game approach. IEEE Trans Cloud Comput 8(3):778–789

12. Liu C, Li K, Li K (2018) A game approach to multi-servers load balancing with load-dependent server availability consideration. IEEE Trans Cloud Comput. https://doi.org/10.1109/TCC.2018.2790404

13. Song S, Lv T, Chen X (2014) Load balancing for future internet: an approach based on game theory. J Appl Math 2014:Article ID 959782

14. Avni G, Tamir T (2016) Cost-sharing scheduling games on restricted unrelated machines. Theor Comput Sci 646:26–39

15. Kishor A, Niyogi R, Veeravalli B (2020) A game-theoretic approach for cost-aware load balancing in distributed systems. Future Gener Comput Syst 109:29–44

16. Subrata R, Zomaya AY, Landfeldt B (2008) A cooperative game framework for QoS guided job allocation schemes in grids. IEEE Trans Comput 57(10):1413–1422

17. Penmatsa S, Chronopoulos AT (2011) Game-theoretic static load balancing for distributed systems. J Parallel Distrib Comput 71(4):537–555

18. Liu C, Li K, Tang Z, Li K (2018) Bargaining game-based scheduling for performance guarantees in cloud computing. ACM Trans Model Perform Eval Comput Syst 3(1):1–25

19. Yang B, Li Z, Chen S, Wang T, Li K (2016) Stackelberg game approach for energy-aware resource allocation in data centers. IEEE Trans Parallel Distrib Syst 27(12):3646–3658

20. Kalyampudi PSL, Krishna PV, Kuppani S, Saritha V (2019) A work load prediction strategy for power optimization on cloud based data centre using deep machine learning. Evol Intell. https://doi.org/10.1007/s12065-019-00289-4

21. Genez TAL, Pietri I, Sakellariou R, Bittencourt LF, Madeira ERM (2015) A particle swarm optimization approach for workflow scheduling on cloud resources priced by CPU frequency. In: Proceedings of the 8th international conference on utility and cloud computing. IEEE Press, pp 237–241

22. Nanivadekar SS, Kolekar UD (2018) A hybrid optimization model for resource allocation in OFDM-based cognitive radio system. Evol Intell. https://doi.org/10.1007/s12065-018-0173-1

23. Min-Allah N, Qureshi MB, Alrashed S, Rana OF (2019) Cost efficient resource allocation for real-time tasks in embedded systems. Sustain Cities Soc 48:101523

24. Brun O, Prabhu B (2016) Worst-case analysis of non-cooperative load balancing. Ann Oper Res 239(2):471–495

25. Cardellini V, Casalicchio E, Colajanni M, Yu PS (2002) The state of the art in locally distributed web-server systems. ACM Comput Surv 34(2):263–311

26. König J, Schröder C (2018) Inequality-minimization with a given public budget. J Econ Inequal 16(4):607–629

27. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61

28. Kishor A, Singh PK (2016) Empirical study of grey wolf optimizer. In: Proceedings of fifth international conference on soft computing for problem solving. Springer, pp 1037–1049

29. Gämperle R, Müller SD, Koumoutsakos P (2002) A parameter study for differential evolution. Adv Intell Syst Fuzzy Syst Evol Comput 10(10):293–298

30. Tang X, Chanson ST (2000) Optimizing static job scheduling in a network of heterogeneous computers. In: International conference on parallel processing. IEEE, pp 373–382

31. Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput 1(1):3–18

32. García S, Molina D, Lozano M, Herrera F (2009) A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization. J Heuristics 15(6):617–644

33. Kishor A, Chandra M, Singh PK (2017) An astute artificial bee colony algorithm. In: Proceedings of sixth international conference on soft computing for problem solving. Springer, pp 153–162

34. Jadon SS, Tiwari R, Sharma H, Bansal JC (2017) Hybrid artificial bee colony algorithm with differential evolution. Appl Soft Comput 58:11–24

35. Zar JH (1999) Biostatistical analysis. Pearson Education India, Noida


Acknowledgements

The authors would also like to thank the associate editor and anonymous reviewers for their valuable comments and helpful suggestions. The second author was in part supported by a research grant from Google.

Author information

Corresponding author

Correspondence to Avadh Kishor.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A Performance Analysis of MGWO

This appendix presents the rationale behind the development of MGWO: a performance comparison of MGWO against its canonical version, GWO.

Since our purpose, within the scope of this paper, is to justify introducing MGWO rather than adopting GWO directly, we consider four benchmark functions [28] to compare the performance of MGWO and GWO. This benchmark suite, also used in various prior studies [27, 28, 33, 34], contains two types of functions: (a) unimodal and (b) multimodal. The surface plots of the functions are shown in Fig. 10, and their mathematical representations, along with their properties, are discussed below. Note that all the functions considered are minimization problems, and the global minimum of each is 0.
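For reference, the canonical GWO position update of Mirjalili et al. [27], against which MGWO is compared, can be sketched as follows. This is a minimal NumPy sketch with our own naming; the paper's MGWO modification (the \(p_r\)-controlled variant) is not reproduced here.

```python
import numpy as np

def gwo_step(wolves, fitness, a):
    """One canonical GWO position update (Mirjalili et al., 2014).

    wolves : (n, d) array of candidate positions
    fitness: (n,) array of objective values, lower is better
    a      : exploration coefficient, decreased linearly from 2 to 0
    """
    # Alpha, beta, delta = the three best wolves of the pack.
    order = np.argsort(fitness)
    leaders = wolves[order[:3]]
    new = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        pulls = []
        for leader in leaders:
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A = 2 * a * r1 - a          # |A| > 1 favors exploration, |A| < 1 exploitation
            C = 2 * r2
            D = np.abs(C * leader - x)  # distance to the leader
            pulls.append(leader - A * D)
        new[i] = np.mean(pulls, axis=0)  # average of the three leader pulls
    return new
```

In canonical GWO this step is repeated while `a` decays from 2 to 0, shifting the pack from exploration to exploitation.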

Unimodal functions

A function is called unimodal if it has only one local optimum, which is therefore the global optimum. Unimodal functions are used to test exploitation capability (i.e., the algorithm's convergence rate). In this experiment, we have considered two unimodal functions:

  • Sphere function: considered the simplest optimization function to solve. The surface plot for this function is shown in Fig. 10a, and its search range is [−100, 100]. Mathematically, it is defined as:

    $$\begin{aligned} F_1 ({\mathbf {x}}) = \sum _{i=1}^{d} x_i^2, \end{aligned}$$
    (18)

    where \({\mathbf {x}} = (x_i)_{i=1}^d\) is a d-dimensional vector.

  • Schwefel function: as shown in Fig. 10b, this function has contours with sharp corners, which increase the chances of an algorithm getting stuck; it is therefore considered one of the more challenging problems to solve. Its search range is [−10, 10]. Mathematically, it is defined as:

    $$\begin{aligned} F_2 ({\mathbf {x}}) = \sum _{i=1}^{d} |x_i| + \prod _{i=1}^{d} |x_i|. \end{aligned}$$
    (19)
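The two unimodal benchmarks of Eqs. (18) and (19) translate directly into code; a minimal NumPy sketch (the function names are ours):

```python
import numpy as np

def sphere(x):
    """F1, Eq. (18): sum of squares; global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def schwefel_222(x):
    """F2, Eq. (19): sum of |x_i| plus product of |x_i|; minimum 0 at the origin."""
    ax = np.abs(x)
    return float(np.sum(ax) + np.prod(ax))
```

For example, `sphere(np.array([1.0, 2.0]))` evaluates to 5.0, and both functions return 0 at the zero vector.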

Multimodal functions

A function is called multimodal if it has local optima other than the global optimum. Multimodal functions are used to test exploration capability (i.e., the ability to escape local optima). Here, we have considered two multimodal functions:

  • Ackley function: as shown in Fig. 10c, this function contains many local optima, and the global optimum lies in a narrow basin. Its search range is [−32, 32]. Mathematically, it is defined as:

    $$\begin{aligned} F_3 ({\mathbf {x}}) = -20\, e^{\left( -0.2 \sqrt{\frac{1}{d}\sum _{i=1}^{d}x_i^2}\right) } - e^{\left( \frac{1}{d}\sum _{i=1}^{d}\cos (2\pi x_i)\right) } + 20 + e. \end{aligned}$$
    (20)
  • Rastrigin function: as shown in Fig. 10d, this is a complex function with many regularly distributed local optima, so an algorithm has a high chance of getting stuck in a local optimum. Its search range is [−5.12, 5.12]. Mathematically, it is defined as:

    $$\begin{aligned} F_4 ({\mathbf {x}}) = \sum _{i=1}^{d} \left( x_i^2 - 10\cos (2\pi x_i) + 10\right) . \end{aligned}$$
    (21)
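Likewise, the two multimodal benchmarks of Eqs. (20) and (21) can be sketched in NumPy (function names are ours; `f4` follows the formula in Eq. (21) verbatim):

```python
import numpy as np

def ackley(x):
    """F3, Eq. (20): Ackley function; global minimum 0 at the origin."""
    d = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d)
                 + 20.0 + np.e)

def f4(x):
    """F4, Eq. (21): sum of x_i^2 - 10 cos(2 pi x_i) + 10; minimum 0 at the origin."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x) + 10.0))
```

Both return 0 at the zero vector (Ackley up to floating-point rounding), matching the stated global minima.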
Fig. 10 Surface plot for benchmark functions

Parameter settings

The parameters used in the experiments are as follows. For each function, the dimension and the population size are fixed to 30 [27, 28], and \(p_r = 0.5\). The stopping criterion for both MGWO and GWO is a maximum of 500 iterations. For each experiment, 30 independent runs are performed, and the final results are averaged over the 30 runs.
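The protocol above can be sketched as a small experiment harness. Here `run_optimizer` is a hypothetical callable standing in for a single GWO or MGWO run; it is not part of the paper.

```python
import numpy as np

def average_best(run_optimizer, func, dim=30, pop=30, iters=500, runs=30, seed=1):
    """Average the final objective value over independent runs.

    `run_optimizer(func, dim, pop, iters, rng)` is a hypothetical callable
    returning the best objective value found in one run; a GWO or MGWO
    implementation would be plugged in here.
    """
    rng = np.random.default_rng(seed)
    finals = [run_optimizer(func, dim, pop, iters, rng) for _ in range(runs)]
    return float(np.mean(finals))
```

The same harness, with the same seeds, can then be applied to both algorithms so that the averaged results are directly comparable.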

Fig. 11 Illustrating the convergence behavior

Experimental results and discussion

In this set of experiments, two evaluation metrics are adopted to gauge the goodness of MGWO and GWO: the convergence rate and the average optimal value. The convergence rate of an optimization algorithm is the speed at which it approaches the objective function's optimal value. Here, the optimal value is the objective function value after the final iteration of the algorithm.

The convergence behavior of MGWO and GWO on the four benchmark functions is shown in Fig. 11. From Fig. 11a, b, it can be observed that in the early stage of the optimization process the convergence rate of GWO is better than that of MGWO, but as the iterations increase MGWO outperforms GWO. Finally, the optimal value achieved by MGWO, i.e., 1.73e−253, is much better than the value achieved by GWO, i.e., 5.17e−28.

From Fig. 11c, d, it can be seen that on both multimodal functions GWO gets trapped in local optima and converges prematurely, while MGWO, with its better exploration capability, achieves a better value without getting trapped in early iterations. The optimal values achieved by MGWO for \(F_3\) and \(F_4\) are 0 and 8.86e−16, respectively.

Finally, from the empirical results it can be inferred that MGWO outperforms GWO in terms of both exploration and exploitation capability. This better performance motivated us to use MGWO, in combination with best response dynamics, for finding the NE.

Parameter sensitivity analysis

In the proposed MGWO, only one parameter, \(p_r\), is used in addition to those of GWO. In all the experiments, its value is fixed to \(p_r = 0.5\); the reason for this choice is established experimentally in this section.

To analyze the sensitivity of the performance of MGWO to \(p_r\), we considered one unimodal function and one multimodal function. All other parameters are the same as in the previous experiment. The final objective function values achieved over a range of \(p_r\) (from 0.1 to 0.9) are reported in Table 8. From the results in the second column of Table 8, it can be seen that:

Table 8 Objective function values over varying \(p_r\)

  • for the unimodal function, the objective function value is better when \(p_r\) is less than 0.5, and the performance of MGWO deteriorates as \(p_r\) increases beyond 0.5;

  • for the multimodal function, the performance of MGWO deteriorates as \(p_r\) decreases below 0.5, while it remains consistent when \(p_r\) is 0.5 or above.

From these results, it is clear that a value of \(p_r\) less than 0.5 is better for the unimodal function but severely harms the algorithm's performance in the multimodal case. Therefore, the best-suited value of \(p_r\) is 0.5.
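The sweep behind Table 8 can be sketched as follows. Here `mgwo` is a hypothetical callable performing one MGWO run for a given \(p_r\) and returning the final objective value; it is a placeholder, not the paper's implementation.

```python
import numpy as np

def pr_sweep(mgwo, func, pr_values=(0.1, 0.3, 0.5, 0.7, 0.9), runs=30, seed=42):
    """Mean final objective value per p_r setting, as tabulated in Table 8.

    `mgwo(func, p_r, rng)` is a hypothetical single-run MGWO returning
    the final objective value for objective `func`.
    """
    table = {}
    for pr in pr_values:
        rng = np.random.default_rng(seed)  # same seed per setting for a fair comparison
        table[pr] = float(np.mean([mgwo(func, pr, rng) for _ in range(runs)]))
    return table
```

Running the sweep once on a unimodal and once on a multimodal objective reproduces the two columns of such a sensitivity table.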

Appendix B Wilcoxon signed-rank test

The Wilcoxon signed-rank test [31, 32] is a pairwise test designed to detect a significant difference between two sample means; it is a nonparametric procedure analogous to the paired t-test. If the two samples are the outputs of two different algorithms, the test answers the following question: is there a significant difference between the performance of the two algorithms?

Let there be Q output points of the algorithms on different functions (or different configuration settings), and let \(\delta _i\) be the performance difference of the two algorithms on the i-th output point. Let \(rk^+\) be the sum of ranks for the output points on which the first algorithm outperforms the second, and \(rk^-\) the sum of ranks for the opposite outcome. Ranks with \(\delta _i = 0\) are split evenly between the two sums [31]; if there is an odd number of them, one is ignored:

$$\begin{aligned} \begin{aligned} rk^+&= \sum _{\delta _i >0} rank(\delta _i) + \frac{1}{2}\sum _{\delta _i = 0} rank(\delta _i)\\ rk^-&= \sum _{\delta _i < 0} rank(\delta _i) + \frac{1}{2}\sum _{\delta _i = 0} rank(\delta _i) \end{aligned} \end{aligned}$$
(22)

Let \({\mathbb {T}} = \min (rk^+, rk^-)\) be the smaller of the two rank sums. If \({\mathbb {T}}\) is less than or equal to the critical value of the Wilcoxon distribution for Q degrees of freedom ([35], Table B.12), then the null hypothesis of equal performance is rejected; this means that one algorithm performs significantly better than the other, with the associated p value.
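As a concrete illustration, the rank sums of Eq. (22) can be computed as follows (a minimal NumPy sketch; the function names are ours):

```python
import numpy as np

def _avg_ranks(a):
    """Ranks 1..n, with ties receiving the average of their rank positions."""
    order = np.argsort(a, kind="stable")
    ranks = np.empty(len(a))
    s = a[order]
    i = 0
    while i < len(a):
        j = i
        while j < len(a) and s[j] == s[i]:
            j += 1
        ranks[order[i:j]] = (i + j + 1) / 2.0  # mean of positions i+1 .. j
        i = j
    return ranks

def signed_rank_sums(delta):
    """rk+ and rk- of Eq. (22) for the differences delta_i.

    Ranks of zero differences are split evenly between the two sums;
    with an odd number of zeros, one zero is dropped first [31].
    """
    delta = np.asarray(delta, dtype=float)
    zeros = np.flatnonzero(delta == 0)
    if len(zeros) % 2 == 1:
        delta = np.delete(delta, zeros[0])
    ranks = _avg_ranks(np.abs(delta))       # rank by magnitude of difference
    half_zero = 0.5 * ranks[delta == 0].sum()
    rk_pos = ranks[delta > 0].sum() + half_zero
    rk_neg = ranks[delta < 0].sum() + half_zero
    return float(rk_pos), float(rk_neg)
```

The test statistic is then \({\mathbb {T}} = \min (rk^+, rk^-)\), which is compared against the critical value from the Wilcoxon table for the given Q.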


About this article


Cite this article

Kishor, A., Niyogi, R. A fair and efficient resource sharing scheme using modified grey wolf optimizer. Evol. Intel. (2021). https://doi.org/10.1007/s12065-020-00509-2


Keywords

  • Load balancing
  • Distributed systems
  • Fair utilization
  • Non-cooperative game
  • Nash equilibrium