
Journal of Heuristics, Volume 24, Issue 4, pp 581–616

Evaluating selection methods on hyper-heuristic multi-objective particle swarm optimization

  • Olacir R. Castro Jr.
  • Gian Mauricio Fritsche
  • Aurora Pozo

Abstract

Multi-objective particle swarm optimization (MOPSO) is a promising meta-heuristic for solving multi-objective problems (MOPs). Previous works have shown that selecting a proper combination of leader and archiving methods, which is a challenging task, improves the search ability of the algorithm. A previous study employed a simple hyper-heuristic to select these components, obtaining good results. In this research, we analyze whether more advanced heuristic selection methods further improve the search ability of the algorithm. Empirical studies are conducted to investigate this hypothesis. First, four heuristic selection methods are compared: a choice function, a multi-armed bandit, a random selector, and the previously proposed roulette wheel. A second study identifies whether it is best to adapt only the leader method, only the archiving method, or both simultaneously. Moreover, the influence of the interval at which the low-level heuristic is replaced is analyzed. Finally, the best variant is compared to a hyper-heuristic framework that embeds a multi-armed bandit algorithm into the multi-objective evolutionary algorithm based on decomposition with dynamical resource allocation (MOEA/D-DRA), and to a state-of-the-art MOPSO. Our results indicate that the resulting algorithm outperforms the hyper-heuristic framework on most of the problems investigated and achieves competitive results compared to the state-of-the-art MOPSO.
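
To illustrate the kind of heuristic selection the study compares, the sketch below shows a minimal UCB1-style multi-armed bandit choosing among a few low-level heuristics (combinations of a leader and an archiving method). The heuristic names, the credit measure, and the `UCBSelector` class are illustrative assumptions only; the paper's fitness-rate-rank-based multi-armed bandit and adaptive choice function use more elaborate credit-assignment and selection schemes.

```python
import math
import random

# Hypothetical low-level heuristics: (leader method, archiving method) pairs.
# These names are illustrative, not the exact components used in the paper.
LOW_LEVEL_HEURISTICS = [
    ("crowding-distance leader", "ideal archiver"),
    ("sigma leader", "crowding-distance archiver"),
    ("random leader", "multi-level grid archiver"),
]

class UCBSelector:
    """Minimal UCB1-style selector over low-level heuristics (a sketch)."""

    def __init__(self, n_arms, c=2.0):
        self.c = c                      # exploration weight
        self.counts = [0] * n_arms      # times each heuristic was applied
        self.rewards = [0.0] * n_arms   # accumulated credit per heuristic

    def select(self):
        # Play every arm once before applying the UCB formula.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        total = sum(self.counts)
        scores = [
            self.rewards[i] / self.counts[i]
            + self.c * math.sqrt(math.log(total) / self.counts[i])
            for i in range(len(self.counts))
        ]
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, arm, reward):
        # The reward could be, e.g., the improvement of a quality indicator
        # (hypervolume, R2) observed after running the chosen heuristic.
        self.counts[arm] += 1
        self.rewards[arm] += reward


# Usage sketch: every replacement interval the hyper-heuristic re-selects
# which leader/archiving combination the underlying MOPSO should use.
selector = UCBSelector(len(LOW_LEVEL_HEURISTICS))
for epoch in range(10):
    arm = selector.select()
    leader, archiver = LOW_LEVEL_HEURISTICS[arm]
    improvement = random.random()   # placeholder for indicator-based credit
    selector.update(arm, improvement)
```

The replacement interval mentioned in the abstract corresponds, in this sketch, to how often `select` is called to swap the active low-level heuristic.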

Keywords

Multi-objective particle swarm optimization · Multi-objective · Hyper-heuristics · Leader selection · Archiving · Fitness-rate-rank-based multi-armed bandit · Adaptive choice function

Notes

Acknowledgements

The authors would like to thank the Academic Publishing Advisory Center (Centro de Assessoria de Publicação Acadêmica, CAPA - www.capa.ufpr.br) of the Federal University of Paraná for assistance with English language editing. The authors also thank CNPq (National Council for Scientific and Technological Development) and CAPES (Coordination for the Improvement of Higher Education Personnel) for financial support.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Olacir R. Castro Jr. 1
  • Gian Mauricio Fritsche 1
  • Aurora Pozo 1
  1. Department of Computer Science, Federal University of Paraná, Curitiba, Brazil
