Instance-Specific Selection of AOS Methods for Solving Combinatorial Optimisation Problems via Neural Networks

  • Teck-Hou Teng
  • Hoong Chuin Lau
  • Aldy Gunawan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11353)

Abstract

Solving combinatorial optimization problems with a fixed set of operators is known to produce poor-quality solutions, which has motivated the development of adaptive operator selection (AOS) methods. Despite this effort, challenges remain in choosing a suitable AOS method and in configuring it correctly for specific problem instances. To overcome these challenges, this work proposes a novel approach, I-AOS-DOE, that performs instance-specific selection of AOS methods prior to evolutionary search. Furthermore, to configure the AOS methods for the respective problem instances, we apply a Design of Experiments (DOE) technique to identify promising regions of parameter values and to pick the best parameter values from those regions. Our main contribution lies in the use of a self-organizing neural network as the offline-trained AOS selection mechanism. This work trains a variant of FALCON known as FL-FALCON using performance data gathered from applying AOS methods to training instances. The performance data comprises derived fitness-landscape features, the choices of AOS methods, and feedback signals. The hypothesis is that a trained FL-FALCON is capable of selecting suitable AOS methods for unseen problem instances. Experiments are conducted to test this hypothesis and to compare I-AOS-DOE with existing approaches. The results reveal that I-AOS-DOE yields the best performance outcome on a sample set of quadratic assignment problem (QAP) instances.
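
The workflow just outlined (derive fitness-landscape features from an instance, let an offline-trained network choose an AOS method, then use DOE-style screening to set its parameters before the evolutionary search) can be illustrated with a minimal sketch. The Python below is hypothetical: the feature definitions, the placeholder selector standing in for the trained FL-FALCON, and the crude parameter screening standing in for the DOE step are assumptions, not the paper's implementation.

```python
# Minimal, hypothetical sketch of the I-AOS-DOE workflow described above.
# All names, features, and numeric choices are assumptions for illustration.
import random
from dataclasses import dataclass

AOS_METHODS = ["probability_matching", "adaptive_pursuit", "multi_armed_bandit"]

@dataclass
class Instance:
    # Stand-in for a QAP instance: flow and distance matrices.
    flow: list
    dist: list

def landscape_features(instance, samples=50):
    """Derive simple fitness-landscape features from a random walk
    (a stand-in for the features used to train FL-FALCON)."""
    n = len(instance.flow)
    perm = list(range(n))
    costs = []
    for _ in range(samples):
        i, j = random.sample(range(n), 2)          # random 2-swap walk
        perm[i], perm[j] = perm[j], perm[i]
        cost = sum(instance.flow[a][b] * instance.dist[perm[a]][perm[b]]
                   for a in range(n) for b in range(n))
        costs.append(cost)
    mean = sum(costs) / len(costs)
    ruggedness = sum(abs(costs[k + 1] - costs[k])
                     for k in range(len(costs) - 1)) / (len(costs) - 1)
    return [mean, ruggedness]

def select_aos_method(features, trained_selector):
    """The offline-trained selector (FL-FALCON in the paper) maps instance
    features to an AOS method; here it is just a callable placeholder."""
    return trained_selector(features)

def doe_pick_parameters(candidate_levels, evaluate):
    """Screen candidate parameter levels (a crude stand-in for the DOE step)
    and return the level with the best evaluation score."""
    return min(candidate_levels, key=evaluate)

if __name__ == "__main__":
    random.seed(0)
    n = 6
    inst = Instance(flow=[[random.randint(0, 9) for _ in range(n)] for _ in range(n)],
                    dist=[[random.randint(0, 9) for _ in range(n)] for _ in range(n)])
    feats = landscape_features(inst)
    # Placeholder selector: a real system would query the trained FL-FALCON here.
    method = select_aos_method(feats, lambda f: AOS_METHODS[int(f[1]) % len(AOS_METHODS)])
    learning_rate = doe_pick_parameters([0.1, 0.3, 0.5], evaluate=lambda lr: abs(lr - 0.3))
    print(f"features={feats}, AOS method={method}, learning rate={learning_rate}")
```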

Acknowledgments

This research project is funded by the National Research Foundation Singapore under its Corp Lab @ University scheme and by Fujitsu Limited.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Information Systems, Singapore Management University, Singapore
  2. Fujitsu-SMU Urban Computing and Engineering Corporate Laboratory, Singapore Management University, Singapore
