Abstract
Solving combinatorial optimization problems with a fixed set of operators is known to produce poor-quality solutions, which has motivated adaptive operator selection (AOS) methods. Despite these efforts, challenges remain in choosing a suitable AOS method and configuring it correctly for specific problem instances. To overcome these challenges, this work proposes a novel approach, I-AOS-DOE, that performs instance-specific selection of AOS methods prior to evolutionary search. Furthermore, to configure the AOS methods for the respective problem instances, we apply a Design of Experiments (DOE) technique to determine promising regions of parameter values and to pick the best parameter values from those regions. Our main contribution lies in the use of a self-organizing neural network as the offline-trained AOS selection mechanism. This work trains a variant of FALCON known as FL-FALCON using performance data gathered from applying AOS methods to training instances. The performance data comprise derived fitness landscape features, choices of AOS methods, and feedback signals. The hypothesis is that a trained FL-FALCON can select suitable AOS methods for unseen problem instances. Experiments are conducted to test this hypothesis and to compare I-AOS-DOE with existing approaches. The results reveal that I-AOS-DOE yields the best performance on a sample set of quadratic assignment problem (QAP) instances.
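The selection step described above (map fitness-landscape features of an unseen instance to a previously learned choice of AOS method) can be sketched minimally as follows. This is an illustration only: a nearest-centroid classifier stands in for the trained FL-FALCON network, and the feature values, method names, and training tuples are invented for the example, not taken from the paper.

```python
# Sketch of instance-specific AOS selection. A nearest-centroid rule is a
# stand-in for the offline-trained FL-FALCON; all data below is illustrative.
import math

# Offline performance data: (fitness-landscape feature vector, best AOS method).
# Features could be, e.g., ruggedness and fitness-distance correlation.
TRAINING = [
    ((0.9, 0.1), "bandit"),         # rugged landscape -> bandit-based AOS
    ((0.8, 0.2), "bandit"),
    ((0.2, 0.8), "prob_matching"),  # smooth landscape -> probability matching
    ((0.1, 0.9), "prob_matching"),
]

def centroids(training):
    """Offline phase: average the feature vectors seen for each AOS method."""
    sums, counts = {}, {}
    for feats, method in training:
        acc = sums.setdefault(method, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[method] = counts.get(method, 0) + 1
    return {m: tuple(v / counts[m] for v in acc) for m, acc in sums.items()}

def select_aos(features, model):
    """Online phase: pick the AOS method whose centroid is nearest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda m: dist(features, model[m]))

model = centroids(TRAINING)
print(select_aos((0.85, 0.15), model))  # a rugged unseen instance -> "bandit"
```

In I-AOS-DOE itself the classifier is the FL-FALCON network and the feedback signals refine it during training; the sketch only conveys the offline-train / instance-specific-select structure.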
Acknowledgments
This research project is funded by National Research Foundation Singapore under its Corp Lab @ University scheme and Fujitsu Limited.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Teng, T.-H., Lau, H.C., Gunawan, A. (2019). Instance-Specific Selection of AOS Methods for Solving Combinatorial Optimisation Problems via Neural Networks. In: Battiti, R., Brunato, M., Kotsireas, I., Pardalos, P. (eds) Learning and Intelligent Optimization. LION 12 2018. Lecture Notes in Computer Science, vol 11353. Springer, Cham. https://doi.org/10.1007/978-3-030-05348-2_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-05347-5
Online ISBN: 978-3-030-05348-2
eBook Packages: Computer Science (R0)