Abstract
In this chapter, we build upon the foundations of Chap. 4 to develop a theoretically principled optimization algorithm in the image of an adaptive memetic automaton. For the most part, we retain the abstract interpretation of memes as computationally encoded probabilistic building-blocks of knowledge that can be learned from one task and spontaneously transmitted (for reuse) to another. Most importantly, we assume that the tasks faced by the memetic automaton arrive sequentially, such that the transfer of memes occurs in a unidirectional manner, from past tasks to the present one. A key challenge in this setting is that, given a diverse pool of memes accumulated over time, an appropriate selection and integration of (source) memes must be carried out in order to induce a search bias that suits the ongoing target task of interest. To this end, we propose a mixture modeling approach capable of adaptive online integration of all available knowledge memes, driven entirely by the data generated during the course of the search. Our proposal is particularly well-suited to black-box optimization problems where task-specific datasets may not be available for offline assessments. We conclude the chapter by illustrating how the basic idea of online mixture modeling extends to the case of computationally expensive problems as well.
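The central mechanism described above, adaptively weighting a pool of source probability models ("memes") against a target model using only the data generated during the search, can be sketched in the style of stacked density estimation: mixture weights over fixed component densities are re-estimated by EM from the current elite solutions, so memes that resemble the target task automatically gain influence. The 1-D sketch below is a hypothetical illustration under our own assumptions (the component values and function names are not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

def em_mixture_weights(densities, iters=50):
    """Estimate mixture weights over fixed component densities via EM.
    `densities` is a (K, N) array with densities[k, i] = p_k(x_i)."""
    K, _ = densities.shape
    w = np.full(K, 1.0 / K)                    # start from a uniform mixture
    for _ in range(iters):
        # E-step: posterior responsibility of component k for sample i
        resp = w[:, None] * densities          # shape (K, N)
        resp /= resp.sum(axis=0, keepdims=True)
        # M-step: new weights are the mean responsibilities
        w = resp.mean(axis=1)
    return w

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical setup: two source memes (models learned on past tasks)
# plus one model fit to the current target task's elite solutions.
sources = [(0.0, 1.0), (5.0, 0.5)]            # (mean, std) of past-task models
target = (4.8, 1.0)                           # model of current elite solutions
components = sources + [target]

# "Data generated during the course of the search": elite target solutions
elite = rng.normal(5.0, 0.4, size=200)

dens = np.vstack([gauss_pdf(elite, m, s) for m, s in components])
w = em_mixture_weights(dens)
# The source meme resembling the target task ends up dominating the
# mixture, while the irrelevant source is automatically suppressed.
```

New offspring would then be sampled from the resulting weighted mixture, so the induced search bias tracks whichever memes the online data currently supports; no offline, task-specific dataset is required.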
© 2019 Springer Nature Switzerland AG
Cite this chapter
Gupta, A., Ong, YS. (2019). Sequential Knowledge Transfer Across Problems. In: Memetic Computation. Adaptation, Learning, and Optimization, vol 21. Springer, Cham. https://doi.org/10.1007/978-3-030-02729-2_5
Print ISBN: 978-3-030-02728-5
Online ISBN: 978-3-030-02729-2
eBook Packages: Intelligent Technologies and Robotics (R0)