
Sequential Knowledge Transfer Across Problems

Chapter in the book Memetic Computation, part of the book series Adaptation, Learning, and Optimization (ALO, volume 21).

Abstract

In this chapter, we build upon the foundations of Chap. 4 to develop a theoretically principled optimization algorithm in the image of an adaptive memetic automaton. For the most part, we retain the abstract interpretation of memes as computationally encoded probabilistic building-blocks of knowledge that can be learned from one task and spontaneously transmitted (for reuse) to another. Most importantly, we assume that the tasks faced by the memetic automata are put forth sequentially, such that the transfer of memes occurs in a unidirectional manner—from the past to the present. One of the main challenges emerging in this regard is that, given a diverse pool of memes accumulated over time, an appropriate selection and integration of (source) memes must be carried out in order to induce a search bias that suits the ongoing target task of interest. To this end, we propose a mixture modeling approach capable of adaptive online integration of all available knowledge memes—driven entirely by the data generated during the course of the search. Our proposal is particularly well-suited to black-box optimization problems where task-specific datasets may not be available for offline assessments. We conclude the chapter by illustrating how the basic idea of online mixture modeling extends to the case of computationally expensive problems as well.
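The adaptive online mixture idea described above can be illustrated with a minimal sketch: candidate solutions are sampled from a mixture of "source memes" (probabilistic models carried over from past tasks) plus a target model re-fitted during the search, and the mixture coefficients are updated by an EM-style step on the promising solutions found so far, so that memes which explain good solutions gain influence. Note that the toy task, the Gaussian meme representation, and all constants below are illustrative assumptions for this sketch, not the chapter's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box target task (illustrative): minimum at (0.4, 0.4).
def target_task(pop):
    return np.sum((pop - 0.4) ** 2, axis=1)

def gaussian_pdf(x, mean, var):
    """Isotropic Gaussian density evaluated at each row of x."""
    d = x.shape[1]
    sq = np.sum((x - mean) ** 2, axis=1)
    return np.exp(-0.5 * sq / var) / (2.0 * np.pi * var) ** (d / 2)

# Two "source memes": search distributions assumed learned on past tasks.
source_memes = [(np.full(2, 0.5), 0.05),    # source biased toward the optimum
                (np.full(2, -0.5), 0.05)]   # unrelated source

target_mean, target_var = np.zeros(2), 1.0  # target model, re-fitted online
w = np.ones(3) / 3                          # mixture coefficients

for gen in range(30):
    comps = source_memes + [(target_mean, target_var)]
    # Sample the population from the current mixture of memes.
    idx = rng.choice(len(comps), size=50, p=w)
    pop = np.empty((50, 2))
    for k, (m, v) in enumerate(comps):
        mask = idx == k
        pop[mask] = rng.normal(m, np.sqrt(v), size=(mask.sum(), 2))
    fitness = target_task(pop)
    selected = pop[np.argsort(fitness)[:25]]  # truncation selection

    # Re-fit the target model on the selected (promising) solutions.
    target_mean = selected.mean(axis=0)
    target_var = selected.var() + 1e-6

    # One EM step on the selected data updates the mixture coefficients:
    # memes that explain the good solutions gain weight, purely from
    # data generated during the search (no offline datasets needed).
    comps = source_memes + [(target_mean, target_var)]
    dens = np.stack([w[k] * gaussian_pdf(selected, m, v)
                     for k, (m, v) in enumerate(comps)])
    resp = dens / (dens.sum(axis=0) + 1e-300)
    w = resp.mean(axis=1)
    w /= w.sum()

best = target_task(pop).min()
```

In this run the coefficient of the helpful source meme comes to dominate that of the unrelated one, which is the intended behavior: a meme that cannot explain the good solutions is automatically suppressed, guarding against negative transfer without any prior similarity assessment between tasks.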



Author information

Corresponding author: Abhishek Gupta.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Gupta, A., & Ong, Y. S. (2019). Sequential Knowledge Transfer Across Problems. In: Memetic Computation. Adaptation, Learning, and Optimization, vol 21. Springer, Cham. https://doi.org/10.1007/978-3-030-02729-2_5
