Average Convergence Rate of Evolutionary Algorithms II: Continuous Optimisation

Conference paper in Artificial Intelligence Algorithms and Applications (ISICA 2019).

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1205).

Abstract

A previous theoretical study of the average convergence rate addressed discrete optimisation. This paper extends that analysis to continuous optimisation. First, strategies for generating new solutions are classified into two categories: landscape-invariant and landscape-adaptive. It is then proven that the average convergence rate of evolutionary algorithms using positive-adaptive generators is asymptotically positive, whereas that of algorithms using landscape-invariant or zero-adaptive generators asymptotically converges to zero. A case study validates the applicability of the theoretical results. Beyond the theoretical study, numerical simulations demonstrate the feasibility of the average convergence rate in practical applications. For the case of an unknown optimum, an alternative definition of the average convergence rate is also considered.
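As a minimal numerical sketch of how the average convergence rate can be estimated in practice: the ACR definition used below, \(1 - (e_t/e_0)^{1/t}\) with \(e_t\) the approximation error at generation \(t\), follows [15]; the (1+1) evolution strategy on the sphere function, the fixed step size, and all parameter values are illustrative assumptions, not the experimental setup of this paper.

```python
import random

# Hedged sketch: estimating the average convergence rate (ACR) of a simple
# (1+1) evolution strategy on the sphere function f(x) = sum x_i^2, whose
# optimum is f* = 0. The ACR used here, 1 - (e_t / e_0)**(1/t) with
# e_t = f(x_t) - f*, follows He and Lin (2016) [15]; step size, dimension,
# and generation count are illustrative assumptions.

def sphere(x):
    return sum(xi * xi for xi in x)

def one_plus_one_es(dim=10, sigma=0.1, generations=200, seed=1):
    """Run a (1+1)-ES with fixed-step Gaussian mutation; return error history."""
    rng = random.Random(seed)
    x = [1.0] * dim                       # initial solution
    errors = [sphere(x)]                  # e_0 (f* = 0 for the sphere)
    for _ in range(generations):
        # Gaussian mutation with fixed step size sigma (landscape-invariant)
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        if sphere(y) <= sphere(x):        # elitist selection
            x = y
        errors.append(sphere(x))
    return errors

errors = one_plus_one_es()
t = len(errors) - 1
acr = 1.0 - (errors[-1] / errors[0]) ** (1.0 / t)
print(f"error after {t} generations: {errors[-1]:.3e}, ACR = {acr:.4f}")
```

Because selection is elitist, the error sequence is non-increasing, so the estimated ACR is non-negative; with a landscape-adaptive (e.g. shrinking) step size it would stay bounded away from zero as \(t\) grows.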

The first author was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61303028; the second author was supported by EPSRC under Grant No. EP/I009809/1.


Notes

  1. We do not add a convergence order because, for discrete optimisation, \(e_t\) converges linearly to 0 according to [15, Theorem 1]. For continuous optimisation, a conjecture is that \(e_t\) also converges linearly unless gradient information is used in the search.
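To make the note above concrete (a short derivation using the average convergence rate definition from [15], which is an assumption here): linear, i.e. geometric, convergence of the error implies a constant, positive average convergence rate.

```latex
% Assume the error decays geometrically: e_t = e_0 c^t with 0 < c < 1.
% Then the average convergence rate is the constant 1 - c:
\[
  \mathrm{ACR}(t) = 1 - \left(\frac{e_t}{e_0}\right)^{1/t}
                  = 1 - \left(c^{t}\right)^{1/t}
                  = 1 - c > 0.
\]
```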

References

  1. Agapie, A., Agapie, M., Rudolph, G., Zbaganu, G.: Convergence of evolutionary algorithms on the \(n\)-dimensional continuous space. IEEE Trans. Cybern. 43(5), 1462–1472 (2013)
  2. Akimoto, Y., Auger, A., Hansen, N.: Quality gain analysis of the weighted recombination evolution strategy on general convex quadratic functions. In: Proceedings of the 14th ACM/SIGEVO Conference on Foundations of Genetic Algorithms, pp. 111–126. ACM (2017)
  3. Auger, A.: Convergence results for the (1, \(\lambda\))-SA-ES using the theory of \(\phi\)-irreducible Markov chains. Theoret. Comput. Sci. 334(1–3), 35–69 (2005)
  4. Auger, A., Hansen, N.: Reconsidering the progress rate theory for evolution strategies in finite dimensions. In: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, pp. 445–452. ACM (2006)
  5. Auger, A., Hansen, N.: Theory of evolution strategies: a new perspective. In: Theory of Randomized Search Heuristics: Foundations and Recent Developments, pp. 289–325. World Scientific (2011)
  6. Auger, A., Hansen, N.: Linear convergence of comparison-based step-size adaptive randomized search via stability of Markov chains. SIAM J. Optim. 26(3), 1589–1624 (2016)
  7. Beyer, H.G.: The Theory of Evolution Strategies. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-662-04378-3
  8. Beyer, H.G., Hellwig, M.: The dynamics of cumulative step size adaptation on the ellipsoid model. Evol. Comput. 24(1), 25–57 (2016)
  9. Beyer, H.G., Melkozerov, A.: The dynamics of self-adaptive multirecombinant evolution strategies on the general ellipsoid model. IEEE Trans. Evol. Comput. 18(5), 764–778 (2014)
  10. Chen, Y., Zou, X., He, J.: Drift conditions for estimating the first hitting times of evolutionary algorithm. Int. J. Comput. Math. 88(1), 37–50 (2011)
  11. Ding, L., Kang, L.: Convergence rates for a class of evolutionary algorithms with elitist strategy. Acta Mathematica Scientia 21(4), 531–540 (2001)
  12. Doob, J.L.: Stochastic Processes. Wiley, New York (1953)
  13. Droste, S., Jansen, T., Wegener, I.: On the analysis of the (1+1) evolutionary algorithm. Theoret. Comput. Sci. 276(1–2), 51–81 (2002)
  14. He, J., Kang, L.: On the convergence rate of genetic algorithms. Theoret. Comput. Sci. 229(1–2), 23–39 (1999)
  15. He, J., Lin, G.: Average convergence rate of evolutionary algorithms. IEEE Trans. Evol. Comput. 20(2), 316–321 (2016)
  16. He, J., Yao, X.: Drift analysis and average time complexity of evolutionary algorithms. Artif. Intell. 127(1), 57–85 (2001)
  17. He, J., Yu, X.: Conditions for the convergence of evolutionary algorithms. J. Syst. Architect. 47(7), 601–612 (2001)
  18. He, J., Zhou, Y., Lin, G.: An initial error analysis for evolutionary algorithms. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 317–318. ACM (2017)
  19. Huang, H., Xu, W., Zhang, Y., Lin, Z., Hao, Z.: Runtime analysis for continuous (1+1) evolutionary algorithm based on average gain model. SCIENTIA SINICA Informationis 44(6), 811–824 (2014)
  20. Jebalia, M., Auger, A., Hansen, N.: Log-linear convergence and divergence of the scale-invariant (1+1)-ES in noisy environments. Algorithmica 59(3), 425–460 (2011)
  21. Meyn, S., Tweedie, R.: Markov Chains and Stochastic Stability. Springer, London (1993). https://doi.org/10.1007/978-1-4471-3267-7
  22. Rudolph, G.: Local convergence rates of simple evolutionary algorithms with Cauchy mutations. IEEE Trans. Evol. Comput. 1(4), 249–258 (1997)
  23. Rudolph, G., et al.: Convergence rates of evolutionary algorithms for a class of convex objective functions. Control Cybern. 26, 375–390 (1997)
  24. Varga, R.: Matrix Iterative Analysis. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-05156-2

Author information

Corresponding author: Yu Chen

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Chen, Y., He, J. (2020). Average Convergence Rate of Evolutionary Algorithms II: Continuous Optimisation. In: Li, K., Li, W., Wang, H., Liu, Y. (eds) Artificial Intelligence Algorithms and Applications. ISICA 2019. Communications in Computer and Information Science, vol 1205. Springer, Singapore. https://doi.org/10.1007/978-981-15-5577-0_3

  • DOI: https://doi.org/10.1007/978-981-15-5577-0_3

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-5576-3

  • Online ISBN: 978-981-15-5577-0

  • eBook Packages: Computer Science (R0)
