
On the Codimension of the Set of Optima: Large Scale Optimisation with Few Relevant Variables

  • Conference paper
Artificial Evolution (EA 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9554)


Abstract

The complexity of continuous optimisation by comparison-based algorithms has been studied in several recent papers. Roughly speaking, these papers conclude that, for the sphere function, a precision \(\epsilon \) can be reached at cost \(\varTheta (n\log (1/\epsilon ))\) in dimension n, up to polylogarithmic factors. Compared to other (non-comparison-based) algorithms, this rate is not excellent; on the other hand, comparison-based algorithms are classically considered to offer robustness advantages, as well as scalability on parallel machines and simplicity. In the present paper we show another advantage, namely resilience to useless variables, thanks to a complexity bound \(\varTheta (m\log (1/\epsilon ))\), where m is the codimension of the set of optima, possibly \(m \ll n\). In addition, experiments show that some evolutionary algorithms have a negligible computational overhead even in high dimension, making them practical for huge problems with many useless variables.
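To make the \(\varTheta (m\log (1/\epsilon ))\) claim concrete, below is a minimal sketch (our illustration, not the paper's experimental setup): a comparison-based (1+1)-ES with the classical one-fifth success rule, applied to a sphere function that depends only on the first m of n coordinates, so that the set of optima \(\{x : x_1 = \dots = x_m = 0\}\) has codimension m. The names sphere_codim_m and one_plus_one_es are ours, and the step-size constants are conventional choices rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_codim_m(x, m):
    """Sphere-like objective depending only on the first m coordinates;
    the remaining n - m variables are useless (codimension of optima = m)."""
    return float(np.sum(x[:m] ** 2))

def one_plus_one_es(f, x0, sigma=1.0, budget=5_000):
    """Comparison-based (1+1)-ES: only the comparison f(y) <= f(x) is used,
    never the objective values themselves, with a one-fifth success rule."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(budget):
        y = x + sigma * rng.standard_normal(x.shape)  # isotropic Gaussian mutation
        fy = f(y)
        if fy <= fx:                 # success: accept offspring, enlarge step size
            x, fx = y, fy
            sigma *= 1.5
        else:                        # failure: shrink step size; equilibrium at
            sigma /= 1.5 ** 0.25     # a 1/5 success rate (Rechenberg's rule)
    return x, fx

n, m = 1_000, 5                      # large nominal dimension, few relevant variables
x0 = rng.standard_normal(n)
_, fx = one_plus_one_es(lambda x: sphere_codim_m(x, m), x0)
print(f"n={n}, m={m}, residual objective {fx:.3e}")
```

Because mutations along the n - m useless coordinates never change the outcome of the comparison, the dynamics are those of an m-dimensional sphere, which is the intuition behind a cost scaling in m rather than n.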



Author information


Correspondence to Vincent Berthier.



Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Berthier, V., Teytaud, O. (2016). On the Codimension of the Set of Optima: Large Scale Optimisation with Few Relevant Variables. In: Bonnevay, S., Legrand, P., Monmarché, N., Lutton, E., Schoenauer, M. (eds) Artificial Evolution. EA 2015. Lecture Notes in Computer Science, vol. 9554. Springer, Cham. https://doi.org/10.1007/978-3-319-31471-6_18


  • DOI: https://doi.org/10.1007/978-3-319-31471-6_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-31470-9

  • Online ISBN: 978-3-319-31471-6

  • eBook Packages: Computer Science (R0)
