An Introduction to Dynamical Search

  • Luc Pronzato
  • Henry P. Wynn
  • Anatoly A. Zhigljavsky
Part of the Nonconvex Optimization and Its Applications book series (NOIA, volume 62)


A number of classical optimisation algorithms which appear to converge smoothly behave in a haphazard fashion when examined at a local, or second-order, level. Using different renormalisation procedures we link these algorithms to dynamical systems, and then study these systems to obtain additional information about the rates of convergence of the original algorithms. These convergence rates are expressed in terms of the Lyapunov exponents and various entropies of the dynamical systems. Working in a dynamical-system environment has suggested new types of algorithms and improvements over a number of classical ones. One result of the approach is that algorithms classically considered optimal are in fact optimal only in the worst case and, ergodically, the worst-case events have measure zero. We are thus often able to construct algorithms with improved ergodic performance. As the main application areas we consider line search, ellipsoidal algorithms for linear and nonlinear programming, and gradient algorithms.
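As a point of reference for the worst-case optimality claim, the sketch below implements the classical golden-section line search (the baseline algorithm the abstract refers to, not code from the chapter). Each function evaluation shrinks the bracketing interval by the factor 1/φ ≈ 0.618, so the worst-case convergence rate is log φ per evaluation; the chapter's ergodic analysis shows this rate can typically be beaten on average. The test function and interval are illustrative choices.

```python
import math

def golden_section_search(f, a, b, n_evals=40):
    """Classical golden-section search for the minimum of a unimodal f on [a, b].

    After the two initial evaluations, each further evaluation multiplies the
    length of the bracketing interval by 1/phi ~= 0.618, the worst-case rate.
    """
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, satisfies x**2 = 1 - x
    x1 = b - invphi * (b - a)               # interior points in golden ratio
    x2 = a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    for _ in range(n_evals - 2):
        if f1 < f2:                         # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)       # one new evaluation per step
            f1 = f(x1)
        else:                               # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
    return a, b                             # final bracket for the minimiser

# Illustrative unimodal objective with minimiser at x = 0.3:
a, b = golden_section_search(lambda x: (x - 0.3) ** 2, 0.0, 1.0, n_evals=40)
```

The key invariant is that 1/φ solves x² = 1 − x, so after each interval reduction the surviving interior point again sits in golden-ratio position and only one new evaluation is needed per step.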


Keywords: Lyapunov Exponent, Golden Section, Asymptotic Rate, Invariant Density, Renyi Entropy




  1. Akaike, H. (1959). On a successive transformation of probability distribution and its application to the analysis of the optimum gradient method. Ann. Inst. Statist. Math. Tokyo, 11: 1–16.
  2. Boender, C.G.E. and Romeijn, H.E. (1995). Stochastic methods. In Horst, R. and Pardalos, P.M., editors, Handbook of Global Optimization, pages 829–869. Kluwer, Dordrecht.
  3. Du, D.Z. and Hwang, F.K. (2000). Combinatorial Group Testing and its Applications. World Scientific, Singapore.
  4. Forsythe, G. (1968). On the asymptotic directions of the s-dimensional optimum gradient method. Numerische Mathematik, 11: 57–76.
  5. Hansen, P. and Jaumard, B. (1995). Lipschitz optimization. In Horst, R. and Pardalos, P.M., editors, Handbook of Global Optimization I, pages 407–493. Kluwer, Dordrecht.
  6. Kiefer, J. (1957). Optimum sequential search and approximation methods under minimum regularity assumptions. J. Soc. Indust. Appl. Math., 5: 105–136.
  7. Luenberger, D. (1973). Introduction to Linear and Nonlinear Programming. Addison-Wesley, Reading, Massachusetts.
  8. Pronzato, L., Wynn, H.P., and Zhigljavsky, A.A. (1997). Stochastic analysis of convergence via dynamic representation for a class of line-search algorithms. Combinatorics, Probability and Computing, 6: 205–229.
  9. Pronzato, L., Wynn, H.P., and Zhigljavsky, A.A. (1997). Using Renyi entropies in search problems. Lectures in Applied Mathematics, 33: 253–268.
  10. Pronzato, L., Wynn, H.P., and Zhigljavsky, A.A. (1998). A generalised Golden-Section algorithm for line-search. IMA Journal on Math. Control and Information, 15: 185–214.
  11. Pronzato, L., Wynn, H.P., and Zhigljavsky, A.A. (1999). Finite sample behaviour of an ergodically fast line-search algorithm. Computational Optimisation and Applications, 14: 75–86.
  12. Pronzato, L., Wynn, H.P., and Zhigljavsky, A.A. (2000). Dynamical Search. Chapman & Hall/CRC, Boca Raton.
  13. Pronzato, L., Wynn, H.P., and Zhigljavsky, A.A. (2001a). Analysis of performance of symmetric second-order line search algorithms through continued fractions. IMA Journal on Math. Control and Information, 18: 281–296.
  14. Pronzato, L., Wynn, H.P., and Zhigljavsky, A.A. (2001b). Renormalised steepest descent in Hilbert space converges to a two-point attractor. Acta Applicandae Mathematicae, 40. (to appear).
  15. Wynn, H.P. and Zhigljavsky, A.A. (1993). Chaotic behaviour of search algorithms. Acta Applicandae Mathematicae, 32: 123–156.
  16. Zhigljavsky, A.A. (1991). Theory of Global Random Search. Kluwer, Dordrecht.
  17. Zhigljavsky, A.A. and Chekmasov, M.V. (1996). Comparison of independent, stratified and random covering sample schemes in optimization problems. Math. Comput. Modelling, 23: 97–110.

Copyright information

© Springer Science+Business Media Dordrecht 2002

Authors and Affiliations

  • Luc Pronzato (1)
  • Henry P. Wynn (2)
  • Anatoly A. Zhigljavsky (3)
  1. Laboratoire I3S, CNRS/Université de Nice-Sophia Antipolis, France
  2. Department of Statistics, University of Warwick, UK
  3. School of Mathematics, Cardiff University, UK
