
Optimizing Without Constraint

  • Éric Walter

Abstract

Here, the decision vector is just assumed to belong to \({\mathbb {R}}^n\). There is no equality constraint, and inequality constraints, if any, are assumed not to be saturated at any minimizer, so they may as well not exist. The first- and second-order theoretical optimality conditions are recalled and used to derive the linear least squares estimator. The reason why the nice mathematical formula thus obtained should never be used in practice is explained, and alternative, robust methods are advocated. Iterative methods that can be used when the linear least squares method does not apply are then described. A bad way of combining line searches is denounced, and a much better strategy is presented. The principles, advantages and limitations of the main methods based on a Taylor expansion of the cost function are described. These include quasi-Newton and conjugate-gradient methods. One very popular method able to deal with non-differentiable cost functions is also described. Additional topics covered include robust optimization in the presence of uncertainty, global optimization, and optimization on a budget.
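The warning about the closed-form least squares estimator can be illustrated numerically. The sketch below is not taken from the chapter; it assumes NumPy, and the test problem (a nearly collinear Vandermonde matrix), its size and the variable names are only illustrative. It contrasts the normal-equations formula \(\widehat{x}=(A^\mathsf{T}A)^{-1}A^\mathsf{T}y\), whose effective condition number is that of \(A\) squared, with an SVD-based solver, one of the standard robust alternatives.

    import numpy as np

    # Illustrative ill-conditioned linear least squares problem (assumed setup).
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 50)
    A = np.vander(t, 12, increasing=True)    # nearly collinear columns
    x_true = rng.standard_normal(12)
    y = A @ x_true                           # noise-free data, so x_true is the minimizer

    # "Nice" closed-form estimator via the normal equations: x = (A^T A)^{-1} A^T y.
    # Forming A^T A squares the condition number and amplifies rounding errors.
    x_normal = np.linalg.inv(A.T @ A) @ (A.T @ y)

    # Robust alternative: an orthogonal-factorization (SVD-based) solver, here numpy's lstsq.
    x_svd, *_ = np.linalg.lstsq(A, y, rcond=None)

    print("cond(A)                :", np.linalg.cond(A))
    print("error with normal eqs  :", np.linalg.norm(x_normal - x_true))
    print("error with SVD (lstsq) :", np.linalg.norm(x_svd - x_true))

On such a problem the normal-equations solution typically loses several more digits of accuracy than the SVD-based one, which is one reason why the explicit formula should not be used in practice.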


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Laboratoire des Signaux et Systèmes, CNRS-SUPÉLEC-Université Paris-Sud, Gif-sur-Yvette, France
