
Optimizing Under Constraints

  • Éric Walter
Chapter

Abstract

Many optimization problems become meaningless unless constraints are taken into account, and methods for doing so in practice are presented. The theoretical optimality conditions of the unconstrained case are no longer valid when there are constraints; they are replaced by the KKT conditions, which are derived. The use of penalty functions to transform constraint violations into cost increases is explained. The principles behind Dantzig’s simplex method for linear programming (a class of constrained optimization problems with tremendous economic importance) are described and illustrated on a simple example. What has been dubbed the interior-point revolution is then recounted, and its application to convex optimization is described. One of the most spectacular achievements of the interior-point approach has been to show that linear programming can be carried out with algorithms for convex optimization that have a much smaller worst-case complexity than Dantzig’s simplex method.
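
For reference, the KKT conditions mentioned above take the following standard form (a generic textbook statement; the chapter's own notation and derivation may differ). For the problem of minimizing \(f(\mathbf{x})\) subject to \(g_i(\mathbf{x}) \le 0\), \(i = 1, \dots, m\), and \(h_j(\mathbf{x}) = 0\), \(j = 1, \dots, p\), if \(\mathbf{x}^\star\) is a local minimizer at which a suitable constraint qualification holds, then there exist multipliers \(\mu_i \ge 0\) and \(\lambda_j\) such that

    \nabla f(\mathbf{x}^\star) + \sum_{i=1}^{m} \mu_i \, \nabla g_i(\mathbf{x}^\star)
        + \sum_{j=1}^{p} \lambda_j \, \nabla h_j(\mathbf{x}^\star) = \mathbf{0},
    \qquad \mu_i \, g_i(\mathbf{x}^\star) = 0, \quad i = 1, \dots, m,

with \(\mathbf{x}^\star\) feasible. The conditions \(\mu_i \, g_i(\mathbf{x}^\star) = 0\) express complementary slackness: a multiplier can be nonzero only for an inequality constraint that is active at \(\mathbf{x}^\star\).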
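To make the penalty idea concrete, here is a minimal sketch, assuming a quadratic penalty and SciPy's general-purpose minimizer; the toy problem and all names are illustrative, not taken from the chapter.

    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2
    # subject to the inequality constraint g(x) = x1 + x2 - 2 <= 0.
    def f(x):
        return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

    def g(x):
        return x[0] + x[1] - 2.0

    def penalized_cost(x, rho):
        # Any constraint violation max(0, g(x)) is turned into a cost increase.
        return f(x) + rho * max(0.0, g(x)) ** 2

    x = np.zeros(2)
    for rho in [1.0, 10.0, 100.0, 1000.0]:
        # Solve a sequence of unconstrained problems with an increasing
        # penalty weight, warm-starting each from the previous solution.
        x = minimize(lambda z: penalized_cost(z, rho), x, method="Nelder-Mead").x
    print(x)  # tends to the constrained minimizer (1.5, 0.5) as rho grows

With a quadratic penalty such as this one, the constrained minimizer is recovered only in the limit of a large penalty weight, which is why the weight is increased gradually; an exact penalty (e.g., an l1 penalty on the violation) can recover it for a finite weight, at the price of non-differentiability.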
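The closing remark can be illustrated numerically: a linear program can be handed to an interior-point solver just as well as to a simplex one. Below is a hedged sketch using SciPy's linprog, whose "highs-ipm" option selects the HiGHS interior-point method and "highs-ds" a dual simplex, making the two approaches easy to compare on the same problem; the toy data are invented for illustration.

    import numpy as np
    from scipy.optimize import linprog

    # Toy linear program:  maximize x1 + 2*x2
    # subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, x >= 0,
    # written in linprog's minimization form with costs c = -(1, 2).
    c = np.array([-1.0, -2.0])
    A_ub = np.array([[1.0, 1.0],
                     [1.0, 3.0]])
    b_ub = np.array([4.0, 6.0])

    # "highs-ipm" uses an interior-point method; swapping in "highs-ds"
    # solves the same problem with a dual simplex method instead.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs-ipm")
    print(res.x, res.fun)  # optimal point (3, 1) with objective value -5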


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Laboratoire des Signaux et Systèmes, CNRS–SUPÉLEC–Université Paris-Sud, Gif-sur-Yvette, France
