
Unconstrained Optimization Problems

  • Wilhelm Forst
  • Dieter Hoffmann
Chapter
Part of the Springer Undergraduate Texts in Mathematics and Technology book series (SUMAT)

Abstract

Unconstrained optimization methods seek a local minimum (or a local maximum) in the absence of restrictions, that is,
$$f(x) \longrightarrow \min \quad (x \in D)$$
for a real-valued function f: D → ℝ defined on a nonempty subset D of ℝⁿ for a given n ∈ ℕ. Unconstrained optimization involves the theoretical study of optimality criteria and, above all, algorithmic methods for a wide variety of problems. In Section 2.0 we have repeated, as essential basics, the well-known (first- and second-order) optimality conditions for smooth real-valued functions. Often constraints complicate a given task, but in some cases they simplify it. Even though most optimization problems in ‘real life’ have restrictions to be satisfied, the study of unconstrained problems is useful for two reasons: firstly, they occur directly in some applications and are therefore important in their own right; secondly, unconstrained problems often arise from transformations of constrained optimization problems. Some methods, for example, solve a general problem by converting it into a sequence of unconstrained problems.
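For orientation, the first- and second-order conditions referred to here are the classical ones for a twice continuously differentiable f at an interior point x* of D (stated here in standard form, not quoted verbatim from the chapter):
$$\nabla f(x^{*}) = 0 \quad\text{and}\quad \nabla^{2} f(x^{*}) \succeq 0 \qquad \text{(necessary at a local minimizer)},$$
$$\nabla f(x^{*}) = 0 \quad\text{and}\quad \nabla^{2} f(x^{*}) \succ 0 \qquad \text{(sufficient for a strict local minimizer)}.$$
One classical instance of the transformation mentioned above is the quadratic penalty approach, in which a problem with constraints g_i(x) ≤ 0 is replaced by a sequence of unconstrained problems
$$\min_{x \in \mathbb{R}^{n}} \; f(x) + \mu_k \sum_{i} \bigl(\max\{0,\, g_i(x)\}\bigr)^{2}, \qquad \mu_k \uparrow \infty;$$
the penalty term and the parameter sequence μ_k are given here only as an illustration, and the chapter's own transformations may differ.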

Keywords

Line Search · Iteration Step · Conjugate Gradient Method · Descent Method · Descent Direction
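As a rough, self-contained illustration of the line search and descent-direction terminology in the keywords, the following sketch applies steepest descent with a backtracking (Armijo) line search to a small strictly convex quadratic. The matrix A, the vector b, and all parameter values are arbitrary choices for this example, not data or methods taken from the chapter.

```python
import numpy as np

# Illustrative strictly convex quadratic f(x) = 1/2 x^T A x - b^T x (assumed example data)
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

def steepest_descent(x0, tol=1e-8, max_iter=1000):
    """Steepest descent with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # first-order stopping criterion
            return x
        d = -g                           # descent direction: negative gradient
        t, beta, sigma = 1.0, 0.5, 1e-4  # Armijo parameters (illustrative values)
        while f(x + t * d) > f(x) + sigma * t * (g @ d):
            t *= beta                    # shrink the step until sufficient decrease holds
        x = x + t * d
    return x

x_star = steepest_descent(x0=[0.0, 0.0])
print(x_star, np.linalg.solve(A, b))     # the iterate should match the exact minimizer A^{-1} b
```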



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Fak. Mathematik und Wirtschaftswissenschaften, Inst. Numerische Mathematik, Universität Ulm, Ulm, Germany
  2. FB Mathematik und Statistik, Universität Konstanz, Konstanz, Germany
