
Necessary Conditions for Optimality

  • John L. Troutman
Part of the Undergraduate Texts in Mathematics book series (UTM)

Abstract

What conditions are necessary for optimal performance in our problems? In Chapter 10 we saw that if a control problem can be formulated on a fixed interval and its defining functions are suitably convex, then the methods of variational calculus can be adapted to suggest sufficient conditions for an optimal control. In particular, the minimum principle of §10.3 and §10.4 can guarantee optimality of a solution to the problem. In §11.1 we will discover that this principle is necessary for optimality whether or not convexity is present, even when the underlying interval is not fixed (Theorem 11.10). Then in §11.2, we examine the simple but important class of linear time-optimal problems for which the time interval itself is being minimized and the adjoint equation (a necessary condition) can be used to suggest sufficient conditions for optimality. Finally, in §11.3, we extend our control-theory approach to more general problems involving Lagrangian inequality constraints, and in Theorem 11.20 we obtain a Lagrangian multiplier rule of the Kuhn-Tucker type.
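For orientation, the minimum principle mentioned above can be sketched in generic control-theory notation; the symbols f, g, H, p, and U below are standard placeholders rather than the chapter's own definitions, and the precise statement and hypotheses are those of Theorem 11.10. For dynamics \dot{x}(t) = f(t, x(t), u(t)) with running cost g and the Hamiltonian-type function

  H(t, x, u, p) = g(t, x, u) + p \cdot f(t, x, u),

an optimal pair (x^*, u^*) admits an adjoint trajectory p satisfying

  \dot{p}(t) = -\frac{\partial H}{\partial x}\bigl(t, x^*(t), u^*(t), p(t)\bigr)   (adjoint equation)

together with the pointwise minimum condition

  H\bigl(t, x^*(t), u^*(t), p(t)\bigr) = \min_{u \in U} H\bigl(t, x^*(t), u, p(t)\bigr).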

Keywords

Optimal Trajectory · Fixed Interval · Maximal Rank · Adjoint Equation · Minimum Principle


Copyright information

© Springer Science+Business Media New York 1996

Authors and Affiliations

  • John L. Troutman
  1. Department of Mathematics, Syracuse University, Syracuse, USA
