Necessary Conditions for Optimality
What conditions are necessary for optimal performance in our problems? In Chapter 10 we saw that if a control problem can be formulated on a fixed interval and its defining functions are suitably convex, then the methods of variational calculus can be adapted to suggest sufficient conditions for an optimal control. In particular, the minimum principle of §10.3 and §10.4 can guarantee optimality of a solution to the problem. In §11.1 we will discover that this principle is necessary for optimality whether or not convexity is present, even when the underlying interval is not fixed (Theorem 11.10). Then in §11.2, we examine the simple but important class of linear time-optimal problems for which the time interval itself is being minimized and the adjoint equation (a necessary condition) can be used to suggest sufficient conditions for optimality. Finally, in §11.3, we extend our control-theory approach to more general problems involving Lagrangian inequality constraints, and in Theorem 11.20 we obtain a Lagrangian multiplier rule of the Kuhn-Tucker type.
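For orientation, the minimum principle referred to above is usually stated in the following standard (Pontryagin) form; the notation here ($f$, $f^0$, $H$, $p$) is generic and may differ from the chapter's own symbols.

```latex
% State dynamics and cost (standard form; notation is illustrative):
%   \dot{x}(t) = f(t, x(t), u(t)),  minimize  \int_{t_0}^{t_1} f^0(t, x, u)\, dt.
%
% Hamiltonian:
H(t, x, u, p) \;=\; p^{\mathsf T} f(t, x, u) \;+\; f^0(t, x, u).

% Adjoint equation (the necessary condition emphasized in §11.2):
\dot{p}(t) \;=\; -\,\frac{\partial H}{\partial x}\bigl(t, x^*(t), u^*(t), p(t)\bigr).

% Minimum principle: along an optimal pair (x^*, u^*), the control
% minimizes the Hamiltonian pointwise over the admissible set U:
H\bigl(t, x^*(t), u^*(t), p(t)\bigr) \;\le\; H\bigl(t, x^*(t), v, p(t)\bigr)
\qquad \text{for all } v \in U .
```

Chapter 10 showed that, under convexity, these conditions are sufficient; the point of §11.1 is that they remain necessary without convexity and on variable intervals.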
Keywords: Optimal Trajectory · Fixed Interval · Maximal Rank · Adjoint Equation · Minimum Principle