Abstract
What conditions are necessary for optimal performance in our problems? In Chapter 10 we saw that if a control problem can be formulated on a fixed interval and its defining functions are suitably convex, then the methods of variational calculus can be adapted to suggest sufficient conditions for an optimal control. In particular, the minimum principle of §10.3 and §10.4 can guarantee optimality of a solution to the problem. In §11.1 we will discover that this principle is necessary for optimality whether or not convexity is present, even when the underlying interval is not fixed (Theorem 11.10). Then in §11.2, we examine the simple but important class of linear time-optimal problems for which the time interval itself is being minimized and the adjoint equation (a necessary condition) can be used to suggest sufficient conditions for optimality. Finally, in §11.3, we extend our control-theory approach to more general problems involving Lagrangian inequality constraints, and in Theorem 11.20 we obtain a Lagrangian multiplier rule of the Kuhn-Tucker type.
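As a sketch of the setting the abstract summarizes (the notation here is the standard Pontryagin formulation, not quoted from the chapter), a control problem asks for a control $u$ minimizing a cost along trajectories of a state equation, and the minimum principle says that an optimal control must minimize the associated Hamiltonian pointwise:

```latex
% State equation and cost (generic form, assumed notation):
%   minimize  J(u) = \int_{t_0}^{t_1} f_0(t, x(t), u(t))\, dt
%   subject to  \dot{x}(t) = f(t, x(t), u(t)),  u(t) \in U.
%
% Hamiltonian with adjoint (costate) variable p:
H(t, x, u, p) = p \cdot f(t, x, u) + \lambda_0\, f_0(t, x, u),
\qquad \dot{p}(t) = -\,\frac{\partial H}{\partial x}\big(t, x^*(t), u^*(t), p(t)\big).
%
% Minimum principle: along an optimal pair (x^*, u^*),
H\big(t, x^*(t), u^*(t), p(t)\big) = \min_{u \in U} H\big(t, x^*(t), u, p(t)\big)
\quad \text{for a.e. } t \in [t_0, t_1].
```

In the linear time-optimal problems of §11.2, $f$ is linear in $x$ and $u$ and the cost is the elapsed time itself; there the adjoint equation typically forces bang-bang controls taking values on the boundary of $U$.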
© 1996 Springer Science+Business Media New York
Cite this chapter
Troutman, J.L. (1996). Necessary Conditions for Optimality. In: Variational Calculus and Optimal Control. Undergraduate Texts in Mathematics. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-0737-5_12
Print ISBN: 978-1-4612-6887-1
Online ISBN: 978-1-4612-0737-5