Abstract
In the previous chapter, the cost functional was assumed to be differentiable with respect to each control coordinate so that classical variational methods could be applied, and the terminal time was assumed to be fixed to simplify the optimal control problem. In this chapter, we drop these restrictive and undesirable assumptions. To handle the more general optimal control problem, we introduce two commonly used methods: the method of dynamic programming initiated by Bellman, and the minimum principle of Pontryagin.
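The dynamic-programming idea the abstract alludes to can be illustrated on the finite-horizon discrete-time linear-quadratic problem: Bellman's principle reduces the minimization to a backward recursion on the cost-to-go matrices (a Riccati difference equation). The function name and the specific discrete-time setting below are illustrative assumptions, not taken from the chapter itself.

```python
import numpy as np

def lqr_backward(A, B, Q, R, Qf, N):
    """Finite-horizon discrete-time LQR via Bellman's backward recursion.

    Minimizes sum_k (x_k' Q x_k + u_k' R u_k) + x_N' Qf x_N subject to
    x_{k+1} = A x_k + B u_k.  Returns the stage feedback gains and the
    cost-to-go matrix at the initial time.  (Illustrative sketch, not the
    book's notation.)
    """
    P = Qf.copy()                       # cost-to-go at the terminal time
    gains = []
    for _ in range(N):
        # Optimal gain at this stage: K = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati recursion for the cost-to-go one stage earlier
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()                     # gains[k] applies at stage k
    return gains, P

# Scalar example: A = B = Q = R = Qf = 1.  The cost-to-go converges to
# the positive root of p^2 - p - 1 = 0, the golden ratio.
gains, P = lqr_backward(np.eye(1), np.eye(1), np.eye(1),
                        np.eye(1), np.eye(1), 200)
```

The optimal control at stage k is then u_k = -gains[k] @ x_k; the backward sweep is the discrete counterpart of the Hamilton-Jacobi-Bellman approach developed in this chapter.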
© 1989 Springer-Verlag Berlin Heidelberg
Cite this chapter
Chui, C.K., Chen, G. (1989). Dynamic Programming. In: Linear Systems and Optimal Control. Springer Series in Information Sciences, vol 18. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-61312-8_8
Print ISBN: 978-3-642-64787-1
Online ISBN: 978-3-642-61312-8