Abstract
The part played by Bellman’s method of dynamic programming in the theory of optimal control is well known. Together with Pontryagin’s maximum principle, this method is an important tool for solving problems of optimal control, and it is especially well suited to computer implementation.
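To make the idea concrete, here is a minimal sketch of Bellman’s backward recursion for a finite-horizon, discrete optimal control problem. The chapter itself treats variational problems with partial derivatives; the discrete problem data below (states, controls, dynamics, costs) are illustrative assumptions, not taken from the text.

```python
def bellman_backward(states, controls, step, cost, terminal, horizon):
    """Backward dynamic programming (Bellman's recursion).

    Returns value functions V[t] and optimal feedback policies pi[t]
    for minimizing sum_t cost(x_t, u_t) + terminal(x_T) subject to
    x_{t+1} = step(x_t, u_t). All problem data here are assumptions
    for illustration only.
    """
    V = [None] * (horizon + 1)
    pi = [None] * horizon
    # Terminal condition of the recursion.
    V[horizon] = {x: terminal(x) for x in states}
    # March backward in time, minimizing over controls at each state.
    for t in range(horizon - 1, -1, -1):
        V[t], pi[t] = {}, {}
        for x in states:
            best_u, best_v = None, float("inf")
            for u in controls:
                v = cost(x, u) + V[t + 1][step(x, u)]
                if v < best_v:
                    best_u, best_v = u, v
            V[t][x], pi[t][x] = best_v, best_u
    return V, pi


# Hypothetical example: scalar system x_{t+1} = x_t + u_t on a clipped
# integer grid, quadratic stage and terminal costs.
states = list(range(-3, 4))
controls = (-1, 0, 1)
step = lambda x, u: max(-3, min(3, x + u))
cost = lambda x, u: x * x + u * u
terminal = lambda x: x * x
V, pi = bellman_backward(states, controls, step, cost, terminal, horizon=3)
```

The recursion computes the optimal cost-to-go V[t][x] for every state and time, so the optimal control is read off as a feedback law pi[t][x], which is the tabular, easily mechanized character the abstract alludes to.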
Copyright information
© 1993 Springer Science+Business Media New York
Cite this chapter
Lurie, K.A. (1993). Bellman’s Method in Variational Problems with Partial Derivatives. In: Applied Optimal Control Theory of Distributed Systems. Mathematical Concepts and Methods in Science and Engineering, vol 43. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-9262-1_8
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4757-9264-5
Online ISBN: 978-1-4757-9262-1