Encyclopedia of Systems and Control

Living Edition
Editors: John Baillieul, Tariq Samad

Optimal Control and the Dynamic Programming Principle

Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_209-1

Abstract

This entry illustrates the application of Bellman’s Dynamic Programming Principle to optimal control problems for continuous-time dynamical systems. The approach characterizes the optimal value of the cost functional, taken over all admissible trajectories from a given initial condition, as the solution of a partial differential equation called the Hamilton–Jacobi–Bellman equation. Importantly, this characterization can be used to synthesize the corresponding optimal control input as a state-feedback law.
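Since the full text is not included in this preview, the following is a minimal sketch of the standard finite-horizon formulation the abstract refers to; the symbols f, \ell, g, U, and the horizon T are generic placeholders, not notation taken from the entry itself. The state evolves as \dot{x}(s) = f(x(s), u(s)) with controls u(s) \in U, and the value function is

  V(t,x) = \inf_{u(\cdot)} \left\{ \int_t^T \ell(x(s),u(s)) \, ds + g(x(T)) \right\}, \qquad x(t) = x.

Bellman’s Dynamic Programming Principle asserts that, for every 0 < h \le T - t,

  V(t,x) = \inf_{u(\cdot)} \left\{ \int_t^{t+h} \ell(x(s),u(s)) \, ds + V(t+h, x(t+h)) \right\},

and letting h \to 0 yields the Hamilton–Jacobi–Bellman equation

  -\partial_t V(t,x) = \min_{u \in U} \left\{ \ell(x,u) + \nabla_x V(t,x) \cdot f(x,u) \right\}, \qquad V(T,x) = g(x).

Any control achieving the minimum on the right-hand side, u^*(t,x) \in \arg\min_{u \in U} \{ \ell(x,u) + \nabla_x V(t,x) \cdot f(x,u) \}, defines the optimal state-feedback law mentioned above. As a classical special case, for linear dynamics f(x,u) = Ax + Bu with quadratic cost \ell(x,u) = x^\top Q x + u^\top R u (R positive definite) and terminal cost g(x) = x^\top Q_T x, the ansatz V(t,x) = x^\top P(t) x reduces the HJB equation to the Riccati differential equation -\dot{P} = A^\top P + P A - P B R^{-1} B^\top P + Q with P(T) = Q_T, and the feedback law is u^*(t,x) = -R^{-1} B^\top P(t) x.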

Keywords

Continuous-time dynamics · Hamilton–Jacobi–Bellman equation · Optimization · Nonlinear systems · State feedback

Copyright information

© Springer-Verlag London 2014

Authors and Affiliations

Dipartimento di Matematica, SAPIENZA – Università di Roma, Rome, Italy