Optimal Control and the Dynamic Programming Principle

Abstract

This entry illustrates the application of Bellman's Dynamic Programming Principle to optimal control problems for continuous-time dynamical systems. The approach characterizes the value function, i.e., the optimal value of the cost functional over all admissible trajectories from a given initial state, as the solution of a partial differential equation called the Hamilton–Jacobi–Bellman equation. Importantly, the value function can then be used to synthesize the corresponding optimal control input as a state-feedback law.
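
As a concrete illustration, consider the infinite-horizon discounted problem, one standard formulation (an assumption here; the entry may treat other settings): minimize the cost

J(x, u) = \int_0^{\infty} e^{-\lambda t}\, \ell\bigl(y(t), u(t)\bigr)\, dt

over controls u(\cdot) taking values in a set U, subject to the dynamics \dot y(t) = f\bigl(y(t), u(t)\bigr), y(0) = x. The value function v(x) = \inf_{u} J(x, u) then formally satisfies the Hamilton–Jacobi–Bellman equation

\lambda v(x) + \sup_{a \in U} \bigl\{ -f(x,a) \cdot \nabla v(x) - \ell(x,a) \bigr\} = 0,

and, wherever v is differentiable, an optimal state feedback is obtained pointwise from the maximizer:

u^{*}(x) \in \operatorname{arg\,max}_{a \in U} \bigl\{ -f(x,a) \cdot \nabla v(x) - \ell(x,a) \bigr\}.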

Bibliography

  • Bellman R (1957) Dynamic programming. Princeton University Press, Princeton

  • Bertsekas DP (1987) Dynamic programming: deterministic and stochastic models. Prentice Hall, Englewood Cliffs

  • Bardi M, Capuzzo Dolcetta I (1997) Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations. Birkhäuser, Boston

  • Barles G (1994) Solutions de viscosité des équations de Hamilton-Jacobi [Viscosity solutions of Hamilton–Jacobi equations]. In: Mathématiques et Applications, vol 17. Springer, Paris

  • Boltyanskii VG, Gamkrelidze RV, Pontryagin LS (1956) On the theory of optimal processes (in Russian). Doklady Akademii Nauk SSSR 110:7–10

  • Fleming WH, Rishel RW (1975) Deterministic and stochastic optimal control. Springer, New York

  • Fleming WH, Soner HM (1993) Controlled Markov processes and viscosity solutions. Springer, New York

  • Howard RA (1960) Dynamic programming and Markov processes. Wiley, New York

  • Kushner HJ, Dupuis P (2001) Numerical methods for stochastic control problems in continuous time. Springer, Berlin

  • Macki J, Strauss A (1982) Introduction to optimal control theory. Springer, Berlin/Heidelberg/New York

  • Pontryagin LS, Boltyanskii VG, Gamkrelidze RV, Mishchenko EF (1961) Matematicheskaya teoriya optimal'nykh protsessov (in Russian). Fizmatgiz, Moscow. English translation: The mathematical theory of optimal processes. Interscience Publishers (John Wiley and Sons), New York, 1962

  • Ross IM (2009) A primer on Pontryagin’s principle in optimal control. Collegiate Publishers, San Francisco

Author information

Correspondence to Maurizio Falcone.

Copyright information

© 2014 Springer-Verlag London

About this entry

Cite this entry

Falcone, M. (2014). Optimal Control and the Dynamic Programming Principle. In: Baillieul, J., Samad, T. (eds) Encyclopedia of Systems and Control. Springer, London. https://doi.org/10.1007/978-1-4471-5102-9_209-1

  • DOI: https://doi.org/10.1007/978-1-4471-5102-9_209-1

  • Publisher Name: Springer, London

  • Online ISBN: 978-1-4471-5102-9

  • eBook Packages: Springer Reference Engineering; Reference Module Computer Science and Engineering
