Branch of engineering and applied mathematics dealing with the optimization of a dynamical system in continuous time. As in dynamic programming, the optimal value function satisfies an optimality condition, the Hamilton-Jacobi-Bellman (HJB) equation. In the special case of a linear time-invariant dynamical system with quadratic cost (the linear-quadratic regulator, LQR), an explicit solution for the optimal feedback control policy can be found by solving the Riccati equation.
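As a concrete illustration of the linear-quadratic case, the sketch below computes the optimal feedback gain for a double-integrator plant by solving the continuous-time algebraic Riccati equation. The plant and cost matrices are chosen for illustration only, and the example assumes SciPy's solve_continuous_are solver; it is a minimal sketch, not part of the original entry.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant: a double integrator.
# State x = [position, velocity], scalar force input u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost: integral of (x' Q x + u' R u) dt.
Q = np.diag([1.0, 1.0])
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation
#   A'P + P A - P B R^{-1} B' P + Q = 0,
# which the HJB equation reduces to for a quadratic value
# function V(x) = x' P x.
P = solve_continuous_are(A, B, Q, R)

# Optimal feedback gain: u = -K x with K = R^{-1} B' P.
K = np.linalg.solve(R, B.T @ P)
print("Riccati solution P:\n", P)
print("Optimal gain K:", K)
```

The resulting control law u = -Kx is the explicit optimal feedback policy that the Riccati equation yields in this linear-quadratic setting.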