Abstract
We turn now to nonlinear systems with nonquadratic payoffs, for which closed-form (explicit) solutions cannot be expected. However, since optimal control is an optimization problem, it is natural to seek necessary conditions of optimality satisfied by an optimal control, and we explore this question in this section. Because the optimization is dynamic, the role of time in expressing the necessary conditions is important.
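The classical form of such necessary conditions is the Pontryagin maximum principle. As a sketch only (the notation here is illustrative, not the chapter's), for a deterministic control problem on a horizon $[0,T]$:

```latex
% Dynamics and cost (illustrative notation):
%   \dot{x}(t) = f(x(t), u(t)), \quad x(0) = x_0,
%   J(u) = \int_0^T \ell(x(t), u(t))\, dt + h(x(T)).
% Define the Hamiltonian
\[
  H(x, p, u) = \ell(x, u) + p \cdot f(x, u).
\]
% If u^* is optimal with state x^*, there exists an adjoint p satisfying
\[
  \dot{p}(t) = -\frac{\partial H}{\partial x}\bigl(x^*(t), p(t), u^*(t)\bigr),
  \qquad p(T) = \frac{\partial h}{\partial x}\bigl(x^*(T)\bigr),
\]
% and, for almost every t, u^*(t) minimizes the Hamiltonian:
\[
  u^*(t) \in \operatorname*{arg\,min}_{u} \; H\bigl(x^*(t), p(t), u\bigr).
\]
```

Note the dynamic character: the adjoint equation runs backward in time from a terminal condition, while the state runs forward, which is why the role of time is central in stating the conditions.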
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Bensoussan, A. (2018). Deterministic Optimal Control. In: Estimation and Control of Dynamical Systems. Interdisciplinary Applied Mathematics, vol 48. Springer, Cham. https://doi.org/10.1007/978-3-319-75456-7_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-75455-0
Online ISBN: 978-3-319-75456-7
eBook Packages: Mathematics and Statistics (R0)