Abstract
In this chapter we consider several optimal control problems whose value function is defined and continuous on the whole space ℝ^N. This setting is suitable for problems in which no a priori constraint is imposed on the state of the control system. For all the problems considered we establish the Dynamic Programming Principle and derive from it the appropriate Hamilton-Jacobi-Bellman equation for the value function. This allows us to apply the theory of Chapter II, and some extensions of it, to prove that the value function can in fact be characterized as the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation.
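For concreteness, the prototypical problem of this kind is the infinite horizon problem with discounted running cost. The formulas below are a standard illustration written in generic notation (dynamics f, running cost ℓ, discount rate λ > 0, control set A), not a quotation from the chapter itself. The value function is

$$
v(x) \;=\; \inf_{\alpha(\cdot)} \int_0^{+\infty} \ell\big(y_x(t;\alpha),\alpha(t)\big)\, e^{-\lambda t}\, dt,
\qquad \dot y_x = f(y_x,\alpha), \quad y_x(0)=x \in \mathbb{R}^N .
$$

The Dynamic Programming Principle states that, for every $t>0$,

$$
v(x) \;=\; \inf_{\alpha(\cdot)} \left\{ \int_0^{t} \ell\big(y_x(s;\alpha),\alpha(s)\big)\, e^{-\lambda s}\, ds \;+\; e^{-\lambda t}\, v\big(y_x(t;\alpha)\big) \right\},
$$

and the associated Hamilton-Jacobi-Bellman equation, which $v$ satisfies in the viscosity sense on all of $\mathbb{R}^N$, is

$$
\lambda v(x) \;+\; \sup_{a\in A}\,\big\{ -f(x,a)\cdot Dv(x) \;-\; \ell(x,a) \big\} \;=\; 0 .
$$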
Copyright information
© 1997 Springer Science+Business Media New York
About this chapter
Cite this chapter
Bardi, M., Capuzzo-Dolcetta, I. (1997). Optimal control problems with continuous value functions: unrestricted state space. In: Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Systems & Control: Foundations & Applications. Birkhäuser, Boston, MA. https://doi.org/10.1007/978-0-8176-4755-1_3
Print ISBN: 978-0-8176-4754-4
Online ISBN: 978-0-8176-4755-1