Optimal control problems with continuous value functions: unrestricted state space
In this chapter we consider several optimal control problems whose value function is defined and continuous on the whole space ℝ^N. This setting is suitable for problems where no a priori constraint is imposed on the state of the control system. For each of the problems considered we establish the Dynamic Programming Principle and derive from it the appropriate Hamilton-Jacobi-Bellman equation for the value function. This allows us to apply the theory of Chapter II, and some extensions of it, to prove that the value function can in fact be characterized as the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation.
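For concreteness, in the classical infinite horizon discounted problem (one instance of this setting), the value function and its Hamilton-Jacobi-Bellman equation take the following standard form. This is a sketch in generic notation, not necessarily the chapter's exact statement:

```latex
% Controlled dynamics and discounted cost (standard notation; the
% chapter's symbols may differ):
%   y'(t) = f(y(t), a(t)),  y(0) = x in R^N,  a(t) in A,
%   v(x) = inf over controls a of  \int_0^\infty e^{-\lambda t} \ell(y(t), a(t)) dt.
% The Dynamic Programming Principle then yields the HJB equation
\[
  \lambda v(x) + \sup_{a \in A}\bigl\{\, -f(x,a)\cdot Dv(x) - \ell(x,a) \,\bigr\} = 0
  \qquad \text{in } \mathbb{R}^N,
\]
% of which v is the unique viscosity solution under suitable
% assumptions on f and \ell (e.g. Lipschitz continuity and boundedness).
```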
Keywords: Viscosity Solution, Pontryagin Maximum Principle, Quasivariational Inequality, Horizon Problem, Viscosity Subsolution