Optimal control problems with continuous value functions: unrestricted state space

  • Martino Bardi
  • Italo Capuzzo-Dolcetta
Part of the Systems & Control: Foundations & Applications book series (MBC)


In this chapter we consider several optimal control problems whose value function is defined and continuous on the whole space ℝᴺ. This setting is suitable for problems where no a priori constraint is imposed on the state of the control system. For all the problems considered we establish the Dynamic Programming Principle and derive from it the appropriate Hamilton-Jacobi-Bellman equation for the value function. This allows us to apply the theory of Chapter II, and some extensions of it, to prove that the value function can in fact be characterized as the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation.
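As a concrete illustration of the program described above (a standard model problem in this area, not quoted from the chapter itself), consider the infinite horizon discounted problem. Here f is the dynamics, ℓ the running cost, λ > 0 the discount rate, and A the control set; the specific symbols are the conventional ones for this class of problems.

```latex
% Value function of the infinite horizon discounted problem:
% infimum over measurable controls \alpha(\cdot) taking values in A,
% where y_x(\cdot) solves y' = f(y,\alpha), y(0) = x.
v(x) = \inf_{\alpha(\cdot)} \int_0^{+\infty}
        \ell\bigl(y_x(t),\alpha(t)\bigr)\, e^{-\lambda t}\, dt .

% The Dynamic Programming Principle then yields, at least formally,
% the Hamilton-Jacobi-Bellman equation on all of R^N:
\lambda v(x) + \sup_{a \in A}
  \bigl\{ -f(x,a)\cdot Dv(x) - \ell(x,a) \bigr\} = 0
  \quad \text{in } \mathbb{R}^N .
```

Since v is in general not differentiable, the equation is understood in the viscosity sense, and the chapter's uniqueness theory identifies v as the unique viscosity solution.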


Keywords: Viscosity Solution; Pontryagin Maximum Principle; Quasivariational Inequality; Horizon Problem; Viscosity Subsolution



Copyright information

© Springer Science+Business Media New York 1997

Authors and Affiliations

  • Martino Bardi, Dipartimento di Matematica P. ed A., Università di Padova, Padova, Italy
  • Italo Capuzzo-Dolcetta, Dipartimento di Matematica, Università di Roma “La Sapienza”, Roma, Italy
