Optimal Control of Jump-Markov Processes and Viscosity Solutions
We investigate the Bellman equation that arises in the optimal control of jump-Markov processes. This equation is a fully nonlinear integro-differential equation. We introduce the notion of viscosity solutions for it and obtain existence and uniqueness results. We also develop the connection between the optimal control problem and the Bellman equation.
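For orientation, the dynamic programming equation for such problems typically has the following schematic form; the notation below (state space $E$, control set $A$, discount rate $\beta > 0$, running cost $f$, drift $b$, and controlled jump kernel $q$) is assumed for illustration and is not taken from the paper itself:

\[
  \beta\, v(x) \;=\; \sup_{a \in A} \Big\{ f(x,a) \;+\; b(x,a) \cdot \nabla v(x)
    \;+\; \int_{E} \big[\, v(y) - v(x) \,\big]\, q(x, a, \mathrm{d}y) \Big\},
  \qquad x \in E,
\]

where $v$ denotes the value function. The nonlocal integral term, generated by the jumps of the process, is what makes the equation integro-differential rather than a purely local PDE, and the supremum over controls makes it fully nonlinear.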