Abstract
In Chapters 10 to 15, we showed the convergence of properly designed numerical approximations for a wide range of stochastic and deterministic optimal control problems. The proofs were based on demonstrating the convergence of a sequence of controlled Markov chains to a controlled process (diffusion, jump diffusion, etc.) appropriate to the given control problem.
Copyright information
© 2001 Springer Science+Business Media New York
Cite this chapter
Kushner, H.J., Dupuis, P. (2001). The Viscosity Solution Approach to Proving Convergence of Numerical Schemes. In: Numerical Methods for Stochastic Control Problems in Continuous Time. Stochastic Modelling and Applied Probability, vol 24. Springer, New York, NY. https://doi.org/10.1007/978-1-4613-0007-6_17
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4612-6531-3
Online ISBN: 978-1-4613-0007-6