Abstract
In this chapter we define many of the standard control problems whose numerical solutions will concern us in the subsequent chapters. Other, less familiar control problems will be discussed separately in later chapters. We will first define cost functionals for uncontrolled processes, and then formally discuss the partial differential equations which they satisfy. Then the cost functionals for the controlled problems will be stated and the partial differential equations for the optimal cost formally derived. These partial differential equations are generally known as Bellman equations or dynamic programming equations. The main tool in the derivations is Itô’s formula.
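The chapter derives the dynamic programming (Bellman) equations only formally; as a loose illustrative sketch that is not taken from the book, the discrete-time, discrete-state analogue of a discounted-cost Bellman equation, V(x) = min_u [c(x,u) + β Σ_y p(y|x,u) V(y)], can be solved numerically by value iteration. All sizes, costs, and transition probabilities below are made-up placeholder data:

```python
import numpy as np

# Illustrative sketch (not from the chapter): value iteration for a
# discrete-time, discrete-state discounted-cost Bellman equation
#   V(x) = min_u [ c(x, u) + beta * sum_y p(y | x, u) V(y) ],
# a discrete analogue of the dynamic programming equations discussed here.

n_states, n_controls = 3, 2
beta = 0.9  # discount factor

rng = np.random.default_rng(0)
# placeholder running cost c(x, u) and transition kernel p(y | x, u)
cost = rng.uniform(0.0, 1.0, size=(n_states, n_controls))
p = rng.uniform(size=(n_controls, n_states, n_states))
p /= p.sum(axis=2, keepdims=True)  # normalize rows into probabilities

V = np.zeros(n_states)
for _ in range(1000):
    # Q(x, u) = c(x, u) + beta * E[V(next state) | x, u]
    Q = cost + beta * np.stack([p[u] @ V for u in range(n_controls)], axis=1)
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmin(axis=1)  # minimizing control at each state
print(V, policy)
```

Because the Bellman operator is a β-contraction in the sup norm, the iteration converges geometrically to the unique fixed point, which is the optimal discounted cost.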
Copyright information
© 1992 Springer-Verlag New York, Inc.
Cite this chapter
Kushner, H.J., Dupuis, P.G. (1992). Dynamic Programming Equations. In: Numerical Methods for Stochastic Control Problems in Continuous Time. Applications of Mathematics, vol 24. Springer, New York, NY. https://doi.org/10.1007/978-1-4684-0441-8_4
Print ISBN: 978-1-4684-0443-2
Online ISBN: 978-1-4684-0441-8