The New Palgrave Dictionary of Economics

2018 Edition
Macmillan Publishers Ltd

Bellman Equation

  • Yongseok Shin
Reference work entry
DOI: https://doi.org/10.1057/978-1-349-95189-5_2392

Abstract

Dynamic programming is a method that solves a complicated multi-stage decision problem by transforming it into a sequence of simpler problems. Bellman equations, named after Richard E. Bellman (1920–1984), the creator of dynamic programming, are the functional equations that embody this transformation.
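As a concrete illustration (a standard textbook formulation, not notation taken from this entry), consider a deterministic, infinite-horizon problem with state x, feasible actions a in a set Γ(x), period return F(x, a), law of motion x' = g(x, a), and discount factor β in (0, 1). The value function V then satisfies the Bellman equation

\[
V(x) \;=\; \max_{a \in \Gamma(x)} \bigl\{\, F(x,a) \;+\; \beta\, V\bigl(g(x,a)\bigr) \,\bigr\},
\]

which replaces the original infinite-sequence problem with a family of one-period problems linked through V. In the consumption-smoothing interpretation suggested by the keywords, for example, x would be wealth, a consumption, F period utility, and g the mapping from unconsumed wealth into next period's wealth.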

Keywords

Bellman equation; Consumption smoothing; Convergence; Dynamic programming; Markov processes; Neoclassical growth theory; Value function

JEL Classifications

C61 

Bibliography

  1. Bellman, R. 1957. Dynamic programming. Princeton: Princeton University Press.
  2. Benveniste, L., and J. Scheinkman. 1979. On the differentiability of the value function in dynamic models of economics. Econometrica 47: 727–732.
  3. Bertsekas, D.P. 1976. Dynamic programming and stochastic control. New York: Academic Press.
  4. Blackwell, D. 1965. Discounted dynamic programming. Annals of Mathematical Statistics 36: 226–235.
  5. Brock, W.A., and L. Mirman. 1972. Optimal economic growth and uncertainty: The discounted case. Journal of Economic Theory 4: 479–513.
  6. Ljungqvist, L., and T.J. Sargent. 2004. Recursive macroeconomic theory. 2nd ed. Cambridge, MA: MIT Press.
  7. Miller, B.L. 1974. Optimal consumption with a stochastic income stream. Econometrica 42: 253–266.
  8. Stokey, N.L., and R.E. Lucas Jr. 1989. Recursive methods in economic dynamics. Cambridge, MA: Harvard University Press.

Copyright information

© Macmillan Publishers Ltd. 2018
