Eureka! Bellman’s Principle of Optimality is Valid!

  • Moshe Sniedovich
Part of the International Series in Operations Research & Management Science book series (ISOR, volume 46)


Ever since Bellman formulated his Principle of Optimality in the early 1950s, the Principle has been the subject of considerable criticism. Indeed, a number of dynamic programming (DP) scholars have identified specific difficulties with the common interpretation of Bellman’s Principle and proposed constructive remedies. In the case of stochastic processes with a non-denumerable state space, the remedy requires the incorporation of the faithful “with probability one” clause. This short article reminds us that if one sticks to Bellman’s original version of the Principle, then no such fix is necessary. We also reiterate the central role that Bellman’s favourite “final state condition” plays in the theory of DP in general and in the validity of the Principle of Optimality in particular.


Keywords: dynamic programming, principle of optimality, final state condition, stochastic processes, non-denumerable state space
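To make the Principle concrete, the following sketch (not from the article; the graph and state names are hypothetical) illustrates the deterministic case: the optimal cost-to-go f satisfies Bellman's recursion f(s) = min over actions of [cost + f(next state)], anchored by a boundary condition at the final state, so the tail of any optimal path is itself optimal from where it starts.

```python
# Illustrative sketch of Bellman's principle for a deterministic
# shortest-path problem on a small stage graph (hypothetical data).
from functools import lru_cache

# state -> list of (action_cost, next_state); 'T' is the final state.
# The "final state condition" anchors the recursion: f('T') = 0.
GRAPH = {
    'A': [(2, 'B'), (5, 'C')],
    'B': [(4, 'T'), (1, 'C')],
    'C': [(2, 'T')],
    'T': [],
}

@lru_cache(maxsize=None)
def cost_to_go(state):
    """Optimal cost from `state` to the final state 'T'."""
    if state == 'T':
        return 0  # final state (boundary) condition
    # Bellman's recursion: best immediate cost plus optimal cost-to-go.
    return min(c + cost_to_go(nxt) for c, nxt in GRAPH[state])

print(cost_to_go('A'))  # prints 5: optimal route A -> B -> C -> T
```

Because every state's value is computed from the values of its successors, the optimal decision at each state depends only on the state itself, which is exactly the tail-optimality property the Principle asserts.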





Copyright information

© Springer Science + Business Media, Inc. 2002

Authors and Affiliations

  • Moshe Sniedovich, Department of Mathematics and Statistics, The University of Melbourne, Parkville, Australia
