
Stochastic Markovian Control: Applications and Algorithms

  • Conference paper
DGOR

Part of the book series: Operations Research Proceedings (ORP, volume 1986)


Summary

This paper aims to give a non-technical introduction to the field of stochastic control, particularly to Markov decision control. The emphasis is on the basic concepts of the Markov decision model and on giving an idea of the practical usefulness of this model in a variety of application areas. Also, the value-iteration algorithm with lower and upper bounds on the average costs will be discussed; this method is easy to apply in practice and is usually the most effective method for solving large-scale Markov decision problems.
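The value-iteration method with lower and upper bounds on the average costs can be sketched as follows. This is a minimal illustration, not the paper's own implementation: the function name, the stopping rule, and the two-state repair example (machine good/failed, actions continue/repair, with the given costs and transition probabilities) are all assumptions chosen for the sketch. The key idea is that the minimal and maximal one-step differences of successive value functions bracket the minimal average cost g*, so the iteration can be stopped as soon as the bracket is tight enough.

```python
import numpy as np

def value_iteration_avg_cost(costs, trans, eps=1e-6, max_iter=10_000):
    """Relative value iteration for an average-cost Markov decision model.

    costs[s][a] -- one-step cost of action a in state s
    trans[s][a] -- transition probability vector over the next states

    Returns an estimate of the minimal average cost g*, a greedy policy,
    and the final bounds (m, M) with m <= g* <= M.
    """
    n = len(costs)
    V = np.zeros(n)
    for _ in range(max_iter):
        # One dynamic-programming backup:
        #   Q(s, a) = c(s, a) + sum_j p(j | s, a) * V(j)
        Q = np.array([[costs[s][a] + trans[s][a] @ V
                       for a in range(len(costs[s]))] for s in range(n)])
        V_new = Q.min(axis=1)
        diff = V_new - V
        m, M = diff.min(), diff.max()   # bounds: m <= g* <= M
        V = V_new - V_new.min()         # renormalize to keep V bounded
        if M - m <= eps * max(abs(m), 1e-12):
            return 0.5 * (m + M), Q.argmin(axis=1), (m, M)
    raise RuntimeError("bounds did not converge")

# Illustrative two-state repair model: state 0 = machine good,
# state 1 = machine failed; action 0 = continue, action 1 = repair.
costs = [[0.0, 5.0], [3.0, 5.0]]
trans = [[np.array([0.9, 0.1]), np.array([1.0, 0.0])],
         [np.array([0.0, 1.0]), np.array([1.0, 0.0])]]
g, policy, (lo, hi) = value_iteration_avg_cost(costs, trans)
print(g, list(policy))  # minimal average cost 5/11: repair only when failed
```

Because the bounds tighten geometrically in well-behaved (unichain, aperiodic) models, only a modest number of iterations is needed even for a strict tolerance, which is what makes the method attractive for large-scale problems.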




Copyright information

© 1987 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tijms, H. (1987). Stochastic Markovian Control: Applications and Algorithms. In: Isermann, H., Merle, G., Rieder, U., Schmidt, R., Streitferdt, L. (eds) DGOR. Operations Research Proceedings 1986, vol 1986. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-72557-9_3


  • DOI: https://doi.org/10.1007/978-3-642-72557-9_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-17612-1

  • Online ISBN: 978-3-642-72557-9

