
On the complexity of finite memory policies for Markov decision processes

  • Contributed Papers
  • Conference paper

Mathematical Foundations of Computer Science 1995 (MFCS 1995)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 969)

Abstract

We consider some complexity questions concerning a model of uncertainty known as Markov decision processes. Our results concern the problem of constructing optimal policies under a criterion of optimality defined in terms of constraints on the behavior of the process. The constraints are described by regular languages, and the motivation comes from robot motion planning. It is known that, in the case of perfect information, optimal policies under the traditional cost criteria can be found among Markov policies, and in polynomial time. We show, first, that for the behavior criterion optimal policies are not Markovian, for finite as well as infinite horizons. On the other hand, optimal policies in this case lie in the class of finite memory policies defined in the paper, and can be found in polynomial time. We remark that in the case of partial information, finite memory policies cannot be optimal in the general situation. Nevertheless, the class of finite memory policies seems to be of interest for probabilistic policies: although probabilistic policies are no better than deterministic ones in the general class of history-remembering policies, the former can be better in the class of finite memory policies.
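The abstract's point that optimal policies for a behavior criterion are finite-memory rather than Markovian can be illustrated with a small sketch. The following toy example (our own construction, not from the paper; the MDP, labels, and DFA are invented) maximizes, over a finite horizon, the probability that the sequence of state labels is accepted by a DFA encoding the regular constraint. The optimal policy depends on the pair (MDP state, DFA state), i.e. it is Markovian on the product but needs finite memory (the DFA state) on the original MDP.

```python
# Toy MDP: states 0 and 1, actions 'a' and 'b'.
# P[s][act] is a list of (probability, next_state) pairs.
P = {
    0: {'a': [(0.9, 1), (0.1, 0)], 'b': [(1.0, 0)]},
    1: {'a': [(1.0, 0)], 'b': [(0.5, 1), (0.5, 0)]},
}
label = {0: 'x', 1: 'y'}  # each state emits a label

# DFA over labels: accept iff label 'y' is seen at least once.
# (Simplification: the DFA reads the label of each *successor* state,
# so the initial state's label is not consumed.)
delta = {('q0', 'x'): 'q0', ('q0', 'y'): 'q1',
         ('q1', 'x'): 'q1', ('q1', 'y'): 'q1'}
accepting = {'q1'}

def best_policy(horizon):
    """Backward dynamic programming on the product (MDP state, DFA state).

    Returns V, the max probability of finishing in an accepting DFA
    state, and a policy indexed by (time, MDP state, DFA state) --
    a finite memory policy in the sense of the paper.
    """
    V = {(s, q): 1.0 if q in accepting else 0.0
         for s in P for q in ('q0', 'q1')}
    policy = {}
    for t in reversed(range(horizon)):
        newV = {}
        for s in P:
            for q in ('q0', 'q1'):
                best, best_a = -1.0, None
                for a, outcomes in P[s].items():
                    # DFA advances on the label of the successor state.
                    v = sum(p * V[(s2, delta[(q, label[s2])])]
                            for p, s2 in outcomes)
                    if v > best:
                        best, best_a = v, a
                newV[(s, q)] = best
                policy[(t, s, q)] = best_a
        V = newV
    return V, policy
```

Note that the chosen action in state 0 differs between DFA states q0 and q1, which is exactly why no policy depending on the MDP state alone can be optimal here: the history (summarized by the DFA state) matters.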

The research of this author was supported by DRET and Armines contract 920171.00.1013.

The research of this author was partially supported by DRET contract 91/1061.




Editor information

Jiří Wiedermann, Petr Hájek


Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Beauquier, D., Burago, D., Slissenko, A. (1995). On the complexity of finite memory policies for Markov decision processes. In: Wiedermann, J., Hájek, P. (eds) Mathematical Foundations of Computer Science 1995. MFCS 1995. Lecture Notes in Computer Science, vol 969. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60246-1_125


  • DOI: https://doi.org/10.1007/3-540-60246-1_125

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60246-0

  • Online ISBN: 978-3-540-44768-9

