Evidential Markov Decision Processes

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6717)

Abstract

This paper proposes a new model, the EMDP (Evidential Markov Decision Process). It is an MDP (Markov Decision Process) for belief functions in which rewards are defined for each state transition, as in a classical MDP, whereas the transitions are modeled as in an EMC (Evidential Markov Chain), i.e. they are transitions between sets of states instead of transitions between individual states. The EMDP fits more applications than an MDPST (MDP with Set-valued Transitions). Generalizing to belief functions allows us to cope with applications involving high uncertainty (imprecise or missing data) where probabilistic approaches fail. Implementation results are shown on a search-and-rescue unmanned rotorcraft benchmark.
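
To make the model structure described above concrete, here is a minimal illustrative sketch in Python: each (state, action) pair carries a Dempster-Shafer mass function over subsets of states (as in an Evidential Markov Chain), while rewards remain attached to individual state transitions as in a classical MDP. This is an assumption-laden sketch, not the authors' implementation; the EMDP class, its belief/plausibility helpers, and the search-zone example are hypothetical names introduced here for illustration only.

```python
# Illustrative sketch only (not the authors' formulation): an EMDP-like
# container where transitions are Dempster-Shafer mass functions over
# SUBSETS of states, while rewards stay on individual state transitions.

class EMDP:
    def __init__(self, states, actions):
        self.states = frozenset(states)
        self.actions = frozenset(actions)
        self.transition_mass = {}   # (state, action) -> {frozenset of states: mass}
        self.reward = {}            # (state, action, next_state) -> reward, as in a classical MDP

    def set_transition(self, state, action, mass_on_sets):
        # mass_on_sets: mapping from iterables of states (focal sets) to masses summing to 1
        total = sum(mass_on_sets.values())
        assert abs(total - 1.0) < 1e-9, "masses must sum to 1"
        self.transition_mass[(state, action)] = {
            frozenset(focal): m for focal, m in mass_on_sets.items()
        }

    def belief(self, state, action, event):
        """Bel(event): total mass of focal sets entirely contained in `event`."""
        event = frozenset(event)
        masses = self.transition_mass[(state, action)]
        return sum(m for focal, m in masses.items() if focal <= event)

    def plausibility(self, state, action, event):
        """Pl(event): total mass of focal sets that intersect `event`."""
        event = frozenset(event)
        masses = self.transition_mass[(state, action)]
        return sum(m for focal, m in masses.items() if focal & event)


# Toy usage: imprecise knowledge about where a search target may move.
emdp = EMDP(states={"zoneA", "zoneB", "zoneC"}, actions={"search"})
emdp.set_transition("zoneA", "search",
                    {("zoneA",): 0.5, ("zoneB", "zoneC"): 0.5})
print(emdp.belief("zoneA", "search", {"zoneB"}))        # 0   (no focal set is included in {zoneB})
print(emdp.plausibility("zoneA", "search", {"zoneB"}))  # 0.5 ({zoneB, zoneC} intersects {zoneB})
```

A planning algorithm would then optimize some criterion built from these mass functions; the abstract does not detail that criterion, so the sketch stops at the model structure.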

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Soubaras, H., Labreuche, C., Savéant, P. (2011). Evidential Markov Decision Processes. In: Liu, W. (eds) Symbolic and Quantitative Approaches to Reasoning with Uncertainty. ECSQARU 2011. Lecture Notes in Computer Science (LNAI), vol. 6717. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22152-1_29

  • DOI: https://doi.org/10.1007/978-3-642-22152-1_29

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-22151-4

  • Online ISBN: 978-3-642-22152-1

  • eBook Packages: Computer Science (R0)
