Efficient Uncertainty Propagation for Reinforcement Learning with Limited Data

  • Conference paper
Artificial Neural Networks – ICANN 2009

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5768)

Abstract

In a typical reinforcement learning (RL) setting, details of the environment are not given explicitly but have to be estimated from observations. Most RL approaches optimize only the expected value. However, if the number of observations is limited, considering expected values alone can lead to false conclusions. Instead, it is crucial to also account for the estimators' uncertainties. In this paper, we present a method to incorporate those uncertainties and propagate them to the conclusions. Because the method is approximate rather than exact, it remains computationally feasible. Furthermore, we describe a Bayesian approach to designing the estimators. Our experiments show that the method considerably increases the robustness of the derived policies compared to the standard approach.
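
The abstract describes the approach only at a high level. Below is a minimal, illustrative sketch (not the authors' exact algorithm) of the general idea: transition probabilities of a small, hypothetical MDP are estimated from counts via a Dirichlet posterior, the estimators' variances are propagated through value iteration by first-order Gaussian error propagation, and the final policy penalizes actions with uncertain Q-values. All sizes, the prior alpha, and the penalty weight xi are assumptions chosen purely for illustration.

```python
import numpy as np

# Illustrative sketch only -- not the paper's exact method. It combines a
# Bayesian (Dirichlet) estimator of the transition model with first-order
# uncertainty propagation through the Bellman update, in the spirit of the
# abstract. All constants below are assumed for the example.

n_states, n_actions = 3, 2
gamma = 0.9   # discount factor
xi = 1.0      # weight of the uncertainty penalty (assumed)
alpha = 1.0   # symmetric Dirichlet prior parameter (assumed)

rng = np.random.default_rng(0)
# Hypothetical observation counts[s, a, s'] and mean rewards R[s, a]
counts = rng.integers(0, 5, size=(n_states, n_actions, n_states)).astype(float)
R = rng.normal(size=(n_states, n_actions))

# Dirichlet posterior: mean and variance of each transition probability
post = counts + alpha
a0 = post.sum(axis=2, keepdims=True)
P = post / a0                          # E[p(s'|s,a)]
P_var = P * (1.0 - P) / (a0 + 1.0)     # Var[p(s'|s,a)]

Q = np.zeros((n_states, n_actions))
Q_var = np.zeros_like(Q)

for _ in range(200):
    greedy = Q.argmax(axis=1)
    V = Q[np.arange(n_states), greedy]          # value of greedy action
    V_var = Q_var[np.arange(n_states), greedy]  # its variance (approximation)
    # First-order propagation, assuming independent estimation errors:
    # Var[Q(s,a)] ~= sum_s' (gamma V(s'))^2 Var[p] + (gamma p)^2 Var[V(s')]
    Q_var = (((gamma * V) ** 2) * P_var).sum(axis=2) \
          + (((gamma * P) ** 2) * V_var).sum(axis=2)
    Q = R + gamma * (P @ V)                     # standard Bellman update

# Robust policy: prefer actions whose Q-value stays high even after
# subtracting xi standard deviations of estimated uncertainty.
robust_policy = (Q - xi * np.sqrt(Q_var)).argmax(axis=1)
print("greedy policy:", Q.argmax(axis=1))
print("robust policy:", robust_policy)
```

The penalized criterion Q − ξ·σ_Q prefers actions whose value estimates remain high under pessimistic assumptions about the estimation error, which matches the intuition behind the robustness gains the abstract reports; because only means and variances are carried through the updates, the cost stays close to that of ordinary value iteration.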




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hans, A., Udluft, S. (2009). Efficient Uncertainty Propagation for Reinforcement Learning with Limited Data. In: Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G. (eds) Artificial Neural Networks – ICANN 2009. ICANN 2009. Lecture Notes in Computer Science, vol 5768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04274-4_8

  • DOI: https://doi.org/10.1007/978-3-642-04274-4_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04273-7

  • Online ISBN: 978-3-642-04274-4

  • eBook Packages: Computer Science (R0)
