TeXDYNA: Hierarchical Reinforcement Learning in Factored MDPs

  • Conference paper
From Animals to Animats 11 (SAB 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6226)

Abstract

Reinforcement learning is one of the main adaptive mechanisms: it is both well documented in animal behaviour and has given rise to computational studies in animats and robots. In this paper, we present TeXDYNA, an algorithm designed to solve large reinforcement learning problems with unknown structure by integrating the hierarchical abstraction techniques of Hierarchical Reinforcement Learning with the factorization techniques of Factored Reinforcement Learning. We validate our approach on the LIGHT BOX problem.
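To make the factorization idea concrete, the following is a minimal sketch of a factored MDP in the spirit of the LIGHT BOX domain: the state is a vector of binary variables, and each variable's transition depends only on a small set of parent variables, as in a dynamic Bayesian network. The specific variables, parents, and probabilities here are invented for illustration and are not taken from the paper.

```python
import random

def make_light_dynamics():
    """Hypothetical light-box-style factored transition model.

    Maps each state variable to (parent_ids, prob_on), where prob_on
    gives the probability the variable is 1 at the next step, as a
    function of the parents' current values and the action taken.
    """
    return {
        # Light 0 depends only on the action.
        0: ([], lambda parents, a: 0.9 if a == 0 else 0.1),
        # Light 1 can only turn on reliably when light 0 is already on.
        1: ([0], lambda parents, a: 0.8 if (a == 1 and parents[0]) else 0.05),
    }

def step(state, action, dynamics):
    """Sample the next state variable-by-variable using the factored model."""
    next_state = []
    for var, (parent_ids, prob_on) in sorted(dynamics.items()):
        parents = [state[p] for p in parent_ids]
        p = prob_on(parents, action)
        next_state.append(1 if random.random() < p else 0)
    return tuple(next_state)

random.seed(0)
s = (0, 0)
for t in range(5):
    s = step(s, action=t % 2, dynamics=make_light_dynamics())
    print(t, s)
```

The point of the factorization is that the model is specified per variable over a few parents rather than over the full joint state space, which is what lets structure-learning and hierarchical decomposition methods scale.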

This work was funded by CIFRE convention - 1032/2006.




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kozlova, O., Sigaud, O., Meyer, C. (2010). TeXDYNA: Hierarchical Reinforcement Learning in Factored MDPs. In: Doncieux, S., Girard, B., Guillot, A., Hallam, J., Meyer, JA., Mouret, JB. (eds) From Animals to Animats 11. SAB 2010. Lecture Notes in Computer Science, vol 6226. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15193-4_46

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-15193-4_46

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15192-7

  • Online ISBN: 978-3-642-15193-4

  • eBook Packages: Computer Science, Computer Science (R0)
