
Unified Inter and Intra Options Learning Using Policy Gradient Methods

  • Conference paper
Recent Advances in Reinforcement Learning (EWRL 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7188)

Included in the following conference series: European Workshop on Reinforcement Learning (EWRL)

Abstract

Temporally extended actions (or macro-actions) have proven useful for speeding up planning and learning, adding robustness, and building prior knowledge into AI systems. The options framework, as introduced in Sutton, Precup and Singh (1999), provides a natural way to incorporate macro-actions into reinforcement learning. In the subgoal approach, learning is divided into two phases: first learning each option with a prescribed subgoal, and then learning to compose the learned options together. In this paper we offer a unified framework for concurrent inter- and intra-option learning. To that end, we propose a modular parameterization of intra-option policies together with option termination conditions and the option selection policy (inter options), and show that these three decision components may be viewed as a unified policy over an augmented state-action space, to which standard policy gradient algorithms may be applied. We identify the basis functions that apply to each of these decision components, and show that they possess a useful orthogonality property that allows the natural gradient to be computed independently for each component. We further outline the extension of the suggested framework to a multi-level options hierarchy, and conclude with a brief illustrative example.
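To make the augmented-state construction concrete, here is a minimal Python sketch (our illustration, not the authors' code) of the three decision components the abstract names, expressed as one factored stochastic policy over augmented states (s, o): a termination probability for the active option, an inter-option selection policy, and an intra-option action policy. All names and dimensions (n_states, n_options, n_actions, the parameter blocks) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_options, n_actions = 5, 2, 3  # illustrative sizes

# One parameter block per decision component; per the abstract, the
# components' basis functions are orthogonal, so each block's (natural)
# gradient can be handled independently.
theta_term = rng.normal(size=(n_states, n_options))             # termination logits
theta_inter = rng.normal(size=(n_states, n_options))            # option-selection logits
theta_intra = rng.normal(size=(n_states, n_options, n_actions)) # intra-option logits

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step_policy(s, o):
    """One decision of the unified policy at augmented state (s, o).

    Returns (o_next, a, logp): the possibly switched option, the primitive
    action, and the log-probability of the full decision, suitable for a
    likelihood-ratio (policy gradient) update.
    """
    beta = sigmoid(theta_term[s, o])        # termination: end option o here?
    terminate = rng.random() < beta
    logp = np.log(beta) if terminate else np.log(1.0 - beta)
    if terminate:                           # inter-option: pick a new option
        p_opt = softmax(theta_inter[s])
        o = int(rng.choice(n_options, p=p_opt))
        logp += np.log(p_opt[o])
    p_act = softmax(theta_intra[s, o])      # intra-option: pick a primitive action
    a = int(rng.choice(n_actions, p=p_act))
    logp += np.log(p_act[a])
    return o, a, logp

o, a, logp = step_policy(s=0, o=0)
print(f"option={o} action={a} logp={logp:.3f}")
```

Because the log-probability of one augmented decision is a sum of per-component terms, its gradient splits into three independent blocks; this block structure is a simple analogue of the orthogonality property the abstract exploits to compute the natural gradient separately for each component.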



References

  1. Comanici, G., Precup, D.: Optimal policy switching algorithms for reinforcement learning. In: Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, pp. 709–714 (2010)

  2. Ghavamzadeh, M., Mahadevan, S.: Hierarchical policy gradient algorithms. In: Twentieth ICML, pp. 226–233 (2003)

  3. Neumann, G., Maass, W., Peters, J.: Learning complex motions by sequencing simpler motion templates. In: ICML (2009)

  4. Sutton, R.S., Precup, D., Singh, S.: Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112, 181–211 (1999)

  5. Şimşek, Ö., Barto, A.: Using relative novelty to identify useful temporal abstractions in reinforcement learning. In: ICML, vol. 21, p. 751. Citeseer (2004)

  6. Menache, I., Mannor, S., Shimkin, N.: Q-Cut - Dynamic Discovery of Sub-goals in Reinforcement Learning. In: Elomaa, T., Mannila, H., Toivonen, H. (eds.) ECML 2002. LNCS (LNAI), vol. 2430, pp. 295–306. Springer, Heidelberg (2002)

  7. Sutton, R.S., McAllester, D., Singh, S., Mansour, Y.: Policy gradient methods for reinforcement learning with function approximation. In: Advances in Neural Information Processing Systems, vol. 12 (2000)

  8. Peters, J., Schaal, S.: Natural actor-critic. Neurocomputing 71(7-9), 1180–1190 (2008)

  9. Bhatnagar, S., Sutton, R.S., Ghavamzadeh, M., Lee, M.: Natural actor-critic algorithms. Automatica 45, 2471–2482 (2009)

  10. Richter, S., Aberdeen, D., Yu, J.: Natural actor-critic for road traffic optimisation. In: Advances in Neural Information Processing Systems, vol. 19, p. 1169 (2007)

  11. Buffet, O., Dutech, A., Charpillet, F.: Shaping multi-agent systems with gradient reinforcement learning. In: Autonomous Agents and Multi-Agent Systems (2007)

  12. Kakade, S.: A natural policy gradient. In: Advances in Neural Information Processing Systems, vol. 14, pp. 1531–1538 (2002)

  13. Bagnell, J., Schneider, J.: Covariant policy search. In: International Joint Conference on Artificial Intelligence, vol. 18, pp. 1019–1024. Citeseer (2003)

  14. Boyan, J.A.: Technical update: Least-squares temporal difference learning. Machine Learning 49, 233–246 (2002)

  15. Nedić, A., Bertsekas, D.: Least squares policy evaluation algorithms with linear function approximation. Discrete Event Dynamic Systems 13 (2003)

  16. Yoshimoto, J., Nishimura, M., Tokita, Y., Ishii, S.: Acrobot control by learning the switching of multiple controllers. Artificial Life and Robotics 9 (2005)


Author information

Authors: Kfir Y. Levy and Nahum Shimkin

Editor information

Editors: Scott Sanner and Marcus Hutter


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Levy, K.Y., Shimkin, N. (2012). Unified Inter and Intra Options Learning Using Policy Gradient Methods. In: Sanner, S., Hutter, M. (eds) Recent Advances in Reinforcement Learning. EWRL 2011. Lecture Notes in Computer Science (LNAI), vol. 7188. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29946-9_17

  • DOI: https://doi.org/10.1007/978-3-642-29946-9_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29945-2

  • Online ISBN: 978-3-642-29946-9

  • eBook Packages: Computer Science (R0)
