
Policy Gradient Reinforcement Learning with Environmental Dynamics and Action-Values in Policies

  • Conference paper
Knowledge-Based and Intelligent Information and Engineering Systems (KES 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6881)


Abstract

The knowledge underlying an agent's policy consists of two types: environmental dynamics, which define the state transitions around the agent, and behavior knowledge for solving a given task. In conventional reinforcement learning, these two types of information are usually combined into state-value or action-value functions and learned together. If they were separated and learned independently, either could be reused in other tasks or environments. In our previous work, we presented learning rules using policy gradients with an objective function consisting of two sets of parameters, one representing environmental dynamics and the other behavior knowledge, so that each type can be learned separately. In that framework, state-values served as the set of parameters corresponding to behavior knowledge. Simulation results on a pursuit problem showed that our method properly learned hunter-agent policies and that either type of knowledge could be reused. In this paper, we adopt action-values instead of state-values as the behavior-knowledge parameters in the objective function and present learning rules for the resulting function. Simulation results on the same pursuit problem as in our previous work show that these parameters and learning rules are also useful.
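The separation described in the abstract can be sketched loosely as a REINFORCE-style learner (in the sense of Williams, 1992) whose Boltzmann policy sums two independently maintained parameter sets, each with its own update rule. Everything below is an illustrative assumption of ours, not the paper's actual objective function or learning rules: the toy chain MDP, the names `omega` (environmental dynamics) and `q` (behavior knowledge, action-values), and the step sizes are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 2
GAMMA = 0.9
ALPHA_OMEGA, ALPHA_Q = 0.2, 0.4  # independent step sizes, one per parameter set

# Two separately stored parameter sets (hypothetical names, not the paper's):
omega = np.zeros((N_STATES, N_ACTIONS))  # "environmental dynamics" parameters
q = np.zeros((N_STATES, N_ACTIONS))      # "behavior knowledge" (action-values)

def policy(s):
    """Boltzmann policy over the sum of the two knowledge terms."""
    prefs = omega[s] + q[s]
    e = np.exp(prefs - prefs.max())  # subtract max for numerical stability
    return e / e.sum()

def run_episode(max_steps=20):
    """Toy chain MDP: action 1 moves right, action 0 stays; reward 1 at the end."""
    s, traj = 0, []
    for _ in range(max_steps):
        p = policy(s)
        a = rng.choice(N_ACTIONS, p=p)
        traj.append((s, a, p))
        s = min(s + a, N_STATES - 1)
        if s == N_STATES - 1:
            return traj, 1.0
    return traj, 0.0

for _ in range(200):
    traj, reward = run_episode()
    T = len(traj)
    for t, (s, a, p) in enumerate(traj):
        G = reward * GAMMA ** (T - 1 - t)   # discounted return-to-go
        grad = -p                           # characteristic eligibility:
        grad[a] += 1.0                      # d log pi / d pref = 1{a=taken} - pi
        omega[s] += ALPHA_OMEGA * G * grad  # each set updated by its own rule
        q[s] += ALPHA_Q * G * grad
```

Note that in this simplified sketch both sets receive the same eligibility signal, so they end up proportional (`omega / ALPHA_OMEGA == q / ALPHA_Q`); in the paper's formulation the two parameter sets enter the objective function differently, which is what makes their gradients, and hence the learned knowledge, genuinely separable and reusable.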







Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ishihara, S., Igarashi, H. (2011). Policy Gradient Reinforcement Learning with Environmental Dynamics and Action-Values in Policies. In: König, A., Dengel, A., Hinkelmann, K., Kise, K., Howlett, R.J., Jain, L.C. (eds) Knowledge-Based and Intelligent Information and Engineering Systems. KES 2011. Lecture Notes in Computer Science, vol 6881. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23851-2_13


  • DOI: https://doi.org/10.1007/978-3-642-23851-2_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23850-5

  • Online ISBN: 978-3-642-23851-2

  • eBook Packages: Computer Science (R0)
