Integrating Relational Reinforcement Learning with Reasoning about Actions and Change

  • Matthias Nickles
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7207)

Abstract

This paper presents an approach to the integration of Relational Reinforcement Learning with Answer Set Programming and the Event Calculus. Our framework allows for background and prior knowledge formulated in a semantically expressive formal language and facilitates the computationally efficient constraining of the learning process by means of soft as well as hard (compulsory) (sub-)policies and (sub-)plans generated by an ASP solver. As part of this, a new planning-based approach to Relational Instance-Based Learning is proposed. An empirical evaluation of our approach shows a significant improvement of learning efficiency and learning results in various benchmark settings.
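The core idea of constraining a reinforcement learner with solver-generated (sub-)policies and (sub-)plans can be illustrated with a minimal sketch. The following is not the paper's implementation: `HARD_PLAN` and `SOFT_BONUS` are hypothetical stand-ins for knowledge that would, in the framework described above, be produced by an ASP solver from Event Calculus axioms. Here a hard sub-policy restricts the admissible actions per state, while a soft bias only nudges exploration.

```python
import random

# Toy chain MDP: states 0..4, actions -1 (left) / +1 (right), reward at state 4.
STATES = range(5)
ACTIONS = (-1, +1)

def step(s, a):
    s2 = min(max(s + a, 0), 4)
    return s2, (1.0 if s2 == 4 else 0.0)

# Hypothetical stand-ins for ASP-solver output (not from the paper):
# a hard (compulsory) sub-policy forbidding left moves, and a soft bias
# toward right moves that only influences action selection, not Q-updates.
HARD_PLAN = {s: {+1} for s in STATES}        # admissible actions per state
SOFT_BONUS = {(s, +1): 0.1 for s in STATES}  # selection-time bonus only

def admissible(s):
    return [a for a in ACTIONS if a in HARD_PLAN.get(s, set(ACTIONS))]

def q_learn(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            acts = admissible(s)  # hard constraint: restrict the action set
            if rng.random() < eps:
                a = rng.choice(acts)
            else:
                # soft constraint: bias greedy choice without altering Q itself
                a = max(acts, key=lambda x: Q[(s, x)] + SOFT_BONUS.get((s, x), 0.0))
            s2, r = step(s, a)
            best_next = max(Q[(s2, b)] for b in admissible(s2))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
            if r > 0:
                break
    return Q

Q = q_learn()
```

Because the hard sub-policy prunes the action set before both selection and the bootstrap maximum, the learner never wastes samples on forbidden actions, which is one way such solver-derived constraints can speed up convergence.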

Keywords

Relational Reinforcement Learning · Statistical-Relational Learning · Planning · Event Calculus · Answer Set Programming · Hierarchical Reinforcement Learning

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Matthias Nickles
    Department of Computer Science, Technical University of Munich, Garching, Germany