Beyond Simple Rule Extraction: Acquiring Planning Knowledge from Neural Networks

  • Conference paper
Neural Nets WIRN Vietri-01

Part of the book series: Perspectives in Neural Computing

Abstract

This paper discusses learning in hybrid models that goes beyond the simple extraction of classification rules from backpropagation networks. Although simple rule extraction has received considerable research attention, hybrid models must be developed further so that they learn autonomously and acquire both symbolic and subsymbolic knowledge within integrated architectures. This paper describes the extraction of planning knowledge from neural reinforcement learning, going beyond the extraction of simple rules. It presents two approaches: the extraction of symbolic rules from a neural reinforcement learner, and the extraction of complete plans. This work points to a general framework for achieving the subsymbolic-to-symbolic transition in an integrated autonomous learning architecture.
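
As a concrete illustration of the two approaches named above, the sketch below uses a toy grid world and tabular Q-learning as a stand-in for the paper's backpropagation-based reinforcement learner. The environment, the rule-extraction threshold, and all names here are assumptions made for this example, not the authors' implementation. The first pass reads off symbolic state-action rules wherever the learned values show a clear-cut preference; the second unrolls the greedy policy into a complete plan, i.e. an explicit action sequence rather than a set of rules.

```python
import random
from collections import defaultdict

# Tiny deterministic grid world (an assumption for this sketch):
# start at (0, 0), reward of 1 at the goal cell (2, 2).
ACTIONS = ["up", "down", "left", "right"]
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
GOAL, SIZE = (2, 2), 3

def step(state, action):
    dx, dy = MOVES[action]
    nxt = (min(max(state[0] + dx, 0), SIZE - 1),
           min(max(state[1] + dy, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Plain tabular Q-learning stands in for the neural learner.
Q = defaultdict(float)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
for _ in range(2000):
    s, done = (0, 0), False
    while not done:
        a = (random.choice(ACTIONS) if random.random() < EPS
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2, r, done = step(s, a)
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# Approach 1: extract symbolic state -> action rules wherever the
# learner's value for the best action is decisive (threshold assumed).
THRESHOLD = 0.1
states = {(x, y) for x in range(SIZE) for y in range(SIZE)} - {GOAL}
rules = {s: max(ACTIONS, key=lambda a: Q[(s, a)])
         for s in states
         if max(Q[(s, a)] for a in ACTIONS) > THRESHOLD}
for s, a in sorted(rules.items()):
    print(f"IF state == {s} THEN {a}")

# Approach 2: unroll the greedy policy from the start state to obtain
# a complete plan, an explicit action sequence leading to the goal.
s, plan = (0, 0), []
while s != GOAL and len(plan) < 20:
    a = rules.get(s) or max(ACTIONS, key=lambda a: Q[(s, a)])
    plan.append(a)
    s, _, _ = step(s, a)
print("plan:", plan)
```

In the paper's setting the tabular Q-function would be a neural network and the extracted rules would subsequently be generalised and refined against further experience, but the extraction step has the same shape as this sketch.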

Copyright information

© 2002 Springer-Verlag London Limited

About this paper

Cite this paper

Sun, R., Peterson, T., Sessions, C. (2002). Beyond Simple Rule Extraction: Acquiring Planning Knowledge from Neural Networks. In: Tagliaferri, R., Marinaro, M. (eds) Neural Nets WIRN Vietri-01. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0219-9_32

  • DOI: https://doi.org/10.1007/978-1-4471-0219-9_32

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-85233-505-2

  • Online ISBN: 978-1-4471-0219-9

  • eBook Packages: Springer Book Archive
