Sequential Consolidation of Learned Task Knowledge

Conference paper
Advances in Artificial Intelligence (Canadian AI 2004)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3060)

Abstract

A fundamental problem of life-long machine learning is how to consolidate the knowledge of a learned task within a long-term memory structure (domain knowledge) without the loss of prior knowledge. Consolidated domain knowledge makes more efficient use of memory and can be used for more efficient and effective transfer of knowledge when learning future tasks. Relevant background material on knowledge-based inductive learning and the transfer of task knowledge using multiple task learning (MTL) neural networks is reviewed. A theory of task knowledge consolidation is presented that uses a large MTL network as the long-term memory structure and task rehearsal to overcome the stability-plasticity problem and the loss of prior knowledge. The theory is tested on a synthetic domain of diverse tasks, and it is shown that, under the proper conditions, task knowledge can be sequentially consolidated within an MTL network without loss of prior knowledge. In fact, a steady increase in the accuracy of consolidated domain knowledge is observed.
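
To make the consolidation mechanism concrete, below is a minimal sketch of sequential consolidation via task rehearsal, written in PyTorch under assumed details the abstract does not specify: a single shared hidden layer acts as the long-term memory, each task receives its own sigmoid output head, and virtual rehearsal targets for previously consolidated tasks are generated by replaying unlabeled inputs through the current network. Names such as ConsolidationMTL and consolidate are illustrative, not the authors' implementation.

import torch
import torch.nn as nn

class ConsolidationMTL(nn.Module):
    """Large MTL network: a shared hidden layer plus one output head per task."""
    def __init__(self, n_inputs: int, n_hidden: int):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Tanh())
        self.heads = nn.ModuleList()  # grows as tasks are consolidated

    def add_task(self) -> int:
        self.heads.append(nn.Linear(self.shared[0].out_features, 1))
        return len(self.heads) - 1    # index of the new task's head

    def forward(self, x: torch.Tensor, task: int) -> torch.Tensor:
        return torch.sigmoid(self.heads[task](self.shared(x)))

def consolidate(net, new_x, new_y, rehearsal_x, lr=0.01, epochs=200):
    """Consolidate one new task; new_y holds 0/1 targets of shape (N, 1)."""
    # Task rehearsal: snapshot the current network's outputs on unlabeled
    # inputs as virtual targets for every previously consolidated task.
    with torch.no_grad():
        rehearsal_y = [net(rehearsal_x, t) for t in range(len(net.heads))]
    task = net.add_task()
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    bce = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = bce(net(new_x, task), new_y)      # learn the new task...
        for t, y_t in enumerate(rehearsal_y):    # ...while rehearsing old ones
            loss = loss + bce(net(rehearsal_x, t), y_t)
        loss.backward()
        opt.step()
    return task

# Hypothetical usage: two synthetic binary tasks over the same 10-d inputs.
net = ConsolidationMTL(n_inputs=10, n_hidden=50)
x1 = torch.rand(100, 10)
y1 = (x1.sum(dim=1, keepdim=True) > 5).float()
consolidate(net, x1, y1, rehearsal_x=torch.rand(200, 10))  # nothing to rehearse yet
x2 = torch.rand(100, 10)
y2 = (x2[:, 0:1] > 0.5).float()
consolidate(net, x2, y2, rehearsal_x=torch.rand(200, 10))  # rehearses task 0

The rehearsal loss terms hold the network close to its prior function values on old tasks (stability) while leaving the shared weights free to adapt to the new task (plasticity); this is one plausible reading of how rehearsal lets a single MTL network serve as consolidated domain knowledge.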




Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Silver, D.L., Poirier, R. (2004). Sequential Consolidation of Learned Task Knowledge. In: Tawfik, A.Y., Goodwin, S.D. (eds.) Advances in Artificial Intelligence. Canadian AI 2004. Lecture Notes in Computer Science (LNAI), vol. 3060. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24840-8_16

  • DOI: https://doi.org/10.1007/978-3-540-24840-8_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22004-6

  • Online ISBN: 978-3-540-24840-8
