Consolidation Using Sweep Task Rehearsal: Overcoming the Stability-Plasticity Problem

Conference paper in Advances in Artificial Intelligence (Canadian AI 2015), part of the book series Lecture Notes in Computer Science (LNAI, volume 9091).

Abstract

This paper extends prior work on knowledge consolidation and the stability-plasticity problem in the context of a Lifelong Machine Learning (LML) system. A context-sensitive multiple task learning (csMTL) neural network is used as a consolidated store of domain knowledge. Prior work has demonstrated that a csMTL network, in combination with task rehearsal, can retain previous task knowledge while consolidating a sequence of up to ten tasks from a domain. However, subsequent experimentation has shown that the method suffers from scaling problems as the learning sequence grows longer, resulting in a loss of prior-task accuracy and a growing computational cost for rehearsing prior tasks using larger training sets. A solution to both problems is presented: a sweep method of rehearsal that requires only a small number of rehearsal examples (as few as one) per prior task per training iteration to maintain prior-task accuracy.
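
The proposed fix is compact enough to sketch in code. The fragment below is a hypothetical illustration only, not the authors' implementation: it assumes a PyTorch-style consolidated network whose forward pass takes a (context, input) pair, as in csMTL, and that each prior task supplies a pool of virtual examples generated by task rehearsal (inputs relabelled by that task's previously saved model). All function names and signatures here are invented for the sketch.

    import itertools

    def make_sweepers(prior_task_pools):
        # One endless iterator per prior task: successive training
        # iterations sweep through that task's virtual examples in
        # order, rather than rehearsing the whole pool every time.
        return [itertools.cycle(pool) for pool in prior_task_pools]

    def consolidation_step(net, optimizer, loss_fn, new_batch, sweepers, k=1):
        # One training iteration: fit the new task while rehearsing
        # only k virtual examples (as few as one) per prior task.
        optimizer.zero_grad()

        ctx, x, y = new_batch                    # real examples, new task
        loss = loss_fn(net(ctx, x), y)

        for sweeper in sweepers:                 # sweep rehearsal
            for _ in range(k):
                ctx_r, x_r, y_r = next(sweeper)  # next virtual example
                loss = loss + loss_fn(net(ctx_r, x_r), y_r)

        loss.backward()
        optimizer.step()
        return float(loss)

Because each iteration touches only k virtual examples per prior task, the per-iteration rehearsal cost stays constant regardless of how large the stored training sets grow, while cycling through each pool still exposes the consolidated network to all retained examples over the course of training.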

Author information

Correspondence to Daniel L. Silver.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Silver, D.L., Mason, G., Eljabu, L. (2015). Consolidation Using Sweep Task Rehearsal: Overcoming the Stability-Plasticity Problem. In: Barbosa, D., Milios, E. (eds) Advances in Artificial Intelligence. Canadian AI 2015. Lecture Notes in Computer Science (LNAI), vol 9091. Springer, Cham. https://doi.org/10.1007/978-3-319-18356-5_27

  • DOI: https://doi.org/10.1007/978-3-319-18356-5_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-18355-8

  • Online ISBN: 978-3-319-18356-5

  • eBook Packages: Computer Science, Computer Science (R0)
