Consolidation Using Context-Sensitive Multiple Task Learning

Conference paper in Advances in Artificial Intelligence (Canadian AI 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6657)

Abstract

Machine lifelong learning (ML3) is concerned with machines that can learn and retain knowledge over time and exploit that knowledge to assist new learning. An ML3 system must accurately retain knowledge of prior tasks while consolidating knowledge of new tasks, overcoming the stability-plasticity problem. A system is presented that uses a context-sensitive multiple task learning (csMTL) neural network. csMTL uses a single output and additional context inputs to associate examples with tasks. The csMTL-based ML3 system is analyzed empirically on synthetic and real domains, with experiments focused on the effective retention and consolidation of task knowledge using both functional and representational transfer. The results indicate that combining the two methods of transfer retains prior knowledge best, but at the cost of less effective consolidation of new tasks.
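To make the csMTL idea concrete, below is a minimal sketch of such a network in PyTorch. This is an assumption-laden illustration, not the authors' implementation: the class name CsMTLNet, the layer sizes, the sigmoid activations, and the one-hot task encoding are hypothetical choices of this sketch. What the code does show is the defining csMTL property from the abstract: one shared network with a single output, where extra context inputs identify the task each example belongs to.

    # Hypothetical csMTL sketch; architecture details are assumptions,
    # not the paper's implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CsMTLNet(nn.Module):
        # One shared network with a single output node; the task is
        # indicated by extra context inputs (here a one-hot task code)
        # concatenated with the primary input features.
        def __init__(self, n_features, n_tasks, n_hidden=20):
            super().__init__()
            self.n_tasks = n_tasks
            self.net = nn.Sequential(
                nn.Linear(n_features + n_tasks, n_hidden),  # context + primary inputs
                nn.Sigmoid(),
                nn.Linear(n_hidden, 1),                     # single output for all tasks
                nn.Sigmoid(),
            )

        def forward(self, x, task_id):
            context = F.one_hot(task_id, num_classes=self.n_tasks).float()
            return self.net(torch.cat([x, context], dim=1))

    # Usage: 8 examples of task 2, from a domain of 5 tasks with 10 features.
    model = CsMTLNet(n_features=10, n_tasks=5)
    x = torch.randn(8, 10)
    task = torch.full((8,), 2, dtype=torch.long)
    y = model(x, task)  # shape (8, 1)

Consolidation in the ML3 setting described above would then train this one network on new-task examples interleaved with virtual examples of prior tasks (functional transfer) while starting from the existing weights (representational transfer).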

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Fowler, B., Silver, D.L. (2011). Consolidation Using Context-Sensitive Multiple Task Learning. In: Butz, C., Lingras, P. (eds) Advances in Artificial Intelligence. Canadian AI 2011. Lecture Notes in Computer Science (LNAI), vol 6657. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21043-3_16

  • DOI: https://doi.org/10.1007/978-3-642-21043-3_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21042-6

  • Online ISBN: 978-3-642-21043-3
