
Deep Transfer Learning via Minimum Enclosing Balls

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11303)

Abstract

Training deep learning models such as CNNs, LSTMs, or GRUs often requires large amounts of data. However, in real-world applications the amount of data, especially labelled data, is limited. To address this challenge, we study Deep Transfer Learning (DTL) in the context of Multi-task Learning (MTL), extracting sharable knowledge from one task and reusing it for related tasks. In this paper, we use the Minimum Enclosing Ball (MEB) as a flexible knowledge representation to map shared domain knowledge from a primary task to a secondary task in multi-task learning. The experiments provide both analytical and empirical results showing the effectiveness and robustness of the proposed MEB-based deep transfer learning.
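
The MEB itself is a standard geometric primitive: the smallest ball enclosing a set of points, computable to within a (1 + eps) factor by the core-set iteration of Badoiu and Clarkson. As a minimal sketch of that primitive only (not the paper's full transfer pipeline), the Python snippet below approximates the MEB of a set of feature vectors; the `feats` matrix is a hypothetical stand-in for deep features extracted on the primary task.

```python
import numpy as np

def approx_meb(points: np.ndarray, eps: float = 0.05):
    """(1 + eps)-approximate minimum enclosing ball via the
    Badoiu-Clarkson core-set iteration.

    points : (n, d) array of n points in d dimensions.
    Returns (center, radius).
    """
    c = points[0].astype(float)           # start at an arbitrary point
    iters = int(np.ceil(1.0 / eps ** 2))  # O(1/eps^2) steps suffice
    for i in range(1, iters + 1):
        dists = np.linalg.norm(points - c, axis=1)
        p = points[np.argmax(dists)]      # farthest point from current center
        c += (p - c) / (i + 1)            # shrinking step toward it
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius

# Hypothetical usage: summarize primary-task features by a ball.
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 64))        # stand-in for deep feature vectors
center, radius = approx_meb(feats, eps=0.05)
print(center.shape, radius)
```

A ball summarized by its center and radius is compact and learner-independent, which is what makes the MEB a convenient carrier of shared knowledge between tasks; how that ball is mapped to the secondary task follows the paper, not this sketch.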



Author information


Corresponding author

Correspondence to Fan Liu.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Deng, Z., Liu, F., Zhao, J., Wei, Q., Pang, S., Leng, Y. (2018). Deep Transfer Learning via Minimum Enclosing Balls. In: Cheng, L., Leung, A., Ozawa, S. (eds) Neural Information Processing. ICONIP 2018. Lecture Notes in Computer Science, vol 11303. Springer, Cham. https://doi.org/10.1007/978-3-030-04182-3_18


  • DOI: https://doi.org/10.1007/978-3-030-04182-3_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04181-6

  • Online ISBN: 978-3-030-04182-3

  • eBook Packages: Computer Science, Computer Science (R0)
