
Domain Adaptation Transfer Learning by Kernel Representation Adaptation

  • Conference paper in Pattern Recognition Applications and Methods (ICPRAM 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNIP, volume 10857)


Abstract

Domain adaptation, where no labeled target data is available, is a challenging task. To address this problem, we first propose a new SVM-based approach with an additional Maximum Mean Discrepancy (MMD)-like constraint. With this heuristic, source and target data are projected onto a common subspace of a Reproducing Kernel Hilbert Space (RKHS), where both data distributions are expected to become similar. Therefore, a classifier trained on source data may also perform well on target data, provided that the conditional probabilities of labels are similar for source and target data, which is the main assumption of this paper. We show that adding this constraint does not change the quadratic nature of the optimization problem, so common quadratic optimization tools can be used. Secondly, based on the same idea that making source and target data similar may ensure efficient transfer learning, and under the same assumption, a Kernel Principal Component Analysis (KPCA) based transfer learning method is proposed. Unlike the first heuristic, this second method also aligns higher order moments in the RKHS, which leads to better performance. Here again, we select MMD as the similarity measure. A linear transformation is then applied to further improve the alignment between source and target data. Finally, we compare both methods with other transfer learning methods from the literature to show their efficiency on synthetic and real datasets.
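The similarity measure at the heart of both methods, the empirical (biased) estimate of the squared MMD between source and target samples, can be sketched as follows. This is a minimal illustration with a Gaussian RBF kernel; the function names, the bandwidth choice, and the synthetic data are ours, not the paper's.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise squared Euclidean distances, then Gaussian (RBF) kernel matrix.
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(Xs, Xt, sigma=1.0):
    # Biased empirical estimate of the squared MMD in the RKHS induced by k:
    #   MMD^2 = mean(k(Xs, Xs)) - 2 * mean(k(Xs, Xt)) + mean(k(Xt, Xt))
    Kss = gaussian_kernel(Xs, Xs, sigma)
    Kst = gaussian_kernel(Xs, Xt, sigma)
    Ktt = gaussian_kernel(Xt, Xt, sigma)
    return Kss.mean() - 2.0 * Kst.mean() + Ktt.mean()

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 2))       # source samples
Xt_near = rng.normal(0.0, 1.0, size=(200, 2))  # target drawn from the same distribution
Xt_far = rng.normal(3.0, 1.0, size=(200, 2))   # target shifted away from the source

print(mmd2(Xs, Xt_near))  # small: distributions match, so MMD is close to 0
print(mmd2(Xs, Xt_far))   # clearly larger: the shift is detected in the RKHS
```

A small MMD indicates that the two sample sets are hard to distinguish in the RKHS, which is precisely what the paper's constraint (and the KPCA-based alignment) tries to enforce between projected source and target data.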


Notes

  1. A universal kernel is necessarily characteristic, while the reverse is not true. For more details, see [29, 30].

  2. http://archive.ics.uci.edu/ml/.

  3. Here, KPCA-TL and KPCA-LT-TL lead to the same results.

References

  1. Blöbaum, P., Schulz, A., Hammer, B.: Unsupervised dimensionality reduction for transfer learning. In: Proceedings of the 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (2015)

  2. Chen, X., Lengellé, R.: Domain adaptation transfer learning by SVM subject to a maximum-mean-discrepancy-like constraint. In: Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods, ICPRAM 2017 (2017)

  3. Dudley, R.M.: A course on empirical processes. In: Hennequin, P.L. (ed.) École d'Été de Probabilités de Saint-Flour XII - 1982. LNM, vol. 1097, pp. 1–142. Springer, Heidelberg (1984). https://doi.org/10.1007/BFb0099432

  4. Dudley, R.M.: Real Analysis and Probability, vol. 74. Cambridge University Press, Cambridge (2002)

  5. Fortet, R., Mourier, E.: Convergence de la répartition empirique vers la répartition théorique. Ann. Scient. École Norm. Sup., 266–285 (1953)

  6. Gao, J., Fan, W., Jiang, J., Han, J.: Knowledge transfer via multiple model local structure mapping. In: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 283–291. ACM (2008)

  7. Gong, B., Shi, Y., Sha, F., Grauman, K.: Geodesic flow kernel for unsupervised domain adaptation. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2066–2073. IEEE (2012)

  8. Gretton, A., Borgwardt, K.M., Rasch, M.J., Schölkopf, B., Smola, A.: A kernel two-sample test. J. Mach. Learn. Res. 13, 723–773 (2012)

  9. Huang, C.-H., Yeh, Y.-R., Wang, Y.-C.F.: Recognizing actions across cameras by exploring the correlated subspace. In: Fusiello, A., Murino, V., Cucchiara, R. (eds.) ECCV 2012. LNCS, vol. 7583, pp. 342–351. Springer, Heidelberg (2012)

  10. Huang, J., Gretton, A., Borgwardt, K.M., Schölkopf, B., Smola, A.J.: Correcting sample selection bias by unlabeled data. In: Advances in Neural Information Processing Systems, pp. 601–608 (2006)

  11. Jiang, J.: A literature survey on domain adaptation of statistical classifiers (2008). http://sifaka.cs.uiuc.edu/jiang4/domainadaptation/survey

  12. Joachims, T.: Transductive inference for text classification using support vector machines. In: ICML, vol. 99, pp. 200–209 (1999)

  13. Liang, F., Tang, S., Zhang, Y., Zuoxin, X., Li, J.: Pedestrian detection based on sparse coding and transfer learning. Mach. Vis. Appl. 25(7), 1697–1709 (2014)

  14. Ling, X., Dai, W., Xue, G.-R., Yang, Q., Yu, Y.: Spectral domain-transfer learning. In: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 488–496. ACM (2008)

  15. Long, M., Wang, J., Ding, G., Sun, J., Yu, P.S.: Transfer feature learning with joint distribution adaptation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2200–2207 (2013)

  16. van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. J. Mach. Learn. Res. 9(Nov), 2579–2605 (2008)

  17. Pan, S.J., Kwok, J.T., Yang, Q.: Transfer learning via dimensionality reduction. In: AAAI, vol. 8, pp. 677–682 (2008)

  18. Pan, S.J., Tsang, I.W., Kwok, J.T., Yang, Q.: Domain adaptation via transfer component analysis. IEEE Trans. Neural Netw. 22(2), 199–210 (2011)

  19. Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010)

  20. Patel, V.M., Gopalan, R., Li, R., Chellappa, R.: Visual domain adaptation: a survey of recent advances. IEEE Signal Process. Mag. 32(3), 53–69 (2015)

  21. Paulsen, V.I.: An Introduction to the Theory of Reproducing Kernel Hilbert Spaces. Cambridge University Press, Cambridge (2009)

  22. Quanz, B., Huan, J.: Large margin transductive transfer learning. In: Proceedings of the 18th ACM Conference on Information and Knowledge Management, pp. 1327–1336. ACM (2009)

  23. Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., Lawrence, N.D.: Dataset Shift in Machine Learning. The MIT Press, Cambridge (2009)

  24. Ren, J., Liang, Z., Hu, S.: Multiple kernel learning improved by MMD. In: Cao, L., Zhong, J., Feng, Y. (eds.) ADMA 2010. LNCS (LNAI), vol. 6441, pp. 63–74. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-17313-4_7

  25. Schölkopf, B., Herbrich, R., Smola, A.J.: A generalized representer theorem. In: Helmbold, D., Williamson, B. (eds.) COLT 2001. LNCS (LNAI), vol. 2111, pp. 416–426. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44581-1_27

  26. Serfling, R.J.: Approximation Theorems of Mathematical Statistics, vol. 162. Wiley, Hoboken (2009)

  27. Si, S., Tao, D., Geng, B.: Bregman divergence-based regularization for transfer subspace learning. IEEE Trans. Knowl. Data Eng. 22(7), 929–942 (2010)

  28. Smola, A.: Maximum mean discrepancy. In: Proceedings of the 13th International Conference, ICONIP 2006, Hong Kong, China, 3–6 October 2006

  29. Sriperumbudur, B.K., Gretton, A., Fukumizu, K., Schölkopf, B., Lanckriet, G.R.G.: Hilbert space embeddings and metrics on probability measures. J. Mach. Learn. Res. 11(Apr), 1517–1561 (2010)

  30. Steinwart, I.: On the influence of the kernel on the consistency of support vector machines. J. Mach. Learn. Res. 2, 67–93 (2002)

  31. Tan, Q., Deng, H., Yang, P.: Kernel mean matching with a large margin. In: Zhou, S., Zhang, S., Karypis, G. (eds.) ADMA 2012. LNCS (LNAI), vol. 7713, pp. 223–234. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35527-1_19

  32. Tu, W., Sun, S.: Transferable discriminative dimensionality reduction. In: 2011 23rd IEEE International Conference on Tools with Artificial Intelligence (ICTAI), pp. 865–868. IEEE (2011)

  33. Uguroglu, S., Carbonell, J.: Feature selection for transfer learning. In: Gunopulos, D., Hofmann, T., Malerba, D., Vazirgiannis, M. (eds.) ECML PKDD 2011. LNCS (LNAI), vol. 6913, pp. 430–442. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23808-6_28

  34. Wang, Z., Song, Y., Zhang, C.: Transferred dimensionality reduction. In: Daelemans, W., Goethals, B., Morik, K. (eds.) ECML PKDD 2008. LNCS (LNAI), vol. 5212, pp. 550–565. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-87481-2_36

  35. Yang, S., Lin, M., Hou, C., Zhang, C., Yi, W.: A general framework for transfer sparse subspace learning. Neural Comput. Appl. 21(7), 1801–1817 (2012)

  36. Zhang, P., Zhu, X., Guo, L.: Mining data streams with labeled and unlabeled training examples. In: Ninth IEEE International Conference on Data Mining, ICDM 2009, pp. 627–636. IEEE (2009)


Author information

Correspondence to Xiaoyi Chen.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Chen, X., Lengellé, R. (2018). Domain Adaptation Transfer Learning by Kernel Representation Adaptation. In: De Marsico, M., di Baja, G., Fred, A. (eds) Pattern Recognition Applications and Methods. ICPRAM 2017. Lecture Notes in Computer Science, vol 10857. Springer, Cham. https://doi.org/10.1007/978-3-319-93647-5_3

  • DOI: https://doi.org/10.1007/978-3-319-93647-5_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-93646-8

  • Online ISBN: 978-3-319-93647-5

  • eBook Packages: Computer Science (R0)
