Transfer Domain Class Clustering for Unsupervised Domain Adaptation

  • Yunxin Fan
  • Gang Yan
  • Shuang Li
  • Shiji Song
  • Wei Wang
  • Xinping Peng
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 482)

Abstract

In this paper, we propose a transfer domain class clustering (TDCC) algorithm for the unsupervised domain adaptation problem, in which the training data (source domain) and the test data (target domain) follow different distributions. TDCC learns new feature representations for the source and target data in a latent subspace with two goals: it reduces the distribution distance between the two domains, which helps transfer source knowledge to the target domain effectively, and it enhances the class discriminativeness of the data by minimizing intra-class variations, which benefits the final classification. The effectiveness of TDCC is verified by comprehensive experiments on several cross-domain datasets, and the results demonstrate that TDCC outperforms competitive algorithms.
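To make the two objectives in the abstract concrete, the following is a minimal illustrative sketch, not the authors' exact TDCC formulation: it evaluates (1) an empirical maximum mean discrepancy (MMD) between source and target features, the standard measure of cross-domain distribution distance, and (2) the intra-class scatter of the labeled source data. All function and variable names here are hypothetical; in TDCC-style methods a projection to a latent subspace would be optimized to make both quantities small.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Empirical squared MMD with a linear kernel: the squared
    distance between the two sample means."""
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def intra_class_scatter(X, y):
    """Sum of squared distances of samples to their class means
    (smaller means tighter, more discriminative class clusters)."""
    total = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        total += float(np.sum((Xc - Xc.mean(axis=0)) ** 2))
    return total

# Synthetic example: the target domain is shifted relative to the source.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 5))   # source features
Xt = rng.normal(0.5, 1.0, size=(80, 5))    # target features (shifted)
ys = rng.integers(0, 3, size=100)          # source labels

print(mmd_linear(Xs, Xt), intra_class_scatter(Xs, ys))
```

A subspace-learning method in this family would minimize a weighted sum of these two terms over a projection matrix, typically via a (generalized) eigendecomposition, and then train a standard classifier on the projected source data.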

Keywords

Feature learning · Distribution adaptation · Domain adaptation · Transfer learning

Notes

Acknowledgements

This research is supported by the CRRC Major Scientific Projects under Grant No. 2106CKZ206-1 and the National Key R&D Program under Grant No. 2016YFB1200203.


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Yunxin Fan (1)
  • Gang Yan (1)
  • Shuang Li (2)
  • Shiji Song (2)
  • Wei Wang (1)
  • Xinping Peng (1)

  1. The State Key Laboratory of Heavy Duty AC Drive Electric Locomotive Systems Integration, Hunan, China
  2. Tsinghua University, Beijing, China
