Locality Fisher discriminant analysis for conditional domain adaption

Abstract

Domain adaptation tackles a learning problem on test data (the target domain) by utilizing training data (the source domain) from a related domain, often with a different distribution. Intuitively, discovering a good feature representation shared by the source and target domains is crucial to boosting model performance. In this paper, we find such a representation through a novel learning method, locality Fisher discriminant analysis for conditional domain adaption (LFDA-CDA). LFDA-CDA finds a linear combination of features across domains in an embedded subspace using Bregman divergence (BD). In the embedded subspace spanned by the new representation, the data distributions of the different domains are close to each other, so standard machine learning methods can be trained on the source data and applied to the target data. Specifically, we propose a novel feature representation in which to perform conditional domain adaptation via a new parametric nonlinear BD method, which considerably reduces the distribution mismatch across domains by projecting the data onto the learned subspaces. Moreover, LFDA-CDA benefits from segmenting and aligning the geometric structure of the data between the source and target domains through locality preserving projection. Extensive experiments on 16 real vision datasets of varying difficulty verify that LFDA-CDA significantly outperforms state-of-the-art methods on image classification tasks. Our source code is available at https://github.com/Jtahmores/LFA-CDA.
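
To make the pipeline concrete, the following is a minimal sketch of the general recipe the abstract describes, not the authors' algorithm: a locality-weighted Fisher projection (in the spirit of local Fisher discriminant analysis) is learned on labelled source data, the squared Euclidean distance between domain means stands in for the paper's parametric nonlinear Bregman divergence (an assumption; squared Euclidean distance is the simplest BD instance), and a standard classifier trained on the projected source is applied to the projected target. The function name `lfda_like_projection` and all parameter choices are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def lfda_like_projection(Xs, ys, Xt, dim=10):
    """Sketch: learn a locality-weighted Fisher projection shared by
    both domains. Xs, ys: labelled source; Xt: unlabelled target."""
    n_s, d = Xs.shape

    # Locality weighting: Gaussian affinity between source points
    # (stands in for LFDA's local scaling heuristic).
    sq = np.sum((Xs[:, None, :] - Xs[None, :, :]) ** 2, axis=-1)
    A = np.exp(-sq / (np.median(sq) + 1e-12))
    same = (ys[:, None] == ys[None, :]).astype(float)

    def scatter(W):
        # Graph-Laplacian form of a weighted scatter matrix.
        return Xs.T @ (np.diag(W.sum(axis=1)) - W) @ Xs

    S_w = scatter(A * same) + 1e-6 * np.eye(d)   # local within-class
    S_b = scatter((1.0 - same) / n_s)            # between-class

    # Squared Euclidean distance between domain means is the simplest
    # Bregman divergence; penalising it discourages directions along
    # which the two domains disagree (stand-in for the paper's BD term).
    d_mu = (Xs.mean(axis=0) - Xt.mean(axis=0))[:, None]
    S_w += d_mu @ d_mu.T

    # Maximise between-class over (penalised) within-class scatter.
    vals, vecs = eigh(S_b, S_w)                  # generalised eigenproblem
    P = vecs[:, np.argsort(vals)[::-1][:dim]]    # top-`dim` directions
    return Xs @ P, Xt @ P

# Usage: fit any standard classifier on the projected source features.
# from sklearn.neighbors import KNeighborsClassifier
# Zs, Zt = lfda_like_projection(Xs, ys, Xt, dim=20)
# yt_pred = KNeighborsClassifier(1).fit(Zs, ys).predict(Zt)
```

The generalised eigenproblem trades class separability against domain shift in a single projection; the paper's actual BD term is parametric and nonlinear, which this squared-Euclidean stand-in only approximates.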

Author information

Corresponding author

Correspondence to Jafar Tahmoresnezhad.

About this article

Cite this article

Zandifar, M., Tahmoresnezhad, J. Locality Fisher discriminant analysis for conditional domain adaption. Iran J Comput Sci (2020). https://doi.org/10.1007/s42044-020-00062-2

Keywords

  • Transfer learning
  • Unsupervised domain adaptation
  • Dimensionality reduction
  • Fisher discriminant analysis
  • Locality preserving projection
  • Bregman divergence