Feature Selection for Unsupervised Domain Adaptation Using Optimal Transport

  • Leo Gautheron (corresponding author)
  • Ievgen Redko
  • Carole Lartizien
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11052)

Abstract

In this paper, we propose a new feature selection method for unsupervised domain adaptation based on the emerging theory of optimal transport. We build upon a recent theoretical analysis of optimal transport in domain adaptation and show that it directly suggests a feature selection procedure that leverages the shift between the domains. Based on this, we propose a novel algorithm that sorts features by their similarity across the source and target domains, where the order is obtained by analyzing the coupling matrix representing the solution of the proposed optimal transport problem. We evaluate our method on a well-known benchmark data set and illustrate its ability to select correlated features that lead to better classification performance. Furthermore, we show that the proposed algorithm can be used as a pre-processing step for existing domain adaptation techniques, ensuring a significant speed-up in computation time while maintaining comparable results. Finally, we validate our algorithm on clinical imaging databases for a computer-aided diagnosis task, with promising results. Code related to this paper is available at https://leogautheron.github.io/, and data related to this paper is available at https://github.com/LeoGautheron/ECML2018-FeatureSelectionOptimalTransport.
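
The abstract only sketches the procedure, so the following is a minimal illustrative NumPy sketch of one plausible reading: each feature is summarized per domain, an entropic optimal transport coupling is computed between source and target feature summaries, and features are ranked by how much mass the coupling keeps on the diagonal (a feature that "transports onto itself" looks similar in both domains). The (mean, std) feature summary, the Sinkhorn solver, the regularization value, and the diagonal score are all assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.1, n_iter=500):
    """Entropic-regularized optimal transport via Sinkhorn-Knopp iterations.

    a, b : source/target marginal weights (each sums to 1)
    M    : cost matrix between source and target points
    Returns the coupling matrix G whose marginals approximate a and b.
    """
    K = np.exp(-M / reg)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)        # scale rows to match marginal a
        v = b / (K.T @ u)      # scale columns to match marginal b
    return u[:, None] * K * v[None, :]

def rank_features(Xs, Xt, reg=0.1):
    """Rank the d features of source data Xs and target data Xt by how much
    transport mass each feature keeps on itself across domains
    (hypothetical reading of the method, for illustration only)."""
    d = Xs.shape[1]
    # summarize each feature by per-domain statistics (an illustrative choice)
    Fs = np.column_stack([Xs.mean(axis=0), Xs.std(axis=0)])  # d x 2, source
    Ft = np.column_stack([Xt.mean(axis=0), Xt.std(axis=0)])  # d x 2, target
    # squared Euclidean cost between every source/target feature summary
    M = ((Fs[:, None, :] - Ft[None, :, :]) ** 2).sum(axis=-1)
    M = M / (M.max() + 1e-12)  # normalize costs for numerical stability
    w = np.full(d, 1.0 / d)    # uniform weight on each feature
    G = sinkhorn(w, w, M, reg=reg)
    # G[j, j] is large when feature j is similar in both domains;
    # sort the most shift-robust features first.
    return np.argsort(-np.diag(G))
```

The returned order can then be truncated to keep only the top-ranked features before running a standard classifier or an existing domain adaptation method, matching the pre-processing use described in the abstract.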

Supplementary material

Supplementary material 1: 478890_1_En_45_MOESM1_ESM.pdf (339 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Leo Gautheron (1, 2), corresponding author
  • Ievgen Redko (1)
  • Carole Lartizien (1)
  1. Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69621 Lyon, France
  2. Univ Lyon, UJM-Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516, F-42023 Saint-Etienne, France
