LSTN: Latent Subspace Transfer Network for Unsupervised Domain Adaptation

  • Shanshan Wang
  • Lei Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11257)


Handling cross-domain distribution mismatch calls for specially designed subspace and reconstruction transfer functions that bridge multiple domains for heterogeneous knowledge sharing. In this paper, we propose a novel reconstruction-based transfer learning method called Latent Subspace Transfer Network (LSTN). We embed the features/pixels of the source and target domains into a reproducing kernel Hilbert space (RKHS), in which the high-dimensional features are mapped to a nonlinear latent subspace by feeding them into an MLP network. This approach is simple yet effective, combining the advantages of both subspace learning and neural networks. Adaptation is achieved by jointly learning a set of hierarchical nonlinear subspace representations and an optimal reconstruction matrix. Notably, because the latent subspace model is an MLP network, its layers can be optimized directly, avoiding the need for a pre-trained model that requires large-scale data. Experiments demonstrate that our approach outperforms existing non-deep adaptation methods and achieves classification performance comparable to that of modern deep adaptation methods.


Keywords: Domain adaptation · Latent subspace · MLP
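To make the core idea concrete, here is a minimal numpy sketch of the reconstruction-transfer step the abstract describes. It is an assumption-laden simplification, not the authors' implementation: a fixed random Fourier-feature map stands in for the learned MLP/RKHS embedding, and the reconstruction matrix `Z` (expressing target samples as combinations of source samples in the latent space) is solved in closed form with a ridge penalty rather than jointly with the network. All dimensions and the regularization weight `lam` are illustrative toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_s, n_t, D = 20, 80, 60, 64  # feature dim, sample counts, latent dim (toy values)

# Toy source/target features with a simulated distribution shift.
Xs = rng.normal(0.0, 1.0, size=(n_s, d))
Xt = rng.normal(0.5, 1.2, size=(n_t, d))

# Fixed random Fourier features approximating an RBF-kernel RKHS embedding;
# this stands in for the learned nonlinear MLP layers of the actual method.
W = rng.normal(size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
phi = lambda X: np.sqrt(2.0 / D) * np.cos(X @ W + b)

Ps, Pt = phi(Xs), phi(Xt)  # latent-subspace representations of both domains

# Reconstruction matrix Z: express each target sample as a combination of
# source samples in the latent space, with a ridge penalty for stability:
#   min_Z ||Pt - Z @ Ps||_F^2 + lam * ||Z||_F^2
lam = 1e-2
A = Ps @ Ps.T + lam * np.eye(n_s)    # (n_s, n_s) latent Gram matrix + ridge
Z = np.linalg.solve(A, Ps @ Pt.T).T  # (n_t, n_s) closed-form minimizer

recon_err = np.linalg.norm(Pt - Z @ Ps)
print(Z.shape, recon_err)
```

In the full method, by contrast, the nonlinear map and the reconstruction matrix are optimized jointly, so the latent subspace itself adapts to make cross-domain reconstruction easier.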



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. College of Communication Engineering, Chongqing University, Chongqing, China
