
Learning Distribution-Matched Landmarks for Unsupervised Domain Adaptation

  • Mengmeng Jing
  • Jingjing Li (corresponding author)
  • Jidong Zhao
  • Ke Lu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10828)

Abstract

Domain adaptation is widely used in database applications, especially in data mining. The basic assumption of domain adaptation (DA) is that some latent factors are shared by the source domain and the target domain. Revealing these shared factors is therefore the core operation of many DA approaches. This paper proposes a novel approach, named Learning Distribution-Matched Landmarks (LDML), for unsupervised DA. LDML reveals the latent factors by learning a domain-invariant subspace in which the two domains are well aligned at both the feature and sample levels. At the feature level, the divergences of both the marginal distribution and the conditional distribution are mitigated. At the sample level, each sample is weighted so that the pivotal samples (landmarks) are fully exploited and the outliers are filtered out. Extensive experiments on two standard benchmarks verify that our approach outperforms state-of-the-art methods by significant margins.
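Feature-level alignment of marginal and conditional distributions, as described above, is commonly measured with the Maximum Mean Discrepancy (MMD). The sketch below is illustrative only, not the authors' implementation: it assumes a linear kernel, so the squared MMD reduces to the distance between domain means, and it assumes pseudo-labels are available for the target domain when matching conditional distributions class by class.

```python
import numpy as np

def linear_mmd2(Xs, Xt):
    """Squared MMD with a linear kernel between two sample sets.

    With a linear kernel this reduces to the squared Euclidean distance
    between the empirical means of the source and target features.
    """
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)

def conditional_mmd2(Xs, ys, Xt, yt_pseudo):
    """Sum of per-class squared mean discrepancies over shared classes.

    `yt_pseudo` are pseudo-labels for the unlabeled target domain
    (an assumption for illustration; e.g. predictions of a source classifier).
    """
    shared = np.intersect1d(np.unique(ys), np.unique(yt_pseudo))
    return sum(linear_mmd2(Xs[ys == c], Xt[yt_pseudo == c]) for c in shared)
```

Minimizing both quantities over a learned projection aligns the domains at the feature level; the sample-level weighting then down-weights outliers so they contribute little to these empirical means.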

Keywords

Domain adaptation · Transfer learning · Landmark selection


Acknowledgment

This work was supported in part by the National Postdoctoral Program for Innovative Talents under Grant BX201700045, China Postdoctoral Science Foundation under Grant 2017M623006, the Applied Basic Research Program of Sichuan Province under Grant 2015JY0124, and the Fundamental Research Funds for the Central Universities under Grant ZYGX2016J089.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Mengmeng Jing¹
  • Jingjing Li¹ (corresponding author)
  • Jidong Zhao¹
  • Ke Lu¹
  1. School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
