Missing Modality Transfer Learning

  • Zhengming Ding
  • Handong Zhao
  • Yun Fu
Part of the Advanced Information and Knowledge Processing book series (AI&KP)


In reality, however, we often confront the problem that no target data are available, especially when the data are multi-modal. In this situation, the target modality is entirely unseen in the training stage, and only the source modality can be obtained. We define this setting as the Missing Modality Problem in transfer learning.
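The setting above can be made concrete with a toy sketch (synthetic data, not the chapter's actual algorithm): a classifier is fit only on a source modality, while the target modality, which first appears at test time, is a shifted and rescaled view of the same classes. A simple linear-kernel mean-discrepancy statistic quantifies the modality gap that transfer learning must bridge; the shift and scale values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source modality: 2-D features, two classes with well-separated means.
n = 200
X_src = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y_src = np.array([0] * n + [1] * n)

# Target modality: same underlying classes, but a different sensor shifts
# and rescales the features -- unseen at training time (the "missing" modality).
shift, scale = 5.0, 0.5
X_tgt = X_src * scale + shift
y_tgt = y_src

# Nearest-class-mean classifier fit on the source modality only.
means = np.stack([X_src[y_src == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    # Assign each sample to the nearest source-modality class mean.
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Linear-kernel mean discrepancy between the two modalities.
mmd = np.linalg.norm(X_src.mean(axis=0) - X_tgt.mean(axis=0))

acc_src = (predict(X_src) == y_src).mean()
acc_tgt = (predict(X_tgt) == y_tgt).mean()
print(f"modality gap (mean discrepancy): {mmd:.2f}")
print(f"accuracy on source modality: {acc_src:.2f}")
print(f"accuracy on unseen target modality: {acc_tgt:.2f}")
```

The source-only classifier degrades sharply on the unseen modality, which is exactly the gap that the latent low-rank transfer formulations discussed in this chapter are designed to close.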



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Indiana University-Purdue University Indianapolis, Indianapolis, USA
  2. Adobe Research, San Jose, USA
  3. Northeastern University, Boston, USA