Multi-view collective tensor decomposition for cross-modal hashing

  • Limeng Cui
  • Jiawei Zhang
  • Lifang He
  • Philip S. Yu
Regular Paper

Abstract

With the development of social media, data often come from a variety of sources in different modalities. Such data contain complementary information that can be exploited to build better learning algorithms, but they exhibit dual heterogeneity: on the one hand, data obtained from multiple modalities are intrinsically different; on the other hand, features extracted by different disciplines within a modality are usually heterogeneous. Existing methods often address the first facet while ignoring the second. In this paper, we therefore propose a novel multi-view cross-modal hashing method, named Multi-view Collective Tensor Decomposition (MCTD), that mitigates both forms of heterogeneity simultaneously: it fully exploits the multimodal multi-view features while discovering multiple separated subspaces by using the data categories as supervision information. The proposed cross-modal retrieval framework consists of three components: (1) two tensors that model the multi-view features from the two modalities, yielding a better representation of the complementary features and a latent representation space; (2) a block-diagonal loss that explicitly enforces a more discriminative latent space by leveraging the supervision information; and (3) two feature projection matrices that characterize the data and generate latent representations for incoming queries. We solve the objective function designed for MCTD with an iterative updating optimization algorithm. Extensive experiments demonstrate the effectiveness of MCTD compared with state-of-the-art methods.
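For intuition only, the two core ideas sketched in the abstract — a latent representation space shared across modalities, and a block-diagonal supervision target — can be illustrated with a deliberately simplified matrix factorization. This is not the authors' MCTD algorithm (which operates on tensors of multi-view features); all names here (`collective_factorization`, `block_diagonal_loss`, the toy `X_img`/`X_txt` matrices) are hypothetical, and the update rules are plain alternating ridge least squares.

```python
import numpy as np

def collective_factorization(X_list, k=8, n_iters=50, lam=1e-3, seed=0):
    """Approximate each modality matrix X_m (n x d_m) as U @ V_m.T with a
    single latent matrix U shared across modalities, via alternating
    ridge-regularized least squares. A simplified matrix proxy for the
    paper's coupled tensor objective, not the actual MCTD updates."""
    rng = np.random.default_rng(seed)
    n = X_list[0].shape[0]
    U = rng.standard_normal((n, k))
    V_list = [rng.standard_normal((X.shape[1], k)) for X in X_list]
    I = lam * np.eye(k)
    for _ in range(n_iters):
        # Per-modality projection factors, with the shared U held fixed.
        V_list = [X.T @ U @ np.linalg.inv(U.T @ U + I) for X in X_list]
        # Shared latent factors, fit against all modalities jointly.
        A = sum(V.T @ V for V in V_list) + I
        B = sum(X @ V for X, V in zip(X_list, V_list))
        U = B @ np.linalg.inv(A)
    return U, V_list

def block_diagonal_loss(U, labels):
    """Penalty encouraging the latent similarity U @ U.T to match a target
    that is 1 for same-class pairs and 0 otherwise (block-diagonal once
    samples are sorted by class) -- the flavor of supervision the abstract
    describes, in its simplest form."""
    S = (labels[:, None] == labels[None, :]).astype(float)
    return np.linalg.norm(U @ U.T - S) ** 2

# Toy data: two "modalities" generated from one shared latent structure.
rng = np.random.default_rng(1)
U_true = rng.standard_normal((100, 4))
X_img = U_true @ rng.standard_normal((4, 32))   # hypothetical image features
X_txt = U_true @ rng.standard_normal((4, 20))   # hypothetical text features
U, (V_img, V_txt) = collective_factorization([X_img, X_txt], k=4)
rel_err = np.linalg.norm(X_img - U @ V_img.T) / np.linalg.norm(X_img)
print(f"image-modality relative reconstruction error: {rel_err:.3f}")
```

Binary codes for hashing would then be obtained downstream, e.g. by thresholding the rows of `U`; that step, like everything above, is a sketch of the general recipe rather than the paper's method.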

Keywords

Cross-modal hashing · Tensor factorization · Metric learning · Multi-view learning

Notes

Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grant Nos. 61672313 and 61503253, the National Science Foundation under Grant Nos. IIS-1526499, IIS-1763365 and CNS-1626432, and the Natural Science Foundation of Guangdong Province under Grant No. 2017A030313339.


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  • Limeng Cui (1)
  • Jiawei Zhang (2)
  • Lifang He (3)
  • Philip S. Yu (4)
  1. College of Information Science and Technology, Pennsylvania State University, State College, USA
  2. IFM Lab, Department of Computer Science, Florida State University, Tallahassee, USA
  3. Weill Cornell Medicine, Cornell University, New York, USA
  4. Department of Computer Science, University of Illinois at Chicago, Chicago, USA