
Applied Intelligence, Volume 49, Issue 5, pp 1925–1936

Multimodal correlation deep belief networks for multi-view classification

  • Nan Zhang
  • Shifei Ding (corresponding author)
  • Hongmei Liao
  • Weikuan Jia

Abstract

The restricted Boltzmann machine (RBM) has proven to be a powerful tool in many applications, such as representation learning, document modeling, and many other learning tasks. However, extensions of the RBM are rarely applied to multi-view learning. In this paper, we present a new RBM model based on canonical correlation analysis, named the correlation RBM, for multi-view learning. The correlation RBM computes multiple representations by regularizing the marginal likelihood function with a consistency constraint among the representations from different views. In addition, a multimodal deep model can obtain a unified representation that fuses the multiple representations together. We therefore stack correlation RBMs to create the correlation deep belief network (DBN), and then propose the multimodal correlation DBN for learning multi-view data representations. Compared with existing multi-view classification methods, such as the multi-view Gaussian process with posterior consistency (MvGP) and consensus and complementarity based maximum entropy discrimination (MED-2C), the correlation RBM and the multimodal correlation DBN achieve satisfactory results on two-class and multi-class classification datasets. Experimental results show that the correlation RBM and the multimodal correlation DBN are effective learning algorithms.
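The core idea described above is to train one RBM per view while regularizing the likelihood so that the representations of the different views agree. The sketch below is a rough illustration of that idea, not the paper's actual algorithm: it trains two RBMs with one-step contrastive divergence (CD-1) and adds a simple squared-difference consistency penalty between their hidden activations as a stand-in for the paper's CCA-based correlation term. All names (`TwoViewCorrelationRBM`, `fit_step`, `lam`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TwoViewCorrelationRBM:
    """Illustrative two-view model: one binary RBM per view, trained with
    CD-1 plus a penalty lam * ||h1 - h2||^2 that pulls the two hidden
    representations toward agreement (a stand-in for the paper's
    correlation-based consistency regularizer)."""

    def __init__(self, n_vis1, n_vis2, n_hid, lam=0.1, lr=0.05):
        self.W1 = 0.01 * rng.standard_normal((n_vis1, n_hid))
        self.W2 = 0.01 * rng.standard_normal((n_vis2, n_hid))
        self.b1 = np.zeros(n_vis1); self.b2 = np.zeros(n_vis2)
        self.c1 = np.zeros(n_hid);  self.c2 = np.zeros(n_hid)
        self.lam, self.lr = lam, lr

    def hidden(self, v, W, c):
        # Hidden-unit activation probabilities for one view.
        return sigmoid(v @ W + c)

    def cd1_step(self, v, W, b, c):
        # One step of contrastive divergence for a single view.
        h0 = self.hidden(v, W, c)
        h_samp = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_samp @ W.T + b)          # reconstruction
        h1 = self.hidden(v1, W, c)
        dW = v.T @ h0 - v1.T @ h1               # positive - negative phase
        db = (v - v1).sum(axis=0)
        dc = (h0 - h1).sum(axis=0)
        return h0, dW, db, dc

    def fit_step(self, v1, v2):
        n = v1.shape[0]
        h1, dW1, db1, dc1 = self.cd1_step(v1, self.W1, self.b1, self.c1)
        h2, dW2, db2, dc2 = self.cd1_step(v2, self.W2, self.b2, self.c2)
        # Consistency gradient: descend ||h1 - h2||^2 so the views agree.
        diff = h1 - h2
        gW1 = v1.T @ (diff * h1 * (1 - h1))
        gW2 = v2.T @ (-diff * h2 * (1 - h2))
        self.W1 += self.lr * (dW1 - self.lam * gW1) / n
        self.W2 += self.lr * (dW2 - self.lam * gW2) / n
        self.b1 += self.lr * db1 / n; self.b2 += self.lr * db2 / n
        self.c1 += self.lr * dc1 / n; self.c2 += self.lr * dc2 / n
        return float(np.mean(diff ** 2))        # current disagreement

# Toy data: two noisy binary views generated from the same latent variable.
z = (rng.random((200, 1)) < 0.5).astype(float)
v1 = (rng.random((200, 6)) < 0.2 + 0.6 * z).astype(float)
v2 = (rng.random((200, 8)) < 0.2 + 0.6 * z).astype(float)

model = TwoViewCorrelationRBM(6, 8, 4)
gaps = [model.fit_step(v1, v2) for _ in range(200)]
```

Setting `lam = 0` recovers two independently trained RBMs; larger values trade reconstruction quality for agreement between views. A unified representation in the spirit of the multimodal DBN would then concatenate or further fuse the two hidden layers.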

Keywords

Restricted Boltzmann machines · Deep belief networks · Multi-view learning · Canonical correlation analysis · Multimodal learning

Notes

Acknowledgements

This work is supported by the Fundamental Research Funds for the Central Universities (No.2017XKZD03).

References

  1. Hinton GE, Salakhutdinov RR (2006) Reducing the dimensionality of data with neural networks. Science 313(5786):504–507
  2. Zhang N, Ding S, Zhang J, Xue Y (2018) An overview on restricted Boltzmann machines. Neurocomputing 275:1186–1199
  3. Courville A, Desjardins G, Bergstra J, Bengio Y (2014) The spike-and-slab RBM and extensions to discrete and sparse data distributions. IEEE Trans Pattern Anal Mach Intell 36(9):1874–1887
  4. Mittelman R, Kuipers B, Savarese S, Lee H (2014) Structured recurrent temporal restricted Boltzmann machines. In: International Conference on Machine Learning, pp 1647–1655
  5. Zhang N, Ding S, Zhang J, Xue Y (2017) Research on point-wise gated deep networks. Appl Soft Comput 52:1210–1221
  6. Nguyen TD, Tran T, Phung D, Venkatesh S (2016) Graph-induced restricted Boltzmann machines for document modeling. Inf Sci 328:60–75
  7. Amer MR, Shields T, Siddiquie B, Tamrakar A, Divakaran A, Chai S (2018) Deep multimodal fusion: a hybrid approach. Int J Comput Vis 126(2–4):440–456
  8. Basu S, Karki M, Ganguly S, DiBiano R, Mukhopadhyay S, Gayaka S, Kannan R, Nemani R (2017) Learning sparse feature representations using probabilistic quadtrees and deep belief nets. Neural Process Lett 45(3):855–867
  9. Salakhutdinov RR, Hinton GE (2009) Deep Boltzmann machines. In: International Conference on Artificial Intelligence and Statistics, pp 448–455
  10. Kang Y, Choi S (2011) Restricted deep belief networks for multi-view learning. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp 130–145
  11. Zhao J, Xie X, Xu X, Sun S (2017) Multi-view learning overview: recent progress and new challenges. Information Fusion 38:43–54
  12. Zhang Y, Yang Y, Li T, Fujita H (2019) A multitask multiview clustering algorithm in heterogeneous situations based on LLE and LE. Knowl-Based Syst 163:776–786
  13. Wang H, Yang Y, Liu B, Fujita H (2019) A study of graph-based system for multi-view clustering. Knowl-Based Syst 163:1009–1019
  14. Liu Q, Sun S (2017) Multi-view regularized Gaussian processes. In: Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp 655–667
  15. Chao G, Sun S (2016) Consensus and complementarity based maximum entropy discrimination for multi-view classification. Inf Sci 367:296–310
  16. Andrew G, Arora R, Bilmes J, Livescu K (2013) Deep canonical correlation analysis. In: International Conference on Machine Learning, pp 1247–1255
  17. Ravanbakhsh S, Póczos B, Schneider J, Schuurmans D, Greiner R (2016) Stochastic neural networks with monotonic activation functions. In: International Conference on Artificial Intelligence and Statistics, pp 809–818
  18. Hinton GE (2002) Training products of experts by minimizing contrastive divergence. Neural Comput 14(8):1771–1800
  19. Li CL, Ravanbakhsh S, Poczos B (2016) Annealing Gaussian into ReLU: a new sampling strategy for leaky-ReLU RBM. arXiv preprint arXiv:1611.03879
  20. Ding S, Zhang X, An Y, Xue Y (2017) Weighted linear loss multiple birth support vector machine based on information granulation for multi-class classification. Pattern Recogn 67:32–46
  21. Mangasarian OL, Street WN, Wolberg WH (1995) Breast cancer diagnosis and prognosis via linear programming. Oper Res 43(4):570–577
  22. Arabasadi Z, Alizadehsani R, Roshanzamir M, Moosaei H, Yarifard AA (2017) Computer aided decision making for heart disease detection using hybrid neural network-genetic algorithm. Comput Methods Prog Biomed 141:19–26
  23. Güvenir HA, Demiröz G, Ilter N (1998) Learning differential diagnosis of erythemato-squamous diseases using voting feature intervals. Artif Intell Med 13(3):147–165
  24. Johnson B, Tateishi R, Xie Z (2012) Using geographically-weighted variables for image classification. Remote Sensing Letters 3(6):491–499
  25. Srivastava N, Hinton GE, Krizhevsky A, Sutskever I, Salakhutdinov RR (2014) Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 15:1929–1958
  26. Zhang J, Ding S, Zhang N, Xue Y (2016) Weight uncertainty in Boltzmann machine. Cogn Comput 8(6):1064–1073

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Nan Zhang (1)
  • Shifei Ding (1, 2, corresponding author)
  • Hongmei Liao (1)
  • Weikuan Jia (3)

  1. School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, China
  2. Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  3. School of Information Science and Engineering, Shandong Normal University, Jinan, China
