
Cognitive Computation, Volume 11, Issue 6, pp 778–788

A Novel Deep Density Model for Unsupervised Learning

  • Xi Yang
  • Kaizhu Huang (corresponding author)
  • Rui Zhang
  • John Y. Goulermas

Abstract

Density models are fundamental in machine learning and find widespread application in practical cognitive modeling and learning tasks. In this work, we introduce a novel deep density model, referred to as deep mixtures of factor analyzers with common loadings (DMCFA), together with an efficient greedy layer-wise unsupervised learning algorithm. In each layer, the model employs a mixture of factor analyzers whose components share a common loading matrix. The common loading can be viewed as a feature selection or dimensionality reduction matrix, which makes the new model more physically interpretable. Importantly, sharing the loading across components markedly reduces both the number of free parameters and the computational complexity. DMCFA therefore performs inference and learning on a dramatically more succinct model, while Gaussian priors over the latent factors preserve its flexibility in estimating the data density. We evaluate our model on five real datasets against three competitive models, namely mixtures of factor analyzers (MFA), MFA with common loadings (MCFA), and deep mixtures of factor analyzers (DMFA), as well as their collapsed counterparts. The results demonstrate the superiority of the proposed model in the tasks of density estimation, clustering, and generation.
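
For concreteness, the following is a minimal sketch of the density modeled by a single layer of this kind, written in our own notation rather than the paper's (we write $A$ for the shared $p \times q$ loading, $\boldsymbol{\xi}_c$ and $\Omega_c$ for the $c$-th component's latent mean and covariance, and $\Psi$ for a diagonal noise covariance):

$$
p(\mathbf{y}) \;=\; \sum_{c=1}^{C} \pi_c \, \mathcal{N}\!\left(\mathbf{y};\; A \boldsymbol{\xi}_c,\; A \Omega_c A^{\top} + \Psi \right), \qquad \sum_{c=1}^{C} \pi_c = 1 .
$$

Because the loading $A$ appears once rather than per component, it costs $pq$ parameters instead of the roughly $Cpq$ required when every component carries its own $p \times q$ loading as in standard MFA, and the per-component parameters $\boldsymbol{\xi}_c \in \mathbb{R}^{q}$ and $\Omega_c \in \mathbb{R}^{q \times q}$ live entirely in the latent space with $q \ll p$; this is the source of the savings in free parameters and computation noted above. Stacking such layers, with each layer's latent factors serving as the input to the next, yields the deep model trained greedily layer by layer.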

Keywords

Deep density model · Mixtures of factor analyzers · Common component factor loadings · Dimensionality reduction

Notes

Funding Information

The work reported in this paper was partially supported by the following: National Natural Science Foundation of China (NSFC) under grant no. 61473236, Natural Science Fund for Colleges and Universities in Jiangsu Province under grant no. 17KJD520010, Suzhou Science and Technology Program under grant nos. SYG201712 and SZS201613, Jiangsu University Natural Science Research Programme under grant no. 17KJB520041, and Key Program Special Fund in XJTLU (KSFA-01).

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants performed by any of the authors.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Xi Yang (1)
  • Kaizhu Huang (1), corresponding author
  • Rui Zhang (1)
  • John Y. Goulermas (2)

  1. Xi’an Jiaotong-Liverpool University, SIP, Suzhou, China
  2. University of Liverpool, Liverpool, UK
