Recursive Extraction of Modular Structure from Layered Neural Networks Using Variational Bayes Method

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10558)


Deep neural networks have made a substantial contribution to the recognition and prediction of complex data in various fields, such as image processing, speech recognition, and bioinformatics. However, it is very difficult to discover knowledge from the inference provided by a neural network, since its internal representation consists of many nonlinear and hierarchical parameters. To solve this problem, an approach has been proposed that extracts a global and simplified structure from a neural network. Although it can successfully detect such a hidden modular structure, its convergence is not sufficiently stable and is sensitive to the initial parameters. In this paper, we propose a new deep learning algorithm that consists of recursive back propagation, community detection using the variational Bayes method, and pruning of unnecessary connections. We show that the proposed method can appropriately detect a hidden inference structure and compress a neural network without increasing the generalization error.
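The three-phase loop described above (backprop training, community detection, pruning) can be sketched in miniature. The sketch below is an illustrative assumption, not the authors' algorithm: it trains a tiny one-hidden-layer network by plain backpropagation on a toy regression task, groups hidden units by the sign pattern of their input weights as a crude stand-in for variational Bayes community detection, and prunes connections by magnitude. All sizes, thresholds, and the toy target function are hypothetical choices.

```python
import math
import random

random.seed(0)

# Tiny network: 2 inputs -> 6 tanh hidden units -> 1 linear output.
# (Illustrative sizes; the paper works with larger layered networks.)
N_IN, N_HID = 2, 6
w1 = [[random.gauss(0, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]  # input -> hidden
w2 = [random.gauss(0, 0.5) for _ in range(N_HID)]                         # hidden -> output

def forward(x):
    h = [math.tanh(sum(w1[j][i] * x[i] for i in range(N_IN))) for j in range(N_HID)]
    return h, sum(w2[j] * h[j] for j in range(N_HID))

# Toy data: learn y = x1 - x2 (a stand-in for a real dataset).
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
targets = [x[0] - x[1] for x in data]

def epoch_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(data, targets)) / len(data)

# Phase 1: backpropagation (plain SGD on squared error).
lr = 0.05
loss_before = epoch_loss()
for _ in range(200):
    for x, t in zip(data, targets):
        h, y = forward(x)
        err = y - t
        for j in range(N_HID):
            grad_h = err * w2[j] * (1 - h[j] ** 2)  # use w2[j] before updating it
            w2[j] -= lr * err * h[j]
            for i in range(N_IN):
                w1[j][i] -= lr * grad_h * x[i]
loss_after = epoch_loss()

# Phase 2: crude "community" grouping of hidden units by weight-sign pattern
# (a simplistic stand-in for the paper's variational Bayes community detection).
communities = {}
for j in range(N_HID):
    key = tuple(1 if w > 0 else -1 for w in w1[j])
    communities.setdefault(key, []).append(j)

# Phase 3: prune connections with small magnitude.
THRESH = 0.05
pruned = 0
for j in range(N_HID):
    for i in range(N_IN):
        if abs(w1[j][i]) < THRESH:
            w1[j][i] = 0.0
            pruned += 1
loss_pruned = epoch_loss()
```

In the paper's full algorithm these phases are applied recursively, with the variational Bayes step providing a principled soft assignment of units to communities rather than the sign-pattern heuristic used here.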


Layered neural networks · Network analysis · Community detection · Pruning · Variational Bayes method



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. NTT Communication Science Laboratories, Atsugi-shi, Japan
