Towards Explaining Deep Neural Networks Through Graph Analysis

  • Conference paper
  • In: Database and Expert Systems Applications (DEXA 2019)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 1062)

Abstract

Due to its potential to solve complex tasks, deep learning is being used across many different areas. The complexity of neural networks, however, makes it difficult to explain the model's whole decision process, which makes understanding deep learning models an active research topic. In this work we address this issue by extracting the knowledge acquired by trained Deep Neural Networks (DNNs) and representing it as a graph. The proposed graph encodes statistical correlations between neurons' activation values in order to expose the relationships between neurons in the hidden layers and both the input layer and the output classes. Two initial experiments in image classification were conducted to evaluate whether the proposed graph can help in understanding and explaining DNNs. We first show how the proposed graph can be explored to find which neurons are most important for predicting each class. Then, we use graph analysis to detect groups of classes that are more similar to each other and to examine how these similarities affect the DNN. Finally, we use heatmaps to visualize which parts of the input layer are responsible for activating each neuron in the hidden layers. The results show that by building and analysing the proposed graph it is possible to gain relevant insights into the DNN's inner workings.
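The paper does not include code on this page. Purely as an illustrative sketch, the snippet below shows one plausible way to build such a co-activation graph in Python: hidden-neuron activations are correlated with one-hot class indicators, and sufficiently strong correlations become weighted edges between neuron nodes and class nodes. The function names (`build_coactivation_graph`, `top_neurons_for_class`), the choice of Pearson correlation, and the `threshold` value are assumptions made for illustration, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): build a graph whose nodes are
# hidden-layer neurons and output classes, with edges weighted by the Pearson
# correlation between each neuron's activations and each class indicator.
import numpy as np
import networkx as nx


def build_coactivation_graph(activations, labels, n_classes, threshold=0.3):
    """activations: (n_samples, n_neurons) hidden-layer activations,
    e.g. collected by running a trained model over a held-out set.
    labels: (n_samples,) integer class labels.
    threshold: hypothetical cut-off on |correlation| for keeping an edge."""
    n_samples, n_neurons = activations.shape
    one_hot = np.eye(n_classes)[labels]  # (n_samples, n_classes)

    graph = nx.Graph()
    graph.add_nodes_from((f"neuron_{j}" for j in range(n_neurons)), kind="neuron")
    graph.add_nodes_from((f"class_{c}" for c in range(n_classes)), kind="class")

    for j in range(n_neurons):
        for c in range(n_classes):
            # Pearson correlation between neuron j and the indicator of class c.
            r = np.corrcoef(activations[:, j], one_hot[:, c])[0, 1]
            if np.isfinite(r) and abs(r) >= threshold:
                graph.add_edge(f"neuron_{j}", f"class_{c}", weight=float(r))
    return graph


def top_neurons_for_class(graph, class_id, k=5):
    """Rank the neurons most strongly correlated with one class."""
    neighbours = graph[f"class_{class_id}"]
    ranked = sorted(neighbours.items(),
                    key=lambda kv: abs(kv[1]["weight"]),
                    reverse=True)
    return ranked[:k]
```

Under the same caveat, the groups of similar classes mentioned in the abstract could be recovered by running a community-detection method over this graph, for example `networkx.algorithms.community.louvain_communities`; this is one plausible reading of "graph analysis", not necessarily the paper's exact procedure.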



Acknowledgements

The Insight Centre for Data Analytics is supported by Science Foundation Ireland under Grant Number 17/RC-PhD/3483.

Author information

Corresponding author: Vitor A. C. Horta.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Horta, V.A.C., Mileo, A. (2019). Towards Explaining Deep Neural Networks Through Graph Analysis. In: Anderst-Kotsis, G., et al. Database and Expert Systems Applications. DEXA 2019. Communications in Computer and Information Science, vol 1062. Springer, Cham. https://doi.org/10.1007/978-3-030-27684-3_20

  • DOI: https://doi.org/10.1007/978-3-030-27684-3_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-27683-6

  • Online ISBN: 978-3-030-27684-3

  • eBook Packages: Computer Science, Computer Science (R0)
