
Visualizing surrogate decision trees of convolutional neural networks


Interpreting the decision-making of black-box machine learning models has become urgent owing to their lack of transparency. One effective approach is to transform them into interpretable surrogate models such as decision trees and rule lists. Compared with other methods for opening black boxes, rule extraction is universal: in theory, it extends to any black box. In practice, however, it is ill-suited to deep learning models such as convolutional neural networks (CNNs), because the extracted rules or decision trees are too large to interpret and the rules are not expressed at a semantic level. These two drawbacks limit the usability of rule extraction for deep learning models. In this paper, we adopt a new strategy to address this problem. We first decompose a CNN into a feature extractor and a classifier, and extract a decision tree from the classifier only. We then leverage a large set of segmented, labeled images to learn the concept represented by each feature. This method extracts human-readable decision trees from CNNs. Finally, we build CNN2DT, a visual analysis system that enables users to explore the surrogate decision trees. Use cases show that CNN2DT provides both global and local interpretations of the CNN decision process. In addition, users can easily identify the reasons for misclassifications of individual images and compare the discriminating capacity of different models. A user study demonstrates the effectiveness of CNN2DT on AlexNet and VGG16 for image classification.
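The extraction step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the CNN feature activations are stood in for by a synthetic array, and the classifier head being approximated is stood in for by a simple rule over two feature channels. Only the general recipe is shown, namely fitting a shallow decision tree to the (features, black-box prediction) pairs and measuring how faithfully it mimics the classifier.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the feature extractor's output: one activation vector per image.
# In the paper's setting these would come from the CNN's convolutional stages.
features = rng.random((500, 16))

# Stand-in for the classifier head being approximated: a simple threshold rule
# over two channels plays the role of the black-box predictions here.
blackbox_labels = (features[:, 0] + features[:, 3] > 1.0).astype(int)

# Fit a shallow surrogate tree on (features, black-box predictions), so that
# each split tests a single feature and can later be named with a concept.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(features, blackbox_labels)

# Fidelity: how often the surrogate agrees with the black box on the same data.
fidelity = surrogate.score(features, blackbox_labels)
print(f"surrogate fidelity: {fidelity:.2f}")
```

Limiting the tree depth is the key design trade-off: a shallower tree is easier to read and visualize but agrees with the black box less often, which is why a fidelity measure accompanies the extraction.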








Acknowledgements

We would like to thank David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba, the authors of network dissection; part of this work builds on their results and open-source release. We also greatly appreciate the feedback from the anonymous reviewers. This work was supported by the National NSF of China (No. 61702359).

Author information

Correspondence to Shichao Jia.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below are the links to the electronic supplementary material.

Supplementary material 1 (pdf 399 KB)

Supplementary material 2 (avi 86703 KB)


Cite this article

Jia, S., Lin, P., Li, Z. et al. Visualizing surrogate decision trees of convolutional neural networks. J Vis 23, 141–156 (2020).


Keywords

  • Rule extraction
  • Surrogate decision tree
  • Convolutional neural networks
  • Deep learning
  • Model interpretation