Efficiently Explaining Decisions of Probabilistic RBF Classification Networks
For many important practical applications, model transparency is a key requirement. A probabilistic radial basis function (PRBF) network is an effective non-linear classifier, but, like most other neural network models, it does not readily yield explanations for its decisions. Recently, two general methods for explaining a model's decisions on individual instances have been introduced, both based on decomposing the model's prediction into contributions of individual attributes. By exploiting the marginalization property of the Gaussian distribution, we show that the PRBF network is especially well suited to these explanation techniques. We demonstrate the resulting methods by explaining the PRBF's decisions for new unlabeled cases, and we accompany the presentation with a visualization technique that works both for single instances and for attributes and their values, providing a valuable tool for inspecting otherwise opaque models.
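The key idea above, decomposing a prediction into per-attribute contributions by marginalizing attributes out (which, for Gaussian components, amounts to simply dropping dimensions), can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes a single diagonal Gaussian per class rather than a full PRBF mixture, and all function names and the toy data are hypothetical.

```python
import numpy as np

def gauss_pdf(x, mean, var, dims):
    """Diagonal Gaussian density of x restricted to the attribute subset `dims`.

    Marginalizing a diagonal Gaussian over the remaining attributes is
    equivalent to evaluating only the kept dimensions."""
    d = np.asarray(list(dims))
    diff = x[d] - mean[d]
    return np.exp(-0.5 * np.sum(diff**2 / var[d])) / np.sqrt(np.prod(2 * np.pi * var[d]))

def class_posterior(x, means, variances, priors, dims):
    """Posterior P(class | x) computed from the attributes in `dims` only."""
    likes = np.array([gauss_pdf(x, m, v, dims) for m, v in zip(means, variances)])
    joint = priors * likes
    return joint / joint.sum()

def attribute_contributions(x, means, variances, priors, target):
    """Contribution of each attribute = prediction with the attribute
    minus prediction with that attribute marginalized out."""
    n = len(x)
    full = class_posterior(x, means, variances, priors, range(n))[target]
    contribs = []
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        without_i = class_posterior(x, means, variances, priors, rest)[target]
        contribs.append(full - without_i)
    return full, contribs

# toy demo: two classes that differ only in attribute 0
means = [np.array([0.0, 0.0]), np.array([3.0, 0.0])]
variances = [np.ones(2), np.ones(2)]
priors = np.array([0.5, 0.5])
x = np.array([3.0, 0.0])
full, contribs = attribute_contributions(x, means, variances, priors, target=1)
# attribute 0 carries essentially all of the evidence for class 1:
# contribs[0] ≈ 0.49, contribs[1] ≈ 0.0
```

The "drop the dimension" step in `gauss_pdf` is exactly where the marginalization property of the Gaussian pays off: no retraining or numerical integration is needed to evaluate the model without an attribute.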
Keywords: classification explanation, model explanation, comprehensibility, probabilistic RBF networks, model visualization, game theory