Explaining Single Predictions: A Faster Method

  • Gabriel Ferrettini (corresponding author)
  • Julien Aligon
  • Chantal Soulé-Dupuy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12011)

Abstract

Machine learning has proven increasingly essential in many fields. Yet many obstacles still hinder its use by non-experts, foremost among them the lack of trust in the results obtained, which has inspired several explanatory approaches in the literature. In this paper, we investigate the domain of single prediction explanation: the user is given a detailed explanation of each attribute's influence on a single predicted instance, with respect to a particular machine learning model. Many explanation methods have been proposed recently, but these approaches often require considerable computation time to be effective. We therefore propose new explanation methods that aim to improve time performance, at the cost of a small loss in accuracy.
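As an illustration only, and not the authors' proposed method, the following minimal sketch shows one common way to estimate an attribute's influence on a single prediction: replace that attribute with values observed in the data and measure how much the predicted probability changes, in the spirit of the permutation-based explanatory approaches mentioned above. The names `model`, `X_train`, and `instance` are assumptions: any fitted classifier exposing scikit-learn's `predict_proba` on numeric feature arrays.

```python
# Minimal sketch (assumption: scikit-learn-style classifier, numeric numpy arrays).
import numpy as np

def single_prediction_influences(model, X_train, instance, n_samples=100, rng=None):
    """Estimate per-attribute influence on the predicted class of one instance."""
    rng = np.random.default_rng(rng)
    instance = np.asarray(instance, dtype=float)
    base_proba = model.predict_proba(instance.reshape(1, -1))[0]
    target = int(np.argmax(base_proba))  # class whose prediction we explain
    influences = np.zeros(instance.shape[0])
    for j in range(instance.shape[0]):
        # Replace attribute j with values observed in the data; keep the rest fixed.
        perturbed = np.tile(instance, (n_samples, 1))
        perturbed[:, j] = rng.choice(X_train[:, j], size=n_samples)
        mean_proba = model.predict_proba(perturbed)[:, target].mean()
        # Positive score: the observed attribute value pushes the prediction upward.
        influences[j] = base_proba[target] - mean_proba
    return influences
```

For example, `single_prediction_influences(clf, X_train, X_test[0])` would return one influence score per attribute for the first test instance; larger positive scores indicate attributes whose observed values most support the predicted class. The computational cost grows with the number of attributes and perturbation samples, which is precisely the kind of overhead the paper seeks to reduce.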

Keywords

Machine learning · Explanation model · Predictive model

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Gabriel Ferrettini (1) (corresponding author)
  • Julien Aligon (1)
  • Chantal Soulé-Dupuy (1)

  1. Université de Toulouse, UT1, IRIT (CNRS/UMR 5505), Toulouse, France