Explaining Single Predictions: A Faster Method

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12011)

Abstract

Machine learning has proven increasingly essential in many fields, yet many obstacles still hinder its use by non-experts. Foremost among them is a lack of trust in the results obtained, which has inspired several explanatory approaches in the literature. In this paper, we investigate the explanation of single predictions: for a given machine learning model, the user is provided with a detailed account of each attribute's influence on an individual predicted instance. Many such explanation methods have been developed recently, but they often require substantial computation time to be effective. We therefore propose new explanation methods that improve time performance at the cost of a small loss in accuracy.
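The per-attribute influence idea described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' method: it approximates each attribute's contribution to one prediction by Monte Carlo sampling over feature subsets drawn from a background dataset, in the spirit of the game-theoretic approaches this line of work builds on. The `predict` function, the weights inside it, and all parameter values are assumptions for the example.

```python
import math
import random

# Hypothetical black-box model: a toy logistic scorer over three
# numeric attributes (stands in for any trained classifier).
def predict(x):
    score = 0.8 * x[0] - 0.5 * x[1] + 0.1 * x[2]
    return 1.0 / (1.0 + math.exp(-score))

def influence(predict_fn, instance, background, n_samples=2000, seed=0):
    """Estimate each attribute's influence on a single prediction.

    For each attribute i, repeatedly draw a background example and a
    random subset of the remaining attributes; measure how the model
    output changes when attribute i is taken from the instance rather
    than from the background (a Monte Carlo approximation of a
    Shapley-style contribution).
    """
    rng = random.Random(seed)
    d = len(instance)
    contrib = [0.0] * d
    for i in range(d):
        total = 0.0
        for _ in range(n_samples):
            b = rng.choice(background)
            mask = [rng.random() < 0.5 for _ in range(d)]
            # attribute i present (taken from the instance)...
            x_with = [instance[j] if (mask[j] or j == i) else b[j]
                      for j in range(d)]
            # ...versus absent (taken from the background)
            x_without = [instance[j] if (mask[j] and j != i) else b[j]
                         for j in range(d)]
            total += predict_fn(x_with) - predict_fn(x_without)
        contrib[i] = total / n_samples
    return contrib
```

For the toy model above, the positively weighted attributes receive positive influence scores and the negatively weighted one a negative score. The cost worth noting is that the number of model evaluations (here `2 * d * n_samples`) grows quickly with the number of attributes and samples, which is precisely the computational burden that motivates faster approximation methods.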


Notes

  1. Iris, Fisher: https://en.wikipedia.org/wiki/Iris_flower_data_set.

  2. http://osirim.irit.fr/site/en.

  3. Available at https://www.openml.org/s/107/tasks.


Author information

Correspondence to Gabriel Ferrettini.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Ferrettini, G., Aligon, J., Soulé-Dupuy, C. (2020). Explaining Single Predictions: A Faster Method. In: Chatzigeorgiou, A., et al. (eds.) SOFSEM 2020: Theory and Practice of Computer Science. Lecture Notes in Computer Science, vol. 12011. Springer, Cham. https://doi.org/10.1007/978-3-030-38919-2_26

  • DOI: https://doi.org/10.1007/978-3-030-38919-2_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-38918-5

  • Online ISBN: 978-3-030-38919-2

  • eBook Packages: Computer Science (R0)
