Measuring Interpretability for Different Types of Machine Learning Models

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11154)

Abstract

The interpretability of a machine learning model plays a significant role in practical applications, so it is necessary to develop a method for comparing the interpretability of different models in order to select the most appropriate one. However, model interpretability is a highly subjective concept that is difficult to measure accurately, let alone compare across models. To this end, we develop an interpretability evaluation model that computes the interpretability of a model and compares interpretability across models. Specifically, we first present a general form of model interpretability. Second, we develop a questionnaire survey system to collect information about users’ understanding of a machine learning model. Next, we select three structural features to investigate the relationship between interpretability and structural complexity. We then build an interpretability label from the questionnaire results and develop a linear regression model to evaluate the relationship between the structural features and model interpretability. The experimental results demonstrate that our interpretability evaluation model is valid and reliable for evaluating the interpretability of different models.
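To make the pipeline described above concrete, the following is a minimal sketch (not the authors’ code) of the final step: fitting a linear regression that maps structural-complexity features to interpretability labels aggregated from questionnaire responses. The three feature choices and all numbers below are hypothetical placeholders, since the paper’s actual features and data are not given here.

    # Minimal sketch in Python/scikit-learn, under the assumptions stated above.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical structural features per model:
    # [number of parameters, depth, number of rules/terms]
    X = np.array([
        [10,  3,   5],   # e.g., a small decision tree
        [50,  8,  40],   # e.g., a deeper tree
        [200, 1, 200],   # e.g., a linear model with many coefficients
        [15,  4,   8],
    ])

    # Interpretability labels: mean user-understanding score per model,
    # aggregated from questionnaire responses (higher = easier to understand).
    y = np.array([4.2, 2.8, 2.1, 3.9])

    # Fit the regression and inspect how each structural feature
    # relates to the survey-derived interpretability score.
    reg = LinearRegression().fit(X, y)
    print("coefficients:", reg.coef_)
    print("intercept:", reg.intercept_)
    print("predicted interpretability:", reg.predict(X))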


Notes

  1. Data is available at http://pan.baidu.com/s/1eRSNWtW.


Author information

Correspondence to Chao Mou.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhou, Q., Liao, F., Mou, C., Wang, P. (2018). Measuring Interpretability for Different Types of Machine Learning Models. In: Ganji, M., Rashidi, L., Fung, B., Wang, C. (eds) Trends and Applications in Knowledge Discovery and Data Mining. PAKDD 2018. Lecture Notes in Computer Science (LNAI), vol 11154. Springer, Cham. https://doi.org/10.1007/978-3-030-04503-6_29


  • DOI: https://doi.org/10.1007/978-3-030-04503-6_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04502-9

  • Online ISBN: 978-3-030-04503-6

  • eBook Packages: Computer Science, Computer Science (R0)
