
Measuring Interpretability for Different Types of Machine Learning Models

  • Qing Zhou
  • Fenglu Liao
  • Chao Mou
  • Ping Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11154)

Abstract

The interpretability of a machine learning model plays a significant role in practical applications, so it is necessary to develop a method for comparing the interpretability of different models in order to select the most appropriate one. However, model interpretability is a highly subjective concept that is difficult to measure accurately, let alone to compare across models. To this end, we develop an interpretability evaluation model that computes model interpretability and compares it across different models. Specifically, we first present a general form of model interpretability. Second, a questionnaire survey system is developed to collect information about users’ understanding of a machine learning model. Next, three structural features are selected to investigate the relationship between interpretability and structural complexity. After this, an interpretability label is built from the questionnaire survey results, and a linear regression model is developed to evaluate the relationship between the structural features and model interpretability. The experimental results demonstrate that our interpretability evaluation model is valid and reliable for evaluating the interpretability of different models.
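The final step described above can be illustrated with a minimal sketch: fitting a linear regression of survey-derived interpretability labels on structural-complexity features. The abstract does not name the three structural features, so the node count, depth, and parameter count below, along with all data values, are illustrative assumptions rather than the paper's actual choices.

    # Hypothetical sketch: regress survey-derived interpretability labels on
    # structural-complexity features (feature names and values are assumptions).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Each row describes one trained model; columns are assumed structural
    # features: [node count, depth, parameter count].
    X = np.array([
        [ 15,  3,   15],   # small decision tree
        [ 63,  6,   63],   # deeper decision tree
        [  1,  1,   10],   # linear model with 10 coefficients
        [200, 12, 5000],   # larger, more complex model
    ], dtype=float)

    # Interpretability labels, e.g. mean comprehension scores from the
    # questionnaire survey rescaled to [0, 1] (illustrative values).
    y = np.array([0.90, 0.70, 0.85, 0.20])

    reg = LinearRegression().fit(X, y)
    print("coefficients:", reg.coef_)     # weight of each structural feature
    print("intercept:", reg.intercept_)

    # Predict an interpretability score for a new, unseen model structure.
    new_model = np.array([[31, 5, 31]], dtype=float)
    print("predicted interpretability:", reg.predict(new_model))

The fitted coefficients then indicate how strongly each structural-complexity feature is associated with the interpretability users report in the survey.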

Keywords

Structural complexity · Model interpretability · Interpretability evaluation model · Machine learning models

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. The College of Computer Science, Chongqing University, Chongqing, China
  2. School of Foreign Languages and Cultures, Chongqing University, Chongqing, China
