Advanced Neural Network Approach, Its Explanation with LIME for Credit Scoring Application

  • Lkhagvadorj Munkhdalai
  • Ling Wang
  • Hyun Woo Park
  • Keun Ho Ryu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11432)

Abstract

Neural network models have achieved human-level performance in many application domains, including image classification, speech recognition and machine translation. In credit scoring, however, the neural network approach has been of limited use because of its black-box nature: the relationship between the contextual input and the output cannot be fully understood. In this study, we investigate an advanced neural network approach and its explanation for credit scoring. We use the LIME technique to interpret the black box of such a neural network and verify its trustworthiness by comparing it with a highly interpretable logistic model. The results show that the neural network models achieve higher accuracy while providing explanations equivalent to those of the logistic model.
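The abstract describes interpreting a black-box neural network with LIME, which locally approximates the model around one instance by perturbing it, weighting perturbations by proximity, and fitting a weighted linear surrogate. The following pure-Python sketch illustrates only that core idea under simplifying assumptions (Gaussian perturbations, an exponential proximity kernel); it is not the authors' implementation, and the function names and defaults are hypothetical.

```python
import math
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lime_explain(predict, x, n_samples=500, width=0.75, seed=0):
    """Minimal LIME-style local explanation of a black-box predict(z) -> score.

    Perturbs x with Gaussian noise, weights each perturbation by its
    proximity to x, fits a weighted linear surrogate via the normal
    equations, and returns one weight per feature.
    """
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0.0, 1.0) for xi in x]
        dist2 = sum((zi - xi) ** 2 for zi, xi in zip(z, x))
        X.append([1.0] + z)                       # intercept column + features
        y.append(predict(z))
        w.append(math.exp(-dist2 / (width ** 2)))  # proximity kernel
    k = len(x) + 1
    # Weighted normal equations: (X^T W X) beta = X^T W y
    A = [[sum(w[n] * X[n][i] * X[n][j] for n in range(n_samples))
          for j in range(k)] for i in range(k)]
    b = [sum(w[n] * X[n][i] * y[n] for n in range(n_samples)) for i in range(k)]
    beta = solve(A, b)
    return beta[1:]  # per-feature surrogate weights (intercept dropped)
```

On a model that is already linear the surrogate recovers the true coefficients, which is a convenient sanity check; on a real credit-scoring network the returned weights would instead describe the model's behaviour only in the neighbourhood of the explained applicant.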

Keywords

Neural network · LIME · Credit scoring

Notes

Acknowledgements

This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. 2017R1A2B4010826), by the Business for Cooperative R&D between Industry, Academy, and Research Institute funded by the Korea Small and Medium Business Administration (Grant No. C0541451), by the Private Intelligence Information Service Expansion (No. C0511-18-1001) funded by the NIPA (National IT Industry Promotion Agency), and by the National Natural Science Foundation of China (Grant No. 61701104).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Database/Bioinformatics Laboratory, School of Electrical and Computer Engineering, Chungbuk National University, Cheongju, Republic of Korea
  2. Department of Computer Technology, School of Information Engineering, Northeast Electric Power University, Jilin City, China
  3. Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City, Vietnam
  4. Department of Computer Science, Chungbuk National University, Cheongju, Republic of Korea