
Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees

  • Guiliang Liu (corresponding author)
  • Oliver Schulte
  • Wang Zhu
  • Qingcan Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11052)

Abstract

Deep Reinforcement Learning (DRL) has achieved impressive success in many applications. A key component of many DRL models is a neural network representing a Q function, to estimate the expected cumulative reward following a state-action pair. The Q function neural network encodes substantial implicit knowledge about the RL problem, but this knowledge often remains unexamined and uninterpreted. To our knowledge, this work develops the first mimic learning framework for Q functions in DRL. We introduce Linear Model U-trees (LMUTs) to approximate neural network predictions. An LMUT is learned using a novel on-line algorithm that is well-suited for an active play setting, where the mimic learner observes an ongoing interaction between the neural net and the environment. Empirical evaluation shows that an LMUT mimics a Q function substantially better than five baseline methods. The transparent tree structure of an LMUT facilitates understanding the network’s learned strategic knowledge by analyzing feature influence, extracting rules, and highlighting the super-pixels in image inputs. Code related to this paper is available at: https://github.com/Guiliang/uTree_mimic_mountain_car.
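The sketch below illustrates the mimic-learning setup the abstract describes: during "active play" the deep Q-network controls the environment, the mimic learner records (state, action, Q-value) samples, and a tree-based regressor is fit to those samples. It is only a minimal illustration; the toy Q-function, toy environment, and the use of scikit-learn's DecisionTreeRegressor as a batch stand-in are assumptions, whereas the paper's LMUT fits linear models at the leaves and is updated with an on-line algorithm.

```python
# Minimal sketch of mimic learning for a Q function (not the paper's LMUT):
# gather (state, action) -> Q(state, action) samples while the deep model
# plays, then fit an interpretable tree regressor to them.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
N_ACTIONS = 3

def q_network(state):
    """Placeholder for the trained deep Q-network: one Q-value per action."""
    return np.array([np.sin(state @ np.array([1.0, -0.5])) + 0.1 * a
                     for a in range(N_ACTIONS)])

def env_step(state, action):
    """Placeholder environment transition (e.g. Mountain Car would go here)."""
    return state + 0.05 * (action - 1) + 0.01 * rng.standard_normal(2)

# Active play: the mimic learner observes the deep model's greedy interaction.
states, actions, q_targets = [], [], []
state = rng.uniform(-1.0, 1.0, size=2)
for _ in range(5000):
    q_values = q_network(state)
    action = int(np.argmax(q_values))       # deep model's greedy action
    states.append(state)
    actions.append(action)
    q_targets.append(q_values[action])      # soft Q output used as mimic target
    state = env_step(state, action)

# Mimic input: state features plus the chosen action, as one feature vector.
X = np.column_stack([np.array(states), np.array(actions)])
y = np.array(q_targets)

mimic = DecisionTreeRegressor(max_leaf_nodes=32).fit(X, y)
print("mimic fidelity (R^2 vs. deep Q):", mimic.score(X, y))
```

In the paper, the tree structure obtained this way is what supports the interpretability analyses mentioned above: feature influence, rule extraction, and super-pixel highlighting.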


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Guiliang Liu (1, corresponding author)
  • Oliver Schulte (1)
  • Wang Zhu (1)
  • Qingcan Li (1)

  1. School of Computing Science, Simon Fraser University, Burnaby, Canada
