Interpreting Deep Sports Analytics: Valuing Actions and Players in the NHL

  • Guiliang Liu
  • Wang Zhu
  • Oliver Schulte
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11330)

Abstract

Deep learning has started to have an impact on sports analytics. Several papers have applied action-value Q-learning to quantify a team’s chance of success given the current match state. However, the black-box opacity of neural networks prevents understanding why and when some actions are more valuable than others. This paper applies interpretable Mimic Learning to distill knowledge from an opaque neural-network model into a transparent regression-tree model. We apply Deep Reinforcement Learning to compute the Q function and the impact of actions under different game contexts from 3M play-by-play events in the National Hockey League (NHL); the impact of an action is the change in Q-value due to the action. The play data, together with the associated Q values and impact values, are fitted by a mimic regression tree. We learn both a general mimic regression tree for all players and player-specific trees. The transparent tree structure facilitates understanding of general action values through feature influence and partial dependence plots, and of a player’s exceptional characteristics by identifying player-specific relevant state regions.
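
The mimic-learning step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the feature matrix, the Q-values, and the tree hyper-parameters are random placeholders, and scikit-learn's DecisionTreeRegressor stands in for the paper's mimic regression tree.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.inspection import PartialDependenceDisplay

    rng = np.random.default_rng(0)
    states = rng.random((10_000, 12))   # placeholder game-state features, one row per play-by-play event
    q = rng.random(10_000)              # placeholder soft labels: Q-values from the opaque deep model

    # Impact of an action = change in Q-value due to the action (0 for the first event).
    impact = np.diff(q, prepend=q[0])

    # Distill the deep model into transparent trees by regressing on its outputs
    # (soft labels) rather than on raw game outcomes. The paper fits Q-values and
    # impact jointly; one single-output tree per target keeps this sketch simple.
    mimic_q = DecisionTreeRegressor(max_depth=6, min_samples_leaf=50).fit(states, q)
    mimic_impact = DecisionTreeRegressor(max_depth=6, min_samples_leaf=50).fit(states, impact)

    # Feature influence: which game features drive the mimicked Q-values.
    print(mimic_q.feature_importances_)

    # Partial dependence of the mimicked Q-value on one feature (e.g. a location coordinate).
    PartialDependenceDisplay.from_estimator(mimic_q, states, features=[0])
    plt.show()

Under the same assumptions, a player-specific mimic tree can be obtained by restricting the fit to the play-by-play events involving that player.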

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Simon Fraser University, Burnaby, Canada
