
Dynamic Student Classification on Memory Networks for Knowledge Tracing

  • Sein Minn
  • Michel C. Desmarais
  • Feida Zhu
  • Jing Xiao
  • Jianzong Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11440)

Abstract

Knowledge Tracing (KT) is the task of assessing a student's knowledge state and predicting whether that student will answer the next problem correctly, based on the student's previous practice attempts and their outcomes. KT leverages machine learning and data mining techniques to provide better assessment, supportive learning feedback and adaptive instruction. In this paper, we propose a novel model called Dynamic Student Classification on Memory Networks (DSCMN) for knowledge tracing, which enhances existing KT approaches by capturing the student's temporal learning ability at each time interval over the long-term learning process. Experimental results confirm that the proposed model predicts student performance significantly better than well-known state-of-the-art KT modelling techniques.
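
The abstract only names the main ingredients (per-interval learning ability, student clustering, key-value memory networks); the paper itself gives the implementation details. As a minimal, hypothetical sketch of the "dynamic student classification" step, assuming temporal learning ability is summarized by a per-interval success rate and that students are re-grouped at every interval with k-means, one plausible reading looks like the following (all function names, features and parameters are illustrative assumptions, not the authors' code):

```python
import numpy as np
from sklearn.cluster import KMeans


def interval_features(outcomes, interval_size=20):
    """Summarize one student's 0/1 outcome sequence per fixed-size interval."""
    feats = []
    for start in range(0, len(outcomes), interval_size):
        chunk = outcomes[start:start + interval_size]
        # Hypothetical features: success rate and number of attempts in the interval.
        feats.append([float(np.mean(chunk)), float(len(chunk))])
    return np.array(feats)


def dynamic_classification(all_outcomes, n_groups=4, interval_size=20):
    """Assign every (student, interval) pair to an ability group via k-means."""
    rows, index = [], []
    for sid, outcomes in all_outcomes.items():
        for t, row in enumerate(interval_features(outcomes, interval_size)):
            rows.append(row)
            index.append((sid, t))
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(np.array(rows))
    # The resulting group label would then be fed, together with the exercise,
    # into the downstream KT model (e.g., a key-value memory network).
    return dict(zip(index, labels))


if __name__ == "__main__":
    # Toy usage: two students with clearly different success patterns.
    toy = {
        "s1": [1, 1, 0, 1] * 10,  # mostly correct
        "s2": [0, 0, 1, 0] * 10,  # mostly incorrect
    }
    print(dynamic_classification(toy, n_groups=2, interval_size=10))
```

Re-computing the group label at every interval, rather than once per student, is what would let the downstream model react to changes in learning ability over a long practice history, which is the enhancement the abstract claims over existing KT approaches.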

Keywords

Massive open online courses · Knowledge tracing · Key-value memory networks · Student clustering · LSTMs


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Sein Minn (1)
  • Michel C. Desmarais (1)
  • Feida Zhu (2)
  • Jing Xiao (3)
  • Jianzong Wang (3)
  1. Polytechnique Montreal, Montreal, Canada
  2. Singapore Management University, Singapore, Singapore
  3. Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China