Identifiability: A Fundamental Problem of Student Modeling

  • Joseph E. Beck
  • Kai-min Chang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4511)

Abstract

In this paper we show how model identifiability is an issue for student modeling: observed student performance corresponds to an infinite family of possible model parameter estimates, all of which make identical predictions about student performance. However, these parameter estimates make different claims, some of which are clearly incorrect, about the student’s unobservable internal knowledge. We propose methods for evaluating these models to find ones that are more plausible. Specifically, we present an approach using Dirichlet priors to bias model search that results in a statistically reliable improvement in predictive accuracy (AUC of 0.620 ± 0.002 vs. 0.614 ± 0.002). Furthermore, the parameters associated with this model provide more plausible estimates of student learning, and track better with known properties of students’ background knowledge. The main conclusion is that prior beliefs are necessary to bias the student modeling search, and that even large quantities of performance data alone are insufficient to properly estimate the model.
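The identifiability problem the abstract describes can be seen directly in the standard knowledge tracing model, whose four parameters are prior knowledge, learn rate, guess, and slip. A minimal sketch (the parameter values below are illustrative, not taken from the paper) shows two parameter sets that make opposite claims about what the student knows, yet produce identical expected performance curves:

```python
# Minimal sketch of the identifiability problem in knowledge tracing.
# Parameter names (prior, learn, guess, slip) follow the standard
# knowledge tracing formulation; the specific numbers are illustrative.

def performance_curve(prior, learn, guess, slip, n=10):
    """Expected P(correct) at each practice opportunity, marginalizing
    over (not conditioning on) the student's observed responses."""
    p_know = prior
    curve = []
    for _ in range(n):
        # Student answers correctly if she knows the skill and does not
        # slip, or does not know it but guesses.
        curve.append(p_know * (1 - slip) + (1 - p_know) * guess)
        # Unconditional learning transition after each opportunity.
        p_know = p_know + (1 - p_know) * learn
    return curve

# Two very different claims about the student: one says she starts out
# knowing the skill half the time and never guesses; the other says she
# starts out knowing nothing and guesses nearly half the time.
high_knowledge = performance_curve(prior=0.5, learn=0.1, guess=0.0,   slip=0.05)
no_knowledge   = performance_curve(prior=0.0, learn=0.1, guess=0.475, slip=0.05)

# ...yet they predict the same performance at every opportunity.
assert all(abs(a - b) < 1e-9 for a, b in zip(high_knowledge, no_knowledge))
```

The coincidence is algebraic, not accidental: the expected curve works out to (1 − slip) − (1 − slip − guess)(1 − prior)(1 − learn)^n, so any two parameter sets sharing the same slip, learn, and product (1 − slip − guess)(1 − prior) are indistinguishable from performance data alone. This is why the paper argues that external biases such as Dirichlet priors are needed to select among them.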

Keywords

Student Performance · Area Under Curve · Intelligent Tutoring System · Student Knowledge · Baseline Approach
These keywords were added by machine and not by the authors.

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Joseph E. Beck¹
  • Kai-min Chang¹
  1. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
