
Evaluation of Data Aging: A Technique for Discounting Old Data during Student Modeling

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1452)

Abstract

Student modeling systems must operate in an environment in which a student's mastery of the subject matter is likely to change as a lesson progresses. A student model is formed by evaluating evidence about the student's mastery of the domain. However, given that such mastery will change, older evidence is likely to be less valuable than recent evidence. Data aging addresses this issue by discounting the value of older evidence. This paper provides an experimental evaluation of the effects of data aging. While it is demonstrated that data aging can produce statistically significant increases in both the number and accuracy of the predictions a modeling system makes, it is also demonstrated that the reverse can be true. Further, the effects observed are of only small magnitude. It is argued that these results demonstrate some potential for data aging as a general strategy, but do not warrant employing data aging in its current form.
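The abstract describes data aging only at the level of discounting older evidence; the concrete weighting scheme used in the paper's modeling system is not given here. The following is a minimal sketch of one plausible realisation, exponential discounting of evidence weights, in which the Evidence structure and the decay parameter are illustrative assumptions rather than the paper's actual method:

    # A minimal sketch of data aging as exponential discounting of evidence.
    # Assumptions (not taken from the paper): evidence items are weighted
    # observations, and decay (0 < decay <= 1) is a hypothetical aging
    # factor; decay = 1.0 disables aging, recovering an unweighted model.

    from dataclasses import dataclass

    @dataclass
    class Evidence:
        correct: bool      # whether the student answered this item correctly
        weight: float = 1.0

    def age_evidence(history: list[Evidence], decay: float = 0.9) -> None:
        """Discount every stored observation once per modeling step."""
        for item in history:
            item.weight *= decay

    def mastery_estimate(history: list[Evidence]) -> float:
        """Weighted proportion of correct responses; recent evidence dominates."""
        total = sum(item.weight for item in history)
        if total == 0:
            return 0.0
        return sum(item.weight for item in history if item.correct) / total

    # Usage: age the existing evidence before recording each new observation,
    # so older responses count for progressively less in the estimate.
    history: list[Evidence] = []
    for response in [False, False, True, True, True]:
        age_evidence(history, decay=0.8)
        history.append(Evidence(correct=response))
    print(f"estimated mastery: {mastery_estimate(history):.2f}")

Under this sketch, a student whose early errors are followed by a run of correct answers is credited with near-mastery, whereas an unweighted model would still count the stale errors at full value; this mirrors the trade-off the paper evaluates, since discounting also throws away information when mastery has not in fact changed.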



Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

School of Computing and Mathematics, Deakin University, Geelong, Australia
