Providing Feedback to Equation Entries in an Intelligent Tutoring System for Physics

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1452)


Andes, an intelligent tutoring system for Newtonian physics, provides an environment for students to solve quantitative physics problems. Andes provides immediate correct/incorrect feedback to each student entry during problem solving. When a student enters an equation, Andes must (1) determine quickly whether that equation is correct, and (2) provide helpful feedback indicating what is wrong with the student’s entry. To address the former, we match student equations against a pre-generated list of correct equations. To address the latter, we use the pre-generated equations to infer what equation the student may have been trying to enter, and generate hints based on the discrepancies. This paper describes the representation of equations and the procedures Andes uses to perform these tasks.
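The two tasks described above — a fast correctness check against pre-generated equations, and discrepancy-based hinting — can be sketched roughly as follows. This is an illustrative Python sketch, not Andes' actual implementation: the canonicalizer handles only `+`-separated terms (a real system would normalize full algebraic forms), and the `CORRECT` table is a hypothetical pre-generated equation list for a single problem.

```python
def negate(term):
    """Flip the sign prefix of a term string."""
    return term[1:] if term.startswith("-") else "-" + term

def canonical_key(equation):
    """Reduce an equation like "v = v0 + a*t" to an order- and
    side-independent key: move right-hand terms to the left with
    negated sign, sort, and normalize the overall sign by taking
    the lexicographically smaller of the two sign choices."""
    lhs, rhs = (side.replace(" ", "") for side in equation.split("="))
    terms = tuple(sorted(lhs.split("+") + [negate(t) for t in rhs.split("+")]))
    flipped = tuple(sorted(negate(t) for t in terms))
    return min(terms, flipped)

# Hypothetical pre-generated correct equations for one problem.
CORRECT = {
    canonical_key("F = m*a"): "F = m*a",
    canonical_key("v = v0 + a*t"): "v = v0 + a*t",
}

def check_entry(entry):
    """Return (is_correct, hint).  Correctness is an O(1) hash-table
    lookup on the canonical key; for an incorrect entry, the correct
    equation with the fewest differing terms is taken as what the
    student may have intended, and the hint reports the discrepancies."""
    key = canonical_key(entry)
    if key in CORRECT:
        return True, "Correct."
    nearest = min(CORRECT, key=lambda ck: len(set(key) ^ set(ck)))
    missing = sorted(set(nearest) - set(key))
    extra = sorted(set(key) - set(nearest))
    return False, (f"Did you mean '{CORRECT[nearest]}'? "
                   f"Missing terms: {missing}; unexpected terms: {extra}")
```

For example, the entry `v = v0 + a` fails the lookup, is matched to `v = v0 + a*t` as the closest correct equation, and the hint reports the `a` / `a*t` discrepancy.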


Keywords: Hash Table · Primitive Equation · Correct Equation · Intelligent Tutoring System · Student Model





Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  1. Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA, USA
