Learner models are widely used to individualize instruction in intelligent tutoring systems and other educational environments. These models are often evaluated by their ability to predict learner performance, and considerable research has gone into improving their predictive accuracy. However, improvements in predictive accuracy do not necessarily translate into improved learning gains, greater insight into learning, or better instruction. For this special issue we invited papers on learner modeling with significant novel implications for learning or instruction.

Learner modeling research can be characterized in a three-dimensional space: (1) the aspect of the learner that is modeled; (2) the goal of learner modeling; and (3) the way the model is deployed to achieve that goal. The first UMUAI student modeling “special issue” (spanning three 1994–1995 issues) appeared at what might be considered the height of the “classical” age in student modeling: the papers focused almost exclusively on monitoring student response accuracy in intelligent tutoring systems to model domain-specific knowledge, with the goal of improving learning outcomes. These models were typically employed in one or both of two ways: to individualize VanLehn’s (2006) “inner loop” functions (providing help messages to students during learning tasks) and to individualize “outer loop” functions (sequencing the curriculum of successive learning tasks).

Of course, learner modeling still focuses predominantly on improving domain-specific learning outcomes. Efforts to model domain-specific student knowledge also remain central in learner modeling, and these models have been extended both in what is modeled (e.g., depth of knowledge) and in how they are used (e.g., to evaluate students’ use of help facilities). But the first decade of the twenty-first century also saw a rich flowering of student-modeling efforts focused on creating conditions that can foster domain-specific learning gains. These notably include:

  • monitoring various metacognitive learning strategies, self-assessment, and self-regulation activities;

  • monitoring student affect, engagement, and motivation;

  • monitoring students’ collaborative learning skills; and

  • using student modeling to evaluate learning materials.

Hand in hand with these efforts, the types of student behaviors that serve as evidence for student modeling have expanded far beyond response accuracy alone, and student modeling has been extended to a wider range of learning environments, including open-ended learning environments (OELEs), hypermedia, and other types of online learning environments such as MOOCs.

The four papers in this special issue nicely span this greatly diversified space of student modeling. They cover a wide range of learning tasks, from learning geography facts to constructing conceptual and computational models of physical and ecological systems. The studies model a wide range of student characteristics, including domain-specific knowledge and reasoning, metacognitive strategies, and affect. Most of the papers attempt to foster improved domain-specific learning outcomes, but other important goals include improving metacognitive task-decomposition strategies, self-assessment, self-regulation, and learner perseverance. What all of the papers have in common is that they employ their student models to individualize the students’ learning experiences and that they examine the impact of their student modeling activities on learner experience and outcomes with in vivo evaluations.

In the first article, Satabdi Basu, Gautam Biswas, and John Kinnebrew report on Learner modeling for adaptive scaffolding in a Computational Thinking-based science learning environment. This study models student knowledge in CTSiM, an OELE in which middle school students develop both conceptual and computational models of a physical system and an ecological system. The authors employ a hierarchical representation of the cognitive skills or tasks required to construct the domain models and verify their correctness (information acquisition, conceptual and computational model construction, and solution assessment), together with metacognitive knowledge (strategies for switching among these tasks). The student model is employed to individualize inner-loop activities: it selects appropriate domain-specific knowledge scaffolds and strategic scaffolds, as sketched below. The authors examine the impact of this student modeling effort and the corresponding inner-loop individualization on student problem-solving performance in the OELE, on learning outcomes for both domain-specific knowledge and metacognitive strategic knowledge, and on transfer of knowledge.
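
To make the scaffold-selection idea concrete, the following sketch shows how a hierarchical learner model might be queried to choose a scaffolding target. The structure, proficiency estimates, and threshold here are our illustrative assumptions, not the actual CTSiM model.

```python
# A hypothetical sketch of querying a hierarchical learner model to pick
# a scaffolding target. Task names follow the description above; the
# structure, proficiency estimates, and threshold are illustrative.

task_model = {
    "cognitive": {
        "information_acquisition": 0.8,   # estimated proficiency in [0, 1]
        "conceptual_modeling":     0.4,
        "computational_modeling":  0.5,
        "solution_assessment":     0.3,
    },
    "metacognitive": {
        "strategy_switching":      0.6,
    },
}

def select_scaffold(model, threshold=0.5):
    """Target the weakest skill in the hierarchy, if it falls below threshold."""
    level, task, score = min(
        ((lvl, t, s) for lvl, tasks in model.items() for t, s in tasks.items()),
        key=lambda triple: triple[2],
    )
    return (level, task) if score < threshold else None

print(select_scaffold(task_model))  # -> ('cognitive', 'solution_assessment')
```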

In the second article, Yanjin Long and Vincent Aleven report on Enhancing learning outcomes through self-regulated learning support with an open learner model. The learning environment in this study is Lynnette, an example-tracing Cognitive Tutor for middle school equation solving. The study focuses on a Bayesian Knowledge Tracing model of domain-specific knowledge that is characteristic of Cognitive Tutors, the embodiment of the model in an inspectable open learner model (the skill meter), and the outer-loop use of the model to individualize the curriculum. In this study, the authors actively engage the students in two self-monitoring tasks. First, they engage students in self-assessment by asking them to rate their performance and knowledge before the students view skill meter updates. Then they engage students in self-regulation through shared control of problem selection, wherein students select the level of difficulty of the subsequent problem while the system selects the specific problem. The authors examine the contribution of each of these self-regulated learning supports to learning efficiency and learning efficacy for domain-specific knowledge, to the accuracy of student self-ratings, and to one affective measure: student enjoyment of the tutor.
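
For readers less familiar with Bayesian Knowledge Tracing, the following minimal sketch shows the Bayesian update that drives a skill meter; the parameter values are illustrative defaults, not those fitted for Lynnette.

```python
# Minimal sketch of the Bayesian Knowledge Tracing (BKT) update behind a
# skill meter. Parameter values are illustrative, not fitted for Lynnette.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """Update P(skill known) after observing one graded response."""
    if correct:
        # Bayes rule: a correct answer may reflect knowledge or a lucky guess.
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        # An incorrect answer may reflect ignorance or a slip.
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # The learner may also acquire the skill at this practice opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # initial P(known)
for response in [True, False, True, True]:
    p = bkt_update(p, response)
    print(f"P(known) = {p:.2f}")  # the value a skill meter would display
```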

In the third article, Radek Pelánek, Jan Papoušek, Jiří Řihák, Vít Stanislav, and Juraj Nižnan report on Elo-based learner modeling for the adaptive practice of facts. This study examines several models of students’ factual knowledge and the use of these models for the outer-loop function of automatically designing successive multiple-choice questions. The learners in this study are using outlinemaps.org, an online learning environment for geography facts, but the modeling efforts are intended for any factual learning environment in which learners may vary widely in background knowledge. The goal of the modeling effort is to foster engagement (time on task) and improve learning outcomes. The authors compare the goodness of fit of multiple models of learners’ initial knowledge and discuss how the best-fitting models can provide information on the structure of domain knowledge that, in turn, can guide question construction. The authors also compare the goodness of fit of multiple models of students’ growing knowledge during learning. Finally, the authors examine the impact of their algorithms for automatically designing multiple-choice questions on short-term and long-term engagement, on learners’ task difficulty ratings, and on learning outcomes.
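
As a rough illustration of the Elo-style approach (greatly simplified relative to the model variants the authors compare, and with illustrative constants and priors), each response nudges a learner-skill estimate and an item-difficulty estimate in opposite directions:

```python
# A much-simplified Elo-style learner model for factual items, in the
# spirit of the models compared in the article. The constant k and the
# starting values are illustrative assumptions.

import math

def predict_correct(skill, difficulty):
    """Logistic probability that the learner answers the item correctly."""
    return 1.0 / (1.0 + math.exp(-(skill - difficulty)))

def elo_update(skill, difficulty, correct, k=0.4):
    """Nudge learner skill and item difficulty toward the observed answer."""
    error = (1.0 if correct else 0.0) - predict_correct(skill, difficulty)
    return skill + k * error, difficulty - k * error

skill, difficulty = 0.0, 0.0  # shared priors before any data
for correct in [True, True, False]:
    skill, difficulty = elo_update(skill, difficulty, correct)
    print(f"skill = {skill:+.2f}, difficulty = {difficulty:+.2f}")
```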

In the fourth article, Beate Grawemeyer, Manolis Mavrikis, Wayne Holmes, Sergio Gutiérrez-Santos, Michael Wiedmann, and Nikol Rummel report on Affective learning: Exploring the impact of affect-aware support on learning and engagement. The goal of this study is to improve students’ affective state, student engagement, and learning outcomes in Fractions Lab, an exploratory learning component of iTalk2Learn, a learning environment for young learners that fosters conceptual and procedural knowledge of fractions. The authors model student affect in Fractions Lab based both on student speech (keywords and prosodic cues) and on student interactions in the learning environment. In this paper the affect model is employed for the inner-loop function of individualizing feedback and prompts: the learning environment uses the model to select among eight types of message content and to decide between a low-interruptive and a high-interruptive presentation method. The authors examine how well student affect is detected, and evaluate the impact of this affect-aware learning environment on student affect, student engagement, and student learning outcomes.
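
As a purely hypothetical illustration of this kind of affect-aware policy, consider the sketch below; the affective states, message types, and mapping rules are our assumptions and are simpler than the eight message types and two presentation methods the authors describe.

```python
# A hypothetical sketch of affect-aware feedback selection: the detected
# affective state drives both the content of the support message and how
# intrusively it is presented. States, message types, and rules below are
# illustrative assumptions, not the iTalk2Learn rules.

def select_feedback(affect):
    """Map a detected affective state to (message_type, presentation)."""
    rules = {
        "confused":   ("problem_solving_support", "high_interruptive"),
        "frustrated": ("affect_boost",            "high_interruptive"),
        "bored":      ("new_task_prompt",         "high_interruptive"),
        "in_flow":    ("affirmation",             "low_interruptive"),
    }
    # Default to unobtrusive affirmation when the state is unrecognized.
    return rules.get(affect, ("affirmation", "low_interruptive"))

print(select_feedback("confused"))  # ('problem_solving_support', 'high_interruptive')
```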

We hope that bringing together these diverse but exemplary studies of the impact of learner modeling will encourage readers to consider their similarities and differences, thereby making this special issue greater than the sum of its parts and helping to inspire continued progress in innovative development and useful application of learner models.