1 Introduction

1.1 Background

A high level of motivation is fundamental to students’ learning success. Learning difficulties such as dyslexia, in particular, can lead to low engagement with the education system or to dropout. Thanks to advances in assistive learning systems and user modelling techniques for personalised learning, different individual learning needs can be met by personalising learning environments based on user models. User modelling must be combined with personalisation mechanisms to apply user models to real-world scenarios and to tailor the learning service and experience to individuals’ learning needs or mental states. Most research on personalised learning has focused on learners’ emotion and cognition, for example inducing more positive emotions or re-attracting attention; in contrast, e-learning system designers have largely neglected user modelling and personalised interventions that aim specifically at improving learners’ motivational states. We have therefore developed inference rules based on our previously developed motivation model [1] to identify a user’s motivational states from data collected during the learning process, including both self-reported data and automatically recorded eye gaze data, and to enable a pedagogical agent to deliver personalised feedback that sustains and enhances motivation in real time.

1.2 Motivation Assessment

Motivation is a multi-faceted concept. We previously developed a conceptual motivation model from domain knowledge, interviews, and a multi-item questionnaire study [1, 2]. The model contains the factors that determine continued use intention in an e-learning environment, spanning intrinsic motivation (e.g., Self-efficacy), extrinsic motivation (e.g., Visual Attractiveness), and mediators (e.g., Reading Experience). Eye gaze data has shed light on various human cognitive processes, including problem solving [3], and gaze features such as pupil dilation have been used as indicators of emotional states [4]. In a previous experiment with students with learning difficulties [5], we found that gaze features such as average pupil diameter and fixation number can play significant roles in assessing learners’ motivation in an e-learning context, with prediction accuracy of up to 81.3%; this offers a good alternative to relying purely on self-reported data and avoids self-report bias.
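To make the kind of features involved concrete, the sketch below computes an average pupil diameter and a fixation count from raw gaze samples, using a simplified dispersion-threshold (I-DT) fixation detector. The sample format, thresholds, and all names here are illustrative assumptions, not the feature extraction actually used in [5].

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GazeSample:
    x: float          # screen x coordinate (px)
    y: float          # screen y coordinate (px)
    pupil_mm: float   # pupil diameter (mm)
    t_ms: float       # timestamp (ms)

def average_pupil_diameter(samples: List[GazeSample]) -> float:
    """Mean pupil diameter over a page view."""
    return sum(s.pupil_mm for s in samples) / len(samples)

def fixation_count(samples: List[GazeSample],
                   dispersion_px: float = 35.0,
                   min_duration_ms: float = 100.0) -> int:
    """Count fixations with a simplified dispersion-threshold (I-DT) detector.

    A run of samples counts as one fixation if it lasts at least
    `min_duration_ms` while staying within a `dispersion_px` window.
    """
    count, start = 0, 0
    for end in range(len(samples)):
        window = samples[start:end + 1]
        xs = [s.x for s in window]
        ys = [s.y for s in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > dispersion_px:
            # the run samples[start:end] has ended; check its duration
            if samples[end - 1].t_ms - samples[start].t_ms >= min_duration_ms:
                count += 1
            start = end
    # close out the final run
    if samples and samples[-1].t_ms - samples[start].t_ms >= min_duration_ms:
        count += 1
    return count
```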

1.3 The Demonstration System

The demonstration system is a gaze-based learning application that assesses a learner’s motivation with respect to the factors in the motivation model described above, using features computed from both self-input data and real-time data from a Tobii Eye Tracker 4C. The system then applies personalisation algorithms to generate personalised feedback that addresses the corresponding motivational needs. Different motivational factors are assessed from different gaze features, with different parameters, using the logistic regression models obtained in our previous study [5]; starting from those models, we improved the predictors by retaining only the gaze features with significant predictive power. The system implements several regression models that assess the corresponding motivational factors (e.g., Confirmed Fit and Reading Experience) purely from gaze features. Other factors, such as Attitudes Toward School and Self-efficacy, are assessed from self-reported data collected at the beginning of a learning process, as they involve intrinsic motivation that usually remains stable over the short term. The system also outputs personalised feedback during the self-assessment quizzes, based on both a user’s answers and gaze data. Given that the motivation model and classification algorithms were developed using rigorous approaches and the relevant motivational strategies have been evaluated [5], we are confident that the present system, by incorporating eye tracking into motivation assessment and providing personalised feedback that responds to different motivational states in real time, can yield a motivation-enhanced learning experience and better learning performance over the long term.
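As a minimal sketch of how such per-factor models could be applied at run time, the following code scores one motivational factor with a scikit-learn logistic regression over the two gaze features named above. The feature set, decision threshold, and function names are our own assumptions for illustration, not the actual models or coefficients from [5].

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_factor_model(X: np.ndarray, y: np.ndarray) -> LogisticRegression:
    """Illustrative only: each factor's model would be trained on
    labelled data, e.g. X rows of [avg_pupil_mm, fixation_count]
    and binary labels y (1 = positive state for this factor)."""
    return LogisticRegression().fit(X, y)

def assess_factor(model: LogisticRegression,
                  avg_pupil_mm: float,
                  fixation_count: int,
                  threshold: float = 0.5) -> bool:
    """Return True if the learner is judged to be in a positive state
    for this factor (e.g. Reading Experience) on the current page."""
    p = model.predict_proba([[avg_pupil_mm, fixation_count]])[0, 1]
    return p >= threshold
```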

2 GazeMotive Walkthrough

GazeMotive was designed for two user groups: experts and learners. When a user logs in with a username and password (see Fig. 1a), the system redirects the user to either the expert interface or the learner interface. The expert interface allows an expert user to add or delete learning materials, page by page, and to input self-assessment quizzes after each lesson (see Fig. 1b). Any learning materials can be added to the system from the front end by expert users in picture format; this demonstration uses materials adapted from a free e-learning course, The Frozen Planet [6], as an example. We adopted 16–40pt dark Verdana fonts on a light yellow background, together with visual elements such as images, following design principles for students with learning difficulties [7]. The system assesses learners’ motivational states from gaze features, some of which are computed over AOIs (Areas of Interest). Because different learning materials and pages have different AOIs, expert users need to select one or more AOIs for each page, by clicking on the corner points of the polygons, so that the relevant gaze features can be computed; an AOI can also be selected for review or deletion (see Fig. 1c and d).

Fig. 1. Screenshots of the expert interface showing examples of (a) login, (b) adding a quiz, (c) the process of adding an AOI, and (d) the AOIs added.
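Because several gaze features depend on whether a gaze point falls inside an expert-defined AOI polygon, a standard ray-casting point-in-polygon test, as sketched below, is one plausible way to make that decision. The function names and usage are our own assumptions, not the system’s actual implementation.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_aoi(pt: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: is a gaze point inside an AOI polygon?

    Casts a horizontal ray from `pt` and counts edge crossings;
    an odd number of crossings means the point is inside.
    """
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does edge (x1, y1)-(x2, y2) cross the horizontal ray at height y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# e.g. fixations_in_aoi = [f for f in fixations if point_in_aoi(f, aoi)]
```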

The learner interface is similar to the expert one; the main difference is the personalised feedback output by the system, which depends on the motivational states detected from eye-tracking data, self-reported data, or a combination of both. During a learning process, a pedagogical agent representing a virtual tutor outputs personalised feedback to address specific motivational needs. The feedback implemented in our system takes the form of text and pictures; more diverse formats of feedback and intervention, such as speech and animation, can also be incorporated into the inference rules following our present work. Figure 2 shows two examples of personalised feedback based on real-time motivation assessment, in the learning stage (see Fig. 2a) and the quiz stage (see Fig. 2b), respectively.

Fig. 2. Screenshots of the learner interface showing examples of personalised feedback providing motivational help (a) when the system detects from eye-tracking data that a user has a negative reading experience on a learning page, and (b) when the user submits an incorrect answer to a quiz and is meanwhile detected from eye-tracking data as not having put in enough effort.
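To illustrate how inference rules of this kind might map detected states to feedback, the sketch below encodes two rules matching the scenarios in Fig. 2. The rule conditions, message texts, and data structures are illustrative assumptions rather than the rules actually shipped in GazeMotive.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Feedback:
    text: str
    image: Optional[str] = None   # picture shown alongside the agent

# Each rule: a predicate over the detected state -> a feedback message.
# Conditions and texts are illustrative, not the system's actual rules.
RULES = [
    (lambda s: s.get("reading_experience") == "negative",
     Feedback("This page can be tricky - try the highlighted summary first!",
              image="agent_encourage.png")),
    (lambda s: not s.get("quiz_correct", True) and s.get("effort") == "low",
     Feedback("Not quite - have another careful look at the page, "
              "then try the quiz again.",
              image="agent_retry.png")),
]

def select_feedback(state: Dict) -> Optional[Feedback]:
    """Return the first matching personalised feedback, if any."""
    for condition, feedback in RULES:
        if condition(state):
            return feedback
    return None

# e.g. select_feedback({"quiz_correct": False, "effort": "low"})
```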

3 Conclusion

In this paper, we demonstrated how we applied our motivation model and the corresponding personalisation rules to real-world scenarios. The motivation model and classification algorithms were developed in our previous studies using interviews, a multi-item questionnaire, and an experiment with target users. The inference rules and motivational strategies were adapted to our learning environment based on the motivation model. Our system uses eye-tracking technology to assess motivational states by monitoring gaze features and self-input data during a user’s learning process. This motivation-aware system demonstrates a way of applying motivational strategies that dynamically respond to the detected motivational states, using gaze data rather than obtrusive self-report data, towards accurate real-time motivation assessment and enhanced motivation in an e-learning environment. In the future, we will continue to validate the models and algorithms to maximise the accuracy of motivation assessment and personalised feedback, and to improve the interface design. Additionally, more dimensions of motivation and more diverse formats of feedback will be implemented using the logic already developed in the system.