Introduction

Personalization plays a crucial role in fostering effective, active and efficient learning. This is especially relevant in informal, lifelong learning contexts, where there is a particular need for learner control. Personalization of learning environments has a long history, and the research has evolved as new technological innovations have appeared. It requires careful modelling of users (learners, educators, coaches) and context (with new features coming from enhanced interaction environments), which are nowadays becoming highly interconnected, thus calling for interoperability.

However, in the current state of the art, there are no established ways to easily create personalized learning systems that reliably enhance learning outcomes. One key issue is the development of suitable user models that deal with the evolving nature of learners' needs. Open issues include how to standardize open learner model features for an extended range of available learner interactions and how to support interoperability with external learning services. Learner modelling should be able to manage learners' affective states, context, needs and behaviour. Another broad research area addresses personalization strategies and techniques, considering not only the learner model but the whole learning experience, thus creating models able to integrate contextual information from new ambient intelligence devices and going beyond data traditionally available from conventional desktop interaction.

This special issue originates in a body of current research on the ways that user modelling and associated artificial intelligence techniques can drive personalization to enhance learning, by building systems that are sensitive to learners and their context. It draws on the focus of a workshop series on Personalization Approaches in Learning Environments (PALE), held annually in conjunction with the conference on User Modeling, Adaptation and Personalization (UMAP) over the last five years, which has considered the different and complementary perspectives from which personalization can be addressed in learning environments. The scope includes: i) intelligent tutoring systems (Janning et al. 2016; Arevalillo-Herráez et al. 2014; Arevalillo-Herráez et al. 2013; Costa et al. 2012), ii) educational recommender systems (Greer et al. 2015; Henning et al. 2014; Labaj and Bieliková 2013; Manjarrés-Riesco et al. 2013; Nussbaumer et al. 2012; Roldan et al. 2011; Berthold et al. 2011; Thai-Nghe et al. 2011; Minguillon et al. 2011), iii) learning management systems (Tang and Yacef 2015; Chacón-Rivas et al. 2015), iv) personal learning environments (Nussbaumer et al. 2012; Berthold et al. 2011), v) educational games (Ghergulescu and Muntean 2016; Pentel 2015; Leite et al. 2011; Frias-Martinez and Virseda 2011; Muir et al. 2011), vi) agent-based learning environments (Dennis et al. 2016; Tamayo and Perez-Marin 2012; Ginon et al. 2012; Redondo-Hernandez and Perez-Marin 2011), vii) multi-user virtual environments (Ocumpaugh et al. 2014), and viii) other ad-hoc approaches (Sawadogo et al. 2014; Koch et al. 2013).

More specifically, as far as interactions and technological devices are concerned, there are publications on detecting learners’ interactions from diverse sources, such as i) observations (Ocumpaugh et al. 2014), ii) input devices such as mice (Pentel 2015; Labaj and Bieliková 2013), iii) videocameras (Koch et al. 2013; Leite et al. 2011), iv) touch gestures (Koch et al. 2013), v) social interactions (Lobo et al. 2014; Ming and Ming 2012), vi) eye-tracking (Labaj and Bieliková 2013; Muir et al. 2011), and vii) physiological sensors (Ghergulescu and Muntean 2016). They also consider diverse technological devices including mobiles (Frias-Martinez and Virseda 2011), tablets (Koch et al. 2013) and tabletops (Roldan et al. 2011).

Specific domains that have been addressed include: i) STEM (i.e., Science, Technology, Engineering and Math) education (Greer et al. 2015; Silva et al. 2015; Arevalillo-Herráez et al. 2014; Arevalillo-Herráez et al. 2013; Tamayo and Perez-Marin 2012; Costa et al. 2012; Muir et al. 2011), ii) higher education (Henning et al. 2014; Labaj and Bieliková 2013; Ming and Ming 2012), iii) distance learning scenarios (Manjarrés-Riesco et al. 2013), iv) vocational learning (Tang and Yacef 2015), v) after-school support (Leite et al. 2011; Frias-Martinez and Virseda 2011), and vi) MOOCs (i.e., Massive Open Online Courses) (Henning et al. 2014; Tang and Kay 2014).

Many sub-fields related to Artificial Intelligence in Education (AIED) have been considered, such as: i) knowledge representation (Khajah et al. 2014; Arevalillo-Herráez et al. 2014), ii) self-regulated learning (Tang and Yacef 2015; Tang and Kay 2014; Nussbaumer et al. 2012; Berthold et al. 2011), iii) instructional design (Tintarev et al. 2015), iv) educational data mining (Ming and Ming 2012; Kaochar 2011; Frias-Martinez and Virseda 2011; Thai-Nghe et al. 2011), v) collaborative learning (Lobo et al. 2014; Roldan et al. 2011), and vi) affective computing (Pentel 2015; Bixler et al. 2014; Arevalillo-Herráez et al. 2014; Ocumpaugh et al. 2014; Arevalillo-Herráez et al. 2013; Manjarrés-Riesco et al. 2013; Dennis et al. 2012; Leite et al. 2011), which bring together complementary perspectives from computer science, education, psychology or other related fields.

In addition, it is noteworthy that while most of this work reports research carried out within the laboratory, there have recently been some efforts that report real-world experiences from educational providers (Tang and Yacef 2015; Chacón-Rivas et al. 2015).

The structure of the paper is as follows. First, we provide an overview of the main outcomes from the first 5-year period of the PALE workshop series, which began in 2011. Afterwards, bearing this research in mind, we comment on the four papers accepted for publication in this special issue. These were selected after a thorough peer-review process involving 45 researchers, from a pool of 25 proposals, which included both i) extended versions of papers published in previous editions of the PALE workshop, enriched with the outcomes of discussions during workshop sessions, and ii) submissions received after an open call on this special issue's topics, which focused on learner modelling and the personalization process. The paper concludes with suggested issues for further research regarding user modelling to support personalization in enhanced educational settings.

Personalization Approaches in Learning Environments

The catalyst for this special issue has been the long-standing PALE workshop series, run annually at the UMAP conference since 2011. In this section we provide an overview of the work reported throughout its first five editions (2011 to 2015), as background to the issues discussed over those years. A total of 39 contributions have been analysed in terms of the following three aspects: 1) modelling learners and their performance to provide engaging learning experiences, 2) designing adaptive support, and 3) building standards-based models to cope with interoperability and portability.

Modelling Learners and their Performance to Provide Engaging Learning Experiences

Many different kinds of issues can be considered when modelling learners and their performance. Greer et al. (2015) identify students in academic distress, detecting when students are struggling academically so that they can be given personalized advice on how to get back on track. Taking a preventive approach, Bixler et al. (2014) presented a proactive personalized learning environment in which learners are provided with materials intended to reduce the propensity to mind wander during learning, by optimizing learning conditions (e.g., text difficulty and value) for individual learners. Pentel (2015) describes an unobtrusive method for detecting learners' confusion by monitoring mouse movements. In fact, as discussed in Arevalillo-Herráez et al. (2013), learners' emotional and mental states can be discovered from learners' interactions with the system and used to enrich the learning experience. Following Dennis et al. (2012), interrelationships among personality, affect and motivation are relevant factors that should be further analysed.

Other issues beyond mental or affective states have also been taken into account. Tang and Yacef (2015) presented an interface to improve user self-regulation, which aims to help students become better planners and time managers, with the objectives of reducing drop-out and increasing engagement and motivation. Lobo et al. (2014) discussed a domain-independent reputation indicator to support collaborative behaviour and encourage student motivation in collaborative learning environments. Reputation mechanisms to support the matching of mentors and mentees have been discussed by Adewoyin and Vassileva (2012). Ming and Ming (2012) investigated how to predict students' assessment outcomes from unstructured student text data in online class discussion forums. Koch et al. (2013) detected variations of attention deficit hyperactivity disorder based on levels of attentiveness, activity and task performance.

In addition, access to models has also been investigated. Costa et al. (2012) proposed an open learner modelling approach focusing on a negotiation mechanism to solve detected cognitive conflicts that can emerge when learners inspect information collected in their own learner model. Johnson et al. (2011) discussed the need to extend open learner models with epistemic beliefs, which focus on the systematic linking of knowledge and the justification of understanding using evidence or prior understanding.

Designing Adaptive Support

Building suitable user models from learners' interactions, based on the aforementioned issues and others related to them, provides the foundations for the different approaches followed in designing adaptive support. For instance, Tintarev et al. (2015) proposed an algorithm for adapting the study plan, which is represented as a workflow with prerequisites. Arevalillo-Herráez et al. (2014) provided hints adapted to the line of reasoning (i.e., solution scheme) the student is currently following. Cohen (2011) suggested modelling learning factors as statistical processes in terms of probabilistic machines that move from state to state. Redondo-Hernandez and Perez-Marin (2011) proposed a procedure to generate questions adapted to the personality and learning style of students. Thai-Nghe et al. (2011) proposed context-aware models both to recommend tasks to students and to predict their performance.

Educators’ experience has also been explicitly taken into account. Thus, Silva et al. (2015) consulted experts in pedagogy and cognition in order to identify relevant parameters, such as learner's skill level and problem solving difficulty, which can be considered when creating learning objects. Lefevre et al. (2012a) carried out interviews with teachers to identify their personalization practices, in order to provide sequences of work matching both the profile of each student and the pedagogical goals of the teacher. Manjarrés-Riesco et al. (2013) followed a user-centred engineering approach to involve educators in an affective recommendation elicitation process for distance learning scenarios. Tamayo and Perez-Marin (2012) applied user-centred design techniques with both teachers and students to design the interface of a reading comprehension conversational agent for children. Ocumpaugh et al. (2014) used a field observation protocol to take into account constructs such as disgust and creative meta-narrative, which are not typically coded but may prove important for personalizing educational instruction.

In addition, innovative adaptive support has been proposed. Tang and Kay (2014) presented their ideas and guidelines for applying gamification as meta-cognitive scaffolds in open learning environments such as MOOCs. Labaj and Bieliková (2013) proposed a conversational evaluation approach that tracks the user's attention on particular items and uses that information to ask evaluation questions at the appropriate time, right when the learner is working on those items (or has just finished working with them). Nussbaumer et al. (2012) proposed a mashup recommender for personalized learning environments, which uses a taxonomy of learning activities and recommends widgets in order to support learners' performance of different cognitive and meta-cognitive learning activities. Similarly, Berthold et al. (2011) identified principles for a mashup design in personalized learning environments and the importance of aligning widgets to psycho-pedagogical information in order to provide learners with meaningful recommendations and guide them in a self-regulated learning process. Kaochar (2011) performed behavioural studies aimed at understanding human teaching patterns, as a basis for developing future human-robot interaction systems. Leite et al. (2011) presented a case study with an empathic chess companion for children, based on a model of the user's affect. Muir et al. (2011) described an educational game with a hint-based pedagogical agent and discussed preliminary work on using eye-tracking to better understand how students pay attention to the hints provided.

Accessibility requirements have also been investigated when providing adaptation support. Roldan et al. (2011) presented a proposal to adapt learning activities in inclusive learning environments while students use multi-touch tabletops, modelling structural aspects, contents and interactions. Frias-Martinez and Virseda (2011) proposed a set of design suggestions to personalize and adapt mobile learning tools in order to enhance their educational impact in low-income communities by modelling users' abilities and preferences. Santos et al. (2011) discussed some of the existing open issues in personalized inclusive learning scenarios.

Building Standards-Based Learner Models to Cope with Interoperability and Portability

In order to take advantage of existing learning services and create synergies with external ones, interoperability needs to be supported. Interoperability in learner modelling can be provided in terms of an interplay sustained by integrating different complementary standards. In this sense, Chacón-Rivas et al. (2015) identified open issues when it comes to integrating information from the learner activity in standards-based learner models, which can be described in terms of diverse specifications such as IMS LIP (Learner Information Package), IMS RDCEO (Reusable Definition of Competency or Educational Objective) and IMS AFA (Access For All). These specifications include items to deal with features such as learning styles, competences, affective states, interaction needs and context information, and this research aims to fill the gaps beyond current usage in order to develop extensible, sustainable and applicable solutions suitable for a wide range of situations. In the same direction, Sawadogo et al. (2014) consider an extension of IMS LIP to build the user profile, as well as IEEE LOM (Learning Object Metadata) to characterize learning resources.

Progress has also been reported on other standardization issues, such as those involved in designing and evaluating personalized educational systems. Henning et al. (2014) discussed the need for semantic interoperability in MOOCs when it comes to making them more suitable for a greater variety of learning needs. From this, they can provide personalized learning pathways for each learner through didactically meaningful learning object recommendations. Minguillon et al. (2011) proposed a social layer on top of learning object repositories to annotate learning objects from a teaching perspective in order to improve searching and browsing. Ginon et al. (2012) proposed a grammar to describe animated agents in a common formalism, specifying their characteristics and abilities and defining actions with parameters. Lefevre et al. (2012b) adopted a generic approach taking into account the many different teaching situations and the wide variety of pedagogical activities that may exist. Khajah et al. (2014) applied a common evaluation metric to compare different modelling approaches of user performance.

Contents of this Special Issue

This IJAIED Special Issue on User Modelling to Support Personalization in Enhanced Educational Settings includes four articles, which are grounded in aspects discussed in the above personalization approaches for learning environments and offer a glimpse of recent advances on some of the aforementioned issues. In particular, the issue provides a view of the state of the art in educational adaptive systems that are based on modelling learner behaviour to meet the needs of technology-enhanced educational scenarios. The four papers in this special issue attest to the growing interest in this area. We solicited contributions from researchers and practitioners concerned with modelling users' needs in new and evolving educational settings that are widening the diversity of learning contexts and issues to be considered. Many sub-fields related to AIED (user modelling and adaptation, knowledge representation, computer supported collaborative learning, instructional design, serious games, etc.) were addressed across the submissions received, whether from the perspective of computer science, psychology, intercultural studies or other related fields.

The selected papers situate their work with respect to a body of relevant research in both user modelling and enhanced educational settings. In particular, they address the three aspects identified in the previous section, namely: 1) modelling learners and their performance to provide engaging learning experiences (Ghergulescu and Muntean 2016; Janning et al. 2016), 2) designing adaptive support (Dennis et al. 2016), and 3) building standards-based learner models to cope with interoperability and portability (Valdés Aguirre et al. 2016). They also address different learning environments, such as educational games (Ghergulescu and Muntean 2016), intelligent tutoring systems (Janning et al. 2016), and conversational agents (Dennis et al. 2016).

The first paper (Ghergulescu and Muntean 2016) presents the ToTCompute mechanism to model and monitor engagement. Before the learner uses the system, it computes, using electroencephalography (EEG), the so-called TimeOnTask threshold after which student engagement decreases. This generic measure represents the duration of time required by the player to complete a task. The goal of this approach is to keep players engaged and support them (without interrupting the game-play) in maximizing their learning outcomes, by providing them with adequate feedback to maintain their motivation. The results of an experimental case study showed that ToTCompute could be used to automatically compute threshold values for the TimeOnTask generic engagement metric, which explains up to 76.2 % of the variance in engagement change. Furthermore, the results confirmed the usefulness of the mechanism, as the TimeOnTask threshold value is highly task-dependent and setting its value manually for multiple game tasks would be a laborious process. However, since the method operates on series of tasks, it depends on tasks being defined at a suitable granularity so as to avoid overlap.
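The runtime side of such a threshold-based mechanism can be illustrated with a minimal sketch. The function name and threshold values below are hypothetical: the actual ToTCompute mechanism derives per-task thresholds automatically from EEG measurements rather than using fixed constants.

```python
# Hypothetical sketch of threshold-based engagement monitoring.
# The real ToTCompute mechanism computes per-task TimeOnTask thresholds
# from EEG data; the values here are illustrative constants.

def feedback_needed(task_id: str, time_on_task: float,
                    thresholds: dict[str, float]) -> bool:
    """Return True when the learner has exceeded the engagement
    threshold for this task and supportive feedback should be shown."""
    threshold = thresholds.get(task_id)
    if threshold is None:
        return False  # no calibrated threshold for this task
    return time_on_task > threshold

# Illustrative per-task thresholds, in seconds.
thresholds = {"task-1": 45.0, "task-2": 90.0}
print(feedback_needed("task-1", 60.0, thresholds))  # True
print(feedback_needed("task-2", 60.0, thresholds))  # False
```

Keeping the check per-task reflects the paper's finding that the threshold is highly task-dependent, which is why computing it automatically matters.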

The second paper (Janning et al. 2016) presents an approach to support task sequencing through perceived task-difficulty recognition based on low-level features that can be extracted from log-files and that have statistical significance. Different classification methods were applied to these log-file features (i.e., low-level features) for perceived task-difficulty recognition, resulting in a higher-level ensemble method that improves classification performance on features extracted from a real data set. The presented approach outperforms classical ensemble methods and improves classification performance substantially, enabling perceived task-difficulty recognition that is accurate enough for its output to be employed by components of a real system, such as task-independent support or task sequencing. The method identifies the next most appropriate task for each student (i.e., neither too easy nor too hard, to avoid boredom or frustration respectively), and the paper discusses how to deal with an adaptive and personalised approach suited to different students and tasks.
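As a simple illustration of combining several base classifiers' outputs into one label, a majority vote over hypothetical perceived-difficulty predictions might look as follows. This is only a generic ensemble baseline, not the higher-level ensemble method the paper proposes (which differs in its details and outperforms such classical schemes).

```python
# Hypothetical sketch: majority voting over base-classifier predictions
# of perceived task difficulty. The labels are illustrative.
from collections import Counter

def majority_vote(predictions: list[str]) -> str:
    """Combine base-classifier predictions by majority vote;
    ties are broken by the label that appeared first."""
    return Counter(predictions).most_common(1)[0][0]

# Each base classifier predicts a label from log-file features.
votes = ["hard", "easy", "hard"]
print(majority_vote(votes))  # hard
```

A task sequencer could then use the combined label to pick a next task that is neither too easy nor too hard for the student.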

The third paper (Dennis et al. 2016) focuses on improving feedback, as this is an important part of learning and motivation. The authors investigate how to adapt the feedback of a conversational agent to learner personality (i.e., the traits in the Five Factor Model) and performance. In particular, they investigate two aspects of feedback: 1) whether the conversational agent should employ a slant (or bias) in its feedback to motivate a learner with a particular personality trait more effectively, and 2) which emotional support messages the conversational agent should use (e.g., praise, emotional reflection, reassurance or advice), given learners' personality and performance. User studies were run to understand the relationship between progress feedback and emotional support for students with different personalities and test scores. As a result, two algorithms were created, then evaluated and refined after a qualitative study with teachers. This methodology might also be adapted to deal with performance on multiple topics, although further evidence is required from longitudinal studies with real learners.
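The flavour of such a personality-adapted support rule can be sketched as follows. The trait scale, threshold and message choices are purely illustrative and are not the empirically derived algorithms presented in the paper.

```python
# Hypothetical sketch of selecting an emotional support message from a
# personality trait score and test performance. The 0-1 trait scale and
# the 0.5 cut-off are illustrative assumptions, not the paper's rules.

def choose_support(trait_score: float, passed: bool) -> str:
    """Pick a support message type given a trait score in [0, 1]
    (e.g., a Five Factor Model trait) and whether the learner passed."""
    if passed:
        return "praise"
    # For learners scoring high on the trait, reassure; otherwise advise.
    return "reassurance" if trait_score > 0.5 else "advice"

print(choose_support(0.8, True))   # praise
print(choose_support(0.8, False))  # reassurance
print(choose_support(0.2, False))  # advice
```

In practice, as the paper shows, such rules need to be grounded in user studies and refined with teachers rather than fixed a priori.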

The fourth and last paper (Valdés Aguirre et al. 2016) proposes a classification of learner models in terms of their portability. Portability is measured via each model's accessibility, complexity, architecture, popularity, and description. The authors use this classification to analyse and then grade learner models reported in the literature. The classification is intended to be used by researchers both as i) a methodology for judging the portability of a student model and ii) a guide to existing reusable models, thereby reducing the development time of personalized learning environments.

Key Issues to be Further Investigated

Learning environment personalization is a long-term research area, which evolves as new technological innovations appear. Nowadays there are new opportunities for building interoperable personalized learning solutions that consider a wider range of data coming from varied learner situations and interaction features. However, in the current state of the art, it is not clear how these new information sources should be managed and combined so as to enhance interaction in a way that positively impacts a learning process whose nature is essentially adaptive.

In this context, suitable user modelling is needed to understand both the realistic learning environments cropping up in a wider range of situations and the needs of learners within and across them. New open issues in this area refer to detecting and managing personal and context data effectively in an increasing and varied range of learning situations, in order to provide personal assistance to the learner that can also take into account their affective state. This requires enhancing the management of an increasing number of information sources (including wearables with physiological and context sensors) and related data (e.g., big data settings), which ultimately should provide a better understanding of every person's learning needs within different contexts and over short-, medium- and long-term periods of time. This will hopefully increase learners' understanding of their own needs, through open learner models built from standards that support interoperability and that cover and extend the range of available features considered, thus allowing different external learning services to be combined. The latter requires integrating an increasing number of information sources coming from ambient intelligence devices, which support gathering information not only on learners' interactions but on the whole context in which the learning experience takes place. In this way, learner modelling should be able to analyse changing situations in terms of context, learners' needs and their behaviour, which requires personal and collective management of the available information.

In addition to the issues discussed in the work reported in this paper, other issues remain open regarding user modelling to support personalization in enhanced educational settings. In particular, it would be beneficial if the community of researchers dealing with these issues were able to set up the basis for managing an increasing amount of information coming from the task at hand and its surrounding environment. The purpose here is to provide personalization support in a wide range of learning environments and situations that go beyond learning, such as social interactions, which ultimately support user modelling that is more sensitive to learners and their context. Moreover, because of the vast amounts of data coming from learners' interactions (e.g., sensor detection of affect in context) and technological deployment (including web, mobiles, tablets, tabletops, etc.), future research should enhance their management and integration, so that the wide range of situations and features involved can reinforce the model of the learner representing them and their impact. This can then be a foundation for pervasive learner modelling in any place. All in all, it should not be forgotten that we aim to tackle the ever more demanding need to support personalized learning in wider contexts, ranging from daily life activities to MOOCs. We expect that forthcoming editions of the PALE workshop (the 6th edition is organized in conjunction with UMAP 2016) will explore some of these still open issues.