Game-based features are increasingly popular devices within educational learning environments as a means to increase student interest and promote long-term, persistent interactions within learning systems (Cordova and Lepper 1996; Jackson et al. 2009a, b; Jackson and McNamara 2013; Rai and Beck 2012). Games are clearly complex, but common elements include performance-contingent incentives (e.g., currency), personalizable agents (e.g., avatar edits), and navigational choices (e.g., the option to turn left or right). The incorporation of such interactive features within learning environments has been shown to increase students’ engagement and motivation in learning tasks (Cordova and Lepper 1996; Jackson and McNamara 2013; Rai and Beck 2012).

While educational games and game-based features clearly have some demonstrated benefits, particularly in terms of enhancing students’ levels of motivation and engagement, there remains some controversy regarding their benefits to learning. One controversy concerns the degree to which game-based features may act as seductive distracters. As such, these features may increase opportunities for students to engage in behaviors that distract from learning goals. For example, adding rewards or personal achievements to educational environments may increase students’ investment in the system (Snow et al. 2013c), but also afford behaviors that are distracting and tangential to the learning task (e.g., spending time viewing trophies). In general, when students behave in a manner that is divergent and unrelated to the designated task, there can be a negative impact on learning (Carroll 1963; Rowe et al. 2009; Stallings 1980). In this study, we further examine these issues regarding game-based features, focusing on game currency, which is frequently used within games to unlock various features. In particular, we investigate relations among students’ propensity to interact with game-based features (spendency), their in-system performance, and learning gains.

Seductive Distracters

Seductive distracters are defined as any irrelevant but engaging stimulus that pulls a student’s attention away from the learning goal (Harp and Mayer 1997). Some examples of seductive distracters include achievement pop-ups (e.g., level-advancement notifications), interface customization (e.g., editing system agents), and irrelevant information embedded within text. Students’ interactions with seductive distracters have been shown to have a negative impact on their learning gains (Garner et al. 1991; Godwin and Fisher 2011; Hidi and Baird 1986; Harp and Mayer 1997, 1998; Rowe et al. 2009). For instance, Godwin and Fisher (2011) found that when students spent more time focusing attention on seductive distracters, such as visual representations (i.e., posters, artwork, maps), their overall accuracy on comprehension assessments decreased. Similarly, Harp and Mayer (1997) showed that when students read a passage with an embedded seductive distracter (i.e., irrelevant information), they comprehended the passage at a shallower level than students who read the passage without the distracter. In sum, seductive distracters potentially disconnect students from the learning goal by decreasing the amount of time and attention a student spends focusing on a designated task (Harp and Mayer 1998).

Intelligent Tutoring Systems and Gamification

Intelligent Tutoring Systems (ITSs) are computer-based learning environments that adapt content and feedback to students based on their individual needs (Murray 1999; VanLehn 2011). These systems are effective at improving student performance and promoting overall learning gains (Woolf 2009). When ITSs target complex skill acquisition, they may require extensive amounts of training and practice, which, in turn, can result in student disengagement and boredom (Bell and McNamara 2007; Jackson and McNamara 2013; McNamara et al. 2009). This increase in student disengagement over time has led researchers to investigate how to improve student motivation and engagement within ITSs.

One solution that researchers have found to combat student boredom within educational environments is gamification. Gamification is the process of adding game-based features or elements (e.g., incentives, editable avatars, and navigational choices) to a non-game context for the purpose of increasing students’ engagement in a task (Deterding et al. 2011). Previous work has revealed that when interactive features are assimilated into learning environments (i.e., gamification), students show increased engagement and motivation in tasks (Cordova and Lepper 1996; Baker et al. 2004; Facer et al. 2004; Jackson and McNamara 2011; Rowe et al. 2010; Sabourin et al. 2011; Snow et al. 2013a; Rai and Beck 2012; Roscoe et al. 2013a, b). For instance, work by Rai and Beck found that when game-based features (e.g., a personalizable monkey agent) were implemented in a complex learning environment, students reported increased enjoyment. Similarly, Rowe, Shores, Mott and Lester found that as interactive choices within a system increased (i.e., control over where to explore on a map), students’ engagement with the system also increased. More recently, Snow and colleagues demonstrated that students’ interactions with personalizable avatar features were negatively related to posttest measures of boredom and positively related to posttest measures of personal control. Thus, a growing body of research has demonstrated that the addition of game-based features within ITSs can have positive effects on students’ attitudes.

Although research has demonstrated the benefits of game-based features, these same features may act as seductive distracters that engage students in behaviors that are irrelevant or tangential to the overall learning task, consequently undermining the success of the tutoring system. When researchers have investigated how users’ learning outcomes are affected by the presence of seductive game-based features, they have found mixed results (Snow et al. 2013a; Rai and Beck 2012; Rowe et al. 2009). For instance, Rowe and colleagues examined how students’ interactions with seductive game-based features influenced their learning outcomes. They found that embedded navigational choices in an open game-based environment (i.e., Crystal Island) negatively influenced posttest performance. In contrast, Rai and Beck found that game-based features embedded within a math tutor had no impact on students’ learned math skills. These contradictory findings indicate that more work needs to be conducted to fully understand how game-based features designed to increase engagement ultimately impact learning outcomes.

One way that researchers may begin to shed light on the influence that these features have on learning is to examine students’ process data (Snow et al. 2014; Snow, Likens, Jackson and McNamara 2013). Indeed, students may choose to interact with game-based features differently, which in turn may influence the impact that these features have on learning. The current study attempts to gain a deeper understanding of this issue by examining variations in students’ use of one specific game-based feature, in-system currency. In-game currency has been used in a variety of game-based environments as a way to promote engagement and provide performance incentives (Gillam and James 1999; Jackson and McNamara 2013; Kim et al. 2009; Rymaszewski 2007). For instance, within the mobile content-sharing game Indagator, users earn currency by adding content to the digital environment. Users’ currency can then be used to customize their experiences and unlock various system features (e.g., new encounters or game-based objects). Within Indagator, currency is also used to indicate users’ rank and experience within the game environment (Lee et al. 2010). Although there seems to be a wealth of systems that use some form of currency, relatively few studies have examined how this game element may act as a seductive distracter that impacts students’ performance. The current study is one of the first to utilize students’ process data to gain a deeper understanding of how varying levels of interaction with in-game currency impact in-system performance and learning outcomes within the context of a game-based system.

iSTART

Interactive Strategy Training for Active Reading and Thinking (iSTART) is an ITS designed to improve students’ comprehension of science texts through the instruction of reading strategies (McNamara et al. 2004). iSTART provides students with instruction to use reading strategies, including comprehension monitoring, predicting, paraphrasing, elaborating, and bridging, in the context of self-explaining complex texts. Self-explanation, the process of explaining the meaning of text to oneself, has been found to improve performance on a wide range of tasks including problem solving, generating inferences, and deep comprehension of text (Chi et al. 1989; McNamara 2004). iSTART combines the process of self-explanation with comprehension strategy instruction so that students engage in deep processing of text and in turn, improve their use of strategies by employing them within self-explanations (McNamara 2004). Students using iSTART have demonstrated significant improvements in comprehension, with average Cohen’s d effect sizes ranging from 0.68 to 1.12 depending on the learner’s prior knowledge (Magliano et al. 2005; McNamara 2009; McNamara et al. 2006; McNamara et al. 2007a, b).

The iSTART system is divided into three training modules: introduction, demonstration, and practice. During the introduction module, students are introduced to three animated agents (two students and one teacher) who discuss the definition of self-explanation and the iSTART reading strategies. The animated agents provide descriptions and examples of each reading strategy, followed by formative assessments intended to measure students’ overall understanding of the strategies. After the introduction module is complete, students are transitioned into the demonstration module. In this module, two animated agents (one teacher and one student) apply the various strategies to example texts. Then, students are asked to identify the specific strategies that the agents have employed (see Fig. 1). Finally, the practice module of iSTART provides students with an opportunity to apply the strategies they have learned to science texts. In this module, students are given example texts and asked to self-explain various target sentences from the designated text. Then, the teacher agent provides students with formative feedback on their practice self-explanations.

Fig. 1 Screen shot of the demonstration module

In addition to the three training modules, iSTART contains an extended practice module, which operates in the same manner as the practice module. The difference, however, is that extended practice contains a larger number of available texts, which are intended to sustain practice for several months. In addition, teachers can add their own texts that are not currently in the system and assign them to their students.

To provide students with appropriate feedback, an algorithm was developed to assess the quality of students’ individual self-explanations. The algorithm utilizes a combination of latent semantic analysis (LSA; Landauer et al. 2007) and word-based measures to assess students’ self-explanations, yielding a score from 0 to 3. A score of “0” is assigned to self-explanations that are either composed of irrelevant information or too short, whereas a score of “1” is assigned to self-explanations that relate only to the target sentence itself and do not elaborate upon the provided information. A score of “2” implies that the student’s self-explanation incorporates information from the text beyond the target sentence itself. Finally, a score of “3” indicates that a student’s self-explanation incorporates additional information about the text at a global level. This added information may relate to a student’s prior knowledge outside of the target text, or it may focus on the overall theme or purpose of the text. Previous research has shown that the iSTART algorithm provides self-explanation scores that are comparable to human raters (Jackson et al. 2010a, b; McNamara et al. 2007a, b).
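The exact features and thresholds of the iSTART algorithm are not reproduced here, but the following minimal sketch illustrates how a 0–3 rubric of this kind could be implemented. The similarity helper, length cutoff, and all thresholds are invented stand-ins (simple word overlap rather than a trained LSA space), not the actual iSTART parameters.

```python
# Illustrative sketch only (NOT the actual iSTART algorithm): a 0-3
# self-explanation scorer combining word-based measures with a
# semantic-similarity stand-in. All thresholds are invented.

def similarity(a: str, b: str) -> float:
    """Stand-in for LSA cosine similarity: Jaccard word overlap.
    A real system would compare vectors in a trained LSA space."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def score_self_explanation(se: str, target: str, prior_text: str) -> int:
    """Return a 0-3 quality score for a self-explanation `se` of a
    `target` sentence, given the `prior_text` read so far."""
    if len(se.split()) < 10 or similarity(se, target + " " + prior_text) < 0.05:
        return 0  # too short, or irrelevant to the text
    if similarity(se, prior_text) < 0.05:
        return 1  # restates the target sentence without elaboration
    # Separating text-based bridging (2) from global elaboration (3)
    # would require theme-level comparison; a single threshold stands
    # in for that judgment here.
    return 2 if similarity(se, prior_text) < 0.15 else 3
```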

iSTART-ME

Although the training and extended practice in the iSTART system were shown to improve students’ performance across time (Jackson et al. 2010a, b), the repetitive nature of the extended practice module resulted in some student disengagement and boredom (Bell and McNamara 2007). To address this problem, the iSTART extended practice module was incorporated into a game-based environment called iSTART-ME (Motivationally-Enhanced; Jackson and McNamara 2013). Recent work with iSTART-ME has shown that the addition of game-based features has enhanced students’ motivation, engagement, and persistence (Jackson and McNamara 2013; Snow et al. 2013c).

The extended practice interface in iSTART-ME is controlled through a selection menu where students can choose to read and self-explain new texts, personalize characters, play mini-games, earn points, purchase rewards, and advance levels (see Fig. 2). In addition, students can view their current level, the number of points earned, and trophies earned through game play (see Fig. 3). Students earn points in the system by interacting with texts, either within the context of generative games or a non-game practice module, and earn trophies based on their performance in the various practice games; these trophies are linked to the number of points that students earn in each game (see Jackson and McNamara 2013, for more detail). As students collect more points in the system, they progress through a series of levels ranging from 0 to 25, with new game-based features unlocked at different levels. Each level requires more points to reach than the previous level; thus, students must exert more effort as they progress through higher levels in the system.

Fig. 2 Screen shot of iSTART-ME selection menu

Fig. 3 Screen shot of trophy and level view on iSTART-ME selection menu

The points that students earn throughout their interactions with iSTART serve as a form of currency (iBucks) and can be used to unlock game-based features within the system. Students’ earned iBucks map onto the number of points that they have earned within the system. For example, if Jane earned 500 points while playing a practice game, she also earned 500 iBucks that automatically appear on the game-based menu. There are two primary uses for iBucks: personalization of system features and playing mini-games. As students choose to interact with these features, the appropriate number of iBucks is deducted from their total; however, no points are deducted. Each of these system features requires either 300 or 450 iBucks (see Table 1 for a list of the potential uses for iBucks). The game-based mechanic of currency was embedded within the system as a means to engage users’ interests and provide them with a sense of agency over their learning paths (Jackson and McNamara 2013). iSTART-ME uses an unlocking scaffold with these personalizable features; thus, as students advance within the system, they are rewarded with the opportunity to buy more game-based features (for more information on the design of iSTART-ME, please see Jackson and McNamara 2013).

Table 1 Cost of interactions for every game-based feature
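To make the earn/spend mechanics concrete, here is a minimal sketch of the bookkeeping described above, assuming only what the text states: earned points grant an equal number of iBucks, purchases deduct iBucks but never points, and features cost 300 or 450 iBucks. The class and method names are illustrative, not iSTART-ME internals.

```python
# Minimal sketch of the points/iBucks bookkeeping described above:
# earning points grants an equal number of iBucks, and purchases
# deduct iBucks without touching the point total (which drives
# level progression).

class StudentWallet:
    def __init__(self) -> None:
        self.points = 0   # cumulative; determines achievement level
        self.ibucks = 0   # spendable currency

    def earn(self, points: int) -> None:
        self.points += points
        self.ibucks += points  # iBucks map onto points earned

    def spend(self, cost: int) -> bool:
        if self.ibucks < cost:
            return False       # purchase refused; points unaffected
        self.ibucks -= cost
        return True

wallet = StudentWallet()
wallet.earn(500)               # e.g., Jane's 500 practice-game points
wallet.spend(300)              # buy a 300-iBuck personalization feature
assert wallet.points == 500 and wallet.ibucks == 200
```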

One potential use of iBucks is the personalization of the system environment. These options were implemented as a means of increasing students’ investment in the system, as well as providing them with a sense of control over their learning environment. Within the extended practice interface, students have the option of changing numerous features, such as background colors or teacher agents. For instance, students can use their iBucks to purchase new hairstyles or headgear for their avatars (see Fig. 4 for examples).

Fig. 4 Example avatar configurations

In addition to personalizable interactions, students can use their iBucks to interact with practice features (mini-games). These practice features were added to iSTART-ME to provide students with opportunities to practice identifying the various self-explanation strategies. For instance, in the mini-game, Balloon Bust, students are presented with a target sentence and an example of a self-explanation. Students must then decide which strategy was used to generate the self-explanation and click the corresponding balloons on the screen (see Fig. 5).

Fig. 5 Screen shot of Balloon Bust

Current Study

Previous work has provided some insight into the impact that game-based features have on student learning. However, questions remain regarding the influence that students’ interactions with these features may have on in-system performance and learning outcomes. The current study addresses this issue by investigating students’ process data to examine how they interact with in-game currency (i.e., spendency), focusing on four primary questions:

1) How does students’ spendency vary as a function of individual differences in prior skill level and self-reported attitudes?

2) How do students’ interaction patterns with various game-based features vary as a function of their spendency?

3) How do differences in students’ spendency relate to their in-system performance?

4) How do differences in students’ spendency relate to learning outcomes?

A unique contribution of this study stems primarily from the analysis of process data to investigate users’ propensity to spend in-game currency as a way to interact with game-based features (spendency), such as practice and personalizable features. Examining variations in students’ interactions further elucidates the impact that game-based features have on performance and learning gains. Results from this study may begin to provide researchers with a deeper understanding of how gamification can influence learning goals within game-based systems.

Methods

Subjects

Participants in this study included 40 high-school students from a mid-south urban environment (50 % male; 73 % African American, 17 % Caucasian, 10 % other ethnicities; average grade level of 10.4; average age = 15.5 years). The sample included in the current work is a subset of 124 students who participated in a larger 11-session study that compared three conditions: iSTART-ME, iSTART-Regular, and a no-tutoring control. In that study, both iSTART and iSTART-ME were shown to improve students’ strategy performance from pretest to posttest (Jackson and McNamara 2013). The results presented in the current study solely focus on the students who were assigned to the iSTART-ME condition, as they were the only students who had access to the full game-based selection menu.

Procedure

All students completed the full 11-session experiment that consisted of a pretest, eight training sessions, a posttest, and a delayed retention test. During the first session, participants completed a pretest survey that included measures of motivation, attitudes toward technology, prior self-explanation (SE) ability, and prior reading comprehension ability. During the following eight sessions, participants took part in the training portion of the experiment, with each session lasting approximately 1 h (session durations ranged from 58 to 75 min). In these sessions, students interacted with the full game-based menu, where they had access to mini-games, texts, and interactive features. All students in the current study interacted with the iSTART-ME system for all eight training sessions. After the eight training sessions, students completed a posttest that included measures similar to the pretest. One week after the posttest, students completed a retention test that contained measures similar to the pretest and posttest (i.e., self-explanation ability).

Measures

Strategy Performance

Self-explanation (SE) quality was measured during pretest, posttest, and at retention by asking students to read through a text one sentence at a time and generate a self-explanation when prompted. Training strategy performance was assessed during sessions two through eight by measuring the self-explanations students produced while engaging in the generative practice games. All generated self-explanations were scored using the automated iSTART assessment algorithm (McNamara et al. 2007a, b; Jackson et al. 2010a, b).

Reading Comprehension Ability

Students’ reading comprehension ability was assessed at pretest using the Gates-MacGinitie Reading Comprehension Test (MacGinitie and MacGinitie 1989). This test includes 48 questions designed to assess the general reading ability of each student by asking them to read a passage and then answer comprehension questions about the passage. This test is a well-established measure of student reading comprehension, which provides researchers with in-depth information about students’ starting literacy abilities (α = 0.85–0.92, Phillips et al. 2002).

Attitudes and Motivation

During each training session, students were asked to answer a battery of questions that assessed their daily enjoyment and motivation during their time interacting with iSTART-ME (see Table 2 for the daily questions). These questions have been used in previous studies to assess individuals’ motivation and enjoyment within adaptive systems (Jackson et al. 2009a, b; Jackson and McNamara 2013; Snow et al. 2013b). Students were asked to rate their experience within the system eight times (once per session). These scores were combined and averaged to create mean enjoyment and motivation scores.

Table 2 Daily enjoyment and motivation measures

In-System Performance

Students’ in-system performance was measured as the highest level that each student achieved during iSTART-ME training. As described earlier, students advance to higher levels within the system by earning performance points within practice games and mini-games. There are 25 levels, and each level requires more points to reach than the previous level. This measure is reflective of students’ daily strategy performance within the context of the iSTART-ME system.

Learning Transfer

iSTART-ME was designed to teach students strategies to improve their reading comprehension. Learning outcomes were measured through an open-ended comprehension measure. In the current study, the transfer of self-explanation training to reading comprehension was assessed at the retention test using a science passage-specific comprehension measure. In this task, each student was asked to read an assigned science text. After they finished reading, students were presented with a series of open-ended questions that they were asked to answer based on their recollection of the text. These questions were designed to assess low-level and deep-level text comprehension. Two raters independently scored these open-ended questions. Inter-rater agreement was high, with an overall kappa of 0.951.

Spendency

Students’ propensity to use system currency as a way to engage with the game-based features was defined as their spendency. This measure was calculated by dividing the total number of iBucks students spent on mini-games and personalizable features by the total iBucks each student earned within the system (total points spent / total points earned). This proportion yielded each student’s spendency (tendency to spend) within iSTART-ME, allowing us to examine the degree to which students spent their in-system currency on various game-based features relative to the iBucks they earned, that is, how much of their available resources a student used while in iSTART-ME.
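In code, the measure is a single proportion; a minimal sketch, assuming only the definition above:

```python
# Spendency as defined above: the proportion of earned iBucks a
# student chose to spend on game-based features, in [0, 1].

def spendency(points_spent: float, points_earned: float) -> float:
    """total points spent / total points earned (0.0 if nothing earned)."""
    return points_spent / points_earned if points_earned else 0.0

print(spendency(1500, 23000))  # e.g., ~0.065
```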

Interaction Patterns

All students’ interactions in iSTART-ME were logged and recorded within the iSTART-ME database. These process data were then organized according to the function afforded by each feature: generative practice, identification mini-games, personalization of interface features, and screens to monitor system achievements and progress (see Jackson and McNamara 2013, for detailed descriptions). Using a statistical sequencing procedure similar to that used in D’Mello et al. (2007) and this time-stamped data set, we calculated the probability of each student’s transitions within the system. This calculation can be described as L[Iₜ → Xₜ₊₁]: put simply, we calculated the probability of a student’s next interaction (X) with an interface feature given their previous interaction (I). This sequencing procedure allows us to trace students’ interaction trajectories across time.
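The following sketch shows how such transition probabilities can be estimated from a time-ordered log of interface categories. It computes the raw conditional probability P(next = X | previous = I); the D’Mello et al. (2007) likelihood metric additionally adjusts for base rates, which this sketch omits. The category labels are placeholders.

```python
# Sketch of the transition-probability computation: given a time-ordered
# log of interface categories for one student, estimate the probability
# of each next interaction given the previous one.

from collections import Counter, defaultdict

def transition_probabilities(log: list[str]) -> dict[str, dict[str, float]]:
    """log: time-ordered category labels, e.g.
    ['practice', 'personalize', 'practice', ...]"""
    counts: dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(log, log[1:]):   # consecutive pairs of events
        counts[prev][nxt] += 1
    return {prev: {nxt: n / sum(c.values()) for nxt, n in c.items()}
            for prev, c in counts.items()}

log = ['practice', 'personalize', 'practice', 'practice', 'achievements']
print(transition_probabilities(log)['practice'])
# {'personalize': 0.33..., 'practice': 0.33..., 'achievements': 0.33...}
```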

Results

Spendency

To characterize how students spent their earned iBucks, spendency was calculated using a proportional formula that divided students’ total points spent (M = 1440.42, SD = 2272.78) by their total points earned (M = 23,180.57, SD = 30,541.69). Within the current study, students’ spendency varied considerably (range = 0.0 to 0.99, M = 0.49, SD = 0.26). This range reveals that students varied in their propensity to use iBucks: some students hoarded their iBucks and never spent any, while others spent iBucks as soon as they earned them. This wide range reflects the different ways in which students engaged with the currency in iSTART-ME.

Individual Differences and Spendency

To gain a deeper understanding of how individual differences related to students’ tendency to spend iBucks, Pearson correlation analyses were conducted on pretest reading and self-explanation scores. Results from these analyses revealed no significant relation between students’ spendency and their prior reading ability (r = −0.21, p = 0.19) or self-explanation performance (r = −0.10, p = 0.55). Thus, these results indicate that individuals’ reading ability and strategy performance were not related to their propensity to spend iBucks on interactions with game-based features.

Motivation, Enjoyment and Spendency

To assess how spendency related to students’ self-reported enjoyment and motivation, Pearson correlation analyses were conducted using students’ mean enjoyment and motivation scores. Results from these analyses revealed no significant relation between students’ spendency and their self-reported enjoyment (r = −0.18, p = 0.26) or motivation (r = −0.21, p = 0.19). Thus, students’ propensity to spend their earned iBucks does not seem to be the result of boredom or disinterest. Overall, students tended to report high ratings of enjoyment (M = 5.0, SD = 0.75) and motivation (M = 5.37, SD = 0.80) within the system. These results are similar to recent work showing that students generally rate their interactions with educational games as positive experiences (Jackson and McNamara 2013; Rodrigo and Baker 2011).

Interaction Patterns

The current paper examines the extent to which students’ tendency to spend in-game currency relates to their interaction patterns within the iSTART-ME system. Using the aforementioned probability analysis, we can examine students’ transitions between and within the four categories of game-based features. To investigate how students’ spendency related to their unique system trajectories, Pearson correlations were calculated between students’ spendency levels and their transitional probability patterns (see Fig. 6). Figure 6 provides a visual display of the relation between spendency and interaction patterns, with numbers inside a box representing the correlation between spendency and students’ selecting the same feature again. For instance, in the identification mini-game box, we see the value r = 0.05. This value indicates that spendency was not significantly related to students’ choosing an identification mini-game after they had just completed an identification mini-game. The numbers near a transition line indicate the relation between spendency and students’ choice to transition from one feature to another.

Fig. 6 Relation between interaction patterns and spendency

Overall, results from this analysis demonstrated a significant positive relation between spendency and two probability interactions. First, students who had a higher spendency were more likely to interact with a personalizable feature after playing a generative practice game (r = 0.34, p < .05). Similarly, students with a high spendency were also more likely to return to a generative practice game after interacting with a personalizable feature (r = 0.41, p < .01). These results reveal that students who spent a higher proportion of their iBucks were not just jumping between features that did not reinforce learning strategies. Indeed, these students seemed to be interacting in a pattern where they practiced generating self-explanations and then moved to personalizing various game-based features. Interestingly, these students were not likely to stay in the generative practice or personalizable feature screens, but instead transitioned between the two.

In-System Performance

Within iSTART, students advance to higher achievement levels based on their performance in the system. To investigate how spendency impacted achievement levels, we used a hierarchical linear regression model to factor out students’ pretest strategy knowledge. In Model 1 of this analysis, we included pretest strategy knowledge scores to predict achievement levels. Results from this analysis revealed that pretest strategy knowledge was a significant predictor of students’ achievement levels in the system (R² = 0.36, F(1,38) = 21.04, p < .01; see Table 3). In Model 2, we examined how spendency predicted achievement levels over and above pretest strategy knowledge. Results from this analysis indicated that spendency was a significant (negative) predictor of student achievement levels over and above prior strategy knowledge (R² change = 0.17, F(1,37) = 20.77, p < .01; see Table 3). This analysis revealed that students’ spendency accounted for 17 % of the variance in students’ in-system achievement levels over and above their prior strategy knowledge, with greater spendency in particular indicating lower performance.

Table 3 Hierarchical linear regression analyses predicting achievement level
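As a sketch of this two-step (hierarchical) procedure, the snippet below fits Model 1 and Model 2 with statsmodels and reads off the R² change and its F test. The data frame is synthetic (the variable names and simulated effects are placeholders, not the study data).

```python
# Sketch of the two-step hierarchical regression reported above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (n = 40, mirroring the sample size).
rng = np.random.default_rng(1)
n = 40
df = pd.DataFrame({'pretest': rng.normal(2.0, 0.5, n),
                   'spendency': rng.uniform(0.0, 1.0, n)})
df['level'] = 5 + 4 * df['pretest'] - 3 * df['spendency'] + rng.normal(0, 1, n)

m1 = smf.ols('level ~ pretest', data=df).fit()               # Model 1
m2 = smf.ols('level ~ pretest + spendency', data=df).fit()   # Model 2
r2_change = m2.rsquared - m1.rsquared                        # increment for spendency
f_val, p_val, df_diff = m2.compare_f_test(m1)                # F test of the increment
print(f"R2 change = {r2_change:.3f}, F = {f_val:.2f}, p = {p_val:.4f}")
```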

Strategy Performance

We further examined how variations in students’ spendency related to their daily strategy performance. Pearson correlations were conducted (see Table 4) to investigate relations between students’ spendency and their daily self-explanation scores. Results from this analysis indicated that there was a negative correlation between students’ spendency and self-explanation quality on days 3, 4, 5, 6, 7, and 8, but no significant relation on days 1 and 2 of training. This non-significant relation is not surprising because many students are only beginning to accumulate iBucks on days 1 and 2; indeed, most students spend the majority of these sessions completing the lesson videos and demonstration module.

Table 4 Spendency and daily strategy performance

To further examine these relations, we conducted separate hierarchical regression analyses on students’ self-explanation quality scores for each of the six significant training days in Table 4. These analyses investigated how students’ spendency predicted self-explanation scores over and above prior self-explanation ability (i.e., self-explanation scores at pretest). This is reflected by the R² change attributable to the variance accounted for by spendency after entering prior strategy ability (i.e., pretest self-explanation scores) in the regression model (see Table 5). These analyses revealed significant models and R² change values for session 3, F(1,37) = 6.79, p < .05, R² = 0.31, R² change = 0.13 (see session 3 in Table 5); session 4, F(1,37) = 9.13, p < .01, R² = 0.29, R² change = 0.18; session 5, F(1,37) = 11.52, p < .01, R² = 0.38, R² change = 0.20; session 6, F(1,37) = 4.97, p < .05, R² = 0.17, R² change = 0.11; session 7, F(1,37) = 7.31, p < .01, R² = 0.25, R² change = 0.15; and session 8, F(1,37) = 6.18, p < .05, R² = 0.30, R² change = 0.15.

Table 5 Hierarchical linear regressions predicting self-explanation quality from spendency and prior strategy ability

Finally, the current study investigated how students’ interactions with game-based features impacted their long-term learning outcomes (i.e., self-explanation quality). Pearson correlations were calculated to examine how spendency related to posttest and retention self-explanation scores. This analysis revealed that spendency was not related to self-explanation quality at posttest (r = −0.195, p = 0.227) or retention (r = −0.231, p = 0.187). Taken together, these results indicate that spendency had an immediate negative effect on strategy performance, but it did not appear to be a detriment over time.

Learning Transfer

To investigate how students’ spendency related to transfer of learning over and above prior ability, we conducted a hierarchical linear regression. In Model 1 of this analysis, we included pretest self-explanation scores to predict scores on the reading comprehension transfer task. Results from this analysis revealed that pretest strategy performance was a significant predictor of students’ scores on the reading comprehension transfer task (R² = 0.10, F(1,38) = 4.31, p < .05; see Table 6). In Model 2, we examined how spendency predicted scores on the comprehension transfer task over and above pretest strategy knowledge. Results from this analysis indicated that spendency was a significant predictor of students’ comprehension scores over and above prior strategy knowledge (R² change = 0.13, F(1,37) = 6.16, p < .05; see Table 6).

Table 6 Hierarchical linear regression analyses predicting reading comprehension scores on the transfer task

General Discussion

The current study adds to the literature by using process data to investigate how students’ propensity to use in-system resources (i.e., iBucks) to engage with game-based features (spendency) impacted their interaction trajectories, in-system performance, attitudes, and learning gains. We introduce spendency as a term to describe students’ use of in-game currency. The importance of this game feature is not isolated to the iSTART-ME system; indeed, spendency can be applied to any gamified system that includes in-game currency. As the gamification of ITSs becomes more prevalent, it is important to understand how users choose to engage with various types of game-based features and the impact of those interactions.

Initial results from this study revealed that students’ spendency was related to a specific interaction pattern within the iSTART-ME system. Specifically, students with high spendency were more likely to engage in an interaction loop between generative practice games and personalizable features. Thus, these students played a generative practice game and then chose to customize the system in some way. Once they had customized the system, these students tended to revert back to playing a generative practice game. These unique trajectories could be attributable to two factors. First, the personalizable features may be the most visually salient features on the interface, as they are prominently located in the right-hand corner of the screen and offer students a multitude of editing options (see Fig. 2). Thus, these features may draw students’ attention and resources more often. Second, these features are the only elements within the system that afford detachment from educational content. High-spendency students may have interacted with these features as a way to take a mental break from the strategy instruction embedded within the system. Interestingly, attitudinal results from the current study indicate that spendency was not related to motivation or engagement. Thus, these students did not appear to be interacting with the personalizable features simply out of boredom.

Current literature indicates that interactions with game-based features can have a negative impact on students’ learning outcomes (Rowe et al. 2009). However, many of these studies examine the impact of gamification on learning or performance outcomes after only one session. The current work is unique because we examine how students’ interactions with game-based features influence target skill acquisition over multiple training sessions. Indeed, results from the current study add a deeper understanding of the impact of game-based features by revealing that students’ spendency had an immediate negative impact on in-system performance, daily strategy performance, and learning transfer. Thus, students who were more interested in spending their earned currency did not perform well during training and also had lower scores on the learned-skill transfer task. This finding could be due to students placing a higher importance on spending their earned resources to interact with features and, therefore, spending less time engaged in the learning tasks. Overall, these results support the hypothesis that overexposure to game-based features may act as a seductive distracter that pulls students’ attention away from the designated task, thus negatively influencing their ability to transfer learned skills to new tasks.

ITS researchers have used numerous types of game-based features to leverage students’ enjoyment and (indirectly) impact learning. However, the influence of these features is not completely understood. The results from the current study suggest that interactions with game-based features can vary; however, when students place a high amount of importance on these features, there may be immediate and long-term negative consequences for task performance. This finding is important because it shows that embedded game-based features may primarily impact immediate learning outcomes and students’ ability to transfer their learned skills to new tasks. These results underscore the importance of understanding both the immediate and long-term effects of game-based features that are integrated into learning environments. Although previous work has shown positive attitudinal effects from students’ exposure to game-based features (Jackson and McNamara 2013; Snow et al. 2013a; Rai and Beck 2012), the current results suggest that there are some potential consequences, at least for immediate and transfer performance. As learning environments are developed, the addition of game-based features may or may not be appropriate depending on the learning task. For instance, if the learning goal of a system is to show immediate gains in domain knowledge, the inclusion of game-based features may be detrimental to that goal. However, in a system such as iSTART-ME, which was designed to engage students in practice over long periods of time, the immediate effects of game-based features may not be as important in the long term. These initial results should prompt developers to ensure that the inclusion of features does not interfere with the learning goal relative to the timeframe of the system.

Although it is important to understand the impact of game-based features on system and task performance, it is also important to identify students who are more inclined to engage with these features. Understanding the individual differences that drive students’ interactions within systems may help researchers develop environments wherein traits of the interface adapt depending on the user’s affect. The current study took steps to identify individual differences that may influence students’ propensity to interact with game-based features. However, results indicate that spendency was not related to students’ scores on the Gates-MacGinitie Reading Comprehension Test or their prior strategy performance. These results suggest that both high- and low-ability students use their in-game currency to interact with game-based features at an equivalent rate. These preliminary results begin to show that both high- and low-ability students engage with seductive distracters, though work remains to further elucidate how attitudinal measures influence whether students attend to seductive distracters or ignore them.

Future Work

The analyses presented in the current work are intended as a seed for future studies by providing evidence that system log data can provide valuable metrics of the ways in which users behave within game-based environments. Going forward, we plan to include these initial behavioral measures in a student model within the iSTART-ME system. Such analyses will be especially valuable if the iSTART-ME system is able to recognize non-optimal learning behaviors and steer students toward more effective behaviors. For instance, if a student is engaging in high-spendency behaviors, it may be beneficial for iSTART-ME to have the capability to recognize this trend and prompt the student toward less spending and more generative practice.
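A hypothetical rule of this kind is sketched below: flag a student whose running spendency exceeds a threshold and nudge them toward generative practice. The 0.75 cutoff and the prompt text are invented examples, not features of the current system.

```python
from typing import Optional

# Hypothetical intervention rule of the kind proposed above. The
# threshold and message are illustrative assumptions.
def spendency_nudge(points_spent: int, points_earned: int,
                    threshold: float = 0.75) -> Optional[str]:
    if points_earned == 0:
        return None  # nothing earned yet; no basis for a nudge
    if points_spent / points_earned > threshold:
        return "Nice customizing! How about another practice text next?"
    return None  # spendency within bounds; no prompt
```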

Another future direction of study regards finding a “sweet spot” or balance between game-based and pedagogical features, successfully promoting both engagement and learning. Results from this study and others (Jackson and McNamara 2013) suggest that exposure to game-based features may negatively influence immediate performance and learning outcomes. However, other research has shown that game-based features have a positive effect on students’ attitudes (Rai and Beck 2012; Snow et al. 2013c). Combined, these findings suggest that future work should further focus on the complex interplay between disengagement and learning within game-based environments.

Conclusion

Game-based features have been incorporated within a number of adaptive environments, principally in an attempt to enhance students’ engagement and interest in particular learning tasks. While this movement toward more engaging and creative learning environments is compelling, it is important for researchers and educators to more fully understand the impact that these features may or may not have on students. Our study adopts a novel approach to achieving that objective by using process data to investigate how students’ spendency is related to training performance, immediate learning outcomes, and learning transfer. Previous work has found that game-based features may act as seductive distracters, negatively impacting learning outcomes (e.g., Harp and Mayer 1997). Such work may lead to the conclusion that game-based features and games should not be incorporated within learning environments. Our overarching assumption is that game-based features can have a negative impact on learning and thus should be incorporated into systems with caution. The results of this study further support that assumption. Specifically, outcomes from the current work indicate that students’ propensity to spend earned iBucks to interact with embedded game-based features led to immediate negative consequences during training and at transfer. Thus, initial results reveal that this specific gamification technique (i.e., currency) has the potential to divert students’ attention and negatively impact their target skill acquisition within a game-based environment.