Spendency: Students’ Propensity to Use System Currency
Abstract
Using students’ process data from the game-based Intelligent Tutoring System (ITS) iSTART-ME, the current study examines students’ propensity to use system currency to unlock game-based features (referred to here as spendency). The study further examines how spendency relates to students’ interaction preferences, in-system performance, and learning outcomes (i.e., self-explanation quality, comprehension). A group of 40 high school students interacted with iSTART-ME as part of an 11-session experiment (pretest, eight training sessions, posttest, and a delayed retention test). Students’ spendency was negatively related to the frequency of their use of personalizable features. In addition, students’ spendency was negatively related to their in-system achievements, daily learning outcomes, and performance on a transfer comprehension task, even after factoring out prior ability. The findings from this study indicate that increases in students’ spendency are systematically related to their selection choices and may have a negative effect on in-system performance, immediate learning outcomes, and skill transfer. The results have particular relevance to game-based systems that incorporate currency to unlock features within games, as well as to the differential tradeoffs of game features on motivation and learning.
Keywords
Game-based features · Gamification · Intelligent tutoring systems · Seductive distractors

Game-based features are increasingly popular devices within educational learning environments, used as a means to increase student interest and promote long-term, persistent interactions within learning systems (Cordova and Lepper 1996; Jackson et al. 2009a, b; Jackson and McNamara 2013; Rai and Beck 2012). Games are clearly complex, but some of their elements include devices such as performance-contingent incentives (e.g., currency), personalizable agents (e.g., avatar edits), and navigational choices (e.g., the option to turn left or right). The incorporation of such interactive features within learning environments has been shown to increase students’ engagement and motivation in learning tasks (Cordova and Lepper 1996; Jackson and McNamara 2013; Rai and Beck 2012).
While educational games and game-based features clearly have some demonstrated benefits, particularly in terms of enhancing students’ levels of motivation and engagement, there remains some controversy regarding their benefits to learning. One controversy regards the degree to which game-based features may act as seductive distracters. As such, these features may increase opportunities for students to engage in behaviors that distract from learning goals. For example, adding rewards or personal achievements to educational environments may increase students’ investment in the system (Snow et al. 2013c), but also afford behaviors that are distracting and tangential to the learning task (e.g., spending time viewing trophies). In general, when students behave in a manner that is divergent from and unrelated to the designated task, there can be a negative impact on learning (Carroll 1963; Rowe et al. 2009; Stallings 1980). In this study, we further examine these issues regarding game-based features, focusing on game currency, which is frequently used within games to unlock various features. In particular, we investigate relations among students’ propensity to interact with game-based features (spendency), their in-system performance, and learning gains.
Seductive Distracters
Seductive distracters are defined as any irrelevant but engaging stimulus that pulls a student’s attention away from the learning goal (Harp and Mayer 1997). Some examples of seductive distracters include achievement pop-ups (e.g., level advancement notifications), interface customization (e.g., editing system agents), and irrelevant information embedded within text. Students’ interactions with seductive distracters have been shown to have a negative impact on their learning gains (Garner et al. 1991; Godwin and Fisher 2011; Hidi and Baird 1986; Harp and Mayer 1997; Harp and Mayer 1998; Rowe et al. 2009). For instance, Godwin and Fisher (2011) found that when students spent more time focusing attention on seductive distracters, such as visual representations (i.e., posters, artwork, maps), their overall accuracy on comprehension learning assessments decreased. Similarly, Harp and Mayer (1997) showed that when students were asked to read a passage with an embedded seductive distracter (i.e., irrelevant information), they comprehended the passage at a more shallow level compared to those students who read the passage without the presence of seductive distracters. In sum, seductive distracters potentially disconnect students from the learning goal by decreasing the amount of time and attention a student may spend focusing on a designated task (Harp and Mayer 1998).
Intelligent Tutoring Systems and Gamification
Intelligent Tutoring Systems (ITSs) are computer-based learning environments that adapt content and feedback to students based on their individual needs (Murray 1999; VanLehn 2011). These systems are effective at improving student performance and promoting overall learning gains (Woolf 2009). When ITSs specialize in complex skill acquisition, they may require extensive amounts of training and practice, which, in turn, can result in student disengagement and boredom (Bell and McNamara 2007; Jackson and McNamara 2013; McNamara et al. 2009). This increase in student disengagement over time has led researchers to investigate how to improve student motivation and engagement within ITSs.
One solution that researchers have found to combat student boredom within educational environments is gamification. Gamification is the process of adding game-based features or elements (e.g., incentives, editable avatars, and navigational choices) to a non-game context for the purpose of increasing students’ engagement in a task (Deterding et al. 2011). Previous work has revealed that when interactive features are assimilated into learning environments (i.e., gamification), students show increased engagement and motivation in tasks (Cordova and Lepper 1996; Baker et al. 2004; Facer et al. 2004; Jackson and McNamara 2011; Rowe et al. 2010; Sabourin et al. 2011; Snow et al. 2013a; Rai and Beck 2012; Roscoe et al. 2013a, b). For instance, work by Rai and Beck (2012) found that when game-based features (e.g., a personalizable monkey agent) were implemented in a complex learning environment, students reported increased enjoyment. Similarly, Rowe, Shores, Mott, and Lester (2010) found that as interactive choices within a system increased (i.e., control over where to explore on a map), students’ engagement with the system also increased. More recently, Snow and colleagues demonstrated that students’ interactions with personalizable avatar features were negatively related to posttest measures of boredom and positively related to posttest measures of personal control. Thus, a growing body of research indicates that the addition of game-based features within ITSs can have positive effects on students’ attitudes.
Although research has demonstrated the benefits of game-based features, these same features may act as seductive distracters that engage students in behaviors that are irrelevant or tangential to the overall learning task, consequently impacting the success of the tutoring system. When researchers have investigated how users’ learning outcomes are affected by the presence of seductive game-based features, they have found mixed results (Snow et al. 2013a; Rai and Beck 2012; Rowe et al. 2009). For instance, Rowe and colleagues (2009) examined how students’ interactions with seductive game-based features influenced their learning outcomes. They found that embedded navigational choices in an open game-based environment (i.e., Crystal Island) negatively influenced posttest performance. In contrast, Rai and Beck (2012) found that game-based features embedded within a math tutor had no impact on students’ learned math skills. These contradictory findings indicate that more work needs to be conducted to fully understand how game-based features designed to increase engagement ultimately impact learning outcomes.
One way that researchers may begin to shed light on the influence that these features have on learning is to examine students’ process data (Snow et al. 2014; Snow et al. 2013d). Indeed, students may choose to interact with game-based features differently, which in turn may influence the impact that these features have on learning. The current study attempts to gain a deeper understanding of this issue by examining variations in students’ use of one specific game-based feature, in-system currency. In-game currency has been used in a variety of game-based environments as a way to promote engagement and provide performance incentives (Gillam and James 1999; Jackson and McNamara 2013; Kim et al. 2009; Rymaszewski 2007). For instance, within the mobile content sharing game Indagator, users earn currency by adding content to the digital environment. Users’ currency can then be used to customize their experiences and unlock various system features (e.g., new encounters or game-based objects). Within Indagator, currency is also used to indicate users’ rank and experience within the game environment (Lee et al. 2010). Although there is a wealth of systems that use some form of currency, relatively few studies have examined how this game element may act as a seductive distracter that impacts students’ performance. The current study is one of the first to use students’ process data to gain a deeper understanding of how varying levels of interaction with in-game currency impact in-system performance and learning outcomes within the context of a game-based system.
iSTART
Interactive Strategy Training for Active Reading and Thinking (iSTART) is an ITS designed to improve students’ comprehension of science texts through the instruction of reading strategies (McNamara et al. 2004). iSTART provides students with instruction to use reading strategies, including comprehension monitoring, predicting, paraphrasing, elaborating, and bridging, in the context of self-explaining complex texts. Self-explanation, the process of explaining the meaning of text to oneself, has been found to improve performance on a wide range of tasks including problem solving, generating inferences, and deep comprehension of text (Chi et al. 1989; McNamara 2004). iSTART combines the process of self-explanation with comprehension strategy instruction so that students engage in deep processing of text and in turn, improve their use of strategies by employing them within self-explanations (McNamara 2004). Students using iSTART have demonstrated significant improvements in comprehension, with average Cohen’s d effect sizes ranging from 0.68 to 1.12 depending on the learner’s prior knowledge (Magliano et al. 2005; McNamara 2009; McNamara et al. 2006; McNamara et al. 2007a, b).
Figure: Screen shot of the demonstration module
In addition to the three training modules, iSTART contains an extended practice module, which operates in the same manner as the practice module. The difference, however, is that extended practice contains a larger number of available texts, which are intended to sustain practice for several months. In addition, teachers can add their own texts that are not currently in the system and assign them to their students.
To provide students with appropriate feedback, an algorithm was developed to assess the quality of students’ individual self-explanations. The algorithm uses a combination of latent semantic analysis (LSA; Landauer et al. 2007) and word-based measures to assess students’ self-explanations, yielding a score from 0 to 3. A score of “0” is assigned to self-explanations that are either composed of irrelevant information or are too short, whereas a score of “1” is assigned to self-explanations that relate only to the target sentence itself and do not elaborate upon the provided information. A score of “2” indicates that the student’s self-explanation incorporates information from the text beyond the target sentence. Finally, a score of “3” indicates that a student’s self-explanation incorporates additional information about the text at a global level. This added information may relate to the student’s prior knowledge outside of the target text, or it may focus on the overall theme or purpose of the text. Previous research has shown that the iSTART algorithm provides self-explanation scores that are comparable to human raters (Jackson et al. 2010a, b; McNamara et al. 2007a, b).
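To make the rubric above concrete, the following is a deliberately simplified, hypothetical sketch of a 0–3 scorer. The actual iSTART algorithm combines LSA cosines with word-based measures; the word-overlap heuristics, thresholds, and function name below are assumptions for illustration only.

```python
def score_self_explanation(se: str, target_sentence: str, full_text: str) -> int:
    """Toy 0-3 scorer based only on word overlap (NOT the actual iSTART algorithm)."""
    se_words = set(se.lower().split())
    target_words = set(target_sentence.lower().split())
    other_text_words = set(full_text.lower().split()) - target_words

    # 0: too short, or shares nothing with the target sentence (irrelevant).
    if len(se_words) < 5 or not se_words & target_words:
        return 0
    beyond_target = se_words & other_text_words                 # draws on the rest of the text
    novel_words = se_words - target_words - other_text_words    # words from outside the text
    # 3: elaborates well beyond the text (assumed proxy for global/prior-knowledge elaboration).
    if beyond_target and len(novel_words) >= 5:
        return 3
    # 2: incorporates text information beyond the target sentence.
    if beyond_target:
        return 2
    # 1: restates the target sentence without elaboration.
    return 1
```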
iSTART-ME
Although the training and extended practice in the iSTART system were shown to improve students’ performance across time (Jackson et al. 2010a, b), the repetitive nature of the extended practice module resulted in some student disengagement and boredom (Bell and McNamara 2007). To address this problem, the iSTART extended practice module was incorporated into a game-based environment called iSTART-ME (Motivationally-Enhanced; Jackson and McNamara 2013). Recent work with iSTART-ME has shown that the addition of game-based features has enhanced students’ motivation, engagement, and persistence (Jackson and McNamara 2013; Snow et al. 2013c).
Figure: Screen shot of the iSTART-ME selection menu

Figure: Screen shot of the trophy and level view on the iSTART-ME selection menu
Table: Cost of interactions for every game-based feature

| Personalizable interactions | Cost in iBucks |
|---|---|
| Avatar edits | 300 |
| Background theme edits | 300 |
| Pedagogical agent change | 300 |

| Practice interactions | Cost in iBucks |
|---|---|
| Strategy match | 300 |
| Bridge builder | 300 |
| Vocabulous | 300 |
| SE lifeline | 300 |
| Balloon bust | 300 |
| Dungeon escape | 450 |
Figure: Example avatar configurations

Figure: Screen shot of Balloon Bust
Current Study
The current study addresses the following research questions:

1. How does students’ spendency vary as a function of individual differences in prior skill level and self-reported attitudes?
2. How do students’ interaction patterns with various game-based features vary as a function of their spendency?
3. How do differences in students’ spendency relate to their in-system performance?
4. How do differences in students’ spendency relate to learning outcomes?
A unique contribution of this study stems primarily from the analysis of process data to investigate users’ propensity to spend in-game currency as a way to interact with game-based features (spendency), such as practice and personalizable features. Examining variations in students’ interactions further clarifies the impact that game-based features have on performance and learning gains. Results from this study may begin to provide researchers with a deeper understanding of how gamification can influence learning goals within game-based systems.
Methods
Subjects
Participants in this study included 40 high-school students from a mid-south urban environment (50 % male; 73 % African American, 17 % Caucasian, 10 % other ethnicities; average grade level of 10.4; average age = 15.5 years). The sample included in the current work is a subset of 124 students who participated in a larger 11-session study that compared three conditions: iSTART-ME, iSTART-Regular, and a no-tutoring control. In that study, both iSTART and iSTART-ME were shown to improve students’ strategy performance from pretest to posttest (Jackson and McNamara 2013). The results presented in the current study focus solely on the students who were assigned to the iSTART-ME condition, as they were the only students who had access to the full game-based selection menu.
Procedure
All students completed the full 11-session experiment, which consisted of a pretest, eight training sessions, a posttest, and a delayed retention test. During the first session, participants completed a pretest survey that included measures of motivation, attitudes toward technology, prior self-explanation (SE) ability, and prior reading comprehension ability. During the following eight sessions, participants took part in the training portion of the experiment, with each session lasting approximately 1 h (session durations ranged from 58 to 75 min). In these sessions, students interacted with the full game-based menu, where they had access to mini-games, texts, and interactive features. All students in the current study interacted with the iSTART-ME system for all eight training sessions. After the eight training sessions, students completed a posttest that included measures similar to the pretest. One week after the posttest, students completed a retention test that contained measures similar to the pretest and posttest (i.e., self-explanation ability).
Measures
Strategy Performance
Self-explanation (SE) quality was measured during pretest, posttest, and at retention by asking students to read through a text one sentence at a time and generate a self-explanation when prompted. Training strategy performance was assessed during sessions two through eight by measuring the self-explanations students produced while engaging in the generative practice games. All generated self-explanations were scored using the automated iSTART assessment algorithm (McNamara et al. 2007a, b; Jackson et al. 2010a, b).
Reading Comprehension Ability
Students’ reading comprehension ability was assessed at pretest using the Gates-MacGinitie Reading Comprehension Test (MacGinitie and MacGinitie 1989). This test includes 48 questions designed to assess the general reading ability of each student by asking them to read a passage and then answer comprehension questions about the passage. This test is a well-established measure of student reading comprehension, which provides researchers with in-depth information about students’ starting literacy abilities (α = 0.85–0.92, Phillips et al. 2002).
Attitudes and Motivation
Table: Daily enjoyment and motivation measures

| Dependent measure | Response statement | Response scale |
|---|---|---|
| Enjoyment | “I enjoyed my most recent session” | 1–6 |
| Motivation | “I was motivated to participate in my most recent session” | 1–6 |
In-System Performance
Students’ in-system performance was measured through the highest level that the student achieved throughout iSTART-ME training. As described earlier, students advance to higher levels within the system by earning performance points within practice games and mini-games. There are 25 levels and each level requires more points to proceed than the previous level. This measure is reflective of students’ daily strategy performance within the context of the iSTART-ME system.
Learning Transfer
iSTART-ME was designed to teach students strategies to improve their reading comprehension. Learning outcomes were measured through an open-ended comprehension measure. In the current study, the transfer of self-explanation training to reading comprehension was assessed at the retention test using a science passage-specific comprehension measure. In this task, each student was asked to read an assigned science text. After they finished reading, students were then presented with a series of open-ended questions that they were asked to answer based on their recollection of the text. These questions were designed to assess low-level and deep-level text comprehension. Two raters independently scored these open-ended questions. Initial inter-rater agreement was high, with an overall kappa of 0.951.
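As a hedged illustration of how such agreement can be computed, the sketch below applies scikit-learn’s cohen_kappa_score to two hypothetical raters’ integer scores; the score lists and variable names are invented for demonstration and are not the study’s data.

```python
# Illustrative inter-rater agreement check on open-ended question scores.
from sklearn.metrics import cohen_kappa_score

rater_1 = [2, 1, 3, 0, 2, 2, 1]  # hypothetical scores from rater 1
rater_2 = [2, 1, 3, 0, 2, 1, 1]  # hypothetical scores from rater 2

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.3f}")
```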
Spendency
Students’ propensity to use system currency as a way to engage with the game-based features was defined as their spendency. This measure was calculated by dividing the total number of iBucks a student spent on mini-games and personalizable features by the total number of iBucks that student earned within the system (total iBucks spent / total iBucks earned). This proportion yielded each student’s spendency (tendency to spend) within iSTART-ME and captures how much of their available resources (i.e., iBucks) a student used on game-based features, relative to the iBucks they earned.
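A minimal sketch of this calculation, assuming the log data are organized as one row per session with earned and spent iBucks per student (the column names are hypothetical):

```python
import pandas as pd

# Hypothetical per-session log: each row records iBucks earned and spent.
logs = pd.DataFrame({
    "student_id":    [1, 1, 2, 2],
    "ibucks_earned": [1200, 800, 2500, 1500],
    "ibucks_spent":  [300, 150, 0, 0],
})

# Spendency = total iBucks spent / total iBucks earned, per student.
totals = logs.groupby("student_id")[["ibucks_spent", "ibucks_earned"]].sum()
totals["spendency"] = totals["ibucks_spent"] / totals["ibucks_earned"]
print(totals)
```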
Interaction Patterns
All students’ interactions in iSTART-ME were logged and recorded within the iSTART-ME database. These process data were then organized according to the function afforded by each feature: generative practice, identification mini-games, personalization of interface features, and screens to monitor system achievements and progress (see Jackson and McNamara 2013, for detailed descriptions). Using a statistical sequencing procedure similar to that used in D’Mello et al. (2007) and this time-stamped data set, we calculated the probability of each student’s set of interactions within the system. This calculation can be described as L[I_t → X_{t+1}]; put simply, we calculated the probability of a student’s next interaction (X) with an interface feature given their previous interaction (I). This sequencing procedure allows us to trace students’ interaction trajectories across time.
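A minimal sketch of this idea, assuming each student’s log reduces to a time-ordered list of feature categories (the category labels and function name below are hypothetical):

```python
from collections import Counter, defaultdict

def transition_probabilities(interactions):
    """Estimate P(next = X | previous = I) from a time-ordered list of feature categories."""
    pair_counts = defaultdict(Counter)
    for prev, nxt in zip(interactions, interactions[1:]):
        pair_counts[prev][nxt] += 1
    return {
        prev: {nxt: count / sum(counts.values()) for nxt, count in counts.items()}
        for prev, counts in pair_counts.items()
    }

# Example: a short hypothetical trajectory for one student.
log = ["practice", "personalization", "practice", "mini-game", "practice"]
print(transition_probabilities(log))  # e.g., P(personalization | practice)
```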
Results
Spendency
To characterize how students spent their earned iBucks, spendency was calculated using a proportional formula that divided students’ total points spent (M = 1440.42, SD = 2272.78) by their total points earned (M = 23,180.57, SD = 30,541.69). Within the current study, students’ spendency varied considerably (range = 0.00 to 0.99, M = 0.49, SD = 0.26). This range reveals that students varied in their propensity to use iBucks. Indeed, some students may have hoarded or never spent any iBucks, while others spent iBucks almost as soon as they earned them. This wide range reflects the different ways in which students engaged with the currency in iSTART-ME.
Individual Differences and Spendency
To gain a deeper understanding of how individual differences related to students’ tendency to spend iBucks, Pearson correlation analyses were conducted on pretest reading and self-explanation scores. Results from these analyses revealed no significant relation between students’ spendency and their prior reading ability (r = −0.21, p = 0.19) or self-explanation performance (r = −0.10, p = 0.55). Thus, these results indicate that individuals’ reading ability and strategy performance were not related to their propensity to spend iBucks on interactions with game-based features.
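Correlations of this kind can be computed from the student-level data with a few lines; the sketch below uses scipy’s pearsonr on hypothetical stand-in values (the arrays and names are not the study’s data).

```python
from scipy.stats import pearsonr

# Hypothetical student-level values for illustration only.
spendency       = [0.10, 0.45, 0.80, 0.30, 0.62]
pretest_reading = [28, 35, 22, 40, 31]

r, p = pearsonr(spendency, pretest_reading)
print(f"r = {r:.2f}, p = {p:.3f}")
```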
Motivation, Enjoyment and Spendency
To assess how spendency related to students’ self-reported enjoyment and motivation, Pearson correlation analyses were conducted using students’ mean enjoyment and motivation scores. Results from these analyses revealed no significant relation between students’ spendency and their self-reported enjoyment (r = −0.18, p = 0.26) or motivation (r = −0.21, p = 0.19). Thus, students’ propensity to spend their earned iBucks does not seem to be the result of boredom or disinterest. Overall, students tended to report high ratings of enjoyment (M = 5.0, SD = 0.75) and motivation (M = 5.37, SD = 0.80) within the system. These results are similar to recent work showing that students generally rate their interactions with educational games as positive experiences (Jackson and McNamara 2013; Rodrigo and Baker 2011).
Interaction Patterns
Table: Relation between interaction patterns and spendency
Overall, results from this analysis demonstrated a significant positive relation between spendency and two transition probabilities. First, students who had a higher spendency were more likely to interact with a personalizable feature after playing a generative practice game (r = 0.34, p < .05). Similarly, students with a high spendency were also more likely to return to a generative practice game after interacting with a personalizable feature (r = 0.41, p < .01). These results reveal that students who spent a higher proportion of their iBucks were not simply jumping among features that did not reinforce learning strategies. Indeed, these students seemed to be interacting in a pattern in which they practiced generating self-explanations and then moved to personalizing various game-based features. Interestingly, these students were not likely to stay in the generative practice or personalizable feature screens, but instead transitioned between the two.
In-System Performance
Table: Hierarchical linear regression analyses predicting achievement level

| Variable | B | SE | β | ΔR² |
|---|---|---|---|---|
| Model 1 | | | | 0.36** |
| Pretest SE | 5.75 | 1.25 | 0.60** | |
| Model 2 | | | | 0.17** |
| Pretest SE | 5.36 | 1.10 | 0.56** | |
| Spendency | −9.41 | 2.56 | −0.42** | |
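The two-step structure of this analysis (and of the later regressions in this section) can be sketched as follows, assuming a student-level DataFrame with the hypothetical column names below: Model 1 enters pretest self-explanation quality, Model 2 adds spendency, and the change in R² indicates the variance spendency explains beyond prior ability.

```python
import statsmodels.formula.api as smf

def hierarchical_r2_change(df):
    """Two-step hierarchical regression: pretest SE entered first, then spendency added."""
    model_1 = smf.ols("achievement_level ~ pretest_se", data=df).fit()
    model_2 = smf.ols("achievement_level ~ pretest_se + spendency", data=df).fit()
    delta_r2 = model_2.rsquared - model_1.rsquared
    return model_1, model_2, delta_r2

# Usage (df is a hypothetical pandas DataFrame of student-level measures):
# m1, m2, delta_r2 = hierarchical_r2_change(df)
```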
Strategy Performance
Table: Spendency and daily strategy performance

| Daily strategy performance | Correlation with spendency |
|---|---|
| Session 1 | −0.147 |
| Session 2 | −0.238 |
| Session 3 | −0.392* |
| Session 4 | −0.448** |
| Session 5 | −0.479** |
| Session 6 | −0.354** |
| Session 7 | −0.416** |
| Session 8 | −0.391** |
Table: Hierarchical linear regressions predicting self-explanation quality from spendency and prior strategy ability

| Self-explanation quality | β | ΔR² | R² |
|---|---|---|---|
| Session 3 | | | 0.31* |
| Prior strategy ability | 0.40 | 0.19** | |
| Spendency | −0.36 | 0.13* | |
| Session 4 | | | 0.29** |
| Prior strategy ability | 0.29 | 0.11* | |
| Spendency | −0.42 | 0.18* | |
| Session 5 | | | 0.37** |
| Prior strategy ability | 0.37 | 0.17** | |
| Spendency | −0.45 | 0.20** | |
| Session 6 | | | 0.17* |
| Prior strategy ability | 0.20 | 0.05 | |
| Spendency | −0.34 | 0.12* | |
| Session 7 | | | 0.25* |
| Prior strategy ability | 0.27 | 0.09 | |
| Spendency | −0.39 | 0.16** | |
| Session 8 | | | 0.30* |
| Prior strategy ability | 0.38 | 0.14* | |
| Spendency | −0.39 | 0.16* | |
Finally, the current study investigated how students’ interactions with game-based features impacted their long-term learning outcomes (i.e., self-explanation quality). Pearson correlations were calculated to examine how spendency related to posttest and retention self-explanation scores. This analysis revealed that spendency was not related to self-explanation quality at posttest (r = −0.195, p = 0.227) or at retention (r = −0.231, p = 0.187). Taken together, these results indicate that spendency had an immediate negative effect on strategy performance, but it did not appear to be a detriment over time.
Learning Transfer
Table: Hierarchical linear regression analyses predicting reading comprehension scores on the transfer task

| Variable | B | SE | β | ΔR² |
|---|---|---|---|---|
| Model 1 | | | | 0.10* |
| Pretest SE | 1.27 | 0.61 | 0.32* | |
| Model 2 | | | | 0.13* |
| Pretest SE | 1.14 | 0.58 | 0.29 | |
| Spendency | −3.40 | 1.37 | −0.36* | |
General Discussion
The current study adds to the literature by using process data to investigate how students’ propensity to use in-system resources (i.e., iBucks) to engage with game-based features (spendency) impacted their interaction trajectories, in-system performance, attitudes, and learning gains. We introduce the term spendency to describe students’ use of in-game currency. The importance of this game feature is not isolated to the iSTART-ME system; the term spendency can be applied to any system that employs a gamification technique involving in-game currency. As the gamification of ITSs becomes more prevalent, it is important to understand how users choose to engage with various types of game-based features and the impact of those interactions.
Initial results from this study revealed that students’ spendency was related to a specific interaction pattern within the iSTART-ME system. Specifically, students with a high spendency were more likely to engage in an interaction loop between generative practice games and personalizable features. Thus, these students played a generative practice game and then chose to customize the system in some way. Once they had customized the system, these students tended to revert back to playing a generative practice game. These unique trajectories could be attributable to a couple of factors. The first is that the personalizable features may be the most visually salient features on the interface, as they are prominently located in the right-hand corner of the screen and also offer students a multitude of editing options (see Fig. 2). Thus, these features may draw students’ attention and resources toward them more often. The second is that these features are the only elements within the system that afford detachment from educational content. High spendency students may have interacted with these features as a way to get a mental break from the strategy instruction embedded within the system. Interestingly, attitudinal results from the current study indicate that spendency was not related to motivation or engagement. Thus, these students did not appear to be interacting with the personalizable features simply out of boredom.
Current literature indicates that interactions with game-based features can have a negative impact on students’ learning outcomes (Rowe et al. 2009). However, many of these studies examine the impact of gamification on learning or performance outcomes after only one session. The current work is unique because we examine how students’ interactions with game-based features influence target skill acquisition over multiple training sessions. Indeed, results from the current study add a deeper understanding of the impact of game-based features by revealing that students’ spendency had an immediate negative impact on in-system performance, daily strategy performance, and learning transfer. Thus, students who were more interested in spending their earned currency did not perform well during training and also had lower scores on the learned skill transfer tasks. This finding could be due to students placing a higher importance on spending their earned resources to interact with features and, therefore, spending less time engaged in the learning tasks. Overall, these results support the hypothesis that overexposure to game-based features can turn them into seductive distracters that pull students’ attention away from the designated task, thus negatively influencing their ability to transfer learned skills to new tasks.
ITS researchers have used numerous types of game-based features to leverage students’ enjoyment and (indirectly) impact learning. However, the influence of these features is not completely understood. The results from the current study suggest that interactions with game-based features can vary; however, when students place a high amount of importance on these features, there may be immediate and long-term negative consequences for task performance. This finding is important because it shows that the consequences of embedded game-based features may primarily impact immediate learning outcomes and students’ ability to transfer their learned skills to new tasks. These results underscore the importance of understanding both the immediate and long-term effects of game-based features that are integrated into learning environments. Although previous work has shown positive attitudinal effects from students’ exposure to game-based features (Jackson and McNamara 2013; Snow et al. 2013a; Rai and Beck 2012), the current results suggest that there are some potential consequences, at least for immediate and transfer performance. As learning environments are developed, the addition of game-based features may or may not be appropriate depending on the learning task. For instance, if the learning goal of a system is to show immediate gains in domain knowledge, the inclusion of game-based features may be detrimental to that goal. However, in a system such as iSTART-ME, which was designed to engage students in practice over long periods of time, the immediate effects of game-based features may not be as important in the long term. These initial results should warn developers to ensure that the inclusion of features does not interfere with the learning goal relative to the timeframe of the system.
Although it is important to understand the impact of game-based features on system and task performance, it is also important to identify students who are more inclined to engage with these features. Understanding the individual differences that drive students’ interactions within systems may help researchers develop environments wherein traits of the interface adapt depending on the user’s affect. The current study took steps to identify individual differences that may influence students’ propensity to interact with game-based features. However, results indicate that spendency was not related to students’ scores on the Gates-MacGinitie Reading Comprehension Test or their prior strategy performance. These results suggest that both high and low ability students use their in-game currency to interact with game-based features at an equivalent rate. These preliminary results begin to show that both high and low ability students engage with seductive distracters, though work remains to further elucidate how attitudinal measures influence whether students tend to attend to seductive distracters or ignore them.
Future Work
The analyses presented in the current work are intended as a seed for future studies by providing evidence that system log data can provide valuable metrics for the ways in which users behave within game-based environments. Going forward, we plan to include these initial behavioral measures in a student model within the iSTART-ME system. Such analyses will be especially valuable if the iSTART-ME system is able to recognize non-optimal learning behaviors and steer students toward more effective behaviors. For instance, if a student is engaging in high spendency behaviors, it may be beneficial for iSTART-ME to have the capability to recognize this trend and prompt the student toward less spending and more generative practice.
Another future direction of study regards finding a “sweet spot” or balance between game-based and pedagogical features, successfully promoting both engagement and learning. Results from this study and others (Jackson and McNamara 2013) suggest that exposure to game-based features may negatively influence immediate performance and learning outcomes. However, other research has shown that game-based features have a positive effect on students’ attitudes (Rai and Beck 2012; Snow et al. 2013c). Combined, these findings suggest that future work should further focus on the complex interplay between disengagement and learning within game-based environments.
Conclusion
Game-based features have been incorporated within a number of adaptive environments, principally in an attempt to enhance students’ engagement and interest in particular learning tasks. While this movement toward more engaging and creative learning environments is compelling, it is important for researchers and educators to more fully understand the impact that these features may or may not have on students. Our study adopts a novel approach to achieving that objective by using process data to investigate how students’ spendency is related to training performance, immediate learning outcomes, and learning transfer. Previous work has found that game-based features may act as seductive distracters, negatively impacting learning outcomes (e.g., Harp and Mayer 1997). Such work may lead to the conclusion that game-based features and games should not be incorporated within learning environments. Our overarching assumption is that game-based features can have a negative impact on learning and thus should be incorporated into systems with caution. The results of this study further support that assumption. Specifically, outcomes from the current work indicate that students’ propensity to spend earned iBucks to interact with embedded game-based features led to immediate negative consequences for training performance and at transfer. Thus, initial results reveal that this specific gamification technique (i.e., currency) has the potential to divert students’ attention and negatively impact their target skill acquisition within a game-based environment.
References
- Baker, R. S., Corbett, A. T., Koedinger, K. R., & Wagner, A. Z. (2004). Off-task behavior in the Cognitive Tutor classroom: When students “game the system.” Proceedings of ACM CHI 2004: Computer-Human Interaction (pp. 383–390). New York: ACM.
- Bell, C., & McNamara, D. S. (2007). Integrating iSTART into a high school curriculum. In Proceedings of the 29th Annual Meeting of the Cognitive Science Society (pp. 809–814).
- Carroll, J. A. (1963). Model for school learning. Teachers College Record, 64, 723–733.
- Chi, M. T., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: how students study and use examples in learning to solve problems. Cognitive Science, 13, 145–182.
- Cordova, D. I., & Lepper, M. R. (1996). Intrinsic motivation and the process of learning: beneficial effects of contextualization, personalization, and choice. Journal of Educational Psychology, 88, 715–730.
- D’Mello, S. K., Taylor, R., & Graesser, A. C. (2007). Monitoring affective trajectories during complex learning. In D. S. McNamara & J. G. Trafton (Eds.), Proceedings of the 29th annual meeting of the cognitive science society (pp. 203–208). Austin: Cognitive Science Society.
- Deterding, S., Sicart, M., Nacke, L., O’Hara, K., & Dixon, D. (2011). Gamification: Using game-design elements in non-gaming contexts. In Part 2 of the Proceedings of the 2011 annual conference extended abstracts on human factors in computing systems (pp. 2425–2428). ACM.
- Facer, K., Joiner, R., Stanton, D., Reid, J., Hull, R., & Kirk, D. (2004). Savannah: mobile gaming and learning? Journal of Computer Assisted Learning, 20, 399–409.
- Garner, R., Alexander, P. A., Gillingham, M. G., Kulikowich, J. M., & Brown, R. (1991). Interest and learning from text. American Educational Research Journal, 28, 643–659.
- Gillam, B. D., & James, S. R. (1999). U.S. Patent No. 5,964,660. Washington, DC: U.S. Patent and Trademark Office.
- Godwin, K. E., & Fisher, A. V. (2011). Allocation of attention in classroom environments: consequences for learning. Paper presented at the Annual Meeting of the Cognitive Science Society, Boston, Massachusetts.
- Harp, S. F., & Mayer, R. E. (1997). The role of interest in learning from scientific text and illustrations: on the distinction between emotional and cognitive interest. Journal of Educational Psychology, 89, 92–102.
- Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage: a theory of cognitive interest in science learning. Journal of Educational Psychology, 90, 414.
- Hidi, S., & Baird, W. (1986). Interestingness—a neglected variable in discourse processing. Cognitive Science, 10, 179–194.
- Jackson, G. T., & McNamara, D. S. (2011). Motivational impacts of a game-based intelligent tutoring system. In R. C. Murray & P. M. McCarthy (Eds.), Proceedings of the 24th International Florida Artificial Intelligence Research Society (FLAIRS) conference (pp. 519–524). Menlo Park: AAAI Press.
- Jackson, G. T., & McNamara, D. S. (2013). Motivation and performance in a game-based intelligent tutoring system. Journal of Educational Psychology. doi: 10.1037/a0032580. Advance online publication.
- Jackson, G. T., Boonthum, C., & McNamara, D. S. (2009a). iSTART-ME: situating extended learning within a game-based environment. In H. C. Lane, A. Ogan, & V. Shute (Eds.), Proceedings of the workshop on intelligent educational games at the 14th annual conference on artificial intelligence in education (pp. 59–68). Brighton: AIED.
- Jackson, G. T., Graesser, A. C., & McNamara, D. S. (2009b). What students expect may have more impact than what they know or feel. In V. Dimitrova, R. Mizoguchi, B. du Boulay, & A. C. Graesser (Eds.), Artificial intelligence in education; building learning systems that care; from knowledge representation to affective modeling (pp. 73–80). Amsterdam: Ios Press.
- Jackson, G. T., Boonthum, C., & McNamara, D. S. (2010a). The efficacy of iSTART extended practice: low ability students catch up. In J. Kay & V. Aleven (Eds.), Proceedings of the 10th international conference on intelligent tutoring systems (pp. 349–351). Berlin: Springer.
- Jackson, G. T., Guess, R. H., & McNamara, D. S. (2010b). Assessing cognitively complex strategy use in an untrained domain. Topics in Cognitive Science, 2, 127–137.
- Kim, B., Park, H., & Baek, Y. (2009). Not just fun, but serious strategies: using meta-cognitive strategies in game-based learning. Computers & Education, 52(4), 800–810.
- Landauer, T. K., McNamara, D. S., Dennis, S. E., & Kintsch, W. E. (2007). Handbook of latent semantic analysis. Lawrence Erlbaum Associates Publishers.
- Lee, C. S., Goh, D. H. L., Chua, A. Y., & Ang, R. P. (2010). Indagator: investigating perceived gratifications of an application that blends mobile content sharing with gameplay. Journal of the American Society for Information Science and Technology, 61(6), 1244–1257.
- MacGinitie, W. H., & MacGinitie, R. K. (1989). Gates MacGinitie reading tests. Chicago: Riverside.
- Magliano, J. P., Todaro, S., Millis, K., Wiemer-Hastings, K., Kim, H. J., & McNamara, D. S. (2005). Changes in reading strategies as a function of reading training: a comparison of live and computerized training. Journal of Educational Computing Research, 32, 185–208.
- McNamara, D. S. (2004). SERT: self-explanation reading training. Discourse Processes, 38, 1–30.
- McNamara, D. S. (2009). The importance of teaching reading strategies. Perspectives on Language and Literacy, 35, 34–40.
- McNamara, D. S., Levinstein, I. B., & Boonthum, C. (2004). iSTART: interactive strategy trainer for active reading and thinking. Behavioral Research Methods, Instruments, & Computers, 36, 222–233.
- McNamara, D. S., O’Reilly, T., Best, R., & Ozuru, Y. (2006). Improving adolescent students’ reading comprehension with iSTART. Journal of Educational Computing Research, 34, 147–171.
- McNamara, D. S., Boonthum, C., Levinstein, I. B., & Millis, K. (2007a). Evaluating self-explanations in iSTART: comparing word-based and LSA algorithms. In T. Landauer, D. S. McNamara, S. Dennis, & W. Kintsch (Eds.), Handbook of latent semantic analysis (pp. 227–241). Mahwah: Erlbaum.
- McNamara, D. S., O’Reilly, T., Rowe, M., Boonthum, C., & Levinstein, I. B. (2007b). iSTART: a web-based tutor that teaches self-explanation and metacognitive reading strategies. In D. S. McNamara (Ed.), Reading comprehension strategies: theories, interventions, and technologies (pp. 397–420). Mahwah: Erlbaum.
- McNamara, D. S., Jackson, G. T., & Graesser, A. C. (2009). Intelligent tutoring and games (ITaG). In Proceedings of the Workshop on Intelligent Educational Games at the 14th Annual Conference on Artificial Intelligence in Education (pp. 1–10).
- Murray, T. (1999). Authoring intelligent tutoring systems: an analysis of the state of the art. International Journal of Artificial Intelligence in Education, 10, 98–129.
- Phillips, L. M., Norris, S. P., Osmond, W. C., & Maynard, A. M. (2002). Relative reading achievement: a longitudinal study of 187 children from first through sixth grades. Journal of Educational Psychology, 94, 3–13.
- Rai, D., & Beck, J. (2012). Math learning environment with game-like elements: an experimental framework. International Journal of Game Based Learning, 2, 90–110.
- Rodrigo, M. M. T., & Baker, R. S. J. D. (2011). Comparing learners’ affect while using an intelligent tutor and an educational game. Research and Practice in Technology Enhanced Learning, 6, 43–66.
- Roscoe, R. D., Brandon, R. D., Snow, E. L., & McNamara, D. S. (2013a). Game-based writing strategy practice with the Writing Pal. In Pytash & Ferdig (Eds.), Exploring technology for writing and writing instruction (pp. 1–20). Hershey: IGI Global.
- Roscoe, R. D., Snow, E. L., Brandon, R. D., & McNamara, D. S. (2013b). Educational game enjoyment, perceptions, and features in an intelligent writing tutor. In C. Boonthum-Denecke & G. M. Youngblood (Eds.), Proceedings of the 26th annual Florida Artificial Intelligence Research Society (FLAIRS) conference (pp. 515–520). Menlo Park: The AAAI Press.
- Rowe, J., McQuiggan, S., Robison, J., & Lester, J. (2009). Off-task behavior in narrative-centered learning environments. In Proceedings of the 14th International Conference on Artificial Intelligence in Education (pp. 99–106).
- Rowe, J., Shores, L., Mott, B., & Lester, J. (2010). Integrating learning and engagement in narrative-centered learning environments. In Proceedings of Intelligent Tutoring Systems (pp. 166–177).
- Rymaszewski, M. (2007). Second life: The official guide. Wiley.
- Sabourin, J., Rowe, J., Mott, B., & Lester, J. (2011). When off-task is on-task: the affective role of off-task behavior in narrative-centered learning environments. In Artificial Intelligence in Education (pp. 534–536). Springer Berlin/Heidelberg.
- Snow, E. L., Jackson, G. T., Varner, L. K., & McNamara, D. S. (2013a). The impact of performance orientation on students’ interactions and achievements in an ITS. In C. Boonthum-Denecke & G. M. Youngblood (Eds.), Proceedings of the 26th annual Florida Artificial Intelligence Research Society (FLAIRS) conference (pp. 521–526). Menlo Park: The AAAI Press.
- Snow, E. L., Jackson, G. T., Varner, L. K., & McNamara, D. S. (2013b). Investigating the effects of off-task personalization on system performance and attitudes within a game-based environment. In S. K. D’Mello, R. A. Calvo, & A. Olney (Eds.), Proceedings of the 6th international conference on educational data mining (pp. 272–275). Heidelberg: Springer.
- Snow, E. L., Jackson, G. T., Varner, L. K., & McNamara, D. S. (2013c). The impact of system interactions on motivation and performance. In Proceedings of the 15th International Conference on Human-Computer Interaction (HCII) (pp. 103–107). Heidelberg: Springer.
- Snow, E. L., Likens, A., Jackson, G. T., & McNamara, D. S. (2013d). Students’ walk through tutoring: Using a random walk analysis to profile students. In S. K. D’Mello, R. A. Calvo, & A. Olney (Eds.), Proceedings of the 6th international conference on educational data mining (pp. 276–279). Heidelberg: Springer.
- Snow, E. L., Jackson, G. T., & McNamara, D. S. (2014). Emergent behaviors in computer-based learning environments: computational signals of catching up. Computers in Human Behavior, 41, 62–70.
- Stallings, J. (1980). Allocated academic learning time revisited, or beyond time on task. Educational Researcher, 9(11), 11–16.
- VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46, 197–221.
- Woolf, B. P. (2009). Building intelligent interactive tutors: Student-centered strategies for revolutionizing e-learning. Burlington: Morgan Kaufmann Publishers.