Personalization is an emerging trend in gamification research, with several researchers proposing that gamified systems should take personal characteristics into account. However, creating good gamified designs is already effort-intensive, and tailoring system interactions to each user only adds to this workload. We propose machine learning algorithm-based personalized content selection to address a part of this problem and present a process for creating personalized designs that allows part of the implementation to be automated. The process is based on Deterding’s 2015 framework for gameful design, the lens of intrinsic skill atoms, with additional steps for selecting a personalization strategy and creating the algorithm. We then demonstrate the process by implementing personalized gamification for a computer-supported collaborative learning environment. For this demonstration, we use the gamification user type hexad for personalization and the heuristics for effective design of gamification for the overall design. The result of the applied design process is a context-aware, personalized gamification ruleset for collaborative environments. Lastly, we present a method for translating gamification rulesets into a machine-readable classifier algorithm using the CN2 rule inducer.
Gamification is the application of game-like elements to non-game environments. Recent literature reviews assessing the potential of gamification in education have found several positive implications, such as increased engagement and motivation [14, 46, 53], although some studies also link gamification to negative consequences, like unproductive competition or reward saturation that leads to demotivation [16, 18]. Different authors have pointed to contextual and personal differences to explain these mixed results and have called upon future research to take these characteristics into account [3, 40, 59]. Several researchers propose that gamified systems should be tailored to the system’s different users in order to realize the full potential of gamification [11, 22, 45, 48, 61].
Some research has been performed on adaptive gamification systems. However, we underline a difference between adaptive gamification, where the gamified system reacts to different situations, and personalized gamification, where the system responds more structurally to both the situation and the characteristics of individual users. There has been little research on personalized gamification systems, with only a few studies on the design of such systems [4, 49]. A major issue in personalized gamification is that if a person were to choose which personalization strategy to apply to each user, the process would be very time consuming, would need constant monitoring, and would be very expensive. Each new personalization strategy would essentially multiply the already work-intensive gamification design and implementation work.
We propose that, to address the issues and effort involved in selecting personalized content, gamification systems should use algorithms to automate some aspects of personalization work. Furthermore, the design of gamification personalization approaches should be systematic, theory-based, and repeatable. To address these issues, we present a design process for creating supervised machine learning algorithms that enable the selection of personalized gamification elements for each type of user depending on the user profile and the system context. We base the design process on Deterding’s gamification design process and extend it by adding steps for selecting a personalization strategy and creating the algorithm.
In our research, we follow the design science research (DSR) methodology , which has been defined by Hevner and Chatterjee (p. 5) as “a research paradigm in which a designer answers questions relevant to human problems via the creation of innovative artifacts, thereby contributing new knowledge to the body of scientific evidence.” Depending on the level of abstraction, the outcomes of design science research can be instantiations of software, models, design methods, or design theories . Our contribution in this paper is the extension of an existing design process, which can be classified as a design method type of artefact. In design science, the validity of artefacts is evaluated by their utility . Therefore, we also demonstrate the design process by presenting a case of how we applied it in the context of computer-supported collaborative learning (CSCL) systems, providing initial support for the process’s utility.
The paper is structured as follows: The next section reviews literature on gamification in education and the state of the art of personalization in gamification. In Section 3, we detail how we applied the design science research method. In Section 4, we present the design process for personalized gamification and in Section 5 we demonstrate the process. The paper ends with discussion and conclusion in Section 6.
Related research on gamification
In this section, we review related literature on gamification, how gamification has previously been applied to education, and the theoretical principles our work is based on.
Gamification in education
Various positive effects of gamified educational systems have been uncovered over the years (for an overview, see e.g. ). According to current literature on gamification, effective gamification is about using game elements to foster users’ three innate needs underlying intrinsic motivation. These principles were originally adapted by a series of studies  from Deci and Ryan’s self-determination theory . These principles are:
Relatedness: the universal need to interact and be connected with others.
Competence: the universal need to be effective and master a problem in a given environment.
Autonomy: the universal need to control one’s own life.
Studies in the field indicate that gamification methods are successful in fostering collaboration, especially when following the principles of self-determination theory [34, 56]. Recent research concludes that simply applying a single outward aspect of gamification, like badges or other repetitive rewards [18, 30, 53], does not work; instead, gamification has to consider the motivation of the participants, the goals of the course, and gameful design together . In fact, a systematic mapping study on engagement levels and gamification  indicates that as many as 40% of gamified approaches fail to achieve meaningful differences in engagement and motivation when compared against a non-gamified system providing the same services. Successful gamification in collaborative learning was reported in studies by Moccozet et al.  and Dubois and Tamburrelli , where activities in the system increased online reputation and the course participants were able to publish their competence and compare their results to those of their peers. The common element of success in these studies is connecting the users’ achievements to a meaningful community that shares some of the user’s personal goals.
Personalization in gamification
Recently, a number of researchers have started to hypothesize that gamification’s presumed positive effects can be intensified when taking users’ personal characteristics into account [3, 5, 40]. This idea stems from the observation that the same game can elicit different responses and consequences in different users . Similarly, Koster  reasoned that different predispositions and social structures bring a unique, personalized sense of fun to everyone, making it impossible to design a universally “fun” game. More particularly, research has shown that different users interpret, functionalize and evaluate the same game elements in highly different ways . Antin and Churchill  exemplified this by distinguishing five different functions a user can ascribe to a badge. Further, it has been shown that (a) the enjoyment derived from a game [9, 50, 57]; (b) a user’s preference for specific game elements ; (c) the perceived persuasiveness of game elements ; and (d) the motivation derived from game elements [49, 57] are all impacted by a user’s personality and personal characteristics. In sum, it can be reasoned that gamified systems should be specifically tailored to their different users in order for gamification to live up to its full potential [11, 22, 60, 61].
The success of personalization techniques has already been proven in other digital contexts, such as persuasive technologies and games (see for example [2, 31, 54]). The first studies surfacing in the field of gamification paint a similar picture, stating that personalized gamification leads to more behavioral and emotional engagement , while also enhancing users’ self-efficacy and their perception of the system’s usefulness and ease of use . However, the existing research scrutinizing the potential of personalized gamified systems remains scarce , with research on how to design such systems being close to non-existent . For example, one recent work on gamified learning  presents a system that adapts to an individual learner’s pace but does not have personalized content, while another recent study on gamified learning does have personalized content , but that content is still manually assigned to each student. While initial research on personalized gamification is surfacing, we argue that there is a research gap in the design processes of personalized gamification systems.
Other recent works [23, 27, 28] mention tailored challenges, which extend the concept of gamification towards individually tailored assignments and tasks. These automatically adjust to the personal experience and knowledge of the user, taking into account e.g. user capabilities and language skills. The task and challenge tailoring can also include other aspects, for example the cultural background of the users .
Utilizing design science research method to create a design process
In this section, we describe how we utilized the principles of design science to create new design knowledge on personalized gamification. What distinguishes design science research from positivist research is that the outcome of design science research is prescriptive knowledge . Design science research often begins with an important opportunity, a challenging problem, or a vision for the application domain [20, 26]. During the research process, DSR produces both an artefact that addresses an issue in the application domain and prescriptive knowledge on how to change things . In addition to instantiations of artefacts, such as software systems, design science research processes can create higher-level artefacts such as design methods, design principles, or design theories.
In our design science research process, we applied the abstract design knowledge framework by Ostrowski and Helfert , which follows Goldkuhl and Lind’s  division of design science research into an empirical part (a design practice) and a theoretical part (meta-design). The abstract meta-design part creates knowledge such as design theories, generic process models, or guidelines for design. These meta-artefacts can in turn be used in the creation of situational design knowledge, such as design models of specific gameful designs or instantiations of gamified systems. In our research, our aim is to create a generally applicable design process that can inform the creation of situated gamification designs.
To provide structure for our research, we used the design science research methodology synthesized by Peffers et al. . They present six design science activities, which we summarize below, detailing how we proceeded in each. While the activities are often presented in a linear form, it should be noted that they are iterative, and it is often necessary to return to an earlier activity based on feedback from a later one, such as returning to design from evaluation.
Problem identification and motivation. We identified the research gap in creating personalized gamification approaches from our earlier literature review on gamification  and from a literature review on adaptive gamification . Furthermore, while there are already some adaptive gamification approaches, there are few design processes for creating them.
Defining the objectives for a solution. Our research team, which includes software engineers, data analysis researchers, game designers and gamification researchers, decided that using machine learning-based algorithms would be the most efficient way to target personalization approaches. Furthermore, it was decided that our process should be based on Deterding’s gamification design process , as it demonstrates research rigor, is well grounded in theory, and already presents well-justified arguments that gamification should be context-specific.
Design and development. In this activity, we created the additional process steps that were required to adapt the existing process for personalized gamification. We performed the adaptation by identifying a minimum number of additional steps required and then refining their description when applying them in the demonstration and evaluation steps.
Demonstration. The extended process was demonstrated on paper to other researchers in the community. Based on the feedback, the process was further revised in subsequent design and development activity iterations.
Evaluation. The initial evaluation was performed by using the process to create a personalized gamification design for computer-supported collaborative learning environments, as described in Section 5.
Communication. The communication activities include academic conferences, journal publications, and publishing some parts of the artefacts in scientific artefact repositories (e.g. Zenodo).
To summarize, we (1) used the design science research methodology activities by Peffers et al.  (2) to extend Deterding’s gamification design process  (3) to create a design process for algorithm-based personalized gamification. In our validation process, we follow the multi-grounding principles by Goldkuhl and Lind , where the meta-artefact (i.e. our general gamification design process) should be validated by evaluating it against the scientific body of knowledge and by using it to create a situational artefact (i.e. a single gamification design). The situational artefact in turn should be validated empirically, which provides further evidence to support the meta-artefact. In this paper, we present the first stage of validation, i.e. the creation of a situational design. The empirical validation of the situational design, which would provide more evidence to support the process, is left for future work.
A design process for algorithm-based personalized gamification
There have been some efforts in researching and implementing personalized gamification, but thus far there has not been a process for implementing it without extensive handcrafting. In this section, we present a design process for algorithm-based personalized gamification that is based on Deterding’s framework  for creating gameful designs. Our novel contribution in this section is demonstrating how both a personalization strategy and an algorithm creation process can be used to augment existing design processes, with the algorithm allowing the choice of personalization strategies and tasks to be automated. Otherwise, the process follows the principles and design steps presented by Deterding. We also draw on the work of Monterrat and colleagues [42, 44] and Tondello and colleagues [39, 57, 58], in which predefined player types and a player adaptation model are used to improve the matching of game elements to users’ preferences.
The framework was selected because it allows a system designer to “restructure challenges inherent in the user’s goal pursuit into a systemic whole that optimally affords enjoyable, motivating experiences” (p. 311). In other words, the design framework is not just a series of formulaic design patterns and interface elements. Instead, it enables the system designer to use a variety of design lenses to harness challenges already present in the system and to create intrinsic integration  between the content and the gamification mechanic.
At the core of Deterding’s framework are principles for creating gameful designs for motivation and enjoyment, and a novel design perspective, which he names “the lens of intrinsic skill atoms”. Design lenses combine a memorable name, a concise statement of a design principle, and a set of focusing questions to evaluate game design from a specific perspective . Skill atoms originate from an effort to develop a formal grammar for games, in which skill atoms are the smallest defined elements, of which the following are used in gamification: goals, actions, objects, rules, feedback, challenge, and motivation. Using these principles, Deterding states that “in pursuing her needs, a user’s activity entails certain inherent, skill-based challenges. A gameful system supports the user’s needs by both (a) directly facilitating their attainment, removing all extraneous challenges, and (b) restructuring remaining inherent challenges into nested, interlinked feedback loops of goals, actions, objects, rules, and feedback that afford motivating experiences” (p. 315).
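To make the atom components concrete, the structure Deterding describes can be sketched as a simple record. The field names and the example below are our own illustration; the framework itself is prose-based and prescribes no data format:

```python
from dataclasses import dataclass

@dataclass
class SkillAtom:
    """One skill atom, following the components used in gamification:
    goals, actions, objects, rules, feedback, challenge, and motivation."""
    goal: str          # what the user tries to achieve
    actions: list      # actions available for pursuing the goal
    objects: list      # system objects/state the actions operate on
    rules: list        # how actions change the system state
    feedback: str      # how progress is communicated to the user
    challenge: str     # the inherent, skill-based challenge
    motivation: str    # the need or motive the atom addresses

# A purely illustrative atom for a code-review activity:
review_atom = SkillAtom(
    goal="Get a peer's pull request reviewed and merged",
    actions=["comment on code", "approve changes", "request changes"],
    objects=["pull request", "source files", "review thread"],
    rules=["a pull request merges only after one approval"],
    feedback="review status shown on the pull request",
    challenge="reading and evaluating unfamiliar code",
    motivation="competence",
)
```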
The design framework steps and how the novel personalization and algorithm design steps fit in are summarized in Table 1. The steps are detailed further in the following paragraphs, with steps 3 and 6 being novel ones. Steps 1 to 2, 4 to 5, and 7 are from Deterding’s original framework .
Step 1. Define gamification strategy
The first step is to define the overall gamification strategy. What is the purpose behind the desired change in behavior? What is happening and what needs to be changed? How can this change be measured? Additionally, other software system requirements and constraints need to be considered, such as resources, scope, and technological requirements.
Step 2. Research
Analyze user behavior by deconstructing complex activities into behavior chains or using similar methods. This activity analysis should reveal the goals of the processes happening in the system and how they can be encouraged or discouraged. After the model has been created, motivations and hurdles for the target activity and its behaviors should be identified. Finally, the analysis of needs, motivations, and hurdles allows one to check whether gameful design is an effective and efficient strategy for achieving the target outcome .
Step 3. Select personalization strategies
The initial research, which consisted of behavior analysis and identifying user motivations, allows identifying whether the user base is diverse. If such diversity exists, the users should be profiled, for example with the gamification user type hexad [12, 39] or some customized approach. Earlier work on gamification design warns against oversimplification and against using methods that are not based on evidence . The chosen personalization approach should be grounded in the actual user base and then tested in Step 7, rapid prototyping.
Step 4. Synthesis
In this step, the motivations and inherent skill-based challenges of each activity targeted by the gameful design should be identified. Analysis results should be presented in the form of Activity > Challenge > Motivation clusters and serve as the main input for ideation. When using the design process for personalization, this step should also involve the selection of a machine learning or programming platform, and an analysis of how system activity and activity structure, in the form of skill atoms, can be described using the selected platform. The design lens of intrinsic skill atoms  should be used to single out the intrinsic skill atoms present in the process and to critically evaluate them.
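The Activity > Challenge > Motivation clusters can be captured as simple records for later processing. The clusters below are hypothetical examples for a collaborative-learning system, not taken from the actual design:

```python
from typing import NamedTuple

class AnalysisCluster(NamedTuple):
    """One Activity > Challenge > Motivation cluster from the synthesis step."""
    activity: str
    challenge: str
    motivation: str

# Hypothetical clusters, invented for illustration:
clusters = [
    AnalysisCluster("commit code to the team repository",
                    "producing work that peers will scrutinize",
                    "competence"),
    AnalysisCluster("answer questions in the course chat",
                    "explaining a concept clearly to others",
                    "relatedness"),
]
```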
Step 5. Ideation
This step involves ideating the rules by brainstorming promising design changes “by applying motivational design lenses to the identified motives and skill atom components of the target audience existing system” (p. 318). How to best apply the design lenses is described in the original framework. However, in the process we present, the brainstorming should be less flexible, because the designers need to keep the limitations of the machine learning platform in mind. When ideating new elements, the ideation process should proceed in parallel with designing how to describe new or existing skill atoms on the machine learning or programming platform selected in the previous step. Also, because this ideation step involves creating training material for a machine learning system, the designers should concentrate on creating as many example situations and challenges as possible.
Step 6. Distill rules into an algorithm
The analysis performed in the synthesis step and the ideation of new elements provide the source material for creating a personalization algorithm using supervised machine learning methods. Supervised machine learning essentially creates a function that maps an input to an output based on a set of example input-output pairs . This allows taking the set of conditions (input) connected to appropriate gamification elements (output) created in the previous step and using it to train the new algorithm. Just as gameful design should involve experts in the problem domain and game design, this step should involve an expert in machine learning in order to select the most suitable machine learning approach and to evaluate the validity of the algorithm.
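As a minimal, illustrative sketch of this input-output mapping, the following toy 1-nearest-neighbour classifier stands in for whatever supervised method the machine learning expert would select. All feature and task names are invented for illustration:

```python
# Toy supervised mapping from system conditions (input) to a recommended
# gamification task (output). Feature and task names are invented; a real
# implementation would use a properly selected and validated learner.

def hamming(a, b):
    """Number of positions where two binary feature vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def train(examples):
    """'Training' for 1-nearest-neighbour is just storing labelled examples."""
    return list(examples)

def predict(model, conditions):
    """Recommend the task whose stored conditions are closest to the input."""
    _, task = min(model, key=lambda ex: hamming(ex[0], conditions))
    return task

# Each example: ((user_is_socialiser, chat_active, open_issues), task)
examples = [
    ((1, 1, 0), "greet a new team member in chat"),
    ((0, 0, 1), "close an issue from the team tracker"),
    ((1, 0, 1), "ask a peer for help on an open issue"),
]

model = train(examples)
recommended = predict(model, (1, 1, 1))
```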
Step 7. Rapid prototyping
Rapid prototyping involves creating a series of prototypes and testing them first with the designers and then with volunteers from the user base. This allows evaluating whether the gameful design and the gamification system meet the goals set in Step 1. Is the system fun and does it encourage the desired behavior? This step is even more vital than in the original framework, because machine learning algorithms require evaluation and testing. Does the algorithm perform as desired, what is its accuracy in recognizing conditions, and does it provide suitable challenges?
Demonstrating the design process in a computer-supported collaborative learning context
In this section, we demonstrate the design process presented in the previous section by using it to design a personalized gamification approach for a computer-supported collaborative learning environment. To summarize, collaborative learning is a learning method where students have a symmetry of action, knowledge and status, and a low division of labor . Computer-supported collaborative learning facilitates this interaction with software tools and increases the potential for creative activities and social interaction . It was selected as the application domain because there is previous evidence that it can benefit from gamification, with some studies showing increases in student collaboration and motivation in educational settings . More specifically, the system aims to gamify a computer-supported collaborative learning environment used by software engineering students who practice working as engineering teams.
We follow the new design process for algorithm-based personalized gamification presented in the previous section and summarized in Table 1. In the following paragraphs, we detail step by step how the process first led to a personalization strategy, then to a personalized gamification design, and finally to an instantiation of the design as an algorithm. The demonstration is also an initial, artificial validation  of the process as a design science model artefact, as the validity of design science artefacts is evaluated based on their utility .
1. Define gamification strategy
First, we defined the strategy for our gamification approach. The target outcome is increased collaboration between students and increased engagement in our target audience, who are the users of the CSCL platform. The flexibility of the gamification design is constrained by automatically measured environmental variables, available resources for design, and the functionality of the platform.
2. Research
The user activity was translated into behavior chains by analyzing current literature on CSCL systems . User needs and motivations were adapted from current literature on motivation and self-determination theory as used in gamification . The design team concluded that the initial plan for the gamification design was fit for purpose.
3. Select personalization strategy
We selected the evidence-based gamification user type hexad [39, 58] as our personalization strategy. This enables creating a gamification task ruleset personalized for each user type. The personalization approach was evaluated by the design team, which concluded that in order to make gamification more user-centric and customized to the individual user in computer-supported collaborative learning environments, the system should include profiling of users in its design principles and select the most fitting gamification features for each user. The user type hexad and the personalization strategy are detailed further in Section 5.1.
4. Synthesis
The principles of self-determination theory , collaborative learning , good cooperative learning  and heuristics for the design of gamification in education  were used to analyze computer-supported collaborative learning systems. Typical actions taken in a computer-supported collaborative learning environment were analyzed, considering the context of possible actions that can be taken in a CSCL system aimed at software engineering students. Additionally, the design was considered in light of the user type hexad. The design heuristics used are detailed further in Section 5.2.
5. Ideation
Ideation was performed in a series of workshops, where a panel of experts ideated rules with a note-taker translating the ideas into the skill atom framework and presenting the results for approval. The panel of experts consisted of three experts on game design, three experts on gamification and education, and two software engineers. The ideation process resulted in a total of 69 gamification tasks for five different player types; when duplicates were collated, 42 individual tasks remained. The ideation process and how the rules were structured are detailed further in Section 5.3.
6. Distill rules into an algorithm
After ideation, we used the CN2 rule induction algorithm  to create a classifier that identifies the different conditions occurring in a CSCL environment  and recommends gamification tasks for the main CSCL system. The algorithm instantiation process and the resulting artefact are detailed further in Section 5.4.
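CN2 belongs to the family of sequential-covering rule inducers. The drastically simplified sketch below illustrates only the covering idea: repeatedly pick the single-attribute condition that best separates one class, emit it as a rule, remove the covered examples, and repeat. The real CN2 algorithm performs beam search over rule conjunctions with statistical significance testing, and the example data here is invented, not drawn from the actual ruleset:

```python
# Simplified sequential-covering sketch of the idea behind CN2.
# All data and attribute names are invented for illustration.

def best_rule(rows, attrs):
    """Find the (attribute, value, class) rule with the highest purity * coverage."""
    best, best_score = None, -1.0
    for a in attrs:
        for v in {r[a] for r in rows}:
            covered = [r for r in rows if r[a] == v]
            for c in {r["class"] for r in covered}:
                hits = sum(r["class"] == c for r in covered)
                score = (hits / len(covered)) * hits  # purity weighted by coverage
                if score > best_score:
                    best, best_score = (a, v, c), score
    return best

def induce(rows, attrs):
    """Cover the data with an ordered rule list."""
    rules = []
    while rows:
        a, v, c = best_rule(rows, attrs)
        rules.append((a, v, c))
        rows = [r for r in rows if not (r[a] == v and r["class"] == c)]
    return rules

data = [
    {"user_type": "philanthropist", "chat_active": "yes", "class": "help-in-chat"},
    {"user_type": "philanthropist", "chat_active": "no",  "class": "help-in-chat"},
    {"user_type": "player",         "chat_active": "yes", "class": "cross-team-issue"},
]

rules = induce(data, ["user_type", "chat_active"])
```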
7. Rapid prototyping
The last step was performed in part and left partly for future work. The ruleset and the algorithm were tested and evaluated by the design team, but a naturalistic evaluation  in a classroom setting has not yet been performed. Combining the ruleset with a live CSCL system is part of future work.
Selecting the gamification personalization strategy
We selected the gamification user type hexad [39, 58] as the model for our personalization design when creating gamification approaches. Its creators used a survey with 133 participants and quantitative methods, first to develop and then to validate a response scale for assessing user preferences. This user model was selected over alternatives because it is evidence-based and gamification-specific.
The user types are summarized in Table 2. For each user type, we also present the intended gamification approach. The disruptor user type was defined as out of scope in this project. This user type tends to disrupt the system and is difficult to address within the context of the system. Instead, disruptors will be addressed by the autonomy- and relatedness-related challenges of the other types and by being involved in the development of the system.
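As an illustration, such a type-to-approach dispatch can be sketched as a lookup. The mapping below follows the general hexad literature rather than reproducing Table 2, and the function and description strings are our own invention:

```python
# Illustrative type-to-approach dispatch. The descriptions follow the
# general hexad literature, not necessarily Table 2; disruptors are
# handled outside the task system, as described above.

HEXAD_APPROACH = {
    "philanthropist": "altruistic tasks: helping and mentoring peers",
    "socialiser":     "social tasks: interaction and team activities",
    "free spirit":    "autonomy tasks: exploration and creativity",
    "achiever":       "mastery tasks: challenges and skill progression",
    "player":         "reward-oriented tasks: points and leaderboards",
}

def approach_for(user_type: str) -> str:
    """Look up the intended gamification approach for a hexad user type."""
    if user_type == "disruptor":
        raise ValueError("disruptor type is handled outside the task system")
    return HEXAD_APPROACH[user_type]
```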
Selecting design heuristics for gamification
The panel of experts that participated in the design workshops was informed by the principles of good cooperative learning , the gamification user type hexad [39, 58], and the self-determination theory-based design heuristics for effective gamification of education  during the design of the ruleset.
We first present the design heuristics by Roy and Zaman  and describe how they guided the design process.
#1 Avoid obligatory uses. The computer-supported collaborative learning environment and especially its gamification features are voluntary to use.
#2 Provide a moderate amount of meaningful options. The user is able to choose which gamification tasks to accomplish, if any. Furthermore, as the challenges are based on the user’s characteristics, these challenges are relevant to each person and as such present meaningful options to the user.
#3 Set challenging but manageable goals. No designed task is meaningless or impossible to accomplish. Also, the difficulty level of the implemented challenges is tuned to the users’ capabilities, as such keeping the tasks manageable, while at the same time being challenging.
#4 Provide positive, competence-related feedback. Just as tasks should be meaningful, the feedback is meaningful and positive. There is no feedback that can be perceived as a punishment. When presented in the CSCL system, the feedback should make the user feel capable.
#5 Facilitate social interaction. There are several gamification tasks that show the positive impact the user’s actions can have on each other. CSCL systems are social by their nature and several tasks promote positive interaction.
#6 When supporting a particular psychological need, be wary to not thwart the other needs. The gamification tasks do not concentrate on promoting only one aspect over others. For example, when promoting relatedness and prompting users to interact, users should not feel that they are forced to, and thus feel less autonomous.
#7 Align gamification with the goal of the activity in question. Gamification tasks support both motivation and goal achievement. The CSCL system does not distract from accomplishing actual team and learning goals.
#8 Create a need-supporting context. The system is voluntary, open and supportive. When the algorithm is integrated to a CSCL environment, it should be presented as a supportive feature, not the main feature.
#9 Make the system flexible. The gamification system is adaptive, providing personalized challenges to different user types. The adaptive approach is the main novel contribution of this project for CSCL systems.
Also, the principles for well-functioning cooperative learning were followed, as formulated by Johnson and Johnson  and summarized in the following paragraphs.
#A Clearly perceived positive interdependence. This design recommendation fits design heuristic #5. Also, the system should promote a sense of community and demonstrate how user activities can benefit others.
#B Considerable promotive interaction. This design recommendation fits design heuristic #5. The system should provide opportunities for positive interactions.
#C Clearly perceived individual accountability and personal responsibility to achieve the goals of the group. The system should provide detailed enough feedback so that the contributions can be perceived at the level of individual user.
#D Frequent use of the relevant interpersonal and small-group skills. The system should enable and empower social contact, instead of reducing it e.g. to upvotes.
#E Frequent and regular group processing of current functioning to improve the future effectiveness of the group. The system should enable dialogue at meta-level and encourage mutual feedback.
Structuring gamification tasks
The gamification tasks were created during two design workshops. Experts were selected for the workshops based on their expertise in related fields, including computer-supported collaborative learning, software engineering, gamification, and game design. First, rules were ideated in a brainstorming fashion, during which a secretary recorded the ideas. After that, the ideas were tabulated into spreadsheets, structured into skill atoms, and analyzed with the lens of intrinsic skill atoms. The analysis results were used to prioritize the ideas and to check whether they addressed the inherent challenges in the system.
We present one sample gamification task for each user type as an example in Table 3. In the following paragraphs we explain each element of a skill atom and describe the first and last examples from Table 3 (philanthropist and player) in detail. After that, we evaluate these two examples with the design lens.
Goal. An extra, quest-like challenge that the user needs to accomplish, presented to the user by the system based on the recommendation of the algorithm.
As used in example: For the philanthropist type of user, the goal is to help a fellow student in the class’s chat system. For the player type of user, the goal is to get one of the members of the other student team to help the player to solve an issue from the player’s student team’s issue tracker.
Actions. The set of actions that the user can take in the system to achieve the goal, defined in columns Task 1 and 2.
As used in example: The philanthropist can interact through the chat system with other students. The player needs another student to contribute to their source code repository and then have the task progress enough so that the task can be marked as solved.
Objects. What the user can act upon, i.e. the system state. In this case, the conditions of Prerequisites 1 to 3 define which goals and actions are presented to the user.
As used in example: The system state is monitored through a series of inputs from the CSCL environment. In this case, the system is a combination of social media, a source code repository, and chat, quite similar to GitHub. Users can commit source code, set goals for their team, and evaluate source code contributions against existing goals or issues. The philanthropist would mostly interact with the chat system to help others, while the player would use social skills to get other students to help them.
Rules. The specification of what actions the user can take and how they affect the system. In this system's case, they are inherent to the functioning of the CSCL environment and the variables monitored by the system.
As used in example: The rules are published through a quest-like system when the preconditions trigger, preferably when the user is not engaged in another, higher-priority activity. They are delivered through the system's notification system, with an indication that the user could take actions that benefit the classroom and everyone's learning. The philanthropist would be prompted to find an inexperienced person in the chat and would receive an upvote after a helpful message. The player would be required to get a contribution to their team's repository from another student and then have an issue related to that part of the project solved.
Feedback. Sensory information that informs the user of system state changes. In this system's case, this is left open for the implementer of the CSCL environment; however, one minimal approach is presenting a notification and a badge when a goal has been achieved by the user's actions.
As used in example: When the rules set by the system have been achieved, the user is presented with positive feedback. With the philanthropist the feedback would be a “thank you” message by the person the user has helped, and in the player’s case it could be a badge or other virtual reward.
Challenge. The difficulty of achieving the goal, caused by the difference between the system state and the user's perceived current skill. The tasks should be meaningful and should always make the user feel that he or she made a real contribution to the collaborative environment.
As used in example: The challenges should not be trivial and should always be composed of inherent actions, i.e. actions the user should be performing in the first place. In both cases, the tasks are activities the users should be performing as part of computer-supported collaborative learning.
Motivation. The psychological needs energizing and directing the user to seek out and engage with the system; in this system's case, feelings of competence, relatedness, and autonomy.
As used in example: In these two cases, the main motivational points are competence and relatedness. The system encourages the users to connect socially and benefit from each other’s knowledge, allowing them to relate and demonstrate competence in a constructive manner. These motivational goals also match principles of good cooperative learning from the design heuristics.
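To make the structure concrete, the seven skill atom elements above can be captured as a simple record. The following is a minimal Python sketch; the class name, field names, and values are our own paraphrase of the lens's elements and of the philanthropist example, not identifiers from any actual implementation:

```python
from dataclasses import dataclass


@dataclass
class GamificationTask:
    """One quest-like task, structured by the elements of an intrinsic skill atom."""
    goal: str          # the quest-like challenge presented to the user
    actions: list      # actions the user can take to achieve the goal
    objects: list      # system objects / state the user acts upon
    rules: str         # when the task triggers and how it is resolved
    feedback: str      # sensory information on success, e.g. a badge
    challenge: str     # the non-trivial gap between system state and skill
    motivation: str    # psychological needs addressed


# The philanthropist example from Table 3, paraphrased:
philanthropist_task = GamificationTask(
    goal="Help a fellow student in the class's chat system",
    actions=["find an inexperienced student in chat", "answer their question"],
    objects=["chat system"],
    rules="Published as a quest when the preconditions trigger",
    feedback="A 'thank you' message from the helped student",
    challenge="Contributing constructively, not trivially",
    motivation="relatedness and competence",
)
```

Structuring each task this way keeps the design artifacts directly comparable across user types, which is what the evaluation grid below relies on.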
Lastly, we present the evaluation of the two tasks using the design lens of intrinsic skill atoms, following the lens's evaluation grid [12, p. 315].
What motivations energize and direct the activity?
Philanthropist: The desire to help others.
Player: The desire to “win” the game by following the rules of the system.
What challenges are inherent in the activity?
Philanthropist: Finding ways to constructively contribute to other teams.
Player: Finding another student that can help them and then constructively integrating those contributions into their student team’s own project.
How does the system articulate these inherent challenges in goals?
Both: The system presents a verbal description in the quest text.
What actions can users take in the system to achieve these goals?
Philanthropist: Interact socially in chat.
Player: First interact socially and then work together in the source code repository.
What objects can the user interact with in the system to achieve these goals?
Philanthropist: The chat system.
Player: The chat system, other social media, and the source control repository.
What rules does the system articulate that determine what actions are allowable, and what system changes and feedback they result in?
Both: All features available in the system are allowed, and the user can proceed with other tasks if they consider them more important. However, the system clearly specifies which activities lead to the task being accomplished.
What feedback does the system provide on how successful the user's actions were, and how much progress has the user made towards their goals?
Philanthropist: The system could, for example, present a “thank you” note from the student who received help.
Player: The system could, for example, present a visual congratulation notification and store a badge in the user's profile.
Algorithm for personalized gamification for a computer-supported collaborative learning system
The algorithm is based on the ruleset and the design process presented in the previous sections. It is designed to choose context-dependent, personalized gamification tasks for each user type of a computer-supported collaborative learning system. It is based on a classifier created with the CN2 rule induction algorithm, which condensed the ruleset into a set of if-else conditions. When activated, the algorithm uses environmental variables to decide which quest-type task should be presented to the user. The most important variables used in the system are presented in Table 4.
In this case, a gamification task means a task that corresponds to a set of goals that need to be met, similar, for example, to a quest in a video game. The task assignment, accomplishment, and feedback process follow the “new goal - rules - action - challenge - feedback - motivation” loop of the lens of intrinsic skill atoms, as presented in the design section.
The algorithm is designed to act as a stateless plugin for a specific type of computer-supported collaborative learning environment. It integrates into the CSCL system as presented in Fig. 1. It depends on the system to provide snapshots of status variables, which it uses to recommend gamification tasks. The system is responsible for task accomplishment tracking, feedback, and other interaction features. However, the ruleset is also presented in a human-readable format in the online appendix and contains some recommendations for task presentation. The algorithm depends on the CSCL system for system status as input, such as user gamification type, user skill, issue tracker activity, and discussion system activity. The full list is presented in the Online Appendix.
The algorithm design makes the following assumptions about the system: 1) the users of the system are students who are willing and allowed to help each other, 2) the students are engaged in collaborative teamwork and have a series of tasks to do, 3) there is a system to track the assigned tasks, such as GitHub, 4) the system tracks when participants work on tasks and allows external help, and 5) there is a free-form synchronous discussion system associated with the CSCL environment.
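The stateless plugin contract described above can be sketched as a single function call: the CSCL system pushes a snapshot of status variables and receives a recommended challenge class back, with nothing remembered between invocations. All names below (`recommend_task`, the variable keys, `dummy_classifier`) are our own illustrative assumptions, not identifiers from the published implementation:

```python
from typing import Optional


def recommend_task(snapshot: dict, classifier) -> Optional[int]:
    """Stateless recommendation call: no state is kept between invocations.

    `snapshot` carries the CSCL system's status variables; `classifier` is
    any callable mapping a snapshot to a challenge class (or None).
    """
    # A subset of the status variables the algorithm expects as input
    # (the full list is in the Online Appendix):
    required = {"hexad_type", "user_skill", "tracker_activity", "chat_activity"}
    missing = required - snapshot.keys()
    if missing:
        raise ValueError(f"missing status variables: {missing}")
    return classifier(snapshot)


def dummy_classifier(snapshot: dict) -> Optional[int]:
    # Stand-in for the real CN2-based classifier.
    return 7 if snapshot["hexad_type"] == "Free Spirit" else None


task = recommend_task(
    {"hexad_type": "Free Spirit", "user_skill": "high",
     "tracker_activity": "low", "chat_activity": "high"},
    dummy_classifier,
)
```

Keeping the plugin stateless means the CSCL system remains the single source of truth for task tracking and feedback, matching the division of responsibilities described above.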
As the basis of the algorithm, we used CN2 rule induction. CN2 is a basic component of many machine learning systems: it creates a list of classification rules from examples, using entropy as its search heuristic. In this case, the examples are the list of prerequisites that can trigger the conditions for providing personalized gamification tasks, and the classes are the individual gamification tasks the algorithm should offer. The CN2 rule inducer was originally designed to function in a noisy environment and to find a minimal number of rules that cover a maximal number of cases. Because the list of cases was already pre-vetted by the panel of experts, the CN2 inducer parameters were deliberately set to cause overfitting in order to cover all of the cases.
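CN2's entropy search heuristic is easy to illustrate: candidate rule conditions are scored by the entropy of the class distribution among the examples they cover, and conditions whose covered examples mostly belong to a single task class score best. A minimal sketch (the function is our illustration, not part of the CN2 implementation):

```python
import math
from collections import Counter


def class_entropy(labels):
    """Entropy (in bits) of the class distribution among covered examples.

    CN2 prefers rule conditions whose covered examples have low entropy,
    i.e. mostly belong to one gamification-task class.
    """
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


# A condition covering a single task class is a perfect split (0.0 bits),
# while a 50/50 split between two task classes carries a full bit:
pure = class_entropy([7, 7, 7, 7])
mixed = class_entropy([7, 7, 12, 12])
```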
The rule induction process from 69 human-defined rules resulted in 59 machine-format if-else rules. For example, the rule for the third task (free spirit) in Table 3 was induced into the following rule: “IF Hexad = Free Spirit AND Chat Activity != Low AND Ownteam opentasks = high AND Ownteam task age = high AND Ownteamactivity != high THEN Challenge_class = 7 (Quality 0.125)”. The CN2 rule inducer was used in unordered mode, which means all the rules are evaluated and the algorithm does not stop after the first match. When several rules match, the one with the highest quality is selected.
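The unordered-mode selection (evaluate every rule, let the highest-quality match win) can be sketched in plain Python. The first rule below encodes the induced free spirit rule quoted above, with the negation operator normalized to `!=`; the second rule is a purely hypothetical competitor added for illustration:

```python
# Each rule: a set of (operator, value) conditions, a challenge class,
# and the quality score assigned by the CN2 inducer.
RULES = [
    {"conditions": {"Hexad": ("==", "Free Spirit"),
                    "Chat Activity": ("!=", "Low"),
                    "Ownteam opentasks": ("==", "high"),
                    "Ownteam task age": ("==", "high"),
                    "Ownteamactivity": ("!=", "high")},
     "challenge_class": 7, "quality": 0.125},
    # Hypothetical lower-quality fallback rule, for illustration only:
    {"conditions": {"Hexad": ("==", "Free Spirit")},
     "challenge_class": 3, "quality": 0.05},
]

OPS = {"==": lambda a, b: a == b, "!=": lambda a, b: a != b}


def matches(conditions, snapshot):
    """True if every condition of the rule holds for the snapshot."""
    return all(OPS[op](snapshot.get(var), val)
               for var, (op, val) in conditions.items())


def classify(snapshot):
    """Unordered evaluation: all rules are checked, highest quality wins."""
    hits = [r for r in RULES if matches(r["conditions"], snapshot)]
    if not hits:
        return None
    return max(hits, key=lambda r: r["quality"])["challenge_class"]


snapshot = {"Hexad": "Free Spirit", "Chat Activity": "High",
            "Ownteam opentasks": "high", "Ownteam task age": "high",
            "Ownteamactivity": "low"}
# Both rules match this snapshot; challenge class 7 wins on quality.
```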
The full list of rules, training data, variables, and the algorithm itself, stored as a Python-based Orange Data Mining classifier, are available in the Online Appendix. Orange was selected as the classifier implementation because it provides a Python-based library and enables programmers to load and use the classifier without in-depth knowledge of machine learning. The appendix contains a short, interactive program for testing the classifier.
Discussion and conclusion
The research goal of this study was to discover how to systematically create algorithm-based personalized gamification systems that can save system designers or operators from repetitive personalization work, and to help base the personalization strategies on established design principles. To realize that goal, we used the design science research method to create a personalized gamification design process based on Deterding's work, with additional steps for personalization strategy and algorithm creation. We also demonstrated the process by applying it to a computer-supported collaborative learning environment, which resulted in an instantiation of one specific personalized gamification algorithm and an initial, artificial validation of the method according to the principles of design science [24, 51, 62].
There has been earlier research into adaptive gamification [5, 42] and some research into creating personalized gamification designs [4, 49]. Compared to earlier research, our novel contribution is presenting and demonstrating a design process that uses machine learning and algorithm-based automation to implement personalization. The process we presented is one possible answer for dynamically taking personal characteristics into account when designing the implementation of gamified systems, without the additional work involved in personalization overwhelming the operators of gamified systems.
The main limitation of the presented approach is that it introduces the need for machine learning expertise into gamification design. A second limitation is that while we have performed an artificial evaluation of the process, the system should be verified empirically. Future work should involve testing in a diversity of situations and feedback from design teams, to establish the benefits of using the process in various design situations and the suitability of machine learning-based algorithms in gamified systems.
Antin J, Churchill EF (2011) Badges in social media: A social psychological perspective. In: CHI 2011 Gamification Workshop Proceedings (Vancouver, BC, Canada, 2011)
Bakkes S, Tan CT, Pisan Y (2012) Personalised gaming: a motivation and overview of literature. In: Proceedings of the 8th Australasian Conference on Interactive Entertainment: Playing the System. ACM, p 4
Barata G, Gama S, Jorge J, Gonçalves D (2015) Gamification for smarter learning: tales from the trenches. Smart Learning Environments 2. https://doi.org/10.1186/s40561-015-0017-8
Böckle M, Micheel I, Bick M, Novak J (2018) A design framework for adaptive gamification applications. In: Proceedings of the 51st Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2018.151
Böckle M, Novak J, Bick M (2017) Towards adaptive gamification: a synthesis of current developments. In: Proceedings of the 25th European Conference on Information Systems (ECIS). Guimarães, Portugal
Busch M, Mattheiss E, Orji R, et al (2015) Personalization in serious and persuasive games and gamified interactions. In: Proceedings of the 2015 Annual Symposium on Computer-Human Interaction in Play. ACM, New York, pp 811–816
Choi TST, Walker KZ, Palermo C (2017) Culturally Tailored Diabetes Education for Chinese Patients: A Qualitative Case Study. J Transcult Nurs 28:315–323. https://doi.org/10.1177/1043659616677641
Clark P, Boswell R (1991) Rule induction with CN2: Some recent improvements. In: European Working Session on Learning. Springer, pp 151–163
Codish D, Ravid G (2014) Adaptive approach for gamification optimization. In: Proceedings of the 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing. IEEE Computer Society, Washington, DC, pp 609–610
Deci EL, Ryan RM (2012) Motivation, personality, and development within embedded social contexts: An overview of self-determination theory. The Oxford Handbook of Human Motivation, pp 85–107
Deterding S (2014) Eudaimonic Design, or: Six Invitations to Rethink Gamification. Social Science Research Network, Rochester
Deterding S (2015) The Lens of Intrinsic Skill Atoms: A Method for Gameful Design. Human–Computer Interaction 30:294–335. https://doi.org/10.1080/07370024.2014.993471
Deterding S, Dixon D, Khaled R, Nacke L (2011) From game design elements to gamefulness: Defining “Gamification”. In: Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments. ACM, New York, pp 9–15
Dicheva D, Dichev C, Agre G, Angelova G (2015) Gamification in education: A systematic mapping study. Educational Technology & Society 18:75–88
Dillenbourg P (1999) What do you mean by collaborative learning? Collaborative-Learning: Cognitive and Computational Approaches 1:1–15
Domínguez A, Saenz-de-Navarrete J, de-Marcos L et al (2013) Gamifying learning experiences: Practical implications and outcomes. Comput Educ 63:380–392. https://doi.org/10.1016/j.compedu.2012.12.020
Dubois DJ, Tamburrelli G (2013) Understanding gamification mechanisms for software development. In: Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering. pp 659–662
Falkner NJG, Falkner KE (2014) “Whither, Badges?” or “Wither, Badges!”: A Metastudy of Badges in Computer Science Education to Clarify Effects, Significance and Influence. In: Proceedings of the 14th Koli Calling International Conference on Computing Education Research. ACM, New York, pp 127–135
Goldkuhl G, Lind M (2010) A multi-grounded design research process. In: International Conference on Design Science Research in Information Systems. Springer, pp 45–60
Gregor S, Hevner AR (2013) Positioning and presenting design science research for maximum impact. MIS Q 37:337–355
Habgood MJ, Ainsworth SE (2011) Motivating children to learn effectively: Exploring the value of intrinsic integration in educational games. J Learn Sci 20:169–206
Hakulinen L, Auvinen T, Korhonen A (2013) Empirical study on the effect of achievement badges in TRAKLA2 online learning environment. In: Learning and Teaching in Computing and Engineering (LaTiCE), 2013. IEEE, pp 47–54
Hanus MD, Cruz C (2018) Leveling up the Classroom: A Theoretical Approach to Education Gamification. Gamification in Education: Breakthroughs in Research and Practice:583–610. https://doi.org/10.4018/978-1-5225-5198-0.ch030
Hevner A, Chatterjee S (2010) Design Research in Information Systems. Springer US, Boston
Hevner AR, March ST, Park J, Ram S (2004) Design science in information systems research. MIS Q 28:75–105
Iivari J (2007) A paradigmatic analysis of information systems as a design science. Scand J Inf Syst 19:5
Jabbour J, Dhillon HM, Shepherd HL et al (2017) Challenges in Producing Tailored Internet Patient Education Materials. International Journal of Radiation Oncology*Biology*Physics 97:866–867. https://doi.org/10.1016/j.ijrobp.2016.11.023
Jianu EM, Vasilateanu A (2017) Designing of an e-learning system using adaptivity and gamification. IEEE, pp 1–4
Johnson DW, Johnson RT (1999) Making cooperative learning work. Theory Pract 38:67–73. https://doi.org/10.1080/00405849909543834
Kapp KM (2012) The gamification of learning and instruction: game-based methods and strategies for training and education. John Wiley & Sons, Hoboken
Kaptein M, Markopoulos P, de Ruyter B, Aarts E (2015) Personalizing persuasive technologies: Explicit and implicit personalization using persuasion profiles. International Journal of Human-Computer Studies 77:38–51. https://doi.org/10.1016/j.ijhcs.2015.01.004
Kasurinen J, Knutas A (2018) Publication trends in gamification: a systematic mapping study. Computer Science Review 27:33–44
Knutas A, Ikonen J, Maggiorini D et al (2016) Creating Student Interaction Profiles for Adaptive Collaboration Gamification Design. International Journal of Human Capital and Information Technology Professionals (IJHCITP) 7:47–62
Knutas A, Ikonen J, Nikula U, Porras J (2014) Increasing collaborative communications in a programming course with gamification: A case study. In: Proceedings of the 15th International Conference on Computer Systems and Technologies. ACM, New York, pp 370–377
Knutas A, Ikonen J, Porras J (2015) Computer-supported collaborative learning in software engineering education: a systematic mapping study. Journal on Information Technologies & Security 7:4
Koster R (2013) Theory of fun for game design. O’Reilly Media, Inc., Sebastopol
Kotsiantis SB, Zaharakis I, Pintelas P (2007) Supervised machine learning: A review of classification techniques. Emerging Artificial Intelligence Applications in Computer Engineering 160:3–24
Looyestyn J, Kernot J, Boshoff K et al (2017) Does gamification increase engagement with online programs? A systematic review. PLoS One 12:e0173403. https://doi.org/10.1371/journal.pone.0173403
Marczewski A (2015) Even ninja monkeys like to play: gamification, game thinking and motivational design. Gamified UK, S.l.
Mekler ED, Brühlmann F, Tuch AN, Opwis K (2017) Towards understanding the effects of individual gamification elements on intrinsic motivation and performance. Comput Hum Behav 71:525–534. https://doi.org/10.1016/j.chb.2015.08.048
Moccozet L, Tardy C, Opprecht W, Léonard M (2013) Gamification-based assessment of group work. In: Interactive Collaborative Learning (ICL), 2013 International Conference on, pp 171–179
Monterrat B, Desmarais M, Lavoué É, George S (2015) A Player Model for Adaptive Gamification in Learning Environments. In: Artificial Intelligence in Education. Springer, Cham, pp 297–306
Monterrat B, Lavoué E, George S (2014) Motivation for learning: Adaptive gamification for web-based learning environments. In: 6th International Conference on Computer Supported Education (CSEDU 2014). pp 117–125
Monterrat B, Lavoué É, George S (2015) Toward an adaptive gamification system for learning environments. In: Zvacek S, Restivo MT, Uhomoibhi J, Helfert M (eds) Computer Supported Education. Springer International Publishing, New York, pp 115–129
Mora A, Tondello GF, Nacke LE, Arnedo-Moreno J (2018) Effect of personalized gameful design on student engagement. EDUCON 2018
Nah FF-H, Zeng Q, Telaprolu VR, et al (2014) Gamification of education: a review of literature. In: International Conference on HCI in Business. Springer, pp 401–409
Orji R, Mandryk RL, Vassileva J, Gerling KM (2013) Tailoring persuasive health games to gamer type. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, pp 2467–2476
Orji R, Oyibo K, Tondello GF (2017) A comparison of system-controlled and user-controlled personalization approaches. In: Adjunct publication of the 25th conference on user modeling, adaptation and personalization. ACM, New York, pp 413–418
Orji R, Tondello GF, Nacke LE (2018) Personalizing persuasive strategies in gameful systems to gamification user types. In: Proceedings of the 2018 ACM Conference on Human Factors in Computing Systems
Osatuyi B, Osatuyi T, de la RR (2018) Systematic review of gamification research in IS education: a multi-method approach. Commun Assoc Inf Syst 42. https://doi.org/10.17705/1CAIS.04205
Ostrowski L, Helfert M, Xie S (2012) A conceptual framework to construct an artefact for meta-abstract design knowledge in design science research. In: 2012 45th Hawaii International Conference on System Sciences. pp 4074–4081
Peffers K, Tuunanen T, Rothenberger MA, Chatterjee S (2007) A design science research methodology for information systems research. J Manag Inf Syst 24:45–77
Seaborn K, Fels DI (2015) Gamification in theory and action: A survey. International Journal of Human-Computer Studies 74:14–31. https://doi.org/10.1016/j.ijhcs.2014.09.006
Song H, Kim J, Tenzek KE, Lee KM (2013) The effects of competition and competitiveness upon intrinsic motivation in exergames. Comput Hum Behav 29:1702–1708. https://doi.org/10.1016/j.chb.2013.01.042
Stahl G, Koschmann T, Suthers D (2006) Computer-supported collaborative learning: An historical perspective. Cambridge Handbook of the Learning Sciences 2006:409–426
Thomas C, Berkling K (2013) Redesign of a gamified software engineering course. In: 2013 International Conference on Interactive Collaborative Learning (ICL). pp 778–786
Tondello GF, Mora A, Nacke LE (2017) Elements of gameful design emerging from user preferences. ACM Press, pp 129–142
Tondello GF, Wehbe RR, Diamond L, et al (2016) The gamification user types hexad scale. In: Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. ACM, New York, pp 229–243
van Roy R, Zaman B (2015) The inclusion or exclusion of teaching staff in a gamified system: an example of the need to personalize. In: CHI Play ‘15 Workshop ‘Personalization in Serious and Persuasive Games and Gamified Interactions’
van Roy R, Zaman B (2015) Moving beyond the effectiveness of gamification. In: CHI ‘15 workshop ‘Researching Gamification: Strategies, Opportunities, Challenges, Ethics.’ Seoul, South Korea
van Roy R, Zaman B (2017) Why Gamification Fails in Education and How to Make It Successful: Introducing Nine Gamification Heuristics Based on Self-Determination Theory. In: Ma M, Oikonomou A (eds) Serious Games and Edutainment Applications. Springer International Publishing, pp 485–509
Venable J (2006) A framework for design science research activities. In: Emerging Trends and Challenges in Information Technology Management: Proceedings of the 2006 Information Resource Management Association Conference. Idea Group Publishing, pp 184–187
Open access funding provided by Lappeenranta University of Technology (LUT). Research was partially funded by European Union Regional Development Fund grant number A70554, “Kyberturvallisuusosaamisen ja liiketoiminnan kehittäminen,” administrated by the Council of Kymenlaakso. The work of the first author was supported by the Ulla Tuominen foundation.
Knutas, A., van Roy, R., Hynninen, T. et al. A process for designing algorithm-based personalized gamification. Multimed Tools Appl 78, 13593–13612 (2019). https://doi.org/10.1007/s11042-018-6913-5
Keywords
- Adaptive systems
- Design process
- Machine learning