1 Introduction

Effective pedagogical practices, rather than the particular medium through which information is transmitted, are what make learning effective. This observation is often overlooked in research on technology-based courses: studies that merely compare learning media are unlikely to improve instructional design, and researchers should not assume that the most technologically advanced option is the one from which students learn best. The authors of [1] instead pose a question for researchers: “What combination of instructional strategies and delivery media will best produce the desired learning outcome for the intended audience?”

Typically, dashboards display data integrated from multiple sources and presented in an easy-to-comprehend, informative graphic representation with explanatory text. This allows a reader to understand complex information in less time than it would take to read a full report… Dashboards offer convenient tools for principal officers (typically CEOs, CFOs and CIOs) to track key performance measures [2]. In the business community, a dashboard is recognized as an emerging performance management system used, for example, to monitor productivity, analyze cost-effectiveness and improve customer satisfaction [3]. Managing strategic risk is one of the most challenging aspects of an executive’s job [4].

Education level appears to influence how likely trainees are to excel when using computer technology in their training programs. However, education may or may not be a factor in trainees’ success when using a computer-based dashboard. Scorecards and dashboards are meant to be tools that help maintain data, identify trends, predict outcomes and strategize, but very few empirical studies exist regarding the usability of dashboards [5].

Usability relates to how easily a user is able to use a system or tool, how easy it is for the user to effectively and efficiently achieve goals with it, and how satisfied the user feels while using it [6, 7]. A user’s education level may positively or negatively affect the usability that the user perceives [5].

Learning IT Governance topics, especially IT Risk Management, and understanding their practical application involves knowing frameworks such as COBIT, MAGERIT and OCTAVE, which abound in concepts, catalog items, controls, formulas and other theoretical dimensions. As a result, understanding the practical application of these frameworks is often not an easy task, and the use of specialized software can further complicate their practical learning.

This study aims to demonstrate that dashboards and scorecards developed for educational purposes, in simple tools such as Excel, make it easier to understand the applicability of reference models, in this case models related to IT Risk Management.

2 IT Risk Management Model Used

Risk governance includes the totality of actors, rules, conventions, processes, and mechanisms concerned with how relevant risk information is collected, analyzed and communicated and with how management decisions are made… Risk governance does not rely on rigid adherence to a set of strict rules, but calls for the consideration of contextual factors such as: (a) institutional arrangements (e.g., regulatory and legal framework and coordination mechanisms such as markets, incentives, or self-imposed norms); and (b) sociopolitical culture and perceptions [8].

The risk factors of converging technologies can be grouped into four categories, according to their sources: Technological (such as wireless communications, hybrid nanobiodevices, engineered and byproduct nanoparticles); Environmental (such as new viruses and bacteria, and ultrafine sand storms); Societal (such as management and communication, and emotional response); and Dynamic evolution and interactions in the societal system (including reaction of interdependent networks, and government’s corrective actions through norms and regulations) [8].

The typical information-security risk assessment process commonly includes the phases of context establishment, risk identification and risk analysis. Each of these phases is usually made up of a number of activities and sub-processes. A number of popular information-security risk assessment methodologies, including FRAP, CRAMM, COBRA, OCTAVE, OCTAVE-S and CORAS, are in use in Europe, the US and Australasia, and are widely used by industry. Though these risk-assessment methods vary in their underlying activities, order and depth, they generally apply a methodology consistent with context establishment, risk identification and risk analysis [9].

OCTAVE-S was selected as the ISRA (information-security risk assessment) methodology for this study. Developed by Carnegie Mellon University and applied throughout industry, OCTAVE-S is a variant of the OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) method, geared specifically to small and medium-sized enterprises. Consistent with our literature review, the OCTAVE-S risk assessment model flows through the three phases of context establishment, risk identification and a risk evaluation coupled with an analysis of the desired risk treatment plans [9].

MAGERIT is a methodology promoted by the Spanish Ministry for Public Administrations. Its use is mandatory for Spanish public administrations, but it can also be adopted by public and private corporations [10, 11].

For this study, an IT Risk Management model was designed using MAGERIT and OCTAVE-S as reference methodologies. Figure 1 shows the model, which served as the basis for the development of the dashboard.

Fig. 1. IT Risk Management model used as a reference for the construction of the dashboard

3 Dashboard Prototype

There is no standard design for a computer-based performance dashboard. Because dashboards are typically designed for the sole use of a single corporation, a great variety of characteristics among dashboards is possible, and no formal guidelines are in place for dashboard development. General principles from usability and Human-Computer Interaction can be applied to dashboards at the discretion of their creators; however, there is no specific recommendation applicable to all varieties of commercially available dashboards [5] (Fig. 2).

Fig. 2. View of the risk assessment through the heat map

The functionality required for the prototype dashboard was:

  1. Allow selecting the IT assets to be evaluated, according to their type or classification.

  2. Define assessment scales for IT assets, vulnerabilities, extent of threat damage and threat probability.

  3. Define the classification of risk levels, indicating the level from which risks are not tolerable.

  4. Determine the criticality of the selected IT assets.

  5. Allow selecting the threats and vulnerabilities related to each selected IT asset.

  6. Rate vulnerabilities, extent of threat damage and threat probability, based on the scales defined by the user.

  7. Calculate the IT risk levels (a sketch of one possible calculation follows this list).

  8. Show results in a scorecard where the user can analyze the resulting heat map. Likewise, the results should graphically show the IT risk levels obtained, indicating those needing treatment.
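Since the paper does not specify the formula behind item 7, the following is a minimal sketch of one plausible calculation, assuming a multiplicative combination of the rated factors in the spirit of MAGERIT/OCTAVE-style qualitative scoring. All names, scales and the tolerance threshold are illustrative assumptions, not the prototype’s actual implementation; in the Excel prototype, a formula over the rating columns together with conditional formatting would play the same role and produce the heat map of item 8.

```python
# Minimal sketch of the dashboard's risk calculation (items 6-8).
# The multiplicative formula and the threshold are assumptions, not
# the prototype's actual Excel logic.
from dataclasses import dataclass

@dataclass
class Assessment:
    asset: str
    threat: str
    criticality: int    # criticality of the IT asset (item 4), e.g. 1..5
    vulnerability: int  # vulnerability rating on the user-defined scale (item 6)
    damage: int         # extent of damage of the threat (item 6)
    probability: int    # threat probability (item 6)

def risk_level(a: Assessment) -> int:
    # Assumed multiplicative combination of the rated factors (item 7).
    return a.criticality * a.vulnerability * a.damage * a.probability

TOLERABLE_MAX = 50  # hypothetical cut-off separating tolerable risks (item 3)

rows = [Assessment("web server", "malware", 2, 2, 3, 2),
        Assessment("HR database", "data leak", 5, 2, 5, 3)]
for r in rows:
    level = risk_level(r)
    print(f"{r.asset:12s} {r.threat:10s} risk={level:3d} "
          f"-> {'treat' if level > TOLERABLE_MAX else 'accept'}")
```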

4 Methodology

The evaluation is a metric comparison of the use of dashboards with scorecards in learning IT Risk Management models. To this end, participants were first trained in the theoretical framework of the IT Risk Management model used. When using the dashboard, participants took on the role of “IT Risk Evaluator” users (Fig. 3).

Fig. 3. Risk appetite: acceptable risks and unacceptable risks

Fig. 4. The conceptual model of the study

To measure the effectiveness of the construct “Use of dashboards with scorecard”, the following dimensions were considered as relevant evaluation criteria: [D1] Ease of use (positive relationship), [D2] Effectiveness (positive relationship), [D3] Usability (positive relationship) and [D4] User experience (positive relationship). The moderating variables [M1] User education level (positive relationship) and [M2] Complexity of the assessment task (negative relationship) were also considered in the relationship between the use of dashboards with scorecard and the effectiveness of learning IT Risk Management models.

4.1 Scales Measuring the Dimensions Evaluated

The dimensions of Effectiveness (EFFE) and Ease of Use (EU) were measured immediately after completing the practical application of the case studies. The dimensions of Usability (USA) and User Experience (UX) were used as metrics of the prototype dashboard itself.

The Effectiveness dimension was measured with a three-point scale: (1) completed the development of the case at the scheduled time, (2) needed the teacher’s assistance to complete the task at the scheduled time, and (3) did not successfully complete the development of the case at the scheduled time. Effectiveness was measured for each of the three case studies.

To measure the Ease of Use dimension, the Single Ease Question (SEQ) was used: “Overall, this task was…?”. The SEQ is a question asked of a user immediately after attempting a task; it provides a simple and reliable way of measuring task-performance satisfaction [12]. To measure users’ perception of the dashboard’s ease of use, the question was adapted to: “Overall, using the dashboard, the task of IT risk assessment was…?”. Ease of Use was measured for each of the three case studies, on a 7-point scale.

The Usability dimension was measured using the System Usability Scale (SUS) questionnaire [13]. The questionnaire consists of 10 items answered on a 5-step Likert scale ranging from “strongly disagree” to “strongly agree”. It was chosen because it is a reliable and valid measure of perceived usability [14]. The questionnaire was answered once the time allotted for each case study was completed.
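For reference, the SUS has a fixed scoring rule: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum is multiplied by 2.5 to yield a score from 0 to 100 [13]. A minimal sketch (variable names are illustrative):

```python
# Standard SUS scoring; responses is a list of 10 Likert answers (1..5)
# in questionnaire order.
def sus_score(responses):
    assert len(responses) == 10
    raw = sum((r - 1) if i % 2 == 0 else (5 - r)   # odd items: r-1, even: 5-r
              for i, r in enumerate(responses))
    return raw * 2.5                               # rescale 0..40 to 0..100

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))   # -> 75.0
```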

The User Experience dimension was measured using the User Experience Questionnaire (UEQ) [15]. The UEQ contains 6 scales with 26 items in total:

  1. Attractiveness: General impression of the product. Do users like or dislike the product? This scale is a pure valence dimension. Items: annoying/enjoyable, good/bad, unlikable/pleasing, unpleasant/pleasant, attractive/unattractive, friendly/unfriendly.

  2. Efficiency: Is it possible to use the product fast and efficiently? Does the user interface look organized? Items: fast/slow, inefficient/efficient, impractical/practical, organized/cluttered.

  3. Perspicuity: Is it easy to understand how to use the product? Is it easy to get familiar with the product? Items: not understandable/understandable, easy to learn/difficult to learn, complicated/easy, clear/confusing.

  4. Dependability: Does the user feel in control of the interaction? Is the interaction with the product secure and predictable? Items: unpredictable/predictable, obstructive/supportive, secure/not secure, meets expectations/does not meet expectations.

  5. Stimulation: Is it interesting and exciting to use the product? Does the user feel motivated to further use the product? Items: valuable/inferior, boring/exciting, not interesting/interesting, motivating/demotivating.

  6. Novelty: Is the design of the product innovative and creative? Does the product grab the user’s attention? Items: creative/dull, inventive/conventional, usual/leading edge, conservative/innovative.

The items are scaled from −3 to +3. Thus, −3 represents the most negative answer, 0 a neutral answer, and +3 the most positive answer… Scale values above +1 indicate a positive impression of the users concerning that scale, values below −1 a negative impression [15].
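As a sketch of how the scale values are obtained under this coding, the snippet below averages item scores per scale. The item-to-scale grouping shown is abbreviated and hypothetical; the published UEQ assigns its 26 items to the 6 scales via a fixed key, with negatively ordered items reversed before averaging [15].

```python
# Sketch of UEQ scale scoring on the -3..+3 coding; the grouping below is
# hypothetical (only two scales, invented item indices) for illustration.
from statistics import mean

SCALE_ITEMS = {
    "Attractiveness": [0, 1, 2, 3, 4, 5],  # assumed indices, not the real key
    "Efficiency":     [6, 7, 8, 9],
}

def ueq_scale_means(answers):
    """answers: item scores in -3..+3, negative items already reversed."""
    return {scale: mean(answers[i] for i in idx)
            for scale, idx in SCALE_ITEMS.items()}

demo = [2, 1, 2, 3, 1, 2, 0, 1, 2, 1]   # invented responses for 10 items
print(ueq_scale_means(demo))            # scale means above +1 read as positive
```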

A 3-category scale was used to measure the moderating variable User Education Level: (1) student, (2) bachelor and (3) professional.

For the variable Assessment Task Complexity, three case studies were developed and applied in the last learning session, in which the dashboard was used for risk assessment. Each case study had a different level of complexity: the first case was of low complexity and the third case was the most complex.

4.2 Application of the Survey

The surveys were administered at the end of the IT Risk Management unit, which included use of the dashboard, during the last learning session (duration: 3 h). In this last session, 3 case studies of different complexity were assessed using the dashboard.

The surveys were applied as follows:

  • 32 students in the last academic year of the Professional School of Systems Engineering - Semester 2013-II

  • 37 students in the last academic year of the Professional School of Systems Engineering - Semester 2014-I

  • 23 participants in the course “IT Risk Management” given at the School of Engineers of Peru, Departmental Council of Lambayeque, during September and October 2014

  • 19 participants in the course “IT Audit and Risk Management” given at the School of Engineers of Peru, Departmental Council of Lambayeque, during November and December 2014

5 Results and Discussion

The information obtained from the surveys was processed with SPSS software. The results of the measurements of the selected variables are shown below.

For the evaluation of the “Ease of Use” dimension, a survey was applied for each case study developed, using a 7-point scale where 1 means the dashboard made the task more difficult and 7 means the dashboard facilitated the work. Three case studies were developed, with rising levels of complexity: the first case was of low complexity and the third case was the most complex (Fig. 5).

Fig. 5. Comparison of means for each case study developed, by user education level

Table 1 shows that the dashboard facilitated the task of assessing the risks in each case. Although the mean and median declined as the complexity of the case study increased, the values always remained above 3.5 on the 7-point evaluation scale used.

Table 1. Results of the evaluation of the “Ease of Use” dimension of the dashboard, for each case study developed.

Figure 5 displays the comparative results of the means obtained for each case study evaluated with the dashboard, by user education level. It can be seen that the dashboard was easier to use for professionals.

Regarding the Effectiveness dimension, the processed survey data are shown in Tables 2 and 3. Considering that the surveyed population was 111 participants, the results show that the percentage of participants who successfully completed the IT risk assessment task using the dashboard decreases as the complexity of the case increases, from 73 % in the simplest case to 53.5 % in the most complex case. The opposite happened with participants who needed the teacher’s help: from 23.4 % in the simplest case to 55.9 % in the most complex case.

Table 2. Results of the evaluation of the “Effectiveness” dimension of the dashboard
Table 3. Results of the evaluation of the “Effectiveness” dimension of the dashboard, by user education level

Table 3 shows that users with a professional education level needed less help from the teacher to complete the IT risk assessment task with the dashboard. The opposite happens with users at the student education level.

The System Usability Scale (SUS) questionnaire was used to measure the Usability dimension. The reliability test yielded a Cronbach’s alpha of .720. The 10 items of the questionnaire are stable and consistent, with an acceptable level of correlation between them, as shown in Table 4.

Table 4. Results of the reliability test of the SUS questionnaire to evaluate the Usability dimension.
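For reproducibility, Cronbach’s alpha for a k-item questionnaire is alpha = k/(k−1) · (1 − sum of per-item variances / variance of total scores). A minimal sketch with invented data (not the study’s responses):

```python
# Cronbach's alpha for a respondents x items score matrix; the sample
# data below are invented for illustration, not the study's responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(30, 1))          # shared component -> correlated items
sample = np.clip(base + rng.integers(-1, 2, size=(30, 10)), 1, 5)
print(round(cronbach_alpha(sample), 3))          # high alpha: items share a component
```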

The results of the evaluation of the perceived usability of the dashboard are shown in Table 5. The overall mean is 3.368, equivalent to 67.35 % of the maximum value of the response scale.

Table 5. Results of the evaluation of the perception of usability of the dashboard

For the evaluation of the “User Experience” dimension, the User Experience Questionnaire (UEQ) was used. The UEQ assesses 26 items, grouped into 6 factors: Attractiveness (6 items), Efficiency (4 items), Perspicuity (4 items), Dependability (4 items), Stimulation (4 items) and Novelty (4 items). Table 6 shows the results of the reliability tests, demonstrating stability and consistency among the items of each factor; Cronbach’s alpha values greater than 0.7 were obtained.

Table 6. Results of the reliability test of the user experience questionnaire (UEQ) to evaluate the User Experience dimension.

The items of the User Experience dimension were measured on a 7-point scale from −3 to +3. Table 7 shows the means obtained for each factor, all of which exceed the midpoint of the scale used. As shown, all measured items are greater than 0, i.e., positive.

Table 7. Results of the evaluation of the user experience dimension

6 Conclusions

The results show that the process of learning methodologies, methods and tools for managing IT risk improves with the use of a dashboard with a scorecard. These tools enable users to identify the model’s elements and how they are structured and organized and, above all, allow users to put the theory into practice through case studies, building on their prior training.

The evaluations of the “Effectiveness” and “Ease of Use” dimensions of the dashboard show that both are related to the user’s degree of knowledge of the IT risk management model implemented in the dashboard (user education level) and to the complexity of the cases evaluated. This means that a user’s previous experience with IT Risk Management improves training with the dashboard.

With regard to the evaluation of the product itself, through the Usability and User Experience dimensions, the results show that the dashboard generates a user-machine interaction that is easy to understand, friendly and efficient in supporting the work of IT risk assessment. However, future research remains to be done to evaluate other characteristics of the dashboard through the User Experience Questionnaire (UEQ), seeking to improve and adapt these products to other scenarios, models and types of user. We believe that the UEQ is a tool that can still be explored to realize these possibilities.