
1 Introduction

To sustain their competitive advantage, the business sector and Government agencies are keen to maintain well-skilled, knowledgeable employees. A review of current government training practice reveals that the key to achieving this result is smarter use of digital technologies. Understandably, courseware designers mirror this continual quest for work-place education/training reform. A review of current online training courseware offerings suggests that at the top of the Web-designer's list is the notion that virtual reality technologies may bring about optimal learning outcomes, regardless of whether the training is meant for new workers entering the work-force (novice learners) or for longer-term (experienced) workers who simply want to refresh their skill profiles. To cater for such a diverse training audience, it is important first to differentiate what an individual knows from what they do not. Yet knowing how to measure people's understanding and skill accurately remains a training dilemma, given the plethora of information and communications technology (ICT) tools available for such expertise testing. The outcome of this rather spurious skills-pigeonholing technique is that employees are expected to translate what they have learned during a training session into immediate (expert-level) knowledge and skill in their work-place. To relieve these dilemmas, an HCI courseware design model is suggested as an online design benchmark, encouraging corporate-sector Web-designers to construct online courseware that keeps employees engaged with their training materials.

The main aim of this paper is to describe the ramifications for effective human-computer interaction (HCI) courseware design arising from a funded research project conducted in Australia that differentiated what people do and do not know [3]. Referring to the model (see Fig. 10), which has been adapted from [4], the main HCI factors remain as critical today as they were two decades ago. To make sense of the broad-reaching HCI design landscape, it is helpful to divide these factors into two aspects: the human-dimension and the machine-dimension [5]. These factors primarily relate to users, highlighting the need to consider comfort and health, work-place issues, and the technology deployed. The project described in this paper was primarily concerned with the interactive effects of individuals' media preferences on their training outcomes. The human-dimensional factors under consideration were therefore: the user (cognitive processes and capabilities, experience level); and the user interface (easy/complex, novel, task allocation, repetitive, monitoring, skills components). The machine-dimensional factors involved: constraints (costs, timescales, budgets, staff, and equipment); system functionality (hardware, software, application); and productivity factors (increases in output, quality, and creative and innovative ideas leading to new products; decreases in costs, errors, labour requirements, and production time). However, to set the context for the reader before presenting the HCI implications of this project, it is necessary to provide a brief project overview. The project has been published in more detail in a number of peer-reviewed outlets, namely [3, 6, 7].

2 Project Overview

Because the Government sector relies on continual employee reskilling, this project set out to facilitate cost-effective eLearning, using advanced ICT tools to enhance work-place training with assured, predictable training outcomes. The project deliverables involved: adding an 'electronic trainer/professional assistant' to an existing courseware shell; customised online knowledge navigation; and devising efficient and effective eLearning models of best practice in Government training. The initial work was commissioned by a government skill-development agency to design and develop a training courseware management information system (CMIS), to enable government agencies to learn how to construct their own training/learning management systems. In 2008 the author was awarded an Australian Research Council (ARC) Linkage Project grant to add an 'intelligent-AGENT' to the CMIS as a virtual reality avatar or personal eTraining assistant.

Two new features were added to the original CMIS: an electronic personal assistant or avatar (Fig. 1), and a customising pedagogical feature offering online instructional preferences (Fig. 2). At the time, this was considered cutting-edge ICT tool development. Technological avatars/digital-agents, as they apply to computer science, are pieces of software that run without human control or constant supervision to accomplish an individual's training goals. These agents typically collect, filter, and process information during each employee's training session, thus playing an important role in balancing exploitation with exploration in knowledge discovery in corporate online training systems [8].
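To make the exploitation-exploration balance concrete, the following minimal sketch shows one common technique (epsilon-greedy selection) such an agent could use; the resource names, reward signal, and parameter values are hypothetical illustrations, not the project's implementation.

```python
import random

def choose_resource(value, epsilon=0.1):
    """Epsilon-greedy selection: usually exploit the training resource
    with the best estimated benefit, occasionally explore another."""
    if random.random() < epsilon:
        return random.choice(list(value))   # explore a random resource
    return max(value, key=value.get)        # exploit the current best

# Hypothetical running estimates of how much each resource helps a trainee.
value = {"worked_example": 0.62, "quiz": 0.55, "video": 0.48}
counts = dict.fromkeys(value, 0)

for _ in range(20):
    pick = choose_resource(value)
    reward = random.random()    # stand-in for an observed learning gain
    counts[pick] += 1
    value[pick] += (reward - value[pick]) / counts[pick]  # incremental mean
```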

Fig. 1. System AGENT as virtual reality avatar

Fig. 2. System schematic model

The second innovation involved the adaptive/flexible eLearning/training tools embedded within the CMIS. At the time, such training tools were not prevalent in the government sector. Figure 2 shows how each trainee could determine their own knowledge/skill development path: novice trainees were given the full step-by-step skill development path, while a more experienced trainee could choose to refresh only certain aspects of their training. Clearly, understanding online learners' instructional preferences, background knowledge levels, and concerns should enhance usability and educational-IS design practice, resulting in more effective Web-site training courseware [9].
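A minimal sketch of this kind of path customisation follows; the module names and the novice/experienced distinction are illustrative assumptions, not the CMIS's actual navigation logic.

```python
FULL_PATH = ["introduction", "key_concepts", "worked_examples",
             "guided_practice", "assessment"]

def training_path(experience, refresh_topics=()):
    """Novices get the full step-by-step path; experienced trainees
    refresh only the modules they select, then sit the assessment."""
    if experience == "novice":
        return list(FULL_PATH)
    return [m for m in FULL_PATH if m in refresh_topics] + ["assessment"]

print(training_path("novice"))
print(training_path("experienced", refresh_topics={"guided_practice"}))
```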

3 Research Design and Methodology

The research question addressed in this project involved an investigation of the interactive effect of learning preference and instructional mode on participants' learning outcomes. A quasi-experimental 3 × 3 research design was employed to carry out the work. The independent variables were learning preference (training mode, experience with eLearning, and work-mode training expectation; see Fig. 3) and training strategy (online, face-to-face, and blended online/face-to-face; see Fig. 4).

Fig. 3. Preferred instructional/learning mode

Fig. 4. Change in intro to ethics knowledge

3.1 Research Instruments

Test instruments were prepared for a Pilot Study [3] and a Main Experiment to evaluate attainment status with respect to knowledge of 'introductory ethics,' and the extent to which this knowledge could be interpreted in specific contexts. The intention was to prepare a range of items evaluating multiple levels of knowledge, in accord with research [10] showing that a rectangular distribution of item difficulty over an extended range of achievement is the most effective in detecting achievement status. Without effective measurement of achievement status, one would have difficulty measuring changes in achievement status as a consequence of a learning intervention [11]. The skill assessment instruments involved only open-ended test-items, as it was judged that a constructed response was the appropriate indication of (ethics) knowledge. Table 1 shows the content-skills coverage of the tests used in the Main Study.

Table 1. Pre-post testing instrument design
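As an illustration of the rectangular-difficulty principle cited above [10], the sketch below spaces intended item difficulties evenly across an extended achievement range rather than clustering them near the average; the range and item count are hypothetical.

```python
def rectangular_blueprint(n_items, low=-3.0, high=3.0):
    """Evenly spaced target item difficulties (in logits) across an
    extended achievement range -- a 'rectangular' distribution."""
    step = (high - low) / (n_items - 1)
    return [round(low + i * step, 2) for i in range(n_items)]

print(rectangular_blueprint(7))   # [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
```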

The matrix of scored person responses to each of the test-items was subjected to quality-control procedures using item response modelling with the 'QUEST interactive test analysis system' [2]. Central to QUEST is a measurement model developed in 1960 by the Danish statistician Rasch [12]. The initial analysis of the pre-test with QUEST used an iterative procedure to describe a uni-dimensional scale with equal intervals along the vertical axis, representing individual performance (case achievement) and test-item difficulty on the same scale. The estimation procedure investigated the probability of an individual with a particular level of achievement making particular responses to a range of test-items. The diagram showing the pattern of cases (persons) and test-items is known as a variable map (see Fig. 5). Test-items in common with the post-test were fixed (anchored) at the difficulty established for the pre-test so that any changes in achievement as a consequence of the intervention could be evaluated accurately.
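Because person achievement and item difficulty sit on the same logit scale, the dichotomous form of the Rasch model reduces to a single logistic expression. A minimal sketch follows (the polytomous items in this study, scored 0-4, use an extended form of the same model); the ability values echo scale scores discussed in Sect. 4.

```python
import math

def rasch_probability(ability, difficulty):
    """Probability that a person at `ability` (logits) scores 1 on a
    dichotomous item at `difficulty` (logits): P = 1 / (1 + e^-(a-d))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

print(rasch_probability(0.00, 0.00))   # 0.5: ability equals difficulty
print(rasch_probability(1.56, 0.00))   # ~0.83: "more likely than not"
print(rasch_probability(-0.80, 0.00))  # ~0.31: success now unlikely
```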

3.2 Participants

Overall, 40 Government workers and 33 vocational students were involved in the experiments. Invitations were sent to Government workers through a non-government training consultant (for the Focus Groups and the Pilot Study), while a vocational training institute invited students (for the Main Experiment). The researchers were informed that many of these participants had no previous experience with an ethics training course.

4 Data Analysis

The eLearning outcomes were evaluated in terms of the magnitude of change in participant proficiency (the magnitude of effect size as defined by Cohen's statistical power analysis [13]). The Rasch analysis conducted with QUEST [14] generated a set of hypotheses regarding the interactive dynamics of skill development with and without ICT tools as training mediation techniques. QUEST allows for improved analyses of an individual's performance relative to other participants [15], and relative to the test-item difficulty of introductory ethics knowledge levels. An example appears in Fig. 5, where each participant or 'case' is depicted by an 'x,' and test-items are shown on the right-hand side of the map. Rasch Item Response Theory (IRT) estimates the probability of an individual making a certain response to a test-item. The pre- and post-test results were analysed with a test-item matrix that recorded each individual's responses for every test-item. Common test-items (identically worded questions) were 'anchored' so that scale scores on the pre-test were comparable with scale scores on the post-test. The difference between pre-test and post-test scaled scores indicated whether learning occurred, whether no learning occurred, or whether the instructional strategy resulted in reduced achievement (see Fig. 8).
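Because common items are anchored, pre- and post-test scores share one scale, and the learning verdict reduces to a signed difference. A minimal sketch with hypothetical case IDs and scores (not the project's data):

```python
# Hypothetical anchored scale scores in logits: case -> (pre, post).
cases = {"C01": (-0.80, 0.85), "C02": (0.40, 0.35), "C03": (1.10, 1.10)}

for case, (pre, post) in cases.items():
    change = post - pre
    verdict = ("progress" if change > 0
               else "decline" if change < 0 else "no change")
    print(f"{case}: {pre:+.2f} -> {post:+.2f}  ({verdict})")
```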

Figure 5 shows an annotated QUEST variable map revealing that participants X11, X12, and X18 achieved the same level of knowledge at a scale score of −0.80. The colour of the numeral indicates the treatment group (black is face-to-face T1, green is blended T2, and red is computerised T3). Participants X07, X15, and X30 achieved the same level of knowledge at a scale score of 1.10, above the average difficulty of the pre-test (0.0), and this level was higher than that of participants X11, X12, and X18. The annotated variable map also shows that scoring a 1 on item-5 is more difficult than scoring a 1 on item-12. Getting item-19 correct is easier than scoring 2 or more on either item-16 or item-18. Scoring 4 on item-17 is easier than scoring 2 or more on either item-16 or item-18. Scoring a 2 on item-5.2 is the most difficult achievement. The probability basis of the model is illustrated by the position of participants X05, X08, and X32 at an achievement level of 1.56: these participants are more likely than not to score 1 on items-15, -16, and -18, and 3 on item-17.

Fig. 5. Annotated QUEST variable map - Post-test (Color figure online)

As reported by Adams and Khoo (1996), the QUEST software package provides item fit statistics. The mean-square fit statistics provide a useful way of judging the compatibility of the model and the data. An item to the right of the right-hand dotted line in the diagram (see Fig. 6 for item-13 in this example) shows more variation from the model than expected; such items are removed from the test. Items to the left of the left-hand dotted line indicate less variation than expected (see Fig. 7).
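One textbook formulation of such a mean-square fit statistic (the infit, for a dichotomous item) is sketched below; this is an assumed standard Rasch formula, not QUEST's internal code. Values near 1.0 indicate model-data compatibility, values well above 1.0 indicate misfit (right of the dotted line), and values well below indicate less variation than the model expects (left of the line).

```python
import math

def infit_mean_square(responses, abilities, difficulty):
    """Infit mean-square for one dichotomous Rasch item:
    sum of squared residuals / sum of model variances."""
    num = den = 0.0
    for x, theta in zip(responses, abilities):
        p = 1.0 / (1.0 + math.exp(-(theta - difficulty)))  # expected score
        num += (x - p) ** 2      # squared residual for this person
        den += p * (1.0 - p)     # binomial variance under the model
    return num / den

# Hypothetical data: 1 = correct, 0 = incorrect; abilities in logits.
print(infit_mean_square([1, 0, 1, 1, 0], [0.5, -1.0, 1.2, 2.0, -0.3], 0.0))
```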

Fig. 6. QUEST fit map - misfitting item-13

Fig. 7. QUEST fit map - item-13 removed

Learning outcomes were evaluated in terms of the magnitude of change in participant proficiency (the magnitude of effect size as defined by Izard's adaptation [11] of Cohen's statistical power analysis [13]).
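A minimal sketch of the underlying effect-size arithmetic (Cohen's d over matched pre/post scale scores, with hypothetical data) follows; Izard's adaptation [11] concerns the interpretation of such magnitudes and is not reproduced here.

```python
import math
import statistics

def cohens_d(pre, post):
    """Cohen's d: mean pre-to-post gain divided by the pooled
    standard deviation of the two score distributions."""
    gain = statistics.mean(b - a for a, b in zip(pre, post))
    pooled = math.sqrt((statistics.stdev(pre) ** 2
                        + statistics.stdev(post) ** 2) / 2)
    return gain / pooled

# Hypothetical logit scores for one treatment group (not project data).
pre = [-0.8, -0.3, 0.1, 0.4, 0.9]
post = [0.6, 1.2, 1.5, 2.1, 2.6]
print(round(cohens_d(pre, post), 2))   # ~2.15: "large" by Cohen's benchmarks
```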

5 Findings

We report the gains in knowledge of introductory ethics achievement for three training treatment groups: face-to-face, computerised, and a blend of both. Access to an individual virtual learning space is critical; this project situates Australian training practice within a pseudo-virtual-reality environment.

As reported in [7], there are three directly parallel sets of data: one set (traditional classroom facilitator-led, T1) for the training sessions not using computer mediation at all, or not to any significant degree, for the whole training period; one set (T2) comprising a blended training approach that implements both traditional face-to-face and electronic instructional tools; and a third set (T3) for the training environments using the eLearning strategies as the central instructional tool for the whole training period. These data-sets are comparable because they involve the same training facilitators (whose business-process knowledge and technical competence in eLearning were identified as being of an equally high standard to their general work-place/industry-sector training competence), working with various government employees/trainees in the same locations for each training session. It has been possible to identify paradigmatic differences between the levels of government practice, and demographic variations (length of service, gender, and previous education), that may affect the training dynamics and training outcomes across and within the three training environments. Learning outcomes were illustrated by combining the achievement status measures from the Pre-Test with the anchored achievement status measures from the Post-Test, as shown in Fig. 8.

Fig. 8. Learning outcomes

It is important to note that while many participants made progress (shown ▲), some participants in each treatment failed to make progress (shown ▼). It is also necessary to note that many of the changes are of a substantial magnitude, even though the effectiveness of the respective learning interventions (whether online only, a mixture of face-to-face and online, or face-to-face only) was judged with the same test. Averaging the results shows that the T1 group improved by 1.63, the T2 group by 1.74, and the T3 group by 1.41.

Although the expectation of substantial gains from a short training period (around 2 h) is unrealistic for these instructional treatments [7], the research realised the anticipated results. Future investigations should include 1-day, 2-day, and 2+ day training sessions to infer the duration of training that allows substantive magnitudes of learning to be detected. Similarly, the size of each training group needs to be larger: it is difficult to justify such small groups being involved in training, given the costs associated with providing trainers, the provision of suitable facilities, and the transport costs for both presenters and participants. Face-to-face instruction/facilitation can provide more opportunities for timely feedback to participants; further feedback opportunities can be added to the eLearning training strategy so that there is greater control over the magnitude of feedback, which may be an alternative explanation of differential learning. Secondly, limiting the learning to a single content area (such as 'an introduction to ethics') provides no evidence of the extent to which the information obtained generalises to other online-learning/training content areas. Additional instructional (content) areas therefore need to be added, with sufficient time allowed for the research team and the industry partners to generate appropriate eLearning content and assessments (the pre- and post-tests).

In the Pilot Study, the ethics pre-test showed an internal consistency of 0.61 without deleting or adding test-items. This value needed to be improved by using the analysis to modify some test-items, and perhaps to move some test-items from the post-test to the pre-test so that both tests are of comparable internal consistency and accuracy [16]. In the Pilot Study, the ethics post-test showed an internal consistency of 0.77 without deleting any test-items. Refining this test by deleting test-items that failed to detect any differences between participants with knowledge and participants lacking knowledge on the dimension of interest (applied knowledge of ethics) would serve to improve the evaluation of post-training knowledge. The complexity of the 'ethics' case studies used in the face-to-face group may well be affecting the post-test evidence of achievement; this shows when participants identify stakeholders (for example) but do not choose the best solution or explain why it is best. Providers of eLearning materials must be able to demonstrate the extent to which learning has occurred. In this case, the analysis showed that some participants did not learn about ethics, a result worthy of further investigation; however, overall learning was demonstrated to be positive for the group.
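The paper does not name the internal-consistency index used; assuming a Cronbach-style alpha over the person-by-item score matrix, the computation is sketched below with hypothetical scores.

```python
import statistics

def cronbach_alpha(scores):
    """Cronbach's alpha for a person-by-item matrix of scored responses:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(scores[0])
    item_vars = [statistics.variance(col) for col in zip(*scores)]
    total_var = statistics.variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical scored responses (rows = persons, columns = test-items).
matrix = [
    [1, 0, 2, 1],
    [2, 1, 3, 2],
    [0, 0, 1, 0],
    [2, 2, 3, 2],
    [1, 1, 2, 1],
]
print(round(cronbach_alpha(matrix), 2))   # 0.97 for this toy matrix
```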

6 Discussion

As mentioned earlier, the research specifically examined the interactive effects of individuals' media preferences on their training outcomes. To unpick the far-reaching HCI design landscape, it has therefore been helpful to divide the HCI-design factors of the training courseware management information system (CMIS) into two aspects: the human-dimension and the machine-dimension [5]. A key project motivation was to adopt a user-centric focus, highlighting the need to consider comfort and health, work-place issues, and the technology deployed. In so doing, the human-dimensional factors under consideration were: the user (cognitive processes and capabilities, experience level); and the user interface (easy/complex, novel, task allocation, repetitive, monitoring, skills components). Regarding the user HCI-factor, participants' cognitive processes were identified with the Object-Spatial Imagery and Verbal Questionnaire (OSIVQ) devised by [1], a self-report questionnaire for locating participants' preferences for mental imagery versus verbal representations. Capabilities and experience levels were captured in the pre- and post-tests, while the user interface underwent considerable work prior to the experimentation, closely following Merrill's Principles of Instruction [17]. Consequently, the following ePedagogical strategies (Fig. 9) were provided in the opening screens of the CMIS:

Fig. 9. ePedagogical strategies (adapted from Merrill 2002)

While it has been easy to determine that the project's main focus was on the user-centred (human-dimension) aspects of HCI, the machine-dimension factors were brought to light through interpretation of the findings. The machine-dimensional HCI-factors involved: constraints (costs, timescales, budgets, staff, and equipment); system functionality (hardware, software, application); and productivity factors (increases in output, quality, and creative and innovative ideas leading to new products; decreases in costs, errors, labour requirements, and production time). Regarding the machine-dimension, the ramifications for effective HCI courseware design arising from this project differentiate what people do and do not know [3]. The project described herein will facilitate cost-effective eLearning practice using advanced ICT tools to enhance work-place training with assured, predictable outcomes. The most desirable training approach is to personalise an employee's knowledge development through flexible online learning. Improved IT governance serves to motivate uninterested trainees and energise frustrated management; however, within this digital training realm, multi-disciplined specialists are required to resolve the factional dilemmas of corporate IT resource ownership. The timeliness of our project highlights desirable change-management issues that improve the efficiency and effectiveness of existing IT training resources (see Fig. 10). It is proposed that this project has unearthed evidence to support extending the earlier-mentioned HCI model (adapted from [4]). The project has highlighted the diversity of government participant/trainee characteristics and how these diverse attributes affect their instructional outcomes.

Fig. 10. Human diversity of human-computer interaction dimensions

6.1 Human-Dimension - Trainee Interactions

When considering the Government factors involved in the human-dimensions of HCI, it is worth noting that in many places around the world Web 2.0 continues to be promoted as the new incarnation of the Internet because of the social networking afforded by its supporting technologies. This general acceptance may be due to the evolutionary nature of the Internet environment per se. As a result, there are many ways to view the term Web 2.0: it represents both a range of ICT tools from which the business sector can profit, and meme-like characteristics that imply a certain agreement or state of mind [18]. It is the latter definition that sets the broader context for this paper to expand the discussion beyond the popular generation of Web applications and sites that enable openness, interaction, and new communities to flourish [19]. The CMIS reflects the capacity to deal with 'group diversity factors' because popular notions of Web 2.0 today extend beyond the traditional view to mean any type of Web-based (user) collaborative behaviour that occurs when people are learning new skills. This link was made when the first concepts of online or Web-based learning were developed in the 1960s at the University of Illinois through the creation of a computer-based education environment called PLATO (Programmed Logic for Automatic Teaching Operations), designed for delivery to university students. PLATO paved the way for online communities [20] to include: Web application development tools, discussion forums, message boards, interactive testing, e-mail, chat rooms, picture languages, instant messaging, remote screen sharing, and multi-player gaming [21]. There seemed no end to the rise of Web 2.0 possibilities until the dot-com bubble burst, encouraging pessimists to await the demise of eCommerce. Despite their warnings, the Web survived, and we have since witnessed a resurgence of the Internet [22].

6.2 Machine-Dimension - System Functionality

It is anticipated that more concentrated research on the design and development of flexible embedded eMentoring strategies will encourage people who want to update their skills to access Web-mediated programmes on a just-in-time or on-demand basis. To this end, the project team is committed to providing the instructional/training tools to other business partner units. Moreover, when the newly created Web-based knowledge-navigational/learning aids are further developed and replicated (or customised by other government departments and industry-sector organisations), it is anticipated that these organisations will find the knowledge navigation tools practical and easily adaptable to other instructional/learning streams. Furthermore, the innovative instructional package is designed to be uploaded to a corporate Intranet. The project has therefore succeeded in its initial endeavour: to champion research investigating the sound instructional strategies that underpin a CMIS capable of providing a Web-mediated (self-guided) eMentoring service for government employees engaged in online training. Based on previous work identified within this paper, it is suggested that these strategies add to the previous body of work and may in time contribute to the theoretical understanding of how to evaluate Web-mediated learning reinforcement.

This paper has discussed the project results through the lens of the above model. It is further suggested here that the human-dimensions of HCI can be defined as the social networking that offers the strategic Web 2.0 'glue' for successful adaptive online training, a glue that is often lacking [5]. However, the human-dimensions of HCI are but one piece of the complicated computer-usability or techno-puzzle, because that puzzle involves two distinct contexts: one relates to the human-dimension or social context of computing, while the other relates to the machine-side, with people's perspectives being shaped by the performance of the technical computing components. Until very recently, the literature dealt more often with the latter; only in recent times has a voice been given to computer-usability issues involving the human-dimensions (or social networking aspects).