Abstract
The application of information technology in educational contexts has dramatically changed the way people teach and learn. Institutions of higher learning globally are increasingly adopting e-assessment as a replacement for traditional pen-and-paper examinations because of its cost effectiveness, the improved reliability that machine marking brings, and its accurate and timely results. In spite of these numerous benefits, it is unclear whether university students in Sub-Saharan African countries are willing to accept e-assessment. The purpose of this study is to examine the role of technical support in mitigating the effects of computer anxiety on electronic assessment amongst university students in Nigeria and Cameroon. To this end, the study extended the Technology Acceptance Model (TAM); the extended model was validated using 102 responses collected randomly across universities in Nigeria and Cameroon. The study contributes to the body of knowledge by establishing that computer anxiety is an important factor which can affect university students regardless of their level of computer proficiency. The outcome of the proposed model indicated that when technical assistance is provided during e-assessment, computer anxiety is reduced for the majority of university students in Nigeria and Cameroon. The practical implication is that students’ actual academic potential may not be revealed unless education policy makers and university administrators strive to ensure that all measures that can reduce the fear associated with using computers for assessment, including technical support, are put in place.
1 Introduction
The application of information technology in educational contexts has dramatically changed the way people teach and learn. These effects have extended to assessment, especially towards reducing the cost of, and the examination misconduct associated with, traditional paper-based assessment methods. Learning is a continuous process which begins at an early age, and one of its major components is assessment [1], a measure used to evaluate the rate at which individuals are progressing [2, 3]. E-assessment, also known as electronic testing or computer-based testing, has become an important tool for learning and teaching; it is a form of assessment which is conducted electronically [4]. Notwithstanding its potential benefits, it is unclear whether prospective learners are willing to accept it. One of the factors reported to affect its uptake is anxiety related to the use of technology among prospective learners, otherwise termed computer anxiety. In the field of e-assessment acceptance, a number of studies have examined computer anxiety [5,6,7], but only a few have considered the role of technical support, especially when examinees cannot use some of the inherent features of the technology efficiently during assessment. The purpose of this study therefore is to examine the role of technical support in mitigating the effects of computer anxiety on electronic assessment amongst university students in Nigeria and Cameroon.
1.1 Problem Background
In students’ academic assessment, institutions of higher learning globally are increasingly adopting e-assessment as a replacement for traditional pen-and-paper examinations [8, 9]. In comparison with paper-based assessment, e-assessment offers cost effectiveness, improved reliability due to machine marking, unbiased assessment, greater storage capability, quick submission and grade-report retrieval, effective record keeping, and accurate and timely assessment [10]. Despite these inherent benefits, researchers are still finding ways to ensure that prospective learners actually exploit them. One of the factors affecting uptake is anxiety related to the use of technology among prospective learners. Naturally, fear or anxiety is an emotional and psychological phenomenon correlated with any form of assessment. It is a response to a perceived threat or instability that changes the biochemical processes occurring within a living organism and ultimately leads to a change in behavior, such as sudden movement away from the point of danger. Fear in humans can manifest as a sharp response to a stimulus occurring in the present or in anticipation of a life-threatening future occurrence. The Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM-IV) of the American Psychiatric Association, as cited by Beckers and Wicherts [11], defines “anxiety as a mood state in which a subject experiences fear, apprehension, nervousness, worry, tension”.
Naturally, the function of anxiety is to warn of an impending danger and thereby help build an adequate coping mechanism against it. However, when this fear becomes extreme, an element of frustration often makes the anxiety difficult to manage [11]. This apprehension also extends to traditional classroom assessment, especially when learners lack adequate preparation; and if such a test is administered electronically, the level of anxiety increases further, particularly for users with low computer self-efficacy. Notwithstanding, preliminary investigation revealed that most test takers would prefer to have handy technical assistance that can provide timely solutions to any technical issues that might come up during assessment. In view of this, it becomes important to conduct more studies on ways to reduce the level of anxiety of test takers so that their academic mastery and emotional intelligence are not measured by their inability to manipulate the technology driving the assessment. A number of studies have been conducted to establish links between computer anxiety and related factors, using theoretical models such as the Technology Acceptance Model (TAM), extended TAM, the Unified Theory of Acceptance and Use of Technology (UTAUT), and the Computer Based Assessment Acceptance Model (CBAAM); only a few of these studies have examined the impact of technical support [12] on prospective test takers during assessment, especially where low levels of computer self-efficacy are reported. In addition, it is unclear whether a technical support variable actually has the potential to reduce computer anxiety among university students in the Nigerian and Cameroonian context. The researchers therefore seek to address the research question: Does technical support have the capability to reduce computer anxiety amongst university students in Nigeria and Cameroon?
Hence, it becomes necessary to investigate this question and to propose a suitable theoretical model to address the gap.
2 Literature Review
2.1 Learning and Assessment
Lachman [13] stressed that most textbook definitions describe learning as a behavioural transformation brought about by experience. This definition is fundamental: in the contemporary world, learning is perceived as an instrument that translates experience into behavior; in other words, it is considered an outcome of a circumstance or behavior [14]. Over 50 years ago, Ausubel and Novak [15] suggested that “the most important factor influencing learning is what the learner already knows. Ascertain this and teach him accordingly”. Assessment is an important element that measures how a learner is progressing, and it can be employed to provide feedback (formative assessment) or applied for grading purposes (summative assessment). Whatever the reason behind assessment, learning cannot be said to be complete without it. Naturally, fear is a phenomenon correlated with any form of assessment, most especially when learners are afraid of their performance due to the perceived threat of failure. The resulting feeling of worry and apprehension can hinder learners’ attempts to understand the information required for academic success. In an online article entitled Strategies for Addressing Student Fear in the Classroom, Scott Bledsoe and Janice Baskin stressed that “…Fear can cause students to experience adverse responses which can be physiologically (e.g., shortness of breath), cognitively (inability to focus or concentrate, obsessive thinking, replaying in their minds problematic incidents that occurred in previous classes), and emotionally (easily agitated, overcome by excessive nervousness, frustration), and other negative feelings…”. However, despite the many obvious benefits of using technology to assess learners, the level of apprehension increases when learners cannot find their way around the technology.
The objective of this study therefore is to use technical support to minimize the effects of computer anxiety on university students in Nigeria and Cameroon.
2.2 E-Learning and E-Assessment
In comparison with the traditional classroom learning system, e-learning has obvious advantages, including real-time availability, elimination of distance as a barrier to learning, and a personalized learning pace. E-learning and e-assessment are often considered one and the same, but they are not. E-assessment can be defined simply as the use of ICT to carry out assessment measuring a student’s learning [16]. E-assessment can be categorized along different dimensions. In the context of examination, it can be classified as formative or summative. Formative assessment examines how learners are progressing towards their learning goals and is also used to provide feedback to students (e.g. in-class quizzes, assignments), whereas summative assessment is for grading purposes (e.g. end-of-session or end-of-semester exams) [17]. Assessment items can take the form of multiple choice questions, adaptive tests, and open-ended questions. In adaptive tests, the difficulty level of questions adapts to the user’s responses: after a wrong response, the difficulty level of the next question is usually dropped. The most complex of these three formats is the open-ended (essay-type) question, as evaluating essays by computer is still a major obstacle and an important area of research that has received little academic attention in the field of e-assessment [18,19,20,21].
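The difficulty-adjustment rule described above can be sketched in a few lines. This is an illustrative sketch only, not the mechanism of any system discussed in this study; the five-level scale and the one-step adjustment are assumptions for illustration.

```python
# Hypothetical sketch: how an adaptive test might adjust question difficulty.
# The difficulty scale (1-5) and step size are assumptions, not from the study.

MIN_LEVEL, MAX_LEVEL = 1, 5  # assumed difficulty scale


def next_difficulty(current: int, answered_correctly: bool) -> int:
    """Raise difficulty after a correct answer; drop it after a wrong one."""
    if answered_correctly:
        return min(current + 1, MAX_LEVEL)
    return max(current - 1, MIN_LEVEL)
```

A wrong answer at level 3 would thus present a level-2 question next, while difficulty never leaves the assumed 1–5 range.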
E-assessment can also be classified according to the type of technology used to conduct the examination. One such technology is the Optical Mark Recognition (OMR) sheet, which has become very popular over the last ten years; however, the dedicated scanners needed to read OMR sheets are an added financial and technical burden. Other popular types include e-portfolios, standalone systems and networked/web-based systems. An e-portfolio provides a holistic assessment of the student, as all the student’s activities during the course lifetime are recorded in it. Standalone systems usually rely on external devices to record the test output, while networked systems save their output on a server [17, 19, 22]. Further advantages of e-assessment include quick appraisal of examinations and the development of realistic questions using audio-visual media, simulation, etc. This type of examination can also be administered to children with special needs. Nevertheless, neither e-learning nor e-assessment is without limitations. Researchers have stressed that e-learning demands a high level of self-discipline and motivation from learners, while e-assessment’s shortfalls include high cost, security risks and technical glitches [17, 23].
2.3 E-Learning and E-Assessment Acceptance in Sub-Saharan African Countries
Universal interest in the use of information and communication technologies (ICT) is evident in Africa, and tertiary education institutions are increasingly shifting focus towards distance education and the establishment of virtual communities [24]. E-learning is not new in Africa: a survey conducted by Unwin [25] on the status of e-learning across 46 African countries revealed that e-learning has been adopted by many of them, including Nigeria and Cameroon. Countries all around Africa are willing to tap the benefits of using technology to aid learning and assessment. In Nigeria, for instance, not many institutions have fully adopted electronic examination as an assessment method, largely due to infrastructural challenges [26, 27]. However, some institutions have embraced e-assessment as an option because of inherent benefits that are not available in traditional pen-and-paper assessment.
For instance, the National Open University of Nigeria (NOUN) is an institution of higher learning operating in open and distance learning mode. As of 2010, NOUN had 90,767 registered students, and conducting assessment for this number without corresponding human and infrastructural resources posed a great challenge for the university; hence the need to adopt electronic examination as an alternative to pen-on-paper examination [28]. Although a previous study [28] focused on the reactions of academic staff to e-examination, little is known about whether anxiety plays a significant role in academic staff adoption of electronic examination as a form of assessment.
Similarly, there has been growing concern about the conduct, authenticity and reliability of qualifying examinations into Nigerian tertiary institutions. It is in this regard that the Joint Admissions and Matriculation Board (JAMB) introduced computer-based testing (CBT) with the objective of eliminating all forms of examination malpractice and promoting the use of electronic testing in Nigeria [29]. Many challenges, including economic, social and technological factors, have been mentioned [30]; and although sensitization campaigns were run to ensure students were well informed about the modalities of CBT exams, a study on whether the failure of some students to pass the Unified Tertiary Matriculation Examination (UTME) can be attributed to fear of using computers for assessment is yet to be seen.
2.4 Related Studies on Computer Anxiety and E-Assessment
A number of studies have been conducted on the role of computer anxiety in e-assessment adoption. Computer anxiety is one of the most common anxiety disorders [11]; it is a feeling of fear and apprehension experienced by prospective learners at the thought of using a computer for assessment [31]. Beckers and Wicherts [11] classified computer anxiety as either state (temporary) anxiety, experienced as a result of the learner’s situation, most especially when technology is introduced into assessment [31,32,33], or trait-like anxiety, which may be difficult to treat since its source is profound. The purpose of their study was to confirm whether computer anxiety is a permanent attribute of humans or an anxiety triggered by a particular situational stressor such as computer use. Their study examined the relationship between computer anxiety and trait and state anxiety, and measured the effect of this anxiety when a stressor like computer technology is introduced into assessment. Their findings highlighted that computer anxiety is more strongly correlated with trait anxiety than with state anxiety. They further suggested that computer anxiety is deep-rooted in trait anxiety and therefore remains a composite phenomenon requiring a multi-dimensional approach.
Jimoh and Yussuff [8] extended the Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) constructs of TAM with an additional variable, perceived fairness (PF), to study the acceptability of CBT for undergraduate courses in computer science. Their findings highlighted that the PEOU of CBT positively influences its PU, and that PEOU, PU and PF of CBT systems have a statistically significant effect on students’ behavioural intention to accept CBT. The important finding here is that students will use CBT when they feel the system is fair to them: students who did well felt that CBT was fair, while those who did not do well perceived it as unfair. This outcome is similar to that of Daly and Waldron [34], who suggested that students who performed better during assessment preferred CBT more. Despite this productive outcome, there was no indication whether those who performed poorly did so as a result of fear associated with computer usage or low computer self-efficacy, given their perception that the CBT platform might not have been fair to them.
A related study by Babo and Azevedo [35] ascertained students’ perceptions of multiple choice question (MCQ) e-assessment using the Moodle quiz feature. The analysis showed that students have positive perceptions of the MCQ test type, although technical issues such as server instability and insufficient time for the test were reported. Despite the effect a new type of test can have on anxiety, higher levels of fear and nervousness were not observed compared with traditional tests. In other words, students agreed that there is no difference in the complexity level of the two test formats, suggesting that once the reported technical challenges are resolved, computer anxiety may not really affect the subjects under investigation.
Alruwais and Wills [3] developed a conceptual model in which the Decomposed Theory of Planned Behavior (DTPB) [36, 37], which contains all the important constructs of TAM and TPB, was extended with IT support (the Conceptual Model of Acceptance and Usage of E-assessment, MAUE) to study the impact of e-assessment use by lecturers in Saudi universities. However, this extension did not explain the role of computer self-efficacy and IT support in minimizing the negative effects of e-assessment on prospective learners. Similarly, Farzin and Dahlan [12] proposed a model to explore students’ perception of e-assessment. Their study extended UTAUT with two constructs, habit and computer anxiety, towards behavioural intention (BI) and usage intention. Lack of technical support [38,39,40], a component of the facilitating conditions construct, was also considered a factor affecting e-assessment. However, their proposed model did not provide empirical evidence of the effect of computer anxiety on prospective e-assessment users. In view of the outcomes of these related studies, it becomes necessary to find ways to reduce the effect of computer anxiety on examinees if the inherent benefits of using this technology for assessment are to be harnessed.
2.5 Theoretical Model of Adoption
The Technology Acceptance Model (TAM) has been applied in many fields to understand the factors that encourage prospective users to adopt a particular technology. Davis’s TAM posits that users are more likely to use a technology if it is perceived to be beneficial and easy to use [41]. TAM, shown in Fig. 1, represents an important theoretical contribution to understanding Information Systems (IS) utilization and acceptance behaviors [42, 43, 96, 97], as well as the adoption and usage of new IS [44]. In the context of e-assessment, a number of studies have applied TAM and extended versions of it to establish relationships between computer anxiety and related factors [8, 41, 45].
In TAM, Perceived Ease of Use (PEOU), defined as the extent to which a system or innovation is easy to use [41], and Perceived Usefulness (PU), the extent to which a person believes that using a particular system will enhance his or her job performance, are important predictors of the behavioural intention (BI) to use a technology. PEOU predicts PU, while both PEOU and PU predict attitude (A) towards using a technology. Attitude in turn predicts BI, and BI predicts actual use. TAM thus explains the relationships among the perceived ease of use, perceived usefulness, user attitude, behavioural intention and actual system use constructs [46]; according to TAM, behavioural intention determines whether prospective users decide to use the system or not. In this study, TAM was chosen for extension since it is regarded as one of the most widely applied technology adoption models in the e-learning context for measuring users’ intention to use technology in learning [6, 47, 48].
2.6 Conceptual Model Development and Research Hypotheses
Over the years, tremendous work has been published in the area of e-assessment, especially in developed countries where e-assessment has recorded huge success [17, 49,50,51,52]. Relatively little has been done in developing countries [8, 53], most especially in the Nigerian and Cameroonian context which forms the scope of this study. In addition, a number of studies have established links between computer anxiety and related factors using theoretical models such as TAM [41, 45], extended TAM [8], UTAUT [12, 54, 55, 94, 95], Social Cognitive Theory [93], and the Computer Based Assessment Acceptance Model (CBAAM) [17]; only a few of them have used TAM without extending it in the field of e-learning and e-assessment. This research therefore aims to investigate the role of technical support in mitigating the effect of computer anxiety on prospective test takers. The study focuses on students in Sub-Saharan African countries, where low levels of computer self-efficacy have been reported. The effect of the technical support variable and the other variables was tested on respondents sampled randomly across universities in Nigeria and Cameroon.
The researchers of this study seek to maximize the strengths of this model and adapt it within the context of e-assessment adoption towards reducing the effects of computer anxiety on prospective test takers, most especially when the latter are perceived to have low computer self-efficacy. The extension did not include the attitude and actual use variables of TAM, since Venkatesh and Morris [45] observed that researchers faced with the large number of related constructs provided by many theories tend to “pick and choose” variables from these models or simply go for a preferred model. In view of this, the researchers hypothesized based on the conceptual model in Fig. 2.
Perceived Ease of Use
In the Technology Acceptance Model (TAM), Perceived Ease of Use (PEOU) is defined as the extent to which a system or innovation is easy to use [41]. PU and BI have been found to be influenced by PEOU [45, 56].
Therefore the researchers hypothesize as follows:
- H1: Perceived Ease of Use has a significant relationship with the Perceived Usefulness of e-assessment.
- H2: Perceived Ease of Use has a significant relationship with the Behavioural Intention to use e-assessment.
Perceived Usefulness
Perceived Usefulness (PU) is defined as the extent to which a person believes that using a particular system will enhance his or her job performance [41, 96]. A strong relationship between PU and BI has been reported by many studies [17, 48, 57], and PU is considered one of the most important predictors of technology acceptance and actual use.
- H3: Perceived Usefulness has a significant relationship with the Behavioural Intention to use e-assessment.
Technical Support
Tarus and Gichoya [58], as cited by Saidu and Clarkson [59], mentioned that lack of technical support is one of the major factors affecting the implementation of e-learning in Kenyan public universities. Similarly, Farzin and Dahlan [12] explained that using the available features of e-assessment can be quite challenging for test takers, most especially when there is a perceived lack of technical support during the assessment; technical support is a component of the facilitating conditions construct in the UTAUT model [54]. The researchers therefore investigate the effect of technical support on Perceived Usefulness (PU), i.e. the extent to which the presence of technical support helps the user see the inherent benefits of e-assessment, and on Perceived Ease of Use (PEOU), i.e. the extent to which the availability of technical support makes e-assessment easy to use. In the case of Computer Self-Efficacy (CSE), the researchers seek the influence of technical support on users with reported low or high computer self-efficacy; in other words, whether technical support is still desirable where high computer self-efficacy is reported. The effect of technical support on Computer Anxiety (CA) asks whether the presence of technical support reduces the fear students associate with e-assessment; finally, the effect of technical support on users’ Behavioural Intention (BI) to use e-assessment is examined. The researchers thus hypothesize as follows:
- H4: Technical Support has a significant relationship with the Perceived Usefulness of e-assessment.
- H5: Technical Support has a significant relationship with the Perceived Ease of Use of e-assessment.
- H6: Technical Support has a significant relationship with Computer Self-Efficacy in the use of e-assessment.
- H7: Technical Support has a significant relationship with Computer Anxiety in the use of e-assessment.
- H8: Technical Support has a significant relationship with the Behavioural Intention to use e-assessment.
Computer Self-Efficacy
Computer Self-Efficacy (CSE) is defined as an individual’s belief in his or her ability to use computers [60]. In computer-based assessment, computer self-efficacy is an important factor influencing students’ performance: students with higher CSE have been reported to gain significant time simply by clicking, typing and reading on the PC more quickly. Previous studies have reported relationships between Computer Self-Efficacy and Perceived Ease of Use [17, 61, 62], and thus the researchers hypothesized that:
- H9: Computer Self-Efficacy has a significant relationship with the Perceived Ease of Use of e-assessment.
- H10: Computer Self-Efficacy has a significant relationship with Computer Anxiety in the use of e-assessment.
Computer Anxiety
Computer anxiety is defined as the extent to which an individual expresses uneasiness or fear when faced with the possibility of using computers for assessment. Anxiety can be classified into three types: trait anxiety (permanent, since its source is fundamental), state anxiety (temporary, induced by the present circumstance), and dependent anxiety (a mixture of trait and state anxiety) [63]. Farzin and Dahlan [12] suggested that computer anxiety falls under the second type (state anxiety), since the feeling emerges before or during an engagement with an information system. The researchers of this study support this notion by focusing on state anxiety, as they are of the opinion that the anxiety associated with computer usage may be temporary, having been induced by the presence of computer technology for assessment. A study of citizen adoption of e-government systems [64] also stressed that anxiety has a significant relationship with behavioural intention, and therefore the researchers hypothesized that:
- H11: Computer Anxiety has a significant relationship with the Behavioural Intention to use e-assessment.
3 Research Method
3.1 Data Collection
This study followed a positivist research paradigm, and survey methodology was applied to investigate e-assessment acceptance. The final questionnaire consisted of 24 close-ended questions (see Appendix I) answered on five-point Likert scales (1–5) from strongly disagree to strongly agree. A random probability sampling technique was adopted, and to ensure the adequacy of the sample size, the G*Power software, based on concepts derived from [65, 66], was applied; it gave a minimum total sample size of seventy-four (74) (see Appendix II). This was necessary to obtain a representative sample [67] that is generalizable to a larger population [68]. The study used an online survey (see Appendix III) to distribute the questionnaire across universities in Cameroon and Nigeria. A total of one hundred and five (105) questionnaires were retrieved, of which 3 were excluded due to incomplete responses, giving one hundred and two (102) usable responses as highlighted in Table 1. The final sample consisted of 64 (62.75%) male and 38 (37.25%) female respondents. In ability to use a computer, 44 participants (43.14%) were above average, 52 (50.98%) were average and only 6 (5.88%) were below average. By level of study, Undergraduate Year 1 respondents numbered 55 (53.92%), Year 2 were 18 (17.65%), Year 3 were 13 (12.74%) and Year 4 were 14 (13.72%).
3.2 Data Analysis
This study applied structural equation modeling (SEM) for data analysis. SEM is a causal modeling procedure whose objective is to maximize the explained variance of the dependent latent variables and to examine the quality of the data with reference to the attributes of the measurement model [69]. Applying SEM in this study was necessary to ascertain whether the measurement and structural models meet the quality criteria for evidence-based research. Empirical studies applying SEM have lately become very common in the field of information systems [70]; this study adopts its distinctive and highly practical variant, partial least squares (PLS).
3.3 Measurement Model
To assess the measurement model, the PLS algorithm was applied to examine construct validity and reliability. This involved measuring convergent validity, discriminant validity and the loadings of all items with respect to their individual constructs [70].
The first consideration for this study is internal consistency reliability. The reliability of a measurement has been defined as the consistency of a particular research instrument, that is, the degree to which a test consistently measures whatever it measures; it is primarily concerned with the stability of multiple measurements of constructs [71]. A construct is considered reliable when its composite reliability (CR) is greater than 0.7; according to Hair and Sarstedt [72], a CR of 0.7 and above is acceptable. Cronbach’s alpha (CA), or coefficient alpha, the reliability coefficient reported most often in the literature, provides the conventional measure of internal consistency reliability, the degree to which responses are consistent across the items of a measure. If internal consistency is low, the content of the items may be so heterogeneous that the total score is not the best possible unit of analysis. A conceptual equation for the standardized coefficient is

α = (n_i · r_ij) / (1 + (n_i − 1) · r_ij)

where n_i is the number of items (not cases) and r_ij is the average Pearson correlation between all pairs of items [73]. However, CA presumes that all items of a construct are equally reliable and is therefore sometimes regarded as a conservative measure of internal consistency. In view of this limitation, composite reliability was considered a good substitute for CA. The generally accepted threshold for CA is 0.7 [74]; in social science it may go down to 0.6 and still be considered valid [75, 76], and [77] maintained that in theoretical studies even modest reliabilities of 0.60 or 0.50 may be acceptable [78, 79]. The results therefore suggest that construct reliability for this study may be accepted, as shown in Table 2.
Convergent validity in this study was established with three criteria: item factor loadings, composite reliability, and average variance extracted (AVE) [80]. Firstly, convergent validity was evaluated from the measurement model by checking that factor loadings were greater than or equal to 0.7, the preferred threshold mentioned by [81] and cited in [82]; items with factor loadings lower than 0.7 were therefore removed from the study, as shown in Table 3. This is also consistent with Hair, Sarstedt [72], who note that composite reliability should ordinarily be above 0.70. AVE denotes the average variance extracted from the set of items inspected. Removing the indicators highlighted in Table 3 (PEOU_1, PEOU_4, CSE_2 and CA_2) from the initial measure increased the AVE value of Computer Self-Efficacy from 0.412 to 0.559. An AVE value greater than 0.5 indicates that a set of items has sufficient convergence in measuring its construct, as reported by [83].
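The AVE and CR computations can be sketched from standardized loadings. The loadings below are hypothetical (chosen only to show how dropping a weak indicator lifts AVE above the 0.5 threshold); they are not the study’s values.

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    l = np.asarray(loadings, dtype=float)
    return float(np.mean(l ** 2))

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each standardized indicator's error variance is 1 - loading^2."""
    l = np.asarray(loadings, dtype=float)
    num = l.sum() ** 2
    return float(num / (num + np.sum(1 - l ** 2)))

# Hypothetical loadings before and after dropping a weak third indicator.
before = [0.82, 0.78, 0.35, 0.74]   # weak item drags AVE below 0.5
after  = [0.82, 0.78, 0.74]
print(round(ave(before), 3), round(ave(after), 3))  # 0.488 0.609
```

This mirrors the pattern reported in Table 3, where removing low-loading indicators raised a construct’s AVE past 0.5.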
Discriminant validity is the extent to which a construct empirically differs from other constructs; it is confirmed when a construct exhibits characteristics not captured by another construct in the same model. In this study, discriminant validity was assessed using the Fornell-Larcker criterion, in which the diagonal of the correlation matrix is replaced by the square root of each construct’s AVE; discriminant validity holds when these diagonal values are greater than the correlation coefficients in the other dimensions, as highlighted in Table 4. The AVE values in this study ranged between 0.559 and 0.833, and the square roots of the AVEs are well above the correlation coefficients in the other dimensions, indicating that the model has discriminant validity.
3.4 Structural Model
The Coefficient of Determination (R²)
The first important measure in examining the structural model is the coefficient of determination (R²) for the dependent constructs. R² measures the proportion of the variance of a dependent variable that is explained by the independent constructs [72], signifying the model’s capability to explain the dependent variable [84]. Following the recommendation of [85], values of approximately 0.670 are substantial, values around 0.333 are moderate, and values of 0.190 or less are weak. In this study, the model explains 38% of the variance in students’ intention to use e-assessment, indicating the descriptive strength of the whole model as well as the predictive power of the independent variables, as highlighted in Fig. 3.
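R² and the rule-of-thumb bands cited above can be sketched directly; the toy vectors below are illustrative, not construct scores from the study.

```python
import numpy as np

def r_squared(y, y_hat):
    """Proportion of variance in y explained by predictions y_hat."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)       # total sum of squares
    return float(1 - ss_res / ss_tot)

def interpret(r2):
    """Rule-of-thumb bands from the text (substantial / moderate / weak)."""
    if r2 >= 0.670:
        return "substantial"
    if r2 >= 0.333:
        return "moderate"
    if r2 <= 0.190:
        return "weak"
    return "between weak and moderate"

print(interpret(0.38))  # moderate -- the band the study's R² of 0.38 falls into
```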
4 Discussion and Implication
The hypotheses for this study were verified by evaluating the statistical significance of the path coefficients using t-statistics computed by means of the non-parametric bootstrap resampling approach [86] with 5000 resamples, as recommended by [87]. Two-tailed t-tests were applied [88], and the t-value and the degrees of freedom (DF) were used to calculate the p-value for each hypothesis. Eleven hypotheses were evaluated for this study. The p-value results are shown in Table 5 and Fig. 4, along with the degree of significance for each p-value.
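The bootstrap significance test can be illustrated with a minimal sketch: resample cases with replacement, recompute a standardized path (approximated here by a simple correlation rather than a full PLS path coefficient), and divide the original estimate by the bootstrap standard error to obtain a t-statistic. All data below are synthetic; only the sample size of 102 and the 5000 resamples mirror the study’s setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_t(x, y, n_boot=5000):
    """Bootstrap t-statistic for a standardized bivariate path, approximated
    by the Pearson correlation: original estimate / bootstrap standard error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    est = np.corrcoef(x, y)[0, 1]
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)            # resample cases with replacement
        boots[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    se = boots.std(ddof=1)
    return est, est / se

# Synthetic scores standing in for two construct scores (n = 102, as in the study).
x = rng.normal(size=102)
y = 0.5 * x + rng.normal(scale=0.9, size=102)
est, t = bootstrap_t(x, y)
print(t > 1.96)  # |t| > 1.96 indicates significance at the 5% level (two-tailed)
```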
The purpose of this study is to examine the role of technical support in mitigating the effect of computer anxiety on acceptance of electronic assessment. The Technology Acceptance Model (TAM) was extended to include Technical Support (TS), Computer Anxiety (CA) and Computer Self-Efficacy (CSE), and hypotheses were formed based on the proposed model. The findings from the analysis reveal that Perceived Ease of Use has a significant positive relationship with Perceived Usefulness. This indicates that students are able to see the potential benefits inherent in the use of e-assessment and thus consider it useful. The outcome is consistent with findings from previous studies on computer-based assessment acceptance [8, 89].
Perceived Ease of Use was found to have a significant positive relationship with behavioural intention. This outcome supports previous studies on acceptance of learning management systems [48, 90] and further suggests that when students find e-assessment easy to use, their willingness to use the technology increases. Similarly, Perceived Usefulness was found to have a very strong positive relationship with behavioural intention, meaning that when students are convinced of the usefulness of e-assessment, their decision to use it is strengthened. This is supported by Padilla-Meléndez, Garrido-Moreno [91] on acceptance of learning management systems and by a related study on students’ attitudes towards blended learning systems by Padilla-Meléndez, del Aguila-Obra [92].
On the role of Technical Support, no empirical studies were found prior to conducting this study. However, Tarus, Gichoya [58], as cited by Saidu, Clarkson [59], mentioned that lack of technical support is one of the major factors affecting the implementation of e-Learning in Kenya, and Farzin and Dahlan [12] stressed that e-assessment can be quite challenging for test takers, especially when technical support is lacking during assessment. The findings of the analysis show that Technical Support has a strong relationship with the Perceived Usefulness of the e-Assessment system, indicating that students are able to see the usefulness of e-Assessment when technical assistance is provided. Technical Support also correlates with Perceived Ease of Use: the availability of prompt technical assistance during assessment would help students see the relative advantage of the system, supporting this hypothesis. Nevertheless, the analysis did not show any significant relationship between Technical Support and Computer Self-Efficacy, so that hypothesis was not supported.
Technical Support has a strong positive relationship with Computer Anxiety. This outcome indicates that the presence of Technical Support can reduce the fear associated with the use of computers for assessment by University students in Nigeria and Cameroon, confirming the objective of this study: to establish whether Technical Support can reduce computer anxiety among undergraduate students during electronic assessment. Although the analysis did not support the hypothesis that Technical Support is related to students’ behavioural intention to use e-Assessment, the researchers believe that with a larger sample size this relationship might become significant, since there are significant relationships between Technical Support and both Perceived Usefulness and Perceived Ease of Use, as reflected in Fig. 4.
Computer Self-Efficacy did not show any significant relationship with Perceived Ease of Use. The researchers believe CSE may not be an important factor for these students: since they are comfortable using computers, using e-Assessment would not be an issue for them. This is contrary to previous studies that reported relationships between Computer Self-Efficacy and Perceived Ease of Use [17, 61, 62]. Notwithstanding, there is a strong negative path between Computer Self-Efficacy and Computer Anxiety, indicating that Computer Anxiety is likely to affect students’ performance during assessment regardless of their level of proficiency in the use of computers. This outcome is evident from the survey, as most respondents claimed to have average or above-average computer skills. Finally, there is a significant negative relationship between Computer Anxiety and students’ behavioural intention to use e-Assessment, confirming the objective of this study: to find ways to reduce the fear that arises when students are faced with using computers for assessment.
5 Conclusion
The purpose of this study is to examine the role of technical support in mitigating the effect of computer anxiety on students taking e-assessment in Sub-Saharan African countries, Cameroon and Nigeria in particular. The study extended the Technology Acceptance Model (TAM) with additional variables: Computer Self-Efficacy, Computer Anxiety and Technical Support. The model hypothesized eleven relationships, eight of which were found to be significant. The theoretical contribution of this study fills a research gap: empirical validation of the role of the Technical Support construct in mitigating the effect of computer anxiety on University students during e-Assessment had not previously been carried out in the field of e-assessment acceptance. The conceptual model was found to be reliable after measurement and structural analysis, yielding the final model for this study (the e-Assessment Acceptance Model, shown in Fig. 5). In addition, this study supports the body of knowledge by establishing that Computer Anxiety is an important factor which can affect University students regardless of their level of computer proficiency. It also revealed that when technical assistance is available, computer anxiety during e-Assessment is reduced for the majority of University students in Nigeria and Cameroon.
The practical implication of this study’s outcome is that many students’ actual academic potential may not be revealed if education policy makers and University administrators do not strive to ensure that all measures that can reduce computer anxiety, including the provision of Technical Support during electronic assessment, are introduced.
Finally, this study is not without limitations. Owing to the limited time available to complete it, the sample used does not represent students from the countries under investigation (Nigeria and Cameroon) equally, so caution should be taken when generalizing the outcome. Future studies are expected to increase the sample size and confirm the final model in order to obtain generalizable findings. In addition, the analysis could be separated to examine effects in the individual countries, and surveying students who have no prior knowledge of computers may provide insights that can further guide organizations and education administrators.
References
Gilbert, L., Whitelock, D., Gale, V.: Synthesis report on assessment and feedback with technology enhancement (2011)
Llamas-Nistal, M., et al.: Blended e-assessment: migrating classical exams to the digital world. Comput. Educ. 62, 72–87 (2013)
Alruwais, N., Wills, G., Wald, M.: Identifying factors that affect the acceptance and use of E-assessment by academics in Saudi Universities. IJAEDU-Int. E-J. Adv. Educ. 2(4), 132–140 (2016)
Dhar, D., Yammiyavar, P.: A cross-cultural study of navigational mechanisms in computer based assessment environment. Procedia Comput. Sci. 45, 862–871 (2015)
Conti-Ramsden, G., Durkin, K., Walker, A.J.: Computer anxiety: a comparison of adolescents with and without a history of specific language impairment (SLI). Comput. Educ. 54(1), 136–145 (2010)
Nurcan, A.: Identifying factors that affect students’ acceptance of web-based assessment tools within the context of higher education. M.Sc. dissertation. Middle East Technical University. Retrieved from Middle East Technical University Digital Thesis (2010)
Putwain, D.W., Daniels, R.A.: Is the relationship between competence beliefs and test anxiety influenced by goal orientation? Learn. Individ. Differ. 20(1), 8–13 (2010)
Jimoh, R., et al.: Acceptability of Computer Based Testing (CBT) Mode for Undergraduate Courses in Computer Science (2013)
Sieber, V., Young, D.: Factors associated with the successful introduction of on-line diagnostic, formative and summative assessment in the Medical Sciences Division University of Oxford (2008)
Ndunagu, J., Agbasonu, V.C., Ihem, F.C.: E-assessment of bi-weekly report: a case study of National Orientation Agency (NOA), Imo State, Nigeria. West Afr. J. Ind. Acad. Res. 14(1), 49–60 (2015)
Beckers, J.J., Wicherts, J.M., Schmidt, H.G.: Computer anxiety: “Trait” or “state”? Comput. Hum. Behav. 23(6), 2851–2862 (2007)
Farzin, S., Dahlan, H.M.: Proposing a model to predict students’ perception towards adopting an e-assessment system. J. Theor. Appl. Inf. Technol. 90(1), 144–153 (2016)
Lachman, S.J.: Learning is a process: toward an improved definition of learning. J. Psychol. 131(5), 477–480 (1997)
De Houwer, J., Barnes-Holmes, D., Moors, A.: What is learning? On the nature and merits of a functional definition of learning. Psychon. Bull. Rev. 20(4), 631–642 (2013)
Ausubel, D.P., Novak, J.D., Hanesian, H.: Educational Psychology: A Cognitive View (1968)
Imtiaz, M.A., Maarop, N.: A review of technology acceptance studies in the field of education. Jurnal Teknologi 69(2), 27–32 (2014)
Terzis, V., Economides, A.A.: The acceptance and use of computer based assessment. Comput. Educ. 56(4), 1032–1044 (2011)
Mason, O., Grove-Stephensen, I.: Automated free text marking with paperless school (2002)
Bennett, R.E.: Inexorable and inevitable: the continuing story of technology and assessment. Comput.-Based Test. Internet: Issues Adv. 1, 201–217 (2006)
Siozos, P., et al.: Computer based testing using “digital ink”: participatory design of a tablet PC based assessment application for secondary education. Comput. Educ. 52(4), 811–819 (2009)
Mohamadi, Z.: Comparative effect of online summative and formative assessment on EFL student writing ability. Stud. Educ. Eval. 59, 29–40 (2018)
Deutsch, T., et al.: Implementing computer-based assessment–a web-based mock examination changes attitudes. Comput. Educ. 58(4), 1068–1075 (2012)
Singleton, C.: Computer-based assessment in education. Educ. Child Psychol. 18(3), 58–74 (2001)
Darkwa, O., Mazibuko, F.: Virtual learning communities in Africa: challenges and prospects. FirstMonday (2002)
Unwin, T.: Survey of e-Learning in Africa. E-Learn. UNESCO Chair in ICT for Development, Royal Holloway, University of London, UK, pp. 1–10 (2008)
Nwana, S.: Challenges in the applications of e-learning by secondary school teachers in Anambra State, Nigeria. Afr. J. Teach. Educ. 2(1), 1–9 (2012)
Ajadi, T.O., Salawu, I.O., Adeoye, F.A.: E-learning and distance education in Nigeria. Online Submission 7(4), 1–10 (2008)
Osang, F.: Electronic examination in Nigeria, academic staff perspective—case study: National Open University of Nigeria (NOUN). Int. J. Inf. Educ. Technol. 2(4), 304–307 (2012)
Abubakar, A.S., Adebayo, F.O.: Using computer based test method for the conduct of examination in Nigeria: prospects, challenges and strategies. Mediterr. J. Soc. Sci. 5(2), 47 (2014)
Adomi, E.E., Kpangban, E.: Application of ICTs in Nigerian secondary schools. Library Philosophy and Practice (2010)
Simonson, M.R., et al.: Development of a standardized test of computer literacy and a computer anxiety index. J. Educ. Comput. Res. 3(2), 231–247 (1987)
Laguna, K., Babcock, R.L.: Computer anxiety in young and older adults: implications for human-computer interactions in older populations. Comput. Hum. Behav. 13(3), 317–326 (1997)
Rosen, L.D., Maguire, P.: Myths and realities of computerphobia: a meta-analysis. Anxiety Res. 3(3), 175–191 (1990)
Daly, C., Waldron, J.: Introductory programming, problem solving and computer assisted assessment (2002)
Babo, R.B., Azevedo, A.I., Suhonen, J.: Students’ perceptions about assessment using an e-learning platform. In: 2015 IEEE 15th International Conference on Advanced Learning Technologies. IEEE (2015)
Chien, S.-P., Wu, H.-K., Hsu, Y.-S.: An investigation of teachers’ beliefs and their use of technology-based assessments. Comput. Hum. Behav. 31, 198–210 (2014)
Taylor, S., Todd, P.: Decomposition and crossover effects in the theory of planned behavior: a study of consumer adoption intentions. Int. J. Res. Mark. 12(2), 137–155 (1995)
Fluck, A.: State wide adoption of e-assessments. In: Ensuring Quality and Standards for e-Assessments in Tertiary Education: Redefining Innovative Assessment in the Digital Age (2012)
Fluck, A.E., Mogey, N.: Comparison of institutional innovation: two universities’ nurturing of computer-based examinations. In: 10th IFIP World Conference on Computers in Education (2013)
Al-Qeisi, K., et al.: Website design quality and usage behavior: unified theory of acceptance and use of technology. J. Bus. Res. 67(11), 2282–2290 (2014)
Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–340 (1989)
Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User acceptance of computer technology: a comparison of two theoretical models. Manag. Sci. 35(8), 982–1003 (1989)
Robey, D.: Research commentary: diversity in information systems research: threat, promise, and responsibility. Inf. Syst. Res. 7(4), 400–408 (1996)
Malhotra, Y., Galletta, D.F.: Extending the technology acceptance model to account for social influence: theoretical bases and empirical validation. In: Proceedings of the 32nd Annual Hawaii International Conference on Systems Sciences, HICSS-32. IEEE (1999)
Venkatesh, V., et al.: User acceptance of information technology: toward a unified view. MIS Q. 27(3), 425–478 (2003)
Szajna, B.: Empirical evaluation of the revised technology acceptance model. Manag. Sci. 42(1), 85–92 (1996)
Park, S.Y.: An analysis of the technology acceptance model in understanding university students’ behavioral intention to use e-learning. Educ. Technol. Soc. 12(3), 150–162 (2009)
Van Raaij, E.M., Schepers, J.J.: The acceptance and use of a virtual learning environment in China. Comput. Educ. 50(3), 838–852 (2008)
Sun, P.-C., et al.: What drives a successful e-Learning? An empirical investigation of the critical factors influencing learner satisfaction. Comput. Educ. 50(4), 1183–1202 (2008)
Terzis, V., Moridis, C.N., Economides, A.A.: The effect of emotional feedback on behavioral intention to use computer based assessment. Comput. Educ. 59(2), 710–721 (2012)
Terzis, V., Moridis, C.N., Economides, A.A.: Continuance acceptance of computer based assessment through the integration of user’s expectations and perceptions. Comput. Educ. 62, 50–61 (2013)
Kalogeropoulos, N., et al.: Computer-based assessment of student performance in programing courses. Comput. Appl. Eng. Educ. 21(4), 671–683 (2013)
Bhuasiri, W., et al.: Critical success factors for e-learning in developing countries: a comparative analysis between ICT experts and faculty. Comput. Educ. 58(2), 843–855 (2012)
Venkatesh, V., et al.: Individual reactions to new technologies in the workplace: the role of gender as a psychological construct. J. Appl. Soc. Psychol. 34(3), 445–467 (2004)
Venkatesh, V., Brown, S.A., Bala, H.: Bridging the qualitative-quantitative divide: Guidelines for conducting mixed methods research in information systems. MIS Q. 37(1), 21–54 (2013)
Agarwal, R., Karahanna, E.: Time flies when you’re having fun: cognitive absorption and beliefs about information technology usage. MIS Q. 24(4), 665–694 (2000)
Ong, C.-S., Lai, J.-Y.: Gender differences in perceptions and relationships among dominants of e-learning acceptance. Comput. Hum. Behav. 22(5), 816–829 (2006)
Tarus, J.K., Gichoya, D., Muumbo, A.: Challenges of implementing e-learning in Kenya: a case of Kenyan public universities. Int. Rev. Res. Open Distrib. Learn. 16(1), 120–141 (2015)
Saidu, A., Clarkson, M.A., Mohammed, M.: E-Learning Security Challenges, Implementation and Improvement in Developing Countries: A Review (2016)
Compeau, D.R., Higgins, C.A.: Computer self-efficacy: development of a measure and initial test. MIS Q. 19(2), 189–211 (1995)
Agarwal, R., Sambamurthy, V., Stair, R.M.: The evolving relationship between general and specific computer self-efficacy—an empirical assessment. Inf. Syst. Res. 11(4), 418–430 (2000)
Venkatesh, V., Davis, F.D.: A model of the antecedents of perceived ease of use: development and test. Decis. Sci. 27(3), 451–481 (1996)
Oye, N., Iahad, A., Rabin, A.: Behavioral intention to accept and use ICT in public universities: integrating quantitative and qualitative data. J. Emerg. Trends Comput. Inf. Sci. 3(6), 957–969 (2012)
Rana, N.P., Dwivedi, Y.K.: Citizen’s adoption of an e-government system: validating extended social cognitive theory (SCT). Gov. Inf. Q. 32(2), 172–181 (2015)
Mayr, S., et al.: A short tutorial of GPower. Tutorials Quant. Methods Psychol. 3(2), 51–59 (2007)
Erdfelder, E., Faul, F., Buchner, A.: GPOWER: a general power analysis program. Behav. Res. Methods Instrum. Comput. 28(1), 1–11 (1996)
Schreuder, H.T., Gregoire, T.G., Weyer, J.P.: For what applications can probability and non-probability sampling be used? Environ. Monit. Assess. 66(3), 281–291 (2001)
Kasunic, M.: Designing an effective survey. Carnegie-Mellon Univ Pittsburgh PA Software Engineering Inst (2005)
Ahmad, S., Afthanorhan, W.M.A.B.W.: The importance-performance matrix analysis in partial least square structural equation modeling (PLS-SEM) with smartpls 2.0 M3. Int. J. Math. Res. 3(1), 1–14 (2014)
Urbach, N., Ahlemann, F.: Structural equation modeling in information systems research using partial least squares. JITTA: J. Inf. Technol. Appl. 11(2), 5–40 (2010)
Sitzia, J.: How valid and reliable are patient satisfaction data? An analysis of 195 studies. Int. J. Qual. Health Care 11(4), 319–328 (1999)
Hair, J.F., et al.: An assessment of the use of partial least squares structural equation modeling in marketing research. J. Acad. Mark. Sci. 40(3), 414–433 (2012)
Kline, R.B.: Principles and Practice of Structural Equation Modeling. Guilford Publications, New York (2015)
Hair, J.F., Ringle, C.M., Sarstedt, M.: PLS-SEM: indeed a silver bullet. J. Mark. Theory Pract. 19(2), 139–152 (2011)
Santos, J.R.A.: Cronbach’s alpha: a tool for assessing the reliability of scales. J. Extension 37(2), 1–5 (1999)
Peterson, R.A.: A meta-analysis of Cronbach’s coefficient alpha. J. Consum. Res. 21(2), 381–391 (1994)
Nunnally, J.C., Bernstein, I.H., Berge, J.M.F.: Psychometric Theory, vol. 226. McGraw-Hill, New York (1967)
Ozturk, M.A.: Confirmatory factor analysis of the educators’ attitudes toward educational research scale. Educ. Sci.: Theory Pract. 11(2), 737–748 (2011)
Nazari, J.A., et al.: Organizational culture, climate and IC: an interaction analysis. J. Intellect. Capital 12(2), 224–248 (2011)
Fornell, C., Larcker, D.F.: Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 18(1), 39–50 (1981)
Hulland, J.: Use of partial least squares (PLS) in strategic management research: a review of four recent studies. Strateg. Manag. J. 20(2), 195–204 (1999)
Wong, K.K.-K.: Partial least squares structural equation modeling (PLS-SEM) techniques using SmartPLS. Mark. Bull. 24(1), 1–32 (2013)
Barclay, D., Higgins, C., Thompson, R.: The partial least squares (PLS) approach to causal modeling: personal computer adoption and use as an illustration (1995)
Ringle, C.M., Sarstedt, M., Straub, D.: A critical look at the use of PLS-SEM in MIS Quarterly. MIS Q. (MISQ) 36(1), 3–14 (2012)
Chin, W.W.: The partial least squares approach to structural equation modeling. Mod. Methods Bus. Res. 295(2), 295–336 (1998)
Efron, B., Tibshirani, R.J.: An Introduction to the Bootstrap. CRC Press, Boca Raton (1994)
Hair Jr., J.F., et al.: A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). Sage Publications, Thousand Oaks (2016)
Gudergan, S.P., et al.: Confirmatory tetrad analysis in PLS path modeling. J. Bus. Res. 61(12), 1238–1249 (2008)
Terzis, V., Economides, A.A.: Computer based assessment: gender differences in perceptions and acceptance. Comput. Hum. Behav. 27(6), 2108–2122 (2011)
Teo, T.: A path analysis of pre-service teachers’ attitudes to computer use: applying and extending the technology acceptance model in an educational context. Interact. Learn. Environ. 18(1), 65–79 (2010)
Padilla-Meléndez, A., Garrido-Moreno, A., Del Aguila-Obra, A.R.: Factors affecting e-collaboration technology use among management students. Comput. Educ. 51(2), 609–623 (2008)
Padilla-Meléndez, A., del Aguila-Obra, A.R., Garrido-Moreno, A.: Perceived playfulness, gender differences and technology acceptance model in a blended learning scenario. Comput. Educ. 63, 306–317 (2013)
Rana, N.P., Dwivedi, Y.K.: Citizen’s adoption of an e-government system: validating extended social cognitive theory (SCT). Govern. Inf. Q. 32(2), 172–181 (2015)
Rana, N.P., Dwivedi, Y.K., Williams, M.D., Weerakkody, V.: Adoption of online public grievance redressal system in India: Toward developing a unified view. Comput. Hum. Behav. 59, 265–282 (2016)
Rana, N.P., Dwivedi, Y.K., Lal, B., Williams, M.D., Clement, M.: Citizens’ adoption of an electronic government system: towards a unified view. Inf. Syst. Front. 19(3), 549–568 (2017)
Dwivedi, Y.K., Wade, M.R., Schneberger, S.L. (eds.): Information Systems Theory: Explaining and Predicting Our Digital Society, vol. 1. Springer, Heidelberg (2011). https://doi.org/10.1007/978-1-4419-6108-2
Dwivedi, Y.K., Mustafee, N., Carter, L.D., Williams, M.D.: A bibliometric comparison of the usage of two theories of IS/IT acceptance (TAM and UTAUT). In: AMCIS 2010 Proceedings, Paper #183 (2010). http://aisel.aisnet.org/amcis2010/183
Appendices
Appendix I: Final Survey Description
Perceived Usefulness | |
PU1 | Using e-assessment will improve my work |
PU2 | Using e-assessment will enhance my effectiveness |
PU3 | Using e-assessment will increase my productivity |
Perceived Ease of Use | |
PEOU1 | My interaction with e-assessment is clear and understandable |
PEOU2 | It is easy for me to become skilful in using the e-assessment |
PEOU3 | E-assessment system enabled me to take exams easily |
PEOU4 | I find the e-assessment easy to use |
PEOU5 | Using e-assessment to take exams was a good idea |
Computer Self-Efficacy | |
CSE1 | I could complete a job or task using the computer |
CSE2 | I could complete a task using the computer if someone showed me how to do it first |
CSE3 | I can navigate easily through the web to find any information I need |
CSE4 | I believe I have the basic skills required to use internet and computer before I begin to use e-assessment |
Computer Anxiety | |
CA1 | The e-assessment system is somewhat intimidating to me (Reverse) |
CA2 | I hesitated to use the e-assessment system for fear of making mistakes that I couldn’t correct (Reverse) |
CA3 | I am afraid about using the e-assessment system (Reverse) |
CA4 | Working with the e-assessment system made me nervous (Reverse) |
Technical Support | |
TS1 | It will be easy to use e-assessment if there is technical staff around me |
TS2 | It will be easy to use e-assessment if I’m shown its inherent benefits |
TS3 | I will use e-assessment if I’m guided on how to use some of its features during assessment |
TS4 | Fear of e-assessment is reduced if I have a feeling that technical support staff is around |
TS5 | Technical support is important for me to use e-assessment |
Behavioural Intention | |
BI1 | I intend to use e-assessment in the future |
BI2 | I predict I would use e-assessment in the future |
BI3 | I plan to use e-assessment in the future |
Appendix II: G* Power Analysis
Appendix III: Data Source Link
Copyright information
© 2019 IFIP International Federation for Information Processing
Adenuga, K.I., Mbarika, V.W., Omogbadegun, Z.O. (2019). Technical Support: Towards Mitigating Effects of Computer Anxiety on Acceptance of E-Assessment Amongst University Students in Sub Saharan African Countries. In: Dwivedi, Y., Ayaburi, E., Boateng, R., Effah, J. (eds) ICT Unbounded, Social Impact of Bright ICT Adoption. TDIT 2019. IFIP Advances in Information and Communication Technology, vol 558. Springer, Cham. https://doi.org/10.1007/978-3-030-20671-0_5
DOI: https://doi.org/10.1007/978-3-030-20671-0_5
Print ISBN: 978-3-030-20670-3
Online ISBN: 978-3-030-20671-0