
1 Introduction

Learning technology (LT) refers to a wide range of technologies that can be used to support learning, teaching and assessment [13]. In particular, the study of assessment processes that take advantage of the capabilities offered by LT is known in the literature as Computer-Assisted Assessment (CAA) [4]. Here, CAA is focused on formative e-assessment, i.e., the set of activities supported by LT that enable learners and teachers to monitor learning and to use the information generated to align subsequent learning and teaching activities [5, 6]. With regard to formative e-assessment, LT emerges as a powerful tool because the feedback delivered by teachers during evaluation processes can be enhanced. However, providing suitable feedback while the evaluation is being executed depends on the capacity to quickly detect student difficulties [7].

This research is aimed at the early detection of student difficulties while an assessment is being performed. Early detection is important because teacher feedback can then be provided opportunely, even in large groups of individuals [8]. With proper feedback, students can quickly correct misconceptions and thus be guided toward the learning objectives. In this context, LT becomes an aid for detecting difficulties thanks to its capabilities for quickly collecting and processing evaluation information [5]. Hence, the extraction of information gathered from the educational resources delivered to learners is streamlined, fast processing expands the feedback capacity of teachers while reducing their workload, and the assessment experience is improved by means of multimedia possibilities.

Early detection of difficulties during formative assessments requires taking multidimensional issues into account [8, 9]. We consider the following three to be among the most important: identifying the proper information for determining the performance of the students being evaluated, defining methodologies for analyzing and processing the obtained data, and selecting the most convenient LT, bearing in mind that the prices of hardware and web services are steadily falling. It is also necessary to prevent difficulties caused by inadequate task design and by the usability of the LT itself [10, 11]. Although CAA has been studied from multiple research approaches, a gap remains regarding the opportune detection of difficulties associated with evaluation [12].

From a previous review of the state of the art on formative e-assessment, we can infer that research is mainly addressed to learning environments and intended learning outcomes [12, 13]. Both peer review and teacher feedback arise as support to reinforce the formative effects of assessment and learning. However, teacher guidance while students are solving the assigned tasks has not been sufficiently explored, because research has focused on contexts based on asynchronous communications (e.g., via text or menu-driven systems). This paper proposes a model aimed at detecting student difficulties in a context where the feedback delivered by the teacher can be supported by synchronous communications. For instance, consider a context where teacher feedback is not feasible because a formative task, subject to a fixed time and place, is being solved by a large number of students.

The model is divided into components that we have classified as the most relevant for the development of the evaluation. On the one hand, the component division enables meaningful information to be sampled throughout the evaluation process in order to detect student difficulties. On the other hand, the component-based approach allows other key issues of the process to be dealt with independently, e.g., the requirements for establishing human-computer interaction, the proper application of the capabilities offered by LT, and the use of different kinds of educational resources. According to the results attained by means of an implementation, we can infer that the model is able to detect student difficulties while tasks are being solved. However, the experiment proposed in this paper is not adequate for determining whether formative purposes are improved by teacher feedback.

The preceding paragraphs have defined the research scope. The remainder of this work is organized as follows: Sect. 2 presents the conceptual model for detecting student difficulties while a formative e-assessment is being performed. Sect. 3 proposes the research experiment by means of the model implementation and the design of the test questionnaire. Sect. 4 presents the experiment results and the discussion. Sect. 5 presents the conclusions and future research work.

2 Proposed Model

This model can be understood as a complement to the traditional activities reported in the formative e-assessment state of the art. Bearing in mind the complexity associated with the formative e-assessment process, establishing a synergy between LT capabilities and evaluation requires modeling. Thus, Fig. 1 presents the proposed model for early detection of learner difficulties during formative e-assessments. The conceptual structure of the model mainly comprises three components: teacher, evaluation and student. Additionally, the model is complemented by a module that represents the source of educational resources and by the teacher-student feedback flow.

Fig. 1.

Model proposed for early detection of learner difficulties in formative e-assessments. The conceptual structure of the model comprises three components: teacher, evaluation and student.
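As a minimal sketch, the information exchanged between the components can be represented by three message types: the task assignment signal set up by the teacher, the response samples collected by the student component, and the processed summary delivered back to the teacher. The following Python definitions are illustrative assumptions, not part of the model specification:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TaskAssignmentSignal:
    """Signal set up by the teacher: which resource goes to which students."""
    resource_id: str        # identifier of the stored educational resource
    student_ids: List[str]  # students tied to the formative activity

@dataclass
class ResponseSample:
    """One periodic sample of a student's answers to the assigned task."""
    student_id: str
    answers: Dict[str, str] = field(default_factory=dict)  # question id -> raw answer

@dataclass
class PerformanceSummary:
    """Processed results sent to the teacher component."""
    student_id: str
    correct: Dict[str, bool] = field(default_factory=dict)  # question id -> outcome
```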

The delivery of formative tasks and the continuous collection of learners' responses are carried out by the student component. When the teacher assigns a new task, this component handles its distribution among the learners. Subsequently, this part of the model must guarantee permanent interaction between the student and the assessment educational resource. From this interaction, each set of student responses entered for the assigned tasks is periodically sampled. Finally, the answers are organized and delivered to the evaluation component.
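A minimal sketch of this periodic sampling follows; the polling period, round count and callback names are assumptions for illustration, not values prescribed by the model:

```python
import time
from typing import Callable, Dict

def sample_student_responses(
    read_answers: Callable[[], Dict[str, str]],  # reads the answers entered so far
    forward: Callable[[Dict[str, str]], None],   # delivers a sample to the evaluation component
    period_s: float = 30.0,
    rounds: int = 10,
) -> None:
    """Periodically sample the student's current answers and forward each
    organized sample to the evaluation component."""
    for _ in range(rounds):
        forward(read_answers())
        time.sleep(period_s)
```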

The condition of the assessment is established by the evaluation component from the sampled student data. The performance of each student is determined by comparing the collected answers against the task solution information. Here, the term comparison alludes to the set of techniques implemented to determine student performance. For example, some comparison-based algorithms are useful for reviewing closed questions, whereas semantic analysis algorithms are applied to open-ended questions [14]. After the responses are processed, the obtained results are sent to the teacher component.
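The following sketch illustrates how such a comparison step might dispatch by question type; the token-overlap function is only a crude stand-in for the semantic analysis techniques of [14], which we do not reproduce here:

```python
def compare_closed(answer: str, solution: str) -> bool:
    # Closed questions: a direct, normalized comparison is enough.
    return answer.strip().lower() == solution.strip().lower()

def compare_open(answer: str, solution: str, threshold: float = 0.5) -> bool:
    # Placeholder for semantic analysis [14]: crude token overlap.
    a, s = set(answer.lower().split()), set(solution.lower().split())
    return bool(s) and len(a & s) / len(s) >= threshold

def evaluate(answers: dict, solutions: dict, kinds: dict) -> dict:
    """Return question id -> correct/incorrect for one response sample."""
    return {
        q: (compare_open if kinds[q] == "open" else compare_closed)(answers.get(q, ""), sol)
        for q, sol in solutions.items()
    }
```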

Finally, presenting the processed data and selecting educational resources are the actions performed by the teacher component. On the one hand, this component presents the outcomes that define the evaluation state in a format that is understandable for the teacher. Therefore, based on LT capabilities, the information may be summarized through a proper interface design that takes advantage of color-based abstractions, percentages and images, among others. On the other hand, the teacher component provides the means to select the task type and the responses required for the comparison process. For selecting tasks, the teacher interface is again proposed as a convenient tool because it enables setting up a signal that contains the student list and the resource information.
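As an illustration of the color-based abstraction, a per-student summary might be mapped to traffic-light colors as in the sketch below; the thresholds are assumptions chosen for the example, not values fixed by the model:

```python
def performance_color(fraction_correct: float) -> str:
    """Map a student's fraction of correct answers to a traffic-light color
    for the teacher interface (illustrative thresholds)."""
    if fraction_correct >= 0.8:
        return "green"   # on track
    if fraction_correct >= 0.5:
        return "yellow"  # monitor
    return "red"         # difficulty detected: feedback needed

def summarize(results: dict) -> dict:
    """results: student id -> {question id: bool}; returns student id -> color."""
    return {
        s: performance_color(sum(r.values()) / len(r)) if r else "gray"
        for s, r in results.items()
    }
```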

Additionally, a representation of an educational resource source has been included in the model as a complement. A task assignment signal contains the identifiers of the stored resources and the list of students tied to the formative activity. When the signal is released towards the source, the task and the task solution information are sent to the student and evaluation components, respectively. The source-based abstraction offers an advantage for implementing the model because it can be easily substituted; for example, the source may represent a repository that contains learning objects addressed to formative e-assessment [15].
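A sketch of this source abstraction follows; the in-memory dictionary is a hypothetical stand-in for a repository of learning objects, and all identifiers are illustrative:

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical in-memory source; in practice this could be a repository of
# learning objects addressed to formative e-assessment [15].
RESOURCES: Dict[str, Tuple[str, Dict[str, str]]] = {
    "calc-diff-01": ("questionnaire body ...", {"q1": "expected answer ..."}),
}

def release_signal(
    resource_id: str,
    student_ids: List[str],
    to_student: Callable[[str, str], None],           # deliver the task to one student
    to_evaluation: Callable[[Dict[str, str]], None],  # deliver the solution information
) -> None:
    """On release of the task assignment signal, send the task to the student
    component and the solution information to the evaluation component."""
    task_body, solutions = RESOURCES[resource_id]
    for sid in student_ids:
        to_student(sid, task_body)
    to_evaluation(solutions)
```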

In short, the model's operation is focused on maintaining a continuous flow of information from the learner towards the tutor. This flow is constructed through constant sampling of the student responses, and it is completed when the processed information is provided to the teacher for detecting and assisting with the difficulties that arise from the evaluation. Indeed, attendance to students is improved because teacher feedback is enriched by LT capabilities for processing data. In this case, the feedback has been differentiated from the LT-based detection core to indicate that assistance may be applied in other contexts, e.g., face-to-face environments or virtual systems supported by synchronous tools.

3 Experiments

We have proposed an experiment aimed at proving the model's capability for detecting student difficulties while formative e-assessments are performed. The experiment mainly comprises the model implementation and the design of the evaluation questionnaire. Both are described below.

3.1 Model Implementation

The implementation was achieved by considering a set of technological tools selected with the proposed model components in mind. Fig. 2 presents the correspondence between the theoretical components and the set of implementation tools. The model was initially implemented using three Google applications: Gmail, Drive and Sheets. Additionally, we developed an Android mobile application to enable system-teacher interaction.

Fig. 2.

Proposed model implementation. The model was implemented using three Google applications: Gmail, Drive and Sheets. Additionally, an Android mobile application was developed to enable system-teacher interaction.

In this case, students use personal computers to interact with the system, whereas the teacher uses a mobile device. The monitoring of assessments begins when the application is linked to each task; then the model operation described in Sect. 2 is followed. Regarding the student component, questionnaires and answer acquisition are enabled by means of spreadsheets. For the comparison process, the evaluation component relies on the processing power of a mobile device and on the task solution information stored in a spreadsheet. Finally, the teacher component uses the application interface and a spreadsheet to represent the assessment summary through an abstraction based on color variations.
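For illustration, the spreadsheet access could be reproduced on a desktop with the gspread Python client as sketched below; the sheet and worksheet names and the credential file are assumptions, and this stand-in does not reflect the exact calls made by our Android application:

```python
import gspread  # Google Sheets client; assumes service-account credentials

def poll_responses(sheet_title: str = "formative-test-responses",
                   worksheet_name: str = "answers") -> list:
    """Read the current student answers from the response spreadsheet.
    Sheet and worksheet names are illustrative placeholders."""
    gc = gspread.service_account(filename="credentials.json")
    ws = gc.open(sheet_title).worksheet(worksheet_name)
    return ws.get_all_records()  # one dict per student row, keyed by header

# Each poll yields rows such as {"student": "s1", "q1": "..."}, which are
# then compared against the solution spreadsheet on the teacher's device.
```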

3.2 Questionnaire Development

The test consisted of ten questions. For the assessment we selected two types of questions: Multiple Choice Questions (MCQs) and open-ended questions. On the one hand, we considered the use of MCQs a suitable assessment approach because computer capabilities can be used to rate the marked responses. In addition, MCQs provide a quantitative measurement of student abilities and a wider scope of application for different kinds of content and objectives [16]. On the other hand, the use of open-ended questions is justified because they are excellent for measuring higher-level cognitive learning and overall subject understanding, and they are often more applicable to real-life situations.
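As an illustration, a mixed test of this kind might be represented as follows, marking which questions can be rated automatically; the question texts and field names are hypothetical examples in the spirit of the differential calculus questionnaire described in Sect. 4:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Question:
    qid: str
    kind: str                            # "mcq" or "open"
    text: str
    choices: Optional[List[str]] = None  # MCQs only
    solution: str = ""                   # key for MCQs, reference answer otherwise

# Illustrative fragment of a mixed test: MCQs are machine-gradable, while
# open-ended answers go through the (semantic) comparison step.
TEST: List[Question] = [
    Question("q1", "open", "Explain the geometric meaning of the derivative.",
             solution="slope of the tangent line"),
    Question("q3", "mcq", "d/dx of x^2 is:", choices=["x", "2x", "x^2/2"],
             solution="2x"),
]
```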

Taking into account that feedback must improve the assessment process, the evaluated concepts were interconnected with an increasing difficulty level. The feedback delivered during the evaluation is aimed at correcting deficiencies and reinforcing the student's concepts for subsequent questions. The test was carried out in a common physical place equipped with all the LT necessary for applying the model. In that way, when the system detects a student difficulty, the teacher may deliver face-to-face feedback. This experiment matches the situation mentioned in Sect. 2, where formative processes may be supported by synchronous tools.

4 Results

Fig. 3 summarizes the results of the test. Using the previously described questionnaire, a group of four Colombian students in the final year of high school was evaluated. The questionnaire was designed to assess the basics of differential calculus. Closed questions were formulated as quantitative response problems, whereas open-ended questions were addressed to the qualitative evaluation of concepts. In this case, questions 1, 2, 4 and 5 are open-ended and the remaining ones are closed.

Fig. 3.

Summary of the test results. Learning of the basics of differential calculus was evaluated in a group of four students. For each question, the number of assists required by each student was counted. Questions 1, 2, 4 and 5 are open-ended and the remaining ones are closed.

The results show that all students needed support at least once. However, by means of the model implementation it was possible to deliver appropriate feedback during the evaluation time. The interconnection of the evaluated concepts by increasing difficulty, together with the teacher feedback, helped students who failed a question to pass the subsequent one in almost all cases. Finally, the teacher may also use these results to identify the students who have difficulties and need reinforcement.

5 Conclusions and Future Work

According to the attained results, we can conclude that the model implementation is able to detect student difficulties early while a formative assessment is executed. However, determining whether the model is useful as a complement to evaluation processes is beyond the scope of the test posed in this paper. Therefore, future work will focus on proposing research aimed at determining whether the model can complement formative e-assessment environments.

In addition, from the multiple concepts, both theoretical and practical, concerning the proposed model, we can explore other research fields. First, an inquiry field worth exploring is the application of the model to enhance group learning. The division into components also offers an advantage for the model because the overall performance of the implementation can be enhanced by research performed independently on each of the model parts. For instance, a sophisticated semantic analysis algorithm for open-ended questions could be easily integrated into the model implementation.