
Optimising Moodle quizzes for online assessments

  • Sithara H. P. W. Gamage
  • Jennifer R. Ayres
  • Monica B. Behrend
  • Elizabeth J. Smith
Open Access
Short report

Abstract

Background

Computer-aided learning management systems (LMSs) are widely used in higher education and are viewed as beneficial when transitioning from conventional face-to-face teaching to fully online courses. While LMSs have unique tools for transferring and assessing knowledge, their ability to engage and assess learners needs further investigation. This paper focuses on a study examining the LMS “Moodle” to ascertain the effectiveness of “Moodle quizzes” to improve, assess and distinguish knowledge in a civil engineering course at an Australian university. The course has a database comprising 62 formative and 61 summative quiz questions with embedded text, images, audio and video. This study investigates the use of these quiz questions with four course cohorts and 169 students. The quizzes assessed competencies of students during various stages of a study period through automated marking. The suitability of questions to assess and distinguish student knowledge levels was determined using a psychometric analysis based on facility index (FI) and the discrimination index (DI) statistics embedded within the Moodle quizzes.

Results

This study highlights strategies used to set and review quiz questions for formative and summative assessments. Results indicated that students were engaged with and satisfied by the formative assessment: they viewed the interactive videos between 2 and 6 times, and 65% of students attempted all the formative questions. The FI indicated the student pass rate for the summative questions, the DI indicated how well these questions discriminated between students, and the combination of FI and DI results separated students with different knowledge levels. Using these Moodle statistics provided information for making effective decisions on how to improve the summative quizzes.

Conclusion

The multimodal quizzes were effective in teaching and assessing a theoretical engineering course and provided efficient methods to replace conventional assessments. The FI and DI are useful statistical tools for redesigning appropriate sets of questions. Time-poor academics will benefit from using these easily attainable Moodle statistics to inform decisions when revising quizzes and making assessments more autonomous.

Keywords

Online teaching · Online assessments · Moodle · Online quizzes · Psychometric analysis · Facility index · Discrimination index

Abbreviations

C

Credit

D

Distinction

DI

Discrimination index

FI

Facility index

HD

High distinction

LMS

Learning management system

Introduction

The growth of online learning in recent decades has resulted in higher education institutes offering more online courses, with up to 30% of American college and university students participating in at least one online course (Broadbent & Fuller-Tyszkiewicz, 2018; Liagkou & Stylios, 2018). Despite significant technological advances in online education tools, developing online course materials can still be challenging (Jackson, 2017). New technologies and educational design methodologies continuously redefine the role of professionals (Philipsen, Tondeur, Roblin, Vanslambrouck, & Zhu, 2019); therefore, education institutes must ensure that the educational needs of those professionals remain a primary concern. A main concern educators face is identifying the appropriate technology to develop online course materials for various disciplines (Salamon, Ali, Miskon, & Ahmad, 2016), as the requirements for digital resources can vary significantly between disciplines (Martins, 2017; Sancho-Vinuesa, Masià, Fuertes-Alpiste, & Molas-Castells, 2018). Alongside learning and adjusting to new technologies, educators face challenges in developing resources which will successfully engage online users, working with students’ different knowledge levels and assessing the required course objectives, all while maintaining the quality of an institute’s graduates.

Various learning management systems (LMSs) and tools are available to develop digital resources for courses which were previously based solely on traditional face-to-face teaching. Research has identified Moodle as a complete and adequate platform for implementation in higher education (Aydin & Tirkes, 2010; Williams van Rooij, 2012). Moodle provides user-friendly tools such as “quizzes”, “forums”, “databases” and “workshops” to develop various digital resources for teaching and assessment purposes. Blended learning is viewed as a best-practice instructional mode, and students who do not engage with it are academically disadvantaged (Francis & Shannon, 2013). Moodle was ranked among the top 20 best LMSs based on user experiences in 2018 and 2019 (Andre, 2019; eLearning Industry, 2018).

Online quizzes in LMSs have been implemented for summative assignments, formative assessments and instructional design methods in diverse disciplines such as engineering, biology, medicine and the social sciences (Jaeger & Adair, 2017; Krause et al., 2017; Sullivan, 2016). Recent studies highlight the benefits of online quizzes and students’ positive attitude towards them (Cohen & Sasson, 2016; Wallihan et al., 2018). Such benefits include improving student motivation, enhancing understanding and active learning, and deterring cheating, as long as the quiz questions are not too easy (Cook & Babon, 2017). Carefully designed online quizzes can be one of many solutions to pre-empt student plagiarism by randomising questions, shuffling responses, providing timestamps and logs of multiple quiz attempts with systematic evaluation processes (Sullivan, 2016).

Furthermore, online quizzes can address student failure to correctly solve word problems. Word problems aim to connect mathematical problem-solving activities to real-world examples (Geary, 2017). Students, nevertheless, can have difficulties in constructing a mental model of such situations (Thevenot, 2010). One solution to this issue is to utilise mathematical word problems which allow students to make real sense of the situations described in the problems (Vicente, Orrantia, & Verschaffel, 2008). Placing the real-world situation before the mathematical problem is vital to understand the mathematical representation of the problem and integrate the information into real-world problems (Thevenot, Devidal, Barrouillet, & Fayol, 2007). Therefore, educators need to develop resources to assist students to develop a mental model to solve real-world problems represented by mathematical formula. This process can be challenging yet potentially addressed by using video-enhanced explanations of problems and feedback in the form of online quizzes (West & Turner, 2016). Quizzes with interactive videos enhance student performance giving the students an opportunity to view examples and analyse scenarios (Heo & Chow, 2005; Maarek, 2018).

One benefit of online tests is that feedback can be automatic and timely, providing students with immediate feedback on their learning and opportunities to improve their understanding. Research on the outcomes of instant feedback to quizzes within a variety of teaching courses reported that instant feedback opens new avenues of communication between educators and students (Rinaldi, Lorr, & Williams, 2017). Immediate feedback is useful for students who may be struggling to understand the subject matter or who are reluctant to ask questions in a conventional classroom setting. Instant feedback also provides a quick snapshot of the cohort’s understanding of the subject matter. Numerous studies identify that instant feedback reveals students’ level of understanding across a cohort, providing a positive and interactive course for students (Fales-Williams, Kramer, Heer, & Danielson, 2005; James, 2016; Root-Kustritz, 2014). When immediate feedback with online tests is provided, an active learning environment is created, and students are more likely to engage with the feedback than to avoid it, as they might if it were provided later (Schneider, Ruder, & Bauer, 2018). The quality and detail of the feedback also determine the level of learning: student learning is enhanced and reinforced when detailed feedback to online tests is provided (Wojcikowski & Kirk, 2013). Platforms—such as Moodle—are being used to embed quizzes and instant feedback into teaching courses within universities. Moodle quizzes, with their facility to use numerous multimedia options such as audio and video, can support this interaction and provide the immediate feedback which positively gauges students’ level of understanding.

Student evaluation techniques and their relationship to grades have been a discussion topic for many decades in various academic disciplines (Huang & Fang, 2013; Ransdell, 2001; Ting, 2001). Relating evaluation to grades enables educators to take proactive measures in the classroom, for example, changing instruction, reviewing lecture materials and assignments, providing extra resources to students, and setting up prerequisites courses (Huang & Fang, 2013). Automated evaluation processes used in some online assessment tools such as Moodle quizzes enable instructors to identify patterns between a student’s response to a question and overall course performance using inbuilt statistical features of the platform.

Therefore, this paper investigates:
  1. Can online quizzes enhance student engagement and performance?
  2. How can Moodle statistics be used to determine the effectiveness of online quiz questions?
  3. What benefits do online quizzes have for academics?

Course Background

The engineering course Hydraulics and Hydrology is a compulsory course for third-year undergraduate students of the civil engineering programme at the University of South Australia. The course was developed to meet Engineers Australia standards for an accredited civil engineering programme. The course content focuses on hydrological processes, measurement and interpretation of hydro-meteorological data, design flood estimations, open channel systems design and modelling river channel systems. The course involves complex mathematical calculations, usually based on Excel formulae, and hydrological data analysis.

Moodle quizzes were developed for this course to introduce and develop students’ declarative knowledge in engineering hydrology and open-channel hydraulics, when many university courses moved from the conventional face-to-face learning environment to online teaching. The conventional Hydraulics and Hydrology course had a 2-h lecture followed by a 2-h tutorial per week for 12 weeks (with one week set aside for revision). Seven of these weekly tutorials focused on structured problem-solving activities. When developing the online course, these seven tutorials were turned into 123 sequenced Moodle quiz questions with integrated stages. Each Moodle question consisted of text, images, audio or video explanations and feedback. Face-to-face lectures were reproduced using the Camtasia software as a series of short videos, highlighting key concepts and key terminology. These videos were then linked to the online quizzes as additional feedback to some questions. Instead of attending face-to-face lectures and tutorials, online students engage with lecturers through a real-time virtual classroom to discuss their questions. Methods used in preparing lecture videos and real-time virtual classes in the online course are not discussed in this paper.

Methods

Methods used to develop online quizzes

The process of appropriate structuring and selection of questions is vital in transferring and assessing knowledge in learning environments. In online learning environments, extra care must be taken to engage online learners. The quizzes for this online course were developed to achieve two objectives: (1) develop student knowledge by providing students with an opportunity to practise and apply the new concepts learnt in the course, and (2) assess student knowledge and assign an appropriate grade that accurately distinguishes between student competency levels (novice, competent, proficient and expert). To achieve these objectives, “formative” and “summative” Moodle quizzes were developed.

In this course, online quizzes were developed using various multimodal resources to explain complicated mathematical solutions and course-specific terminology. The quizzes provide an approach to construct detailed and coherent mental models of key course concepts. For example, an instructional video was prepared to teach the concept of a sluice gate (sliding gate) to control water flow where a hydraulic jump (abrupt rise in water surface) could occur in different opening positions of the sluice gate. The solution to this problem requires performing numerous calculations and drawing flow profiles. Visual representations of all scenarios aimed to help students understand the importance of each calculation when designing a sluice gate because miscalculations can lead to serious consequences. Attention was given to repetitive solutions of similar problems as research shows this technique strengthens the memory traces of appropriate schemata (Thevenot, 2010). Student prior knowledge and appropriate use of technical keywords were also considered by defining and repeating terminology.

Formative quizzes

Moodle quizzes have been used to improve instructional design (Wallihan et al., 2018) and formative assessments (Cohen & Sasson, 2016). In this study, two sets of formative quiz questions with detailed answers were prepared as the primary method of teaching students the course-required theories. In the first set, each question was embedded with a short interactive video (3–12 min) explaining the concept. These videos were segmented to clearly indicate the problem, information given, detailed solution and the final solution. At the commencement of the video, students could choose a section of the video as shown in Fig. 1 (e.g. “Solution”, “Just the answer”) and navigate the video sections according to their preference, rather than necessarily watching the entire video. This facility allowed a degree of personalisation for each student. To complement this activity, both structured and scripted worksheets were given to each student enabling them to read through the scripts if they chose not to listen to the video. The scripted sheets were an added resource for students, particularly those for whom English was an additional language.
Fig. 1

Screenshot showing the commencement of the short video embedded

In the second set of formative quiz questions, students were given instructions for each question (see Fig. 2). Conventional tutorial questions were reproduced resulting in staged questions with detailed answers. Students could confirm answers before moving to the next question (see Fig. 3).
Fig. 2

Downloadable instruction sheet with reference to the Moodle quiz

Fig. 3

Sample questions with “Check” button to view detailed answers

Summative quizzes

Two sets of summative quizzes were used to assess student knowledge. The first set had a time restriction of 1–3 h. These quizzes were developed to challenge the students in weeks 3, 6, 9 and 12 of the 13-week course. The second set was developed for Excel-based interactive activities fulfilling the course requirement for students to derive complicated Excel formulae and iterations. Students were given thorough instructions on how to complete the task. These were available for download via the course home page (as shown in Fig. 2) and students had a week to follow the instructions and attempt the tasks using the Excel sheets. The students were then required to complete a quiz based on the tasks they attempted. At this stage, questions were selected depending on the knowledge that needed to be assessed. For example, where a student was to be assessed on their ability to write a correct Excel formula, a dropdown menu in a table was created (see Fig. 4). Students would write a formula in an Excel sheet and then select an answer for each cell with a dropdown menu. Students do not need to enter a value for every cell in a column: if the formula is correct, it will produce the correct answer in every cell, so checking a selection of cells is sufficient. Alternatively, if knowledge is assessed based on the students’ ability to identify a certain point on a graph, students would draw a graph using the Excel sheets and then complete the “drag-and-drop” type question in the quiz, identifying the correct curve or point on the graph (see Fig. 5). The instruction sheet includes step-by-step instructions, making it easier to understand the concepts. The instruction sheets also have references to the adjoining quiz (see Fig. 2). Students are required to complete the tasks using the Excel formulae before attempting the corresponding quiz question.
Fig. 4

A Moodle question where students must enter answers for selected cells using their pre-worked Excel formula

Fig. 5

A Moodle question where students must identify the points by dragging and dropping labels

Both formative and summative quizzes included a range of questions: multiple choice, selecting from a dropdown menu, entering a numerical or short text answer, uploading sketches, drag-and-drop labels, and reflective and detailed descriptive (essay) type questions (as shown in Fig. 6). The percentage of different question formats varied in both summative and formative quizzes (see Fig. 7a, b). The format of each question was chosen based on the concepts/theories being taught in that question. Setting up multiple-choice questions was labour intensive and time-consuming, although the grading was instantaneous. By contrast, essay-type questions were quick to set up but time-consuming to grade. A range of questions was created to aid students’ learning because a combination of different types of questions can be used to improve students’ critical thinking (Ennis, 1993). The variation of question type also optimised the hours required for constructing and grading questions. Student engagement and performance in each type of question were then investigated.
Fig. 6

Examples of question types included in the quizzes

Fig. 7

a Percentages of different types of summative questions. b Percentages of different types of formative questions

Evaluation of the quiz questions

The quizzes have been implemented with four cohorts of students enrolled in the course (n = 169). Formative quizzes were analysed according to the number of times each student attempted each question. For the summative quiz questions, the suitability of each online question was based on psychometric analysis.

Psychometric analysis is a statistical procedure to determine the suitability of the proposed questions for grading purposes based on student responses and their relationship with the rest of the responses (Gómez-Soberón et al., 2013). As formative questions could have unlimited attempts, psychometric analysis was not suitable for analysing these questions; instead, they were analysed according to the number of times each student attempted a question. Psychometric analysis was only used to evaluate summative quizzes as it is an effective tool to assess the suitability of the questions to discriminate between competent and less competent students. The psychometric analysis in this study addressed: (1) Were the quiz questions well-constructed and did they have an appropriate level of difficulty? and (2) Did the questions discriminate between the higher and lower knowledge levels of students? The analysis of the summative quiz was carried out using the Moodle statistics of the facility index (FI) and the discrimination index (DI).

Facility Index

The FI describes the overall difficulty of a question: the index represents the proportion of students who answered the question correctly. In principle, a very low (≤ 5%) or very high (≥ 95%) FI suggests that the question is not useful as an instrument of measurement (Blanco & Ginovart, 2010). The FI is calculated using Eq. 1 (see Appendix). The FI values, based on each student’s score on a question, are interpreted by “Moodle Statistics” (2019a) (see Table 1).
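Although Eq. 1 itself appears only in the Appendix, the FI is straightforward to reproduce outside Moodle as the mean mark on a question expressed as a percentage of its maximum mark. The sketch below is illustrative only; the function name and sample data are not Moodle's own:

```python
# Facility index (FI): mean mark obtained on a question, as a percentage of
# the question's maximum mark. Names and data are illustrative.

def facility_index(scores, max_mark):
    """Return the FI (%) for one question given each student's mark on it."""
    return 100.0 * sum(scores) / (len(scores) * max_mark)

# Example: 20 students answering a question worth 5 marks
scores = [5, 4, 5, 3, 2, 5, 4, 4, 3, 5, 1, 5, 4, 3, 5, 2, 4, 5, 3, 4]
fi = facility_index(scores, max_mark=5)  # 76.0, i.e. "fairly easy" per Table 1
```

Applied to a whole quiz, a loop over this function reproduces the per-question difficulty profile that Moodle reports.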
Table 1 Interpretation of FI (facility index) values of each question (adapted from Moodle Statistics, 2019a)

FI (%)    Interpretation
< 5       Extremely difficult, or something wrong with the question
6–10      Very difficult
11–20     Difficult
21–34     Moderately difficult
35–65     About right for the average student
66–80     Fairly easy
81–89     Easy
90–94     Very easy
95–100    Extremely easy

Discrimination Index

The DI is the correlation between the weighted scores on the question and those on the rest of the test. It indicates how effective the question is at sorting out more able from less able students. A question which is very easy or very difficult cannot discriminate between students of differing ability as most students may get the same result. According to Moodle statistics (Butcher, 2010), the maximum discrimination requires a facility index in the range 30–70% (although such a value is no guarantee of a high discrimination index). The discrimination index is expressed in Eq. 3 (see Appendix). The discrimination index is calculated based on the number of students who completed the quiz. According to the “Moodle statistics” (2019a), DI values should be interpreted as shown in Table 2.
Table 2 Interpretation of DI (discrimination index) values of each question (adapted from Moodle Statistics, 2019a)

DI (%)          Interpretation
50 and above    Very good discrimination
30–50           Adequate discrimination
20–29           Weak discrimination
0–19            Very weak discrimination
Negative        Question probably invalid

To categorise a question as having “very good discrimination”, students who scored high on other sections of the quiz should also have scored high on this question; similarly, students who scored low on other sections of the quiz should also have scored low on this question. Hence, the score for the question and the score for the test should be well correlated.
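The correlation described above can be sketched as a standard product-moment calculation between each student's mark on one question and their mark on the rest of the test. Moodle's exact weighted formula is Eq. 3 in the Appendix, so the function below is an illustrative approximation, not the platform's implementation:

```python
from statistics import mean

def discrimination_index(question_scores, total_scores):
    """DI (%) for one question: Pearson correlation between the question score
    and the rest-of-test score (total minus the question), per student."""
    rest = [t - q for q, t in zip(question_scores, total_scores)]
    mq, mr = mean(question_scores), mean(rest)
    cov = sum((q - mq) * (r - mr) for q, r in zip(question_scores, rest))
    var_q = sum((q - mq) ** 2 for q in question_scores)
    var_r = sum((r - mr) ** 2 for r in rest)
    return 100.0 * cov / (var_q * var_r) ** 0.5

# Example: a 1-mark question that strong students tend to get right
di = discrimination_index([1, 1, 0, 1, 0, 0, 1, 0],
                          [10, 9, 4, 8, 5, 3, 9, 2])
```

In this example the students who answered the question correctly also scored highest on the rest of the test, so the DI falls in the "very good discrimination" band of Table 2; a question answered correctly by everyone regardless of overall performance would score near zero.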

In this study, both FI and DI are used as tools to measure the power of a question to distinguish proficient from weak learners.

Features of online quizzes which improve academics’ productivity

Once the quiz questions have been designed and refined, the features available in Moodle quizzes can streamline the workload of academics. The key features are:
  1. Having a database of resources to teach and assess course content—such as complicated mathematical calculations and course-specific terminology—which can be modified with minimal academic effort
  2. Customised and automated feedback which provides adequate and prompt responses to students
  3. Automated marking which reduces academics’ workload
  4. Randomising and shuffling quiz questions, together with monitoring Moodle logs, which enables academics to address issues of plagiarism more effectively and efficiently.

These features collectively benefit time-poor academics and improve the quality of teaching.

Results and Discussion

Can online quizzes enhance student engagement and performance?

A focus when preparing the quizzes was to improve student engagement. Although the formative questions were not compulsory, 65% of the students attempted all the formative questions at least once (see Fig. 8). A student testimonial reflecting the benefits received through these resources is: “... They are very very helpful and easily understandable... just gives us another chance to understand the example afterwards” [A student in 2017 Course Evaluations]. This level of engagement in the formative quizzes can be explained by the variety of question formats—detailed written and/or audio/video answers provided—and the different levels of complexity of each question. Embedding videos into the Moodle quizzes improved students’ engagement with, and satisfaction in, the activity. The number of views of each embedded video per student ranged from 2 to 6, suggesting that the interactive videos were popular and added to students’ overall satisfaction. At the end of the study period, the interactive videos received positive feedback from students, for example: “The online quizzes and videos are practical and helpful. And allows students to learn in their own time.” [2017 Focus Group Report].
Fig. 8

Analysis of student engagement in formative questions

How can Moodle statistics be used to determine the effectiveness of online quiz questions?

Psychometric analysis was performed to evaluate the summative questions to identify which questions needed to be revised to achieve appropriate targets, for example, increase the overall pass rate or limit the number of students who received high distinctions (HDs). It was imperative to assess a student’s knowledge level as well as distinguish the difference in knowledge levels between students.

The FI was used to maintain the required pass rate using a set of questions that varied from easy to difficult. The distribution of the level of difficulty of all summative questions after the first cohort is shown in Fig. 9. After implementing the quizzes for a cohort of students, the difficulty of each question could be assessed by analysing the FI, aiming for a value between 11% and 89%. These results led to further modification of the assessments if the questions were deemed too easy or too hard, thus optimising the effectiveness of the questions.
Fig. 9

Distribution of level of difficulty of each question, based on FI

Although FI is a useful measurement for determining the level of difficulty of each question, making questions easier or more difficult can only control the number of students who pass or fail. This measure cannot be used effectively to categorise or determine students’ varying levels of knowledge from novice to expert. DI was a more effective scale for measuring the ability of each question to “discriminate” between students, with a “very good discrimination” level being 50% or above. Questions with weak discrimination (< 30%) were unsuitable as they may result in students receiving final grades that do not reflect their level of competency. The percentage distribution of the DI analysis of all the summative questions after the first cohort is shown in Fig. 10. Questions with a negative DI, which signifies an invalid question with an error, were removed or revised for the following cohort of students.
Fig. 10

Level of discrimination of summative question based on DI

FI and DI were analysed simultaneously when assessing the suitability of each question. The FI and DI of the 61 summative questions are shown in Fig. 11. FI only indicates whether the questions are easy or difficult, based on the number of students who answered the question correctly. For example, Fig. 11 shows that questions 1, 5 and 6 have the same FI of 85%, meaning all three questions were “easy”. However, the DIs of these questions were 40%, 9% and 51%, representing adequate, very weak and very good discrimination respectively (see Table 2). Thus an “easy” question can also have good “discrimination” features, meaning that most students who answered this question correctly also answered other questions correctly; similarly, students who did not get it correct did not get other questions correct either. However, some “easy” questions with the same FI value can have “very weak” discrimination, meaning that most students answered this question correctly regardless of how they performed in other parts of the test. If the focus is on separating students who deserve high distinctions (HD) from distinctions (D), and D from Credit (C), then “easy” questions with low discrimination are not suitable as most students can answer such questions correctly regardless of their ability. In such a scenario, these questions can be removed or revised for future cohorts. Therefore, relying solely on the FI, or the “easiness” of a question, is not appropriate: FI does not indicate whether students who scored well on a question also scored well on other parts of the test. Nevertheless, “very good” discrimination can be achieved with “fairly easy” questions, for example questions 9, 38 and 47 (see Fig. 11). On the other hand, very difficult questions in this course always had “weak” discrimination, for example questions 7, 16 and 48 (see Fig. 11). Such questions are difficult for every student, regardless of their performance in other parts of the test.
By contrast, some questions (questions 28, 41 and 50 in Fig. 11) show negative discrimination values, indicating an inverse correlation between performance on the question and performance on the rest of the test. This usually occurs when a question contains an error or is over-simplified.
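Taken together, the FI and DI thresholds quoted in this section suggest a simple triage rule for each summative question. The function below is a hypothetical sketch of that decision logic, not part of Moodle or the study's actual workflow; the action labels are illustrative:

```python
# Hypothetical triage of a summative question from its FI and DI (%), using the
# thresholds quoted in the text: FI outside 11-89% means the question is too
# easy or too hard, DI below 30% means weak discrimination, and a negative DI
# flags a probably invalid question.

def triage(fi, di):
    """Suggest an action for a summative question given its FI and DI (%)."""
    if di < 0:
        return "remove or revise (probably invalid)"
    if fi < 11 or fi > 89:
        return "revise (too easy or too hard)"
    if di < 30:
        return "revise (weak discrimination)"
    return "keep"
```

For example, `triage(85, 9)` flags an easy question with very weak discrimination for revision, while `triage(70, 45)` keeps a fairly easy but well-discriminating question.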
Fig. 11

Analysis of FI and DI when evaluating the questions for revision purposes

Further analysis of FI and DI showed no significant relationship between question types (shown in Fig. 7a) and their DI values, except for the “reflective or descriptive” type of questions. These questions mostly had a DI higher than 50%, indicating the importance of including such questions in online quizzes. Investigating FI together with DI provided useful data to evaluate each question and correct/modify it as necessary to provide all students with a fair grade, thus optimising the effectiveness of the quizzes.

What benefits do online quizzes have for academics?

The online quizzes have numerous practical benefits for academics. The first benefit is that the quiz questions were effective in the teaching and learning of a complex applied mathematics course. The quiz questions were designed using various multimodal resources, with embedded videos enabling students to visualise real-life scenarios as a preliminary to the necessary problem-solving. Carefully staging questions with necessary repetition ensured the development of course concepts and skills. Once the questions have been designed, they can be refined in an ongoing manner with minimal academic effort.

Another benefit of the online quizzes was the customised and automated feedback. The relevant immediate automated feedback for each question explained common mistakes made by students and significant time was saved by not providing repetitive feedback for common mistakes. The staged online activities significantly reduced the time that academics spent on explaining the concepts. Instead, they could spend this saved time with students during weekly helpdesk sessions via a virtual classroom, focusing on questions from students, rather than explaining the basic concepts. This change in approach from face-to-face teaching to online learning provided lecturers with vital one-on-one time with students focusing on content that needed more clarification. In saving academics time, it was therefore also cost-effective.

Similarly, automated marking in online quizzes saved significant time for academics, particularly for the Excel-based assignments. Before the introduction of these online quizzes, students used Excel formulae for calculations and tutors used onerous answer sheets as a guide when grading. With the online Moodle quiz questions, students still have to practise how to use Excel for iterative types of questions, but markers do not have to use the lengthy Excel sheets. The Moodle quizzes reduced marking time as the results were calculated instantaneously for numerical and short text-type answers. This approach allowed time to be re-directed into thorough marking of reflective and descriptive questions. Tutors marked these questions and provided customised feedback, ensuring that students did not feel isolated or disenfranchised from the University.

The online quizzes also supported academics in combating plagiarism. Under previous marking practices, especially in large classes, Excel formulas could be copied without being detected by text-matching software. This issue was overcome by using Moodle's advanced features when developing the online quizzes, such as randomising questions, shuffling answers and checking logs. Online quizzes have therefore reduced academics' workloads while creating an active and engaged learning environment for students.

Conclusion

This paper discusses the comprehensive ability of Moodle quizzes to transfer and assess engineering knowledge, based on a study conducted in an undergraduate civil engineering course at the University of South Australia. The paper explains the approaches used when preparing online materials to enhance student engagement and performance. It highlights how Moodle statistics, by calculating the facility index (FI) and discrimination index (DI), can be used to measure the effectiveness of quiz questions and award students a fair grade. It also discusses the benefits for academics of using features within Moodle quizzes.

The study found that the Moodle quizzes assessed the competencies of students at various stages of a study period through automated marking and easily extractable statistics. These carefully prepared quizzes catered to students with different levels of knowledge within the discipline. The study identified FI and DI as indices from which students' educational performance can be inferred and predicted, by detecting whether the proposed questions are appropriate in the level of knowledge assessed, their degree of difficulty, and their ability to discriminate between different levels of conceptual skill. The study showed that FI and DI must be used together to obtain a reliable indication of which questions contribute to ultimate course success. By examining FI and DI, educators can make decisions about question selection based on their intended goal (e.g. whether they want to discriminate between students, or to increase the pass rate without decreasing the quality of the course). To increase the pass rate without sacrificing course quality, questions with high FI (≥ 66%) and high DI (≥ 30%) should be selected. Questions that require modification or additional instruction can be identified quickly by this type of analysis.
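Moodle computes these statistics internally (see its quiz statistics documentation), but the underlying ideas can be sketched simply: FI is the average mark on a question as a percentage of its maximum mark, and a common classical approximation of DI compares the performance of the top and bottom thirds of students ranked by total quiz score. The following is an illustrative sketch only, not Moodle's exact calculation; the function names and sample data are hypothetical.

```python
def facility_index(item_scores, max_mark):
    """FI: average mark on one question, as a percentage of its maximum mark."""
    return 100.0 * sum(item_scores) / (len(item_scores) * max_mark)

def discrimination_index(item_scores, total_scores, max_mark):
    """DI sketch: difference between the top and bottom thirds of students
    (ranked by total quiz score), as a percentage of the maximum possible gap."""
    # Pair each student's question mark with their total, then rank by total.
    ranked = [item for _, item in sorted(zip(total_scores, item_scores),
                                         key=lambda pair: pair[0])]
    n = len(ranked) // 3
    bottom, top = ranked[:n], ranked[-n:]
    return 100.0 * (sum(top) - sum(bottom)) / (n * max_mark)

# Hypothetical data: marks for one 1-mark question and quiz totals for 9 students.
item = [1, 1, 0, 1, 0, 1, 1, 0, 1]
totals = [9, 8, 3, 7, 2, 6, 8, 4, 7]

fi = facility_index(item, max_mark=1)          # 6 of 9 correct -> about 66.7%
di = discrimination_index(item, totals, max_mark=1)
```

Under this sketch, a question answered correctly mainly by high-scoring students yields a high DI, while one answered equally well by weak and strong students yields a DI near zero, which matches the interpretation used in the selection guideline above.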

The quality of these quizzes was enhanced by using a variety of question formats with multimodal instructions, followed by prompt feedback. The variety and combination of quiz questions increased student engagement and satisfaction because they catered to students' different knowledge levels. The variety of question formats also helped educators balance the time required to construct and grade a quiz (e.g. the balance between multiple-choice and descriptive questions). This paper highlights ways to reduce grading time (e.g. not having to manually mark Excel-based assignments) and to reallocate time to marking other assessments (e.g. reflective and descriptive assignments).

One limitation of this study is that the results are based on only one course, albeit with four cohorts. Results from several similar applied mathematics courses would need to be analysed to generalise the findings. Of the various question types used in this study, most reflective and descriptive questions had a high DI, indicating the importance of including these question types in an applied mathematics course. All the other question types had a range of DI values. However, the reasons why some questions in the same category had different DI values, and the features of those questions, were not investigated in this study. Further analysis of individual questions and their DI values would be a worthwhile focus for future research.

This study provides numerous methods that can be applied to develop effective and interactive online quizzes. The principles discussed in this paper are not limited to Moodle and could be applied to other LMSs that support online quizzes. In addition, some methods presented in this paper (e.g. how to calculate and interpret FI and DI) can be applied to any online or offline quiz. Overall, the approach detailed in this paper is concise and flexible for online resource development. It will enhance mandated and non-mandated digital teaching approaches and will lead to optimised assessments that provide a dynamic and valued student learning experience.

Acknowledgements

The authors would like to thank Dr. Guna Hewa for her initial work on this study and Andrea Duff for her editorial work.

Authors’ contributions

All authors contributed to the writing of the paper. SHPWG planned the research, conducted the statistical analysis and designed the online tools. JRA, MB and EJS conducted the content analysis and provided pedagogical data. The names are in order of the amount of contribution given. All authors read and approved the final manuscript.

Funding

This research received no specific grant from any funding agency.

Ethics approval and consent to participate

This research was approved by the Human Ethics Committee, University of South Australia, under reference number 201281.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.


Copyright information

© The Author(s). 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Natural and Built Environments, University of South Australia, Mawson Lakes Campus, Mawson Lakes, Australia
  2. Research and Innovation Services, University of South Australia, Mawson Lakes Campus, Mawson Lakes, Australia
