Abstract
Conducting assessments is necessary to evaluate student performance. Online tests offer a scalable alternative to traditional tests administered by examiners. However, online tests lack the interactivity of an examiner: they mostly take the form of self-correcting tests that expect students to answer all available questions in a survey-like manner, with feedback given at the end at best. In this work, we present the implementation and evaluation of social bots capable of conducting online tests. These bots are implemented as chatbots that can access and evaluate existing tests from learning management systems. With these bots, we enhance mobile learning support: students can take assessments anytime, anywhere using their favorite messenger application, such as Rocket.Chat, Slack, or Telegram. Furthermore, the activity data from the chat flows into learning record stores, where it can be aggregated and visualized. These types of bots can be used as additional learning opportunities in university courses. Instructors continue to create their tests in their accustomed environment and make them automatically accessible to the bot, so students can take a quiz either in their learning management system or in a familiar chat environment. Our evaluation shows that assessment chatbots are an attractive and accessible self-assessment opportunity for students on mobile devices.
Acknowledgments
The authors would like to thank the German Federal Ministry of Education and Research (BMBF) for their kind support within the project “Personalisierte Kompetenzentwicklung durch skalierbare Mentoringprozesse” (tech4comp) under the project id 16DHB2110.
Appendix
A Extracting Content of Quizzes From Moodle
Moodle currently provides a variety of RESTful API functions that allow content to be extracted. To extract the quiz content, the following API calls are made with JSON as the content format (in calling order; a sketch of the sequence follows the list):
1. core_course_get_contents: Used to get the quizzes for a given course.
2. mod_quiz_start_attempt: Used to start a quiz on the Moodle platform.
3. mod_quiz_process_attempt: Used to stop the quiz attempt.
4. mod_quiz_get_attempt_review: Used to get the HTML code of the review page.
Note that each call requires an authentication token called wstoken, which is provided by an administrator of the Moodle instance. The first function call returns every activity or resource contained in a course, based on the given course id. The Assessment Handler then uses the response to determine the id of the chosen quiz topic. The second function call chooses and starts a quiz attempt on the Moodle platform. As wstokens are linked to a specific account, the attempt starts under the respective account. The last two calls stop the started attempt and retrieve the HTML code of the final review page of the quiz. This review page contains every question in the quiz, the question types, the corresponding correct answers, the optional feedback, and the marks for each question. The jsoup library was used to parse the retrieved HTML in Java and extract the quiz information using the DOM methods provided by the library, as sketched below.
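As an illustration of the parsing step, the following is a small jsoup sketch. The CSS class names used as selectors (que, qtext, rightanswer, grade) reflect common Moodle review-page markup; they are assumptions here and may differ between Moodle versions and themes.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

// Sketch of extracting question data from the review page HTML with jsoup.
public class ReviewPageParser {

    public static void printQuestions(String reviewHtml) {
        Document doc = Jsoup.parse(reviewHtml);

        // Each question is assumed to be wrapped in a div with class "que";
        // an additional class encodes the question type, e.g. "multichoice".
        for (Element question : doc.select("div.que")) {
            String type = question.classNames().stream()
                    .filter(c -> !c.equals("que"))
                    .findFirst().orElse("unknown");

            String text   = question.select(".qtext").text();       // question text
            String answer = question.select(".rightanswer").text(); // correct answer
            String marks  = question.select(".grade").text();       // marks information

            System.out.printf("[%s] %s -> %s (%s)%n", type, text, answer, marks);
        }
    }
}
```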
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Neumann, A.T., Conrardy, A.D., Klamma, R. (2021). Supplemental Mobile Learner Support Through Moodle-Independent Assessment Bots. In: Zhou, W., Mu, Y. (eds) Advances in Web-Based Learning – ICWL 2021. ICWL 2021. Lecture Notes in Computer Science, vol. 13103. Springer, Cham. https://doi.org/10.1007/978-3-030-90785-3_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-90784-6
Online ISBN: 978-3-030-90785-3