Abstract
This paper presents a retrospective analysis of students’ use of self-regulated learning strategies while using an educational technology that connects physical and digital learning spaces. A classroom study was carried out in a Data Structures & Algorithms course offered by the School of Computer Science. Students’ reviewing behaviors were logged, and the associated learning impacts were analyzed by monitoring their progress throughout the course. The study found that students who improved their performance spent more time and effort reviewing formal assessments, particularly their mistakes. These students also reviewed consistently throughout the semester. In contrast, students who fell behind in class reviewed their graded assessments ineffectively, focusing mostly on what they already knew rather than on their misconceptions.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Paredes, Y.V., Azcona, D., Hsiao, I.H., Smeaton, A. (2018). Learning by Reviewing Paper-Based Programming Assessments. In: Pammer-Schindler, V., Pérez-Sanagustín, M., Drachsler, H., Elferink, R., Scheffel, M. (eds) Lifelong Technology-Enhanced Learning. EC-TEL 2018. Lecture Notes in Computer Science, vol 11082. Springer, Cham. https://doi.org/10.1007/978-3-319-98572-5_39
Print ISBN: 978-3-319-98571-8
Online ISBN: 978-3-319-98572-5