Abstract
The automated static and dynamic assessment of programs makes it practical to scale the learning opportunities of large student classes through regular assessment of programming assignments. Such assessments are traditionally specified in tool-specific languages that are closely tied to the functionality and implementation of a particular tool. This paper surveys existing specification languages for assessments and proposes a generic and extensible domain-specific language for specifying assessments of programming assignments.
© 2016 Springer International Publishing AG
Cite this paper
Solms, F., Pieterse, V. (2016). Towards a Generic DSL for Automated Marking Systems. In: Gruner, S. (eds) ICT Education. SACLA 2016. Communications in Computer and Information Science, vol 642. Springer, Cham. https://doi.org/10.1007/978-3-319-47680-3_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-47679-7
Online ISBN: 978-3-319-47680-3
eBook Packages: Computer Science (R0)