Abstract
Recent changes in Higher Education, including larger student numbers and higher student-to-staff ratios, mean that assessments need to be marked quickly and consistently, whilst also benefitting the students' learning experience. This paper first introduces a portfolio assessment adopted three years ago on a software development module with the aim of improving student engagement, and then discusses the inspiration behind attempting to automate the marking of this assessment. The paper then focuses on how the JUnit testing framework was used to achieve this, dissecting the challenges faced along the way and how each was addressed. The end result was considered successful, with a significant reduction in marking time and guaranteed high consistency between markers. Students also benefitted through ongoing feedback from the unit tests they were provided during the assessment. In future, such an approach offers the potential to assess students more frequently, giving them more regular feedback on their progress and further helping them to engage with their studies.
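The chapter's actual JUnit harness is not reproduced here, but the general idea behind test-based marking — exercising a student submission through a fixed set of checks and awarding marks per passing check — can be sketched in plain Java. Everything below (the `Calculator` class standing in for a student submission, the check names, and the mark weightings) is a hypothetical illustration under assumed requirements, not the authors' marking scheme:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Hypothetical stand-in for a student's submitted class.
class Calculator {
    int add(int a, int b) { return a + b; }
    int divide(int a, int b) { return a / b; } // integer division
}

public class MarkingHarness {
    // One named check worth a fixed number of marks.
    record Check(String name, int marks, BooleanSupplier test) {}

    public static void main(String[] args) {
        Calculator submission = new Calculator();

        // Illustrative checks and weightings, not the module's real rubric.
        List<Check> checks = new ArrayList<>();
        checks.add(new Check("add handles positives", 2,
                () -> submission.add(2, 3) == 5));
        checks.add(new Check("add handles negatives", 2,
                () -> submission.add(-2, -3) == -5));
        checks.add(new Check("divide truncates towards zero", 1,
                () -> submission.divide(7, 2) == 3));

        int awarded = 0, total = 0;
        for (Check c : checks) {
            total += c.marks();
            boolean passed;
            try {
                passed = c.test().getAsBoolean();
            } catch (RuntimeException e) {
                passed = false; // a crash in student code simply fails the check
            }
            if (passed) awarded += c.marks();
            System.out.println((passed ? "PASS " : "FAIL ") + c.name());
        }
        System.out.println("Mark: " + awarded + "/" + total);
    }
}
```

In a real JUnit setup, each check would be an `@Test` method with assertions, and the marker would map passed tests to marks; wrapping each check in a try/catch, as above, keeps one broken method from sinking the whole submission.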
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this chapter
Attwood, L., Carter, J. (2018). Semi-automating the Marking of a Java Programming Portfolio Assessment: A Case Study from a UK Undergraduate Programme. In: Carter, J., O'Grady, M., Rosen, C. (eds) Higher Education Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-98590-9_11
Print ISBN: 978-3-319-98589-3
Online ISBN: 978-3-319-98590-9