Semi-automating the Marking of a Java Programming Portfolio Assessment: A Case Study from a UK Undergraduate Programme

  • Chapter
  • First Online:
Higher Education Computer Science

Abstract

Recent changes in Higher Education, including growing student numbers and higher student-to-staff ratios, mean that assessments need to be marked quickly and consistently, whilst also benefitting students’ learning experience. This paper first introduces a portfolio assessment adopted three years ago on a software development module with the aim of improving student engagement, and then discusses the motivation for attempting to automate the marking of this assessment. The paper then focuses on how the JUnit testing framework was used to achieve this, examining the challenges faced along the way and how each was addressed. The end result was considered successful, with a significant reduction in marking time and consistency between markers effectively guaranteed. Students also benefitted from the ongoing feedback provided by the unit tests they were given during the assessment. In future, such an approach offers the potential to assess students more frequently, giving them more regular feedback on their progress and further helping them to engage with their studies.
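The chapter itself describes how JUnit tests were written against the behaviour required by the assignment brief and then run against each submission. As a rough, hypothetical sketch of that general idea (the BankAccount class, its methods and the test cases below are illustrative assumptions, not the authors' actual marking tests), a JUnit 4 test class of the following form could be given to students for ongoing feedback and later executed by markers:

import static org.junit.Assert.*;

import org.junit.Test;

// Hypothetical illustration only: the BankAccount specification and these
// tests are not taken from the chapter. A minimal stand-in implementation
// is included so the example compiles; in practice the class under test
// would be the student's submission.
class BankAccount {
    private double balance;

    public void deposit(double amount) {
        if (amount > 0) {
            balance += amount;
        }
    }

    public boolean withdraw(double amount) {
        // Reject invalid or overdrawn withdrawals.
        if (amount <= 0 || amount > balance) {
            return false;
        }
        balance -= amount;
        return true;
    }

    public double getBalance() {
        return balance;
    }
}

public class BankAccountMarkingTest {

    @Test
    public void newAccountStartsWithZeroBalance() {
        assertEquals(0.0, new BankAccount().getBalance(), 0.001);
    }

    @Test
    public void depositIncreasesBalance() {
        BankAccount account = new BankAccount();
        account.deposit(50.0);
        assertEquals(50.0, account.getBalance(), 0.001);
    }

    @Test
    public void overdrawnWithdrawalIsRejected() {
        BankAccount account = new BankAccount();
        account.deposit(20.0);
        assertFalse(account.withdraw(100.0));
        assertEquals(20.0, account.getBalance(), 0.001);
    }
}

Because every submission is run against identical tests, marks derived from the pass/fail results (for example, a weighting per passing test) are consistent across markers, and students who run the same tests while developing receive the same feedback that markers will later rely on.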

Author information

Corresponding author

Correspondence to Luke Attwood.

Copyright information

© 2018 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Attwood, L., Carter, J. (2018). Semi-automating the Marking of a Java Programming Portfolio Assessment: A Case Study from a UK Undergraduate Programme. In: Carter, J., O'Grady, M., Rosen, C. (eds) Higher Education Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-98590-9_11

  • DOI: https://doi.org/10.1007/978-3-319-98590-9_11

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-98589-3

  • Online ISBN: 978-3-319-98590-9

  • eBook Packages: Computer Science, Computer Science (R0)
