Towards Efficient Teacher Assisted Assignment Marking Using Ranking Metrics

  • Nils Ulltveit-Moe
  • Terje Gjøsæter
  • Sigurd Assev
  • Halvard Øysæd
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 739)


This paper describes a tool, with a supporting methodology, for efficient teacher-assisted marking of open assignments based on student answer ranking metrics. It includes a methodology for designing tasks for markability. This improves marking efficiency and reduces the teacher's cognitive strain during marking. It also makes it easy to give students feedback on common pitfalls and misconceptions, which improves both the students' learning outcomes and the teacher's productivity by reducing the time needed to mark open assignments. An advantage of the method is that it is language agnostic and largely agnostic to the discipline of the course being assessed. The ranking metrics also provide implicit plagiarism detection.
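The abstract does not spell out the ranking metric, but given the "Entropy" keyword, the core idea can be illustrated with a minimal sketch: score each student answer by the Shannon entropy of its word distribution and sort by that score, so similar answers cluster for one-pass marking and identical submissions (possible plagiarism) land adjacent with equal scores. All function names here are hypothetical and this is not the paper's actual implementation.

```python
from collections import Counter
import math


def shannon_entropy(text: str) -> float:
    """Shannon entropy (bits per token) of the answer's word distribution."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    n = len(tokens)
    counts = Counter(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


def rank_answers(answers: dict) -> list:
    """Sort answers by entropy so that similar answers end up next to each
    other in the marking queue; identical answers get identical scores."""
    scored = ((student, shannon_entropy(text)) for student, text in answers.items())
    return sorted(scored, key=lambda pair: pair[1])
```

In this toy ordering, a highly repetitive answer scores near zero, a varied answer scores higher, and two verbatim copies receive exactly the same score, giving the implicit duplicate detection the abstract mentions.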


Entropy · Cross assignment marking · Learning Management Systems · Efficient teaching methods



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Information and Communication Technology, University of Agder, Grimstad, Norway
  2. NC-Spectrum, Kviteseid, Norway
