
Group Optimization to Maximize Peer Assessment Accuracy Using Item Response Theory

Conference paper in: Artificial Intelligence in Education (AIED 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10331)

Abstract

As an assessment method grounded in a social constructivist approach, peer assessment has become popular in recent years. When the number of learners grows large, as in MOOCs, peer assessment is often conducted by dividing learners into multiple groups to reduce each learner's assessment workload. In this case, however, a difficulty remains: the assessment accuracy for learners in each group depends on the assigned raters. To solve that problem, this study proposes a group optimization method that maximizes peer assessment accuracy based on item response theory, formulated as an integer programming problem. Experimental results, however, showed that the proposed method does not necessarily achieve higher accuracy than random group formation. Therefore, we further propose an external rater selection method that assigns a few outside-group raters to each learner. Simulation and actual-data experiments demonstrate that introducing external raters using the proposed method improves peer assessment accuracy considerably.
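The core idea of the group optimization step — partitioning learners so that every group's assigned raters yield high assessment accuracy — can be illustrated with a toy max-min assignment. This is a sketch only, not the authors' formulation: the scalar `reliability` scores, the summed-reliability objective, and the brute-force search below stand in for the paper's IRT-based accuracy measure and its integer-programming solver.

```python
import itertools

# Hypothetical rater reliability scores, one per learner. In the paper's
# IRT model each rater has several estimated parameters; a single score
# stands in for them here.
reliability = {"A": 0.9, "B": 0.4, "C": 0.8, "D": 0.3, "E": 0.7, "F": 0.5}


def min_group_accuracy(groups):
    """Proxy objective: the worst group's summed rater reliability.

    Maximizing this balances assessment accuracy across groups, in the
    spirit of optimizing the accuracy of the worst-off learners.
    """
    return min(sum(reliability[m] for m in g) for g in groups)


def best_partition(learners, group_size):
    """Brute-force search over partitions into equal-sized groups.

    At realistic scale this search space explodes; an integer
    programming solver would be used instead, as in the paper.
    """

    def partitions(rest):
        if not rest:
            yield []
            return
        first = rest[0]
        # Fix the first remaining learner in a group to avoid
        # enumerating the same partition in different orders.
        for combo in itertools.combinations(rest[1:], group_size - 1):
            group = (first,) + combo
            remaining = [x for x in rest if x not in group]
            for tail in partitions(remaining):
                yield [group] + tail

    best, best_val = None, float("-inf")
    for p in partitions(sorted(learners)):
        val = min_group_accuracy(p)
        if val > best_val:
            best, best_val = p, val
    return best, best_val


# Two groups of three; the max-min value cannot exceed half the total
# reliability (1.8 for this toy data).
groups, value = best_partition(list(reliability), 3)
```

The same toy setup also hints at why external raters help: if no partition can give every group a reliable rater, assigning one or two high-reliability raters from outside the group to each learner raises the floor without re-forming the groups.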



Author information

Corresponding author: Masaki Uto

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Uto, M., Thien, N.D., Ueno, M. (2017). Group Optimization to Maximize Peer Assessment Accuracy Using Item Response Theory. In: André, E., Baker, R., Hu, X., Rodrigo, M., du Boulay, B. (eds) Artificial Intelligence in Education. AIED 2017. Lecture Notes in Computer Science, vol. 10331. Springer, Cham. https://doi.org/10.1007/978-3-319-61425-0_33


  • DOI: https://doi.org/10.1007/978-3-319-61425-0_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-61424-3

  • Online ISBN: 978-3-319-61425-0

  • eBook Packages: Computer Science (R0)
