Abstract
Peer assessment, in which people review the work of their peers and have their own work reviewed in turn, is useful for grading homework, reviewing academic papers, and similar tasks. In conventional peer assessment systems, works are allocated to reviewers before the assessment begins; consequently, if people drop out (abandoning their reviews) during the assessment period, an imbalance arises between the number of works a person reviews and the number of peers who review that person's work. As the total imbalance grows, people who diligently complete their reviews may receive too few reviews in return and be discouraged from participating in future peer assessments. In this study, we therefore adopt a new adaptive allocation approach, in which works are allocated to a reviewer only when the reviewer requests them, and we propose an allocation algorithm that reduces the total imbalance. To show the effectiveness of the proposed algorithm, we prove an upper bound on the total imbalance it yields. In addition, we experimentally compare the proposed adaptive allocation with existing nonadaptive allocation methods.
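The adaptive idea can be illustrated with a minimal sketch: each time a participant requests a work to review, a greedy rule hands out the eligible submission that has received the fewest reviews so far. This is an illustrative assumption, not necessarily the exact algorithm proposed in the paper; the names `adaptive_allocate`, `review_counts`, and `reviewed_by` are hypothetical.

```python
def adaptive_allocate(requester, submissions, review_counts, reviewed_by):
    """On each review request, assign the peer work that currently has the
    fewest reviews, excluding the requester's own work and any work the
    requester has already reviewed. Returns None if nothing is eligible."""
    candidates = [
        s for s in submissions
        if s != requester and s not in reviewed_by[requester]
    ]
    if not candidates:
        return None
    # Greedy balancing: serve the submission with the fewest reviews so far,
    # which keeps the per-work review counts as even as possible.
    target = min(candidates, key=lambda s: review_counts[s])
    review_counts[target] += 1
    reviewed_by[requester].add(target)
    return target
```

Because assignments happen only at request time, a dropout who never requests a work simply never receives one, and the remaining requests are spread over the works that still need reviews.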
Ohashi, H., Asano, Y., Shimizu, T., Yoshikawa, M. (2019). Give and Take: Adaptive Balanced Allocation for Peer Assessments. In: Du, D.-Z., Duan, Z., Tian, C. (eds.) Computing and Combinatorics. COCOON 2019. Lecture Notes in Computer Science, vol. 11653. Springer, Cham. https://doi.org/10.1007/978-3-030-26176-4_38
Print ISBN: 978-3-030-26175-7
Online ISBN: 978-3-030-26176-4