Abstract
Correcting open-answer questions is a heavy task since, in principle, all the students' answers have to be graded. In this paper we give evidence that the teacher's workload on open-question questionnaires can be reduced by a module managing a rough constraint-based model of the students' decisions in a peer-assessment task. By modeling students' decisions we relate their competence on the topic (K) to their ability to judge (J) others' work and to the correctness (C) of their own (open) answers. The network of constraints and relations established among these variables through the students' choices allows us to constrain the set of possible values of the answers' correctness (C). Our system suggests which subset of the answers the teacher should correct in order to narrow the set of hypotheses and produce a complete set of grades. The model is quite simple, yet sufficient to show that the number of required corrections can be as small as half of the initial answers. To support this result, we report on an extensive set of simulated experiments that answer three research questions: 1) is the described method able to deduce the whole set of grades with few corrections? 2) which set of parameters is best for running actual experiments? 3) is the model "robust" with respect to simulations with a high probability of random data?
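To make the propagation idea concrete, the following is a minimal illustrative sketch, not the paper's actual constraint model: it assumes (as a crude stand-in for the J variable) that a judge whose peer grades agree exactly with every teacher-corrected answer is reliable, so that judge's remaining grades pin down the correctness C of answers the teacher has not corrected. The function name `propagate` and the grade levels are our own illustrative choices.

```python
# Illustrative sketch only -- NOT the paper's actual model.  Teacher
# corrections of a few answers propagate through peer grades and narrow
# the set of possible correctness values C of the remaining answers.

def propagate(peer_grades, teacher_C, levels=(0, 1, 2, 3)):
    """peer_grades: {(judge, answer): grade}; teacher_C: {answer: grade}.
    Returns {answer: set of still-possible correctness values}."""
    answers = {a for (_, a) in peer_grades}
    possible = {a: set(levels) for a in answers}
    for a, c in teacher_C.items():       # teacher corrections fix C exactly
        possible[a] = {c}

    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for judge in {j for (j, _) in peer_grades}:
            graded = [(a, g) for (j, a), g in peer_grades.items() if j == judge]
            decided = [(a, g) for a, g in graded if len(possible[a]) == 1]
            # Judge proven exact on every decided answer => trust the rest.
            if decided and all(possible[a] == {g} for a, g in decided):
                for a, g in graded:
                    if g in possible[a] and possible[a] != {g}:
                        possible[a] = {g}
                        changed = True
    return possible
```

For example, with one judge who graded two answers, correcting the first answer (and finding the judge's grade of it exact) decides the second answer without further teacher work; a judge contradicted by a correction leaves the other answers undecided.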
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Sterbini, A., Temperini, M. (2012). Dealing with Open-Answer Questions in a Peer-Assessment Environment. In: Popescu, E., Li, Q., Klamma, R., Leung, H., Specht, M. (eds) Advances in Web-Based Learning - ICWL 2012. ICWL 2012. Lecture Notes in Computer Science, vol 7558. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33642-3_26
DOI: https://doi.org/10.1007/978-3-642-33642-3_26
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-33641-6
Online ISBN: 978-3-642-33642-3