Teacher Supported Peer Evaluation Through OpenAnswer: A Study of Some Factors

  • Maria De Marsico
  • Andrea Sterbini
  • Marco Temperini (corresponding author)
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 739)

Abstract

In the OpenAnswer system, grades for the answers to open-ended questions given to a class of students can be computed from the students' peer evaluations and from the teacher's grading work, performed on a subset of the answers. Here we analyze the system's performance, expressed as its capability to infer correct grades from a limited amount of grading work by the teacher. Since this performance may well depend on how several aspects of the system are defined and valued (their valorization), we analyze such alternative choices, with the aim of identifying those that yield better system behavior. The factors we investigate are related to the Bayesian framework underpinning OpenAnswer. In particular, we examine the different ways to define the probability distributions of key variables, the conditional probability tables, and the methods that map our statistical variables onto usable grades. Moreover, we analyze the relationship between the two main variables that express the knowledge possessed by a student and her/his peer-assessment skill. By exploring alternative configurations of the system's parameters, we can conclude that Knowledge is in general more difficult than Assessment. The way such a (not astonishing) conclusion is reached also provides quantitative evidence for Bloom's ranking.
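
To make the factors above concrete, the following minimal Python sketch shows the kind of Bayesian computation involved. It is an illustration under assumptions of ours, not the actual OpenAnswer model: the six-level grade scale, the Gaussian-shaped distributions, and all names (LEVELS, cpt_j_given_k, likelihood, to_grade) are hypothetical choices made for the example.

    import numpy as np

    LEVELS = np.arange(1, 7)   # six grade levels, 1..6 (assumed scale)

    # Prior over a student's Knowledge K: uniform here; the shape of this
    # distribution is one of the alternative choices under study.
    p_k = np.full(len(LEVELS), 1.0 / len(LEVELS))

    # CPT P(J | K): peer-assessment skill J conditioned on knowledge K.
    # Row k gives the distribution of J for a student whose knowledge is k;
    # the Gaussian shape is an assumption made for this sketch.
    def cpt_j_given_k(spread=1.0):
        d = (LEVELS[None, :] - LEVELS[:, None]).astype(float)
        cpt = np.exp(-d ** 2 / (2.0 * spread ** 2))
        return cpt / cpt.sum(axis=1, keepdims=True)

    # Likelihood P(g | K = k): how probable a peer grade g is for an answer
    # whose true level is k (again a discretized Gaussian, by assumption).
    def likelihood(g, spread=1.2):
        return np.exp(-(g - LEVELS).astype(float) ** 2 / (2.0 * spread ** 2))

    # Posterior over K after observing a set of peer grades, treated as
    # conditionally independent given K.
    def posterior_k(grades):
        post = p_k.copy()
        for g in grades:
            post = post * likelihood(g)
        return post / post.sum()

    # Map the statistical variable onto a usable grade: here the posterior
    # expectation; the median or the mode are the other obvious mappings.
    def to_grade(post):
        return float(np.dot(post, LEVELS))

    post = posterior_k([4, 5, 4])    # three peers graded the answer
    p_j = post @ cpt_j_given_k()     # belief on J induced through the CPT
    print("posterior over K:", np.round(post, 3))
    print("belief over J:  ", np.round(p_j, 3))
    print("mapped grade:   ", round(to_grade(post), 2))

Each ingredient of the sketch (the prior's shape, the form and spread of the conditional probability tables, and whether the posterior is mapped to a grade via its expectation, median, or mode) corresponds to one of the alternative valorizations whose effect on grade inference is compared in the paper.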

Keywords

Peer assessment · Open-answer questions · Automatic grade prediction

References

  1. Anderson, L.W., Krathwohl, D.R. (eds.): A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Allyn and Bacon, Boston (2000)
  2. Anderson, J.R., Corbett, A.T., Koedinger, K.R., Pelletier, R.: Cognitive tutors: lessons learned. J. Learn. Sci. 4(2), 167–207 (1995)
  3. Bloom, B.S., Engelhart, M.D., Furst, E.J., Hill, W.H., Krathwohl, D.R.: Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. David McKay, New York (1956)
  4. Castellanos-Nieves, D., Fernández-Breis, J., Valencia-García, R., Martínez-Béjar, R., Iniesta-Moreno, M.: Semantic Web technologies for supporting learning assessment. Inf. Sci. 181(9) (2011)
  5. Cho, K., MacArthur, C.: Student revision with peer and expert reviewing. Learn. Instr. 20(4), 328–338 (2010)
  6. Chung, H., Graf, S., Lai, K.R., Kinshuk: Enrichment of peer assessment with agent negotiation. IEEE Trans. Learn. Technol. 4(1), 35–46 (2011)
  7. Conati, C., Gertner, A., VanLehn, K.: Using Bayesian networks to manage uncertainty in student modeling. User Model. User-Adap. Inter. 12, 371–417 (2002)
  8. De Marsico, M., Sterbini, A., Temperini, M.: The definition of a tunneling strategy between adaptive learning and reputation-based group activities. In: Proceedings of 11th IEEE International Conference on Advanced Learning Technologies, pp. 498–500 (2011)
  9. De Marsico, M., Sterbini, A., Temperini, M.: A framework to support social-collaborative personalized e-learning. In: Kurosu, M. (ed.) HCI 2013. LNCS, vol. 8005, pp. 351–360. Springer, Heidelberg (2013). doi:10.1007/978-3-642-39262-7_40
  10. De Marsico, M., Sterbini, A., Temperini, M.: A strategy to join adaptive and reputation-based social-collaborative e-learning, through the Zone of Proximal Development. Int. J. Distance Educ. Technol. (IJDET) 11(3), 12–31 (2013)
  11. El-Kechaï, N., Delozanne, É., Prévit, D., Grugeon, B., Chenevotot, F.: Evaluating the performance of a diagnosis system in school algebra. In: Leung, H., Popescu, E., Cao, Y., Lau, R.W.H., Nejdl, W. (eds.) ICWL 2011. LNCS, vol. 7048, pp. 263–272. Springer, Heidelberg (2011). doi:10.1007/978-3-642-25813-8_28
  12. Formisano, A., Omodeo, E.G., Temperini, M.: Layered map reasoning: an experimental approach put to trial on sets. Electron. Notes Theor. Comput. Sci. 48, 1–28 (2001)
  13. Formisano, A., Omodeo, E.G., Temperini, M.: Goals and benchmarks for automated map reasoning. J. Symb. Comput. 29(2), 259–297 (2000)
  14. Gasparetti, F., Limongelli, C., Sciarrone, F.: Wiki course builder: a system for retrieving and sequencing didactic materials from Wikipedia. In: Proceedings of ITHET 2015 (2015)
  15. Jackson, K., Trochim, W.: Concept mapping as an alternative approach for the analysis of open-ended survey responses. Organ. Res. Methods 5, 307–336 (2002)
  16. Li, L.X., Liu, X., Steckelberg, A.L.: Assessor or assessee: how student learning improves by giving and receiving peer feedback. Br. J. Educ. Technol. 41(3), 525–536 (2010)
  17. Limongelli, C., Lombardi, M., Marani, A., Sciarrone, F., Temperini, M.: A recommendation module to help teachers build courses through the Moodle Learning Management System. New Rev. Hypermedia Multimed. 22(1–2) (2016)
  18. Limongelli, C., Lombardi, M., Marani, A., Sciarrone, F.: A teacher model to speed up the process of building courses. In: Kurosu, M. (ed.) HCI 2013. LNCS, vol. 8005, pp. 434–443. Springer, Heidelberg (2013). doi:10.1007/978-3-642-39262-7_50
  19. Limongelli, C., Lombardi, M., Marani, A., Sciarrone, F.: A teaching-style based social network for didactic building and sharing. In: Lane, H.C., Yacef, K., Mostow, J., Pavlik, P. (eds.) AIED 2013. LNCS, vol. 7926, pp. 774–777. Springer, Heidelberg (2013). doi:10.1007/978-3-642-39112-5_110
  20. Limongelli, C., Sciarrone, F., Temperini, M.: A social network-based teacher model to support course construction. Comput. Hum. Behav. 51, 1077–1085 (2015)
  21. Metcalfe, J., Shimamura, A.P.: Metacognition: Knowing About Knowing. MIT Press, Cambridge (1994)
  22. Miller, P.: The effect of scoring criteria specificity on peer and self-assessment. Assess. Eval. High. Educ. 28(4), 383–394 (2003)
  23. Palmer, K., Richardson, P.: On-line assessment and free-response input - a pedagogic and technical model for squaring the circle. In: Proceedings of 7th Computer Assisted Assessment Conference, pp. 289–300 (2003)
  24. Romero, C., Ventura, S.: Educational data mining: a review of the state of the art. IEEE Trans. Syst. Man Cybern. Part C 40(6), 601–618 (2010)
  25. Sadler, P.M., Good, E.: The impact of self- and peer-grading on student learning. Educ. Assess. 11(1), 1–31 (2006)
  26. Sterbini, A., Temperini, M.: Collaborative projects and self evaluation within a social reputation-based exercise-sharing system. In: Proceedings of WI-IAT 2009, vol. 3, pp. 243–246 (2009)
  27. Sterbini, A., Temperini, M.: Dealing with open-answer questions in a peer-assessment environment. In: Popescu, E., Li, Q., Klamma, R., Leung, H., Specht, M. (eds.) ICWL 2012. LNCS, vol. 7558, pp. 240–248. Springer, Heidelberg (2012). doi:10.1007/978-3-642-33642-3_26
  28. Sterbini, A., Temperini, M.: Analysis of OpenAnswers via mediated peer-assessment. In: Proceedings of the International Conference on System Theory, Control and Computing, Workshop SPEL (2013)
  29. Sterbini, A., Temperini, M.: OpenAnswer, a framework to support teacher's management of open answers through peer assessment. In: Proceedings of Frontiers in Education, pp. 164–170 (2013)
  30. Topping, K.: Peer assessment between students in colleges and universities. Rev. Educ. Res. 68, 249–276 (1998)
  31. Yamanishi, K., Li, H.: Mining open answers in questionnaire data. IEEE Intell. Syst. 17, 58–63 (2002)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Maria De Marsico (1)
  • Andrea Sterbini (1)
  • Marco Temperini (2, corresponding author)
  1. Department of Computer Science, Sapienza University, Rome, Italy
  2. Department of Computer, Control and Management Engineering, Sapienza University, Rome, Italy
