Abstract
Typical Semantic Textual Similarity (STS) solutions assess free student answers without considering context. Furthermore, they do not explain why a student answer is similar, related, or unrelated to a benchmark answer. We propose a concept-map-based approach that incorporates contextual information, resulting in a solution that can both better assess and better interpret student responses. The approach relies on a novel tuple extraction method to automatically map student responses to concept maps. Using tuples as the unit of learning (learning components) allows us to track students' knowledge at a finer-grained level. We can thus move beyond the binary decision of correct versus incorrect and also identify partially correct student answers. Moreover, our approach can easily detect learning components that are missing from a student answer. We present experiments on data collected from dialogue-based intelligent tutoring systems and discuss the added benefits of the proposed method for adaptive interactive learning systems, such as the capability to provide relevant, targeted feedback to students, which could significantly improve the effectiveness of such intelligent tutoring systems.
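To make the assessment idea concrete, here is a minimal sketch of the tuple-based comparison the abstract describes: concept maps represented as sets of (concept, relation, concept) triples, with a student map checked against a benchmark map to detect matched and missing learning components. The extraction step itself (mapping free text to triples) is assumed, and all names and example triples here are illustrative, not the authors' implementation.

```python
def assess(student_triples, benchmark_triples):
    """Compare a student concept map against a benchmark concept map.

    Returns the matched triples, the missing learning components, and a
    verdict: 'correct', 'partially-correct', or 'incorrect'.
    """
    student = set(student_triples)
    benchmark = set(benchmark_triples)
    matched = student & benchmark      # learning components the student covered
    missing = benchmark - student      # components absent from the answer
    if not missing:
        verdict = "correct"
    elif matched:
        verdict = "partially-correct"
    else:
        verdict = "incorrect"
    return matched, missing, verdict


# Illustrative benchmark and student maps for a physics question.
benchmark = {
    ("net force", "equals", "mass times acceleration"),
    ("acceleration", "is proportional to", "net force"),
}
student = {("net force", "equals", "mass times acceleration")}

matched, missing, verdict = assess(student, benchmark)
# verdict is "partially-correct"; `missing` names the component a tutoring
# system could target with feedback.
```

The set-difference view is what enables the interpretability claim: the verdict is accompanied by the exact triples that were matched or missed, rather than a bare similarity score.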
Acknowledgments
This work was partially supported by The University of Memphis, the National Science Foundation (awards CISE-IIS-1822816 and CISE-ACI-1443068), and a contract from the Advanced Distributed Learning Initiative of the United States Department of Defense.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Maharjan, N., Rus, V. (2019). A Concept Map Based Assessment of Free Student Answers in Tutorial Dialogues. In: Isotani, S., Millán, E., Ogan, A., Hastings, P., McLaren, B., Luckin, R. (eds) Artificial Intelligence in Education. AIED 2019. Lecture Notes in Computer Science(), vol 11625. Springer, Cham. https://doi.org/10.1007/978-3-030-23204-7_21
Print ISBN: 978-3-030-23203-0
Online ISBN: 978-3-030-23204-7