A Concept Map Based Assessment of Free Student Answers in Tutorial Dialogues

  • Conference paper
Artificial Intelligence in Education (AIED 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11625)

Abstract

Typical standard Semantic Textual Similarity (STS) solutions assess free student answers without considering context. Furthermore, they do not explain why student answers are similar, related, or unrelated to a benchmark answer. We propose a concept map based approach that incorporates contextual information, resulting in a solution that can both better assess and interpret student responses. The approach relies on a novel tuple extraction method to automatically map student responses to concept maps. Using tuples as the unit of learning (learning components) allows us to track students' knowledge at a finer-grained level. We can thus assess student answers beyond the binary decision of correct versus incorrect, since we can also identify partially correct answers. Moreover, our approach can easily detect missing learning components in student answers. We present experiments with data collected from dialogue-based intelligent tutoring systems and discuss the added benefits of the proposed method for adaptive interactive learning systems, such as the capability to provide relevant, targeted feedback to students, which could significantly improve the effectiveness of such intelligent tutoring systems.
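
The abstract describes the pipeline only at a high level; the sketch below is a minimal, hypothetical Python illustration of the underlying idea, not the authors' implementation. It assumes answers have already been reduced to subject–relation–object tuples (the extraction step is stubbed out) and uses a simple, assumed heuristic for partial matches, in order to show how correct, partially correct, and missing learning components can be separated.

```python
# Hypothetical sketch of concept-map-based answer assessment (not the paper's code):
# answers are (subject, relation, object) tuples, and a student's tuples are
# matched against a benchmark concept map.
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    """One learning component: a concept-map edge (subject -relation-> object)."""
    subject: str
    relation: str
    obj: str


def extract_triples(answer: str) -> set[Triple]:
    """Placeholder for a tuple-extraction step (e.g., built on an open
    information extraction tool); the paper's own method is not shown here."""
    raise NotImplementedError


def assess(student: set[Triple], benchmark: set[Triple]):
    """Compare student triples against the benchmark concept map."""
    correct = student & benchmark
    # Illustrative heuristic (an assumption, not the paper's rule): a benchmark
    # triple whose subject and object appear in some student triple, but with a
    # different relation, is counted as partially covered.
    partial = {
        b for b in benchmark - correct
        if any(s.subject == b.subject and s.obj == b.obj for s in student)
    }
    missing = benchmark - correct - partial
    return correct, partial, missing


if __name__ == "__main__":
    benchmark = {
        Triple("net force", "is", "zero"),
        Triple("object", "moves at", "constant velocity"),
    }
    student = {Triple("net force", "equals", "zero")}  # a paraphrased component
    correct, partial, missing = assess(student, benchmark)
    print("correct:", correct)
    print("partially correct:", partial)
    print("missing:", missing)
```

In practice, the extraction step and the matching criteria would follow the paper's tuple extraction method and its semantic similarity measures rather than the exact string comparison used in this sketch.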

Acknowledgments

This work was partially supported by The University of Memphis, the National Science Foundation (awards CISE-IIS-1822816 and CISE-ACI-1443068), and a contract from the Advanced Distributed Learning Initiative of the United States Department of Defense.

Author information

Corresponding author

Correspondence to Nabin Maharjan.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Maharjan, N., Rus, V. (2019). A Concept Map Based Assessment of Free Student Answers in Tutorial Dialogues. In: Isotani, S., Millán, E., Ogan, A., Hastings, P., McLaren, B., Luckin, R. (eds.) Artificial Intelligence in Education. AIED 2019. Lecture Notes in Computer Science (LNAI), vol. 11625. Springer, Cham. https://doi.org/10.1007/978-3-030-23204-7_21

  • DOI: https://doi.org/10.1007/978-3-030-23204-7_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-23203-0

  • Online ISBN: 978-3-030-23204-7

  • eBook Packages: Computer Science; Computer Science (R0)
