
Educational Technology Research and Development, Volume 67, Issue 1, pp. 105–122

Automatic representation of knowledge structure: enhancing learning through knowledge structure reflection in an online course

  • Kyung Kim
  • Roy B. Clariana
  • Yanghee Kim
Development Article

Abstract

Summary writing is an important skill that students use throughout their academic careers; writing supports reading and vocabulary skills as well as the acquisition of content knowledge. This exploratory, development-oriented investigation appraises the recently released online writing system Graphical Interface of Knowledge Structure (GIKS), which provides structural feedback on students’ essays as network graphs for reflection and revision. Is the quality of students’ summary essays better with GIKS than with other common approaches? Using the learning materials, treatments, and procedure of a dissertation by Sarwar (Doctoral Thesis, University of Ottawa, 2012), adapted for this setting, Grade 10 students (n = 180) over a three-week period read one of three physics lesson texts each week, wrote a summary essay of it, immediately received one of three counterbalanced treatments (reflection with GIKS, solving physics problems as multiple-choice questions, or viewing video information), and finally rewrote the summary essay. All three treatments showed pre-to-post essay improvement in the central-concepts subgraph structure that almost exactly matched the results obtained in the previous dissertation. GIKS with reflection produced the largest improvement, owing to the largest increase in relevant links and the largest decrease in irrelevant links. The different treatments led to different knowledge structures in a regular way. These findings confirm those of Sarwar (2012) and support the use of GIKS as immediate, focused formative feedback that supports summary writing in online settings.

Keywords

Knowledge structure · Reflection · Writing · GIKS · Feedback

Notes

Acknowledgements

Kyung Kim acknowledges support by the Pennsylvania State University’s Center for Online Innovation in Learning (Grant No. 05-042-23 UP10010).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

References

  1. Bangert-Drowns, R. L., Hurley, M. M., & Wilkinson, B. (2004). The effects of school-based writing-to-learn interventions on academic achievement: A meta-analysis. Review of Educational Research, 74, 29–58.
  2. Clariana, R. B. (2010). Multi-decision approaches for eliciting knowledge structure. In D. Ifenthaler, P. Pirnay-Dummer, & N. M. Seel (Eds.), Computer-based diagnostics and systematic analysis of knowledge (pp. 41–59). New York: Springer.
  3. Clariana, R. B., Engelmann, T., & Yu, W. (2013). Using centrality of concept maps as a measure of problem space states in computer-supported collaborative problem solving. Educational Technology Research and Development, 61(3), 423–442. https://doi.org/10.1007/s11423-013-9293-6.
  4. Clariana, R. B., Wallace, P. E., & Godshalk, V. M. (2009). Deriving and measuring group knowledge structure from essays: The effects of anaphoric reference. Educational Technology Research and Development, 57(6), 725–737. https://doi.org/10.1007/s11423-009-9115-z.
  5. Clariana, R. B., Wolfe, M. B., & Kim, K. (2014). The influence of narrative and expository lesson text structures on knowledge structures: Alternate measures of knowledge structure. Educational Technology Research and Development, 62(5), 601–616. https://doi.org/10.1007/s11423-014-9348-3.
  6. Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21–29.
  7. Coştu, B., & Ayas, A. (2005). Evaporation in different liquids: Secondary students’ conceptions. Research in Science & Technological Education, 23(1), 75–97.
  8. DiCerbo, K. E. (2007). Knowledge structures of entering computer networking students and their instructors. Journal of Information Technology Education, 6(1), 263–277.
  9. Draper, D. C. (2013). The instructional effects of knowledge-based community of practice learning environment on student achievement and knowledge convergence. Performance Improvement Quarterly, 25(4), 67–89. https://doi.org/10.1002/piq.21132.
  10. Emig, J. (1977). Writing as a mode of learning. College Composition and Communication, 28(2), 122–128. https://doi.org/10.2307/356095.
  11. Fesel, S. S., Segers, E., Clariana, R. B., & Verhoeven, L. (2015). Quality of children’s knowledge representations in digital text comprehension: Evidence from pathfinder networks. Computers in Human Behavior, 48, 135–146.
  12. Gogus, A. (2013). Evaluating mental models in mathematics: A comparison of methods. Educational Technology Research and Development, 61(2), 171–195. https://doi.org/10.1007/s11423-012-9281-2.
  13. Graham, S., & Hebert, M. (2010). Writing to read: A report from Carnegie Corporation of New York. Evidence for how writing can improve reading. New York: Carnegie Corporation. https://www.carnegie.org/media/filer_public/9d/e2/9de20604-a055-42da-bc00-77da949b29d7/ccny_report_2010_writing.pdf.
  14. Ifenthaler, D. (2010). Relational, structural, and semantic analysis of graphical representations and concept maps. Educational Technology Research and Development, 58(1), 81–97.
  15. Ifenthaler, D., Pirnay-Dummer, P., & Seel, N. M. (Eds.). (2010). Computer-based diagnostics and systematic analysis of knowledge. New York: Springer. https://doi.org/10.1007/978-1-4419-5662-0.
  16. Johnson-Laird, P. N. (2004). The history of mental models. In K. Manktelow & M. C. Chung (Eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 179–212). New York: Psychology Press.
  17. Jonassen, D. H., Beissner, K., & Yacci, M. (1993). Structural knowledge: Techniques for representing, conveying, and acquiring structural knowledge. Hillsdale, NJ: Lawrence Erlbaum Associates.
  18. Kim, M. K. (2012). Cross-validation study of methods and technologies to assess mental models in a complex problem solving situation. Computers in Human Behavior, 28(2), 703–717. https://doi.org/10.1016/j.chb.2011.11.018.
  19. Kim, K. (2017a). Visualizing first and second language interactions in science reading: A knowledge structure network approach. Language Assessment Quarterly, 14, 328–345.
  20. Kim, K. (2017b). Graphical interface of knowledge structure: A web-based research tool for representing knowledge structure in text. Technology, Knowledge and Learning. https://doi.org/10.1007/s10758-017-9321-4.
  21. Kim, K. (2018). An automatic measure of cross-language text structures. Technology, Knowledge and Learning, 23, 301–314. https://doi.org/10.1007/s10758-017-9320-5.
  22. Kim, K., & Clariana, R. B. (2015). Knowledge structure measures of reader’s situation models across languages: Translation engenders richer structure. Technology, Knowledge and Learning, 20(2), 249–268. https://doi.org/10.1007/s10758-015-9246-8.
  23. Kim, K., & Clariana, R. B. (2017). Text signals influence second language expository text comprehension: Knowledge structure analysis. Educational Technology Research and Development, 65, 909–930. https://doi.org/10.1007/s11423-016-9494-x.
  24. Kim, K., & Clariana, R. B. (2018). Applications of Pathfinder Network scaling for identifying an optimal use of first language for second language science reading comprehension. Educational Technology Research and Development. https://doi.org/10.1007/s11423-018-9607-9.
  25. Kiuhara, S. A., Graham, S., & Hawken, L. S. (2009). Teaching writing to high school students: A national survey. Journal of Educational Psychology, 101(1), 136–160. https://doi.org/10.1037/a0013097.
  26. Koul, R., Clariana, R. B., & Salehi, R. (2005). Comparing several human and computer-based methods for scoring concept maps and essays. Journal of Educational Computing Research, 32(3), 261–273.
  27. Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7–19.
  28. Li, P., & Clariana, R. B. (2018). Reading comprehension in L1 and L2: An integrative approach. Journal of Neurolinguistics, 45. Retrieved from http://blclab.org/wp-content/uploads/2018/04/Li_Clariana_2018.pdf.
  29. Mørch, A. I., Engeness, I., Cheng, V. C., Cheung, W. K., & Wong, K. C. (2017). EssayCritic: Writing to learn with a knowledge-based design critiquing system. Educational Technology & Society, 20(2), 213–223.
  30. Mun, Y. (2015). The effect of sorting and writing tasks on knowledge structure measure in bilinguals’ reading comprehension. Masters Thesis. Retrieved from https://scholarsphere.psu.edu/files/x059c7329.
  31. Nesbit, J. C., & Adesope, O. O. (2006). Learning with concept and knowledge maps: A meta-analysis. Review of Educational Research, 76(3), 413–448. https://doi.org/10.3102/00346543076003413.
  32. Ong, W. J. (1982). Orality and literacy: The technologizing of the word. London: Methuen.
  33. Osborne, R., & Wittrock, M. (1985). The Generative Learning Model and its implications for science education. Studies in Science Education, 12, 59–87.
  34. Ozuru, Y., Briner, S., Kurby, C. A., & McNamara, D. S. (2013). Comparing comprehension measured by multiple-choice and open-ended questions. Canadian Journal of Experimental Psychology, 67(3), 215–227.
  35. Sarwar, G. S. (2012). Comparing the effect of reflections, written exercises, and multimedia instruction to address learners’ misconceptions using structural assessment of knowledge. Doctoral Thesis, University of Ottawa.
  36. Sarwar, G. S., & Trumpower, D. L. (2015). Effects of conceptual, procedural, and declarative reflection on students’ structural knowledge in physics. Educational Technology Research and Development, 63(2), 185–201.
  37. Spector, J., & Koszalka, T. (2004). The DEEP methodology for assessing learning in complex domains. Final report to the National Science Foundation Evaluative Research and Evaluation. Syracuse, NY: Syracuse University.
  38. Su, I.-H., & Hung, P.-H. (2010). Validity study on automatic scoring methods for the summarization of scientific articles. Paper presented at the 7th conference of the International Test Commission, 19–21 July 2010, Hong Kong. Retrieved from https://bib.irb.hr/datoteka/575883.itc_programme_book_-final_2.pdf.
  39. Tang, H., & Clariana, R. (2017). Leveraging a sorting task as a measure of knowledge structure in bilingual settings. Technology, Knowledge and Learning, 22(1), 23–35. https://doi.org/10.1007/s10758-016-9290-z.
  40. Tawfik, A. A., Law, V., Ge, X., Xing, W., & Kim, K. (2018). The effect of sustained vs. faded scaffolding on students’ argumentation in ill-structured problem solving. Computers in Human Behavior. https://doi.org/10.1016/j.chb.2018.01.035.
  41. Tippett, C. D. (2010). Refutation text in science education: A review of two decades of research. International Journal of Science and Mathematics Education, 8(6), 951–970.
  42. Treagust, D. F., & Duit, R. (2008). Conceptual change: A discussion of theoretical, methodological and practical challenges for science education. Cultural Studies of Science Education, 3(2), 297–328. https://doi.org/10.1007/s11422-008-9090-4.
  43. Tripto, J., Assaraf, O. B. Z., & Amit, M. (2018). Recurring patterns in the development of high school biology students’ system thinking over time. Instructional Science. https://doi.org/10.1007/s11251-018-9447-3.
  44. Trumpower, D. L., & Sarwar, G. S. (2010). Effectiveness of structural feedback provided by Pathfinder networks. Journal of Educational Computing Research, 43(1), 7–24.
  45. Van Dijk, T. A., & Kintsch, W. (1983). Strategies of discourse comprehension. New York: Academic Press.
  46. Zimmerman, W. A., Kang, H. B., Kim, K., Gao, M., Johnson, G., Clariana, R., et al. (2018). Computer-automated approach for scoring short essays in an introductory statistics course. Journal of Statistics Education, 26(1), 40–47.
  47. Zwaan, R. A., & Radvansky, G. A. (1998). Situation models in language comprehension and memory. Psychological Bulletin, 123(2), 162–185. https://doi.org/10.1037/0033-2909.123.2.162.

Copyright information

© Association for Educational Communications and Technology 2018

Authors and Affiliations

  1. Educational Technology, Research and Assessment, Northern Illinois University, DeKalb, USA
  2. Learning, Design, and Technology, The Pennsylvania State University, State College, USA
