Text-Guided Automated Self Assessment

A Graph-Based Approach to Help Learners with Ongoing Writing
  • Pablo Pirnay-Dummer
  • Dirk Ifenthaler


Writing plays an important role in institutionalized learning environments; however, it must be monitored in several ways to be successful. We developed automated knowledge assessment tools that provide instant feedback on a text during the writing process in order to promote self-regulated writing skills. The tools may serve as a complement to human tutoring in learning settings where individual face-to-face coaching is not possible. To generate text feedback, we use natural-language-oriented knowledge assessment strategies based on mental model theory and graph theory. Different types of text feedback are then automatically created on the basis of graphs and presented to the learner for both reflection and preflection. So far, we have succeeded in implementing the crucial parts of the coaching process in computer-based technology and in developing and implementing both static and dynamic feedback. Accordingly, a study of a text-guided assessment tool is presented and discussed. Finally, limitations at the volitional level of the toolset will have to be addressed in future studies.
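The abstract describes generating feedback by representing a text as a graph and comparing it against a reference. The following is a minimal illustrative sketch of that general idea, not the authors' actual tool: it builds a simple association graph by linking terms that co-occur within a sentence, scores a learner graph against a reference graph by edge overlap, and turns missing relations into feedback prompts. All function names and the stopword list are assumptions for illustration only.

```python
# Sketch of graph-based text assessment (illustrative only, not the
# authors' implementation): terms co-occurring in a sentence become
# edges; feedback lists reference relations absent from the learner text.
import re
from itertools import combinations

STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "are", "on"}

def text_to_graph(text):
    """Return a set of undirected term-term edges (frozensets)."""
    edges = set()
    for sentence in re.split(r"[.!?]+", text):
        terms = [w for w in re.findall(r"[a-z]+", sentence.lower())
                 if w not in STOPWORDS and len(w) > 2]
        for a, b in combinations(sorted(set(terms)), 2):
            edges.add(frozenset((a, b)))
    return edges

def edge_similarity(learner, reference):
    """Jaccard-style overlap of the two edge sets (1.0 = identical)."""
    if not learner and not reference:
        return 1.0
    return len(learner & reference) / len(learner | reference)

def feedback(learner, reference, limit=3):
    """Up to `limit` relations present in the reference but missing
    from the learner text, phrased as simple prompts."""
    missing = sorted(tuple(sorted(e)) for e in reference - learner)
    return [f"Consider the relation between '{a}' and '{b}'."
            for a, b in missing[:limit]]
```

In this toy form, a learner text covering only the first of two reference sentences would score 0.5 and receive prompts pointing at the relations of the second sentence; the actual toolset described in the article derives its graphs from richer natural-language analysis.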


Keywords: Self-assessment · Writing · Automated online-coaching · Learning progression · Feedback · Reflection · Preflection



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. Albert-Ludwigs-University Freiburg, Freiburg, Germany
  2. Albert-Ludwigs-University Freiburg, Freiburg, Germany
