ReadME – Generating Personalized Feedback for Essay Writing Using the ReaderBench Framework

  • Robert-Mihai Botarleanu
  • Mihai Dascalu
  • Maria-Dorinela Sirbu
  • Scott A. Crossley
  • Stefan Trausan-Matu
Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 95)


Writing quality is an important component of students' capabilities. However, providing comprehensive feedback on student writing is a cumbersome and time-consuming task, despite its strong impact on learning outcomes and learner performance. This paper introduces a fully automated method of generating essay feedback intended to improve learners' writing proficiency. Using the TASA (Touchstone Applied Science Associates, Inc.) corpus and the textual complexity indices reported by the ReaderBench framework, more than 740 indices were reduced to five components through a Principal Component Analysis (PCA). These components may represent some of the basic linguistic constructs of writing. Feedback on these five components is generated by an extensible rule engine, easily modifiable through a configuration file, which analyzes the input text and detects potential feedback at several levels of granularity: the sentence, paragraph, or document level. Our prototype provides a user-friendly web interface that visualizes feedback through a combination of text-color highlighting and suggestions for improvement.
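The dimensionality reduction described above can be sketched as follows. This is a minimal illustration only: the matrix shape, the random stand-in data, and the variable names are assumptions, not the paper's actual TASA-derived pipeline.

```python
# Hypothetical sketch: reduce a large set of textual-complexity indices
# to five principal components, as the abstract describes.
# The input matrix is random placeholder data standing in for
# per-text ReaderBench indices.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
indices = rng.normal(size=(1000, 740))   # 1000 texts x 740 indices (placeholder)

# Standardize each index, then project onto the first five components.
scaled = StandardScaler().fit_transform(indices)
pca = PCA(n_components=5)
components = pca.fit_transform(scaled)

print(components.shape)                  # (1000, 5)
```

Each text is thereby described by five scores instead of 740 correlated indices, which is what makes component-level feedback rules tractable.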


Keywords: Automated writing evaluation · Textual complexity · Feedback generation and visualization · Natural language processing



This research was partially supported by the README project “Interactive and Innovative application for evaluating the readability of texts in Romanian Language and for improving users’ writing styles”, contract no. 114/15.09.2017, MySMIS 2014 code 119286, as well as the FP7 2008-212578 LTfLL project.



Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  • Robert-Mihai Botarleanu (1)
  • Mihai Dascalu (1, 2, 3)
  • Maria-Dorinela Sirbu (1)
  • Scott A. Crossley (4)
  • Stefan Trausan-Matu (1, 2, 3)
  1. University Politehnica of Bucharest, Bucharest, Romania
  2. Academy of Romanian Scientists, Bucharest, Romania
  3. Cognos Business Consulting S.R.L., Bucharest, Romania
  4. Department of Applied Linguistics/ESL, Georgia State University, Atlanta, USA
