Writing flexibility in argumentative essays: a multidimensional analysis

  • Laura K. Allen
  • Aaron D. Likens
  • Danielle S. McNamara


The assessment of argumentative writing generally includes analyses of the specific linguistic and rhetorical features contained in the individual essays produced by students. Researchers have recently proposed, however, that an individual's ability to flexibly adapt the linguistic properties of their writing may more accurately capture their proficiency. The features of the task, learner, and educational context that influence this flexibility nonetheless remain largely unknown. The current study extends this research by examining relations among linguistic flexibility, reading comprehension ability, and feedback in the context of an automated writing evaluation system. Students (n = 131) wrote and revised six argumentative essays in an automated writing evaluation system and received both summative and formative feedback on their writing. Additionally, half of the students had access to a spelling and grammar checker that provided lower-level feedback during the writing period. The results provide evidence for the supposition that skilled writers demonstrate linguistic flexibility across the argumentative essays that they produce. However, the analyses also indicate that lower-level feedback (i.e., spelling and grammar feedback) has little to no impact on the properties of students' essays or on their variability across prompts and drafts. Overall, the current study provides important insights into the role of flexibility in argumentative writing skill and develops a strong foundation on which to conduct future research and educational interventions.
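To make the notion of "linguistic flexibility" concrete: one simple proxy (not the authors' actual analysis, which relies on multidimensional and mixed-effects techniques) is the variability of essay-level linguistic features across the prompts a student responds to. The sketch below assumes hypothetical feature scores; the feature names and values are illustrative only.

```python
# Illustrative sketch: quantify a student's linguistic flexibility as the
# per-feature variability (population SD) across their essays.
# Feature names and scores are hypothetical, not from the study's data.
from statistics import mean, pstdev

# Hypothetical essay-level feature scores for one student's six essays.
essays = [
    {"cohesion": 0.42, "lexical_sophistication": 0.61},
    {"cohesion": 0.55, "lexical_sophistication": 0.58},
    {"cohesion": 0.38, "lexical_sophistication": 0.70},
    {"cohesion": 0.61, "lexical_sophistication": 0.52},
    {"cohesion": 0.47, "lexical_sophistication": 0.66},
    {"cohesion": 0.50, "lexical_sophistication": 0.59},
]

def flexibility_profile(essays):
    """Return per-feature mean and population SD across one student's essays.

    A higher SD indicates more variation in that linguistic dimension
    across prompts -- one crude proxy for flexibility.
    """
    features = essays[0].keys()
    return {
        f: {"mean": mean(e[f] for e in essays),
            "sd": pstdev(e[f] for e in essays)}
        for f in features
    }

profile = flexibility_profile(essays)
```

A design note: a simple SD collapses over prompt and draft effects; the study's cited toolchain (e.g., linear mixed-effects models via lme4) instead models that variability while accounting for repeated measures within students.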


Keywords: Writing · Flexibility · Dynamics · Linguistics · Natural language processing · Individual differences · Intelligent tutoring systems · Feedback



This research was supported in part by IES Grants R305A120707 and R305A180261 as well as the Office of Naval Research (Grant No. N00014-16-1-2611). Opinions, conclusions, or recommendations do not necessarily reflect the view of the Department of Education, IES, or the Office of Naval Research.



Copyright information

© Springer Nature B.V. 2018

Authors and Affiliations

  • Laura K. Allen (email author)¹
  • Aaron D. Likens²
  • Danielle S. McNamara³

  1. Psychology Department, Mississippi State University, Mississippi State, USA
  2. University of Nebraska, Omaha, USA
  3. Learning Sciences Institute, Psychology Department, Arizona State University, Tempe, USA
