Rubrics and Exemplars in Writing Assessment

  • Johanna de Leeuw
Part of the book series The Enabling Power of Assessment (EPAS, volume 3)


The use of rubrics for performance assessment, as opposed to holistic methods, is widely accepted as current enlightened practice and continues to receive considerable attention, particularly amid the current drive for increased accountability for student achievement. This has generated extensive discussion about the appropriateness, use and misuse of rubrics, particularly in the assessment of writing. To understand the basis of the conflicting viewpoints that have characterised the rubrics debate in writing assessment over the last decade, its historical roots and philosophical underpinnings are considered. A critical analysis of the scholarly literature on the role of rubrics and their relationship with writing exemplars provides the context for a discussion of current trends in assessment for learning and the increased emphasis on student peer and self-assessment.


Keywords: Rubrics · Assessment · Formative · Summative · Criterion-referenced · Norm-referenced · Outcomes · Measures · Achievement · Accountability · Policies · Practice · Standardised testing · Performance assessment · Authentic assessment



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Visible Assessment for Learning Inc., Calgary, Canada
