Classroom Observation as Method for Research and Improvement

  • Tony Loughland
Part of the SpringerBriefs in Education book series (BRIEFSEDUCAT)


Classroom observation as a methodology is not without its critics. The critique ranges from epistemological arguments to questions of validity arising from its controversial application as an evaluation measure of teacher effectiveness. On the methodological front, classroom observation faces significant reliability and validity threats when it is used in both educational research and teacher evaluation (Harris, 2012). This chapter acknowledges this critique and proposes a third way for classroom observation in teacher improvement. The improvement agenda disciplines the classroom observation, moving it away from pure research or evaluation (judgement of performance) and towards helping teachers improve their practice. This position is supported by the argument approach to test validation endorsed by the AERA, APA and NCME.


Keywords: Classroom observation · Validation · Improvement · Measures


References

  1. AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, DC: AERA.
  2. AITSL. (2011). Australian professional standards for teachers. Melbourne: AITSL.
  3. AITSL. (2013). Guide to the certification of highly accomplished and lead teachers in Australia. Melbourne: AITSL.
  4. Allen, J., Gregory, A., Mikami, A., Lun, J., Hamre, B., & Pianta, R. (2013). Observations of effective teacher–student interactions in secondary school classrooms: Predicting student achievement with the Classroom Assessment Scoring System—Secondary. School Psychology Review, 42(1), 76–98.
  5. Allen, J. P., Pianta, R. C., Gregory, A., Mikami, A. Y., & Lun, J. (2011). An interaction-based approach to enhancing secondary school instruction and student achievement. Science, 333(6045), 1034–1037.
  6. Australian Institute for Teaching and School Leadership (AITSL). (2014). Looking at classroom practice. Melbourne: AITSL.
  7. Baird, J.-A., Andrich, D., Hopfenbeck, T. N., & Stobart, G. (2017). Assessment and learning: Fields apart? Assessment in Education: Principles, Policy & Practice, 24(3), 317–350.
  8. Bell, C. A., Gitomer, D. H., McCaffrey, D. F., Hamre, B. K., Pianta, R. C., & Qi, Y. (2012). An argument approach to observation protocol validity. Educational Assessment, 17(2–3), 62–87.
  9. Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791–799.
  10. City, E. A., Elmore, R. F., Fiarman, S. E., & Teitel, L. (2011). Instructional rounds in education: A network approach to improving teaching and learning. Cambridge, MA: Harvard Education Press.
  11. Collie, R. J., & Martin, A. J. (2016). Adaptability: An important capacity for effective teachers. Educational Practice and Theory, 38(1), 27–39.
  12. Conley, S., Smith, J. L., Collinson, V., & Palazuelos, A. (2016). A small step into the complexity of teacher evaluation as professional development. Professional Development in Education, 42(1), 168–170.
  13. Curry School of Education, University of Virginia. (2018). My teaching partner.
  14. Danielson, C. (1996). Enhancing professional practice: A framework for teaching. Alexandria, VA: Association for Supervision and Curriculum Development.
  15. Danielson, C. (2011). Evaluations that help teachers learn. Educational Leadership, 68(4), 35–39.
  16. Danielson, C. (2013). The framework for teaching evaluation instrument (2013 ed.). Princeton, NJ: The Danielson Group.
  17. Danielson, C. (2016). Charlotte Danielson on rethinking teacher evaluation. Bethesda, MD: Education Week.
  18. Derrington, M. L., & Kirk, J. (2016). Linking job-embedded professional development and mandated teacher evaluation: Teacher as learner. Professional Development in Education, 1–15.
  19. Fried, E. I. (2017). What are psychological constructs? On the nature and statistical modelling of emotions, intelligence, personality traits and mental disorders. Health Psychology Review, 11(2), 130–134.
  20. Gill, B., Shoji, M., Coen, T., & Place, K. (2016). The content, predictive power, and potential bias in five widely used teacher observation instruments (REL 2017–191). Washington, DC: U.S. Department of Education, Institute of Education Sciences.
  21. Halpin, P. F., & Kieffer, M. J. (2015). Describing profiles of instructional practice. Educational Researcher, 44(5), 263–277.
  22. Hamre, B. K., Pianta, R. C., Burchinal, M., & Downer, J. T. (2010). A course on supporting early language and literacy development through effective teacher–child interactions: Effects on teachers' beliefs, knowledge and practice. Paper presented at the Society for Research on Educational Effectiveness, Washington, DC.
  23. Harris, D. N. (2012). How do value-added indicators compare to other measures of teacher effectiveness? Carnegie Knowledge Network Brief (5).
  24. Hattie, J. (2003). Teachers make a difference: What is the research evidence? Distinguishing expert teachers from novice and experienced teachers. Paper presented at the ACER Research Conference, Melbourne.
  25. Hattie, J. (2012). Visible learning for teachers: Maximising impact on learning. London: Routledge.
  26. Kane, T. J., & Staiger, D. O. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains. Seattle, WA: Bill & Melinda Gates Foundation.
  27. Mashburn, A., Meyer, J., Allen, J., & Pianta, R. (2014). The effect of observation length and presentation order on the reliability and validity of an observational measure of teaching quality. Educational and Psychological Measurement, 74(3), 400–422.
  28. MET Project. (2013). Ensuring fair and reliable measures of effective teaching: Culminating findings from the MET project's three-year study—Policy and practitioner brief. Seattle, WA: Bill & Melinda Gates Foundation.
  29. O'Leary, M., & Wood, P. (2016). Performance over professional learning and the complexity puzzle: Lesson observation in England's further education sector. Professional Development in Education, 1–19.
  30. Pianta, R. (2011). Teaching children well: New evidence-based approaches to teacher professional development and training. Washington, DC: Center for American Progress.
  31. Pianta, R. C., & Hamre, B. K. (2009). Conceptualization, measurement, and improvement of classroom processes: Standardized observation can leverage capacity. Educational Researcher, 38(2), 109–119.
  32. Pianta, R. C., Hamre, B. K., & Mintz, S. (2012). Classroom assessment scoring system: Secondary manual. Curry School of Education, University of Virginia: Teachstone.
  33. Plake, B. S., & Wise, L. L. (2014). What is the role and importance of the revised AERA, APA, NCME standards for educational and psychological testing? Educational Measurement: Issues and Practice, 33(4), 4–12.
  34. Popper, K. (1959). The logic of scientific discovery. New York: Basic Books.
  35. Stuhlman, M., Hamre, B., Downer, J., & Pianta, R. C. (2014). How to select the right classroom observation tool. Charlottesville, VA: University of Virginia.
  36. The Danielson Group. (2013). The framework.

Copyright information

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. School of Education, UNSW Sydney, Sydney, Australia
  2. Research Centre for Teacher Education, Beijing Normal University, Beijing, China
