Developments in Quantitative Methods in Research Into Teachers and Teaching

  • John P. Keeves
  • I Gusti Ngurah Darmawan
Part of the Springer International Handbooks of Education book series (SIHE, volume 21)

There is perhaps no situation in which sizeable groups of people work together under the direct guidance of a single person, for long periods of each day, regularly and over sustained stretches of time, to a greater extent than in primary school classrooms. In the home the group is smaller and guidance is shared between two or more people, yet the situation is similar, extends over longer periods of time, and raises similar problems of analysis. Both situations present specific methodological challenges involving multilevel and multivariate analysis. However, the size of school and classroom groups, together with the relative ease with which data can be collected, has led to a breakthrough in the analysis of data in the field of education. Nevertheless, the sensitivity of teachers to intrusion into their closed operational setting has meant that relatively little use has been made of these advances in quantitative analytical procedures for investigating problems associated with teachers and teaching. This article raises these issues and suggests that the developments of recent decades are opening up a domain of investigation with the potential to spread to many other fields of societal and human activity, including industry and commerce, medical practice, and the whole of sociological and social psychological inquiry.


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • John P. Keeves — School of Education, The University of Adelaide, Adelaide, Australia
  • I Gusti Ngurah Darmawan — School of Education, The University of Adelaide, Adelaide, Australia