The impact of linguistic similarity on cross-cultural comparability of students’ perceptions of teaching quality

  • Jessica Fischer
  • Anna-Katharina Praetorius
  • Eckhard Klieme

Abstract

Valid cross-country comparisons of student learning and of pivotal factors contributing to it, such as teaching quality, make it possible to learn from outstandingly effective educational systems around the world and to improve classroom learning by providing policy-relevant information. Yet it often remains unclear whether the instruments used in international large-scale assessments work similarly across different cultural and linguistic groups and can therefore be used to compare them. Using PISA 2012 data, we investigated the comparability of three teaching quality dimensions, namely student support, classroom management, and cognitive activation, using a newly developed psychometric approach: alignment. Focusing on 15 countries grouped into five linguistic clusters, we then assessed the impact of linguistic similarity on data comparability. Our main findings are that (1) the comparability of teaching quality measures is limited when comparing linguistically diverse countries; (2) the level of comparability varies across dimensions; and (3) linguistic similarity considerably enhances the degree of comparability, except across the Chinese-speaking countries. Our study illustrates new and more flexible ways of testing data comparability and underscores the importance of considering cultural and linguistic differences when comparing teaching-related measures across groups. We discuss possible sources of limited data comparability and implications for comparative educational research.
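For orientation, a minimal sketch of the alignment method named above, following Asparouhov and Muthén's general description; the notation here is illustrative rather than the authors' own. Starting from a configural model in which each group's factor mean is fixed at 0 and its variance at 1, alignment searches for group-specific factor means \(\alpha_g\) and variances \(\psi_g\) that minimize the total measurement noninvariance in the loadings \(\lambda\) and intercepts \(\nu\) across all groups:

\[
F = \sum_{j} \sum_{g_1 < g_2} w_{g_1 g_2} \left[ f\!\left(\lambda_{j g_1} - \lambda_{j g_2}\right) + f\!\left(\nu_{j g_1} - \nu_{j g_2}\right) \right],
\qquad
f(x) = \sqrt{\sqrt{x^{2} + \epsilon}},
\]

where \(j\) indexes items, the weights \(w_{g_1 g_2} = \sqrt{N_{g_1} N_{g_2}}\) reflect the group sample sizes, and \(\epsilon\) is a small constant. Because the component loss \(f\) grows only slowly for large deviations, the optimizer favors solutions with many (approximately) invariant parameters and a few clearly noninvariant ones, which is what makes the method practical for comparing many groups simultaneously.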

Keywords

Data comparability · Teaching quality · Alignment · Linguistic similarity · PISA 2012 · Large-scale assessment

Notes

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.


Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. DIPF | Leibniz Institute for Research and Information in Education, Frankfurt am Main, Germany
  2. University of Zurich, Zurich, Switzerland
