
European Journal of Psychology of Education

Volume 34, Issue 1, pp. 169–186

Using the theory of planned behavior to predict teachers’ likelihood of taking a competency-based approach to instruction

  • Anna Eva Lenski
  • Dirk Richter
  • Oliver Lüdtke

Abstract

The quality of mathematics education has gained significant attention in educational politics and among educators, as mathematics provides the foundations of analytical thinking needed to excel in today’s knowledge-based economy. Recent research on instructional quality has focused on students’ development of competencies. Competency-based instruction is believed to be an effective approach to instruction because it is closely aligned with educational standards. We use data from the National Assessment Study 2012 in Germany and apply the theory of planned behavior to determine what motivates mathematics teachers (n = 1660) to take a competency-based approach to instruction. Results indicate that the competencies outlined in the educational standards are a tangible element of current mathematics instruction. Within the framework of this study, we identified teachers’ perceived behavioral control as the strongest determinant of taking a competency-based approach to instruction. We conclude that the advancement of competency-based instruction depends on teachers’ beliefs about their professional resources.
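To make the analytic setup concrete: the theory of planned behavior models behavioral intention as a function of attitude, subjective norm, and perceived behavioral control. The sketch below simulates teacher survey scores and recovers the coefficients with ordinary least squares. It is purely illustrative: all data are synthetic, the effect sizes are invented, and the published study used structural equation modeling rather than a single OLS regression.

```python
import numpy as np

# Illustrative sketch of a TPB-style prediction model (not the authors'
# actual analysis). Intention is regressed on the three TPB components;
# the synthetic data are constructed so that perceived behavioral
# control (PBC) is the strongest determinant, mirroring the study's
# headline finding.
rng = np.random.default_rng(0)
n = 1660  # sample size matching the study

attitude = rng.normal(0, 1, n)
subjective_norm = rng.normal(0, 1, n)
pbc = rng.normal(0, 1, n)  # perceived behavioral control

# Invented coefficients: PBC (0.5) dominates attitude (0.2) and norm (0.1).
intention = (0.2 * attitude + 0.1 * subjective_norm + 0.5 * pbc
             + rng.normal(0, 0.5, n))

# Ordinary least squares via the normal equations (lstsq).
X = np.column_stack([np.ones(n), attitude, subjective_norm, pbc])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(dict(zip(["intercept", "attitude", "norm", "pbc"], beta.round(2))))
```

With n = 1660 the estimates sit close to the generating values, so the PBC coefficient comes out largest, which is the pattern the abstract reports (on real, not simulated, data).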

Keywords

Instructional quality · Educational standards · Mathematics instruction · Large-scale assessment · Theory of planned behavior


Copyright information

© Instituto Superior de Psicologia Aplicada, Lisboa, Portugal and Springer Science+Business Media B.V. 2017

Authors and Affiliations

  • Anna Eva Lenski (1)
  • Dirk Richter (2)
  • Oliver Lüdtke (3, 4)

  1. Institute for Educational Research, University of Mainz, Mainz, Germany
  2. Department of Education, University of Potsdam, Potsdam, Germany
  3. Leibniz Institute for Science and Mathematics Education, Kiel, Germany
  4. Centre for International Student Assessment, Munich, Germany
