Pointing teachers in the wrong direction: understanding Louisiana elementary teachers’ use of Compass high-stakes teacher evaluation data

  • Timothy G. Ford

Abstract

Spurred by Race to the Top, efforts to improve teacher evaluation systems have provided states with an opportunity to get teacher evaluation right. Although a core reform area of Race to the Top was the use of teacher evaluation to provide ongoing, meaningful feedback for instructional decision making, we still know relatively little about how states' responses in this area have changed teachers' use of these sources of data for instructional improvement. Self-determination theory (SDT) and the concept of functional significance were used as a lens for understanding and explaining patterns of use (or non-use) of Compass-generated evaluation data by teachers over a period of three years in a diverse sample of Louisiana elementary schools. The analysis revealed that the majority of teachers exhibited either controlled or amotivated functional orientations toward Compass-generated information, which resulted in low or superficial use of these data for improvement. Perceptions of the validity and utility of teacher evaluation data were critical determinants of use and were multifaceted: some teachers had concerns about how state and district assessments would harm vulnerable students, while others questioned the credibility and/or fairness of the feedback. These perceptions were compounded by (a) evaluators' lack of experience in evaluating teachers with more specialized roles in the school, such as special education teachers; (b) a lack of support in terms of training on Compass and its processes; and (c) a lack of teacher autonomy in selecting appropriate assessments and targets for Student Learning Target growth.

Keywords

Data-driven decision-making · Data use · Teacher motivation · Self-determination theory · Teacher evaluation · Instructional improvement


Copyright information

© Springer Nature B.V. 2018

Authors and Affiliations

  1. Department of Educational Leadership and Policy Studies, Jeannine Rainbolt College of Education, University of Oklahoma, Tulsa, OK, USA
