Pointing teachers in the wrong direction: understanding Louisiana elementary teachers’ use of Compass high-stakes teacher evaluation data

Educational Assessment, Evaluation and Accountability

Abstract

Spurred by Race to the Top, efforts to improve teacher evaluation systems have provided states with an opportunity to get teacher evaluation right. Although a core reform area of Race to the Top was the use of teacher evaluation to provide ongoing and meaningful feedback for instructional decision making, we still know relatively little about how states’ responses in this area have led to changes in teachers’ use of these sources of data for instructional improvement. Self-determination theory (SDT) and the concept of functional significance were used as a lens for understanding and explaining patterns of use (or non-use) of Compass-generated evaluation data by teachers over a period of 3 years in a diverse sample of Louisiana elementary schools. The analysis revealed that the majority of teachers exhibited either controlled or amotivated functional orientations to Compass-generated information, which resulted in low or superficial use for improvement. Perceptions of the validity/utility of teacher evaluation data were critical determinants of use and were multifaceted: In some cases, teachers had concerns about how state and district assessments would harm vulnerable students, while others questioned the credibility and/or fairness of the feedback. These perceptions were compounded by (a) evaluators’ lack of experience in evaluating teachers with more specialized roles in the school, such as special education teachers; (b) a lack of support in terms of training on Compass and its processes; and (c) a lack of teacher autonomy in selecting appropriate assessments and targets for Student Learning Target growth.

Notes

  1. This estimate is the product of Dynarski’s (2016) estimate of a principal’s salary ($45/h), the number of U.S. K-12 teachers (3.1 million), the average number of hours spent per evaluation, and the typical number of observations in a given year (2); a worked sketch of this calculation appears at the end of these notes.

  2. All the descriptions of the Compass system discussed in this section are as they were during the study period of 2011–2015. Since this time, Compass has again changed to reflect adjustments to assessment policy as well as teacher evaluation policy.

  3. Teachers who receive a “highly effective” rating in a given year are only required to have one formal observation the following year.

  4. The Compass teacher evaluation rubric utilizes only 5 of the 22 components and 20 of the 76 elements of the full Danielson Framework for Teaching.

  5. While not clearly specified in the policy, in most cases in our sample the same evaluator observed both lessons conducted by the teacher. Assignment of evaluators was ultimately up to each building principal.

  6. While there is no available data on how many teachers have been dismissed during the Compass era due to ineffective ratings, aggregate results from the Louisiana Department of Education (2013, 2014, 2015b, 2016) report that around 4% of teachers were rated “ineffective” in 2012–2013, 2% in 2013–2014, and less than 1% in 2014–2015 and 2015–2016.

  7. It is important to mention that both the Standards for Educational and Psychological Testing (AERA, APA, NCME, 2014), and the Joint Committee on Standards for Educational Evaluation (JCSEE 2009) define and delineate issues of evaluation related to clarity (JCSEE), credibility (JCSEE), and fairness (AERA et al.). The operational definitions of these terms in this paper share some overlap but also differ somewhat from theirs, as will be discussed as each term is defined below.

  8. In the JCSEE, one aspect of clarity that aligns with the definition used in this paper is accuracy standard A2, “defined expectations.” Another aspect, however, which was not a focus of our definition per se, is the necessity for clarity on how the assessments/evaluation tools are aligned with the expectations (JCSEE standard A1).

  9. The concept of credibility does not relate in any direct way to the JCSEE standards, but might nevertheless be an overall judgment rendered by an evaluatee of the process based on several of these standards. None of these standards are specifically referenced in this study.

  10. This aspect of fairness is only part of the Standards for Educational and Psychological Testing framework. Other aspects of fairness concern the degree of measurement bias as well as influences of test-taking contexts which were not as present in the literature on the topic of teacher evaluation.

  11. Our use of the concept of utility refers most specifically to the JCSEE standards of utility related to evaluator qualifications and functional reporting (Standards U3 and U5). The other utility standards were not as salient in the teacher evaluation literature.

  12. The three added interviews in the equation ((37 + 32 + 32) + 3) = 104 refer to three of the five teachers who left after the first wave and whom we were able to track down and interview one final time. Our main purpose in interviewing them was to get a sense of why they left. For this reason, they were not included in the second-wave teacher sample numbers; their interviews were instead added separately to the total.

  13. The final sample of principal interviews was 20, and there were two instructional coaches interviewed in the third wave.

  14. Thirty interview transcripts across the three waves (about one third of the total) were randomly selected for the purpose of checking inter-rater reliability.

  15. Inter-rater reliability was calculated via the proportion agreement method (Campbell et al. 2013), in which the number of coding agreements for a given code is divided by the total number of codings of the lowest submitter (the coder with the fewest instances of that code); a sketch of this calculation appears at the end of these notes.
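
To make the arithmetic behind note 1 easy to reproduce, the minimal sketch below multiplies the four quantities the note names. The hours-per-observation figure is a hypothetical placeholder, since the note does not report the value used, so the printed total is illustrative only.

```python
# Back-of-the-envelope national cost of observation-based teacher evaluation (note 1).
# The hourly rate, teacher count, and observations per year come from the note;
# HOURS_PER_OBSERVATION is a hypothetical placeholder not reported in the note.

PRINCIPAL_HOURLY_RATE = 45        # dollars per hour (Dynarski 2016)
NUM_US_K12_TEACHERS = 3_100_000   # approximate number of U.S. K-12 teachers
OBSERVATIONS_PER_YEAR = 2         # typical formal observations per teacher per year
HOURS_PER_OBSERVATION = 1.0       # hypothetical value; not given in the note

annual_cost = (PRINCIPAL_HOURLY_RATE * HOURS_PER_OBSERVATION
               * OBSERVATIONS_PER_YEAR * NUM_US_K12_TEACHERS)
print(f"Estimated annual evaluator time cost: ${annual_cost:,.0f}")
```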
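
The sketch below illustrates the per-code proportion agreement calculation described in note 15, under the assumption that the numerator is the count of coding agreements and the denominator is the total codings of the lowest submitter. The function name and the example counts are illustrative, not taken from the study.

```python
# A minimal sketch of per-code proportion agreement (after Campbell et al. 2013,
# as described in note 15). Coder labels and counts below are hypothetical.

def proportion_agreement(agreements: int, codings_by_coder: dict[str, int]) -> float:
    """Agreements on a code divided by the codings of the coder who applied it least."""
    lowest_submitter_total = min(codings_by_coder.values())
    return agreements / lowest_submitter_total

# Hypothetical example: coder A applied the code 10 times, coder B 8 times,
# and they agreed on 7 passages, giving 7 / 8 = 0.875.
print(proportion_agreement(7, {"coder_A": 10, "coder_B": 8}))
```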

References

  • Adams, C. M., Forsyth, P. B., Ware, J. K., & Mwavita, M. (2016). The informational significance of A-F school accountability grades. Teachers College Record, 118(7), 1–31. Retrieved from http://www.tcrecord.org/Content.asp?contentid=20925. Accessed 15 Oct 2017.

  • Adams, C. M., Ford, T. G., Forsyth, P. B., Ware, J. K., Barnes, L. B., Khojasteh, J., Mwavita, M., Olsen, J. J., & Lepine, J. A. (2017). Next generation school accountability: A vision for improvement under ESSA. Palo Alto, CA: Learning Policy Institute.

  • American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME]. (2014). Standards for educational and psychological testing. Washington, D.C.: American Educational Research Association.

  • Amrein-Beardsley, A., & Collins, C. (2012). The SAS education value-added assessment system (SAS-EVAAS) in the Houston Independent School District (HISD): Intended and unintended consequences. Education Policy Analysis Archives, 20(12). Retrieved from http://epaa.asu.edu/ojs/article/view/1096.

  • Beaver, J. K., & Weinbaum, E. H. (2015). State test data and school improvement efforts. Educational Policy, 29(3), 478–503.

  • Blase, J., & Blase, J. (1999). Principals’ instructional leadership and teacher development: Teachers’ perspectives. Educational Administration Quarterly, 35(3), 349–378.

  • Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas accountability system. American Educational Research Journal, 42(2), 231–268.

  • Bulletin 130. La. Admin. Code. tit. 28, pt. 147, §103 (2017). Retrieved from: http://www.doa.la.gov/osr/lac/28v147/28v147.doc

  • Bulletin 130. La. Admin. Code. tit. 28, pt. 147, §311 (2017). Retrieved from: http://www.doa.la.gov/osr/lac/28v147/28v147.doc

  • Campbell, J. L., Quincy, C., Osserman, J., & Pedersen, O. K. (2013). Coding in-depth semistructured interviews: Problems of unitization and intercoder reliability and agreement. Sociological Methods & Research, 42(3), 294–320.

  • Chambers, J., de los Reyes, I. B., O’Neil, C. (2013). How much are districts spending to implement teacher evaluation systems? Case studies of Hillsborough County Public Schools, Memphis City Schools, and Pittsburgh Public Schools. (RAND working paper # WR-989-BMGF). Retrieved from https://www.rand.org/pubs/working_papers/WR989.html

  • Chow, A. P. Y., Wong, E. K. P., Yeung, A. S., & Mo, K. W. (2002). Teachers’ perceptions of appraiser–appraisee relationships. Journal of Personnel Evaluation in Education, 16(2), 85–101.

  • Collins, C., & Amrein-Beardsley, A. (2014). Putting growth and value-added models on the map: A national overview. Teachers College Record, 116(1). Retrieved from https://www.tcrecord.org/Content.asp?ContentId=17291. Accessed 15 Oct 2017.

  • Cosner, S. (2011). Teacher learning, instructional considerations and principal communication: Lessons from a longitudinal study of collaborative data use by teachers. Educational Management Administration & Leadership, 39(5), 568–589.

  • Curry, K. A., Mwavita, M., Holter, A., & Harris, E. (2016). Getting assessment right at the classroom level: Using formative assessment for decision making. Educational Assessment, Evaluation and Accountability, 28(1), 89–104.

  • Darling-Hammond, L. (2013). Getting teacher evaluation right: What really matters for effectiveness and improvement. New York, NY: Teachers College Press.

  • Darling-Hammond, L. (2014). One piece of the whole: Teacher evaluation as part of a comprehensive system for teaching and learning. American Educator, 38(1), 4–13.

  • Darling-Hammond, L., Amrein-Beardsley, A., Haertel, E., & Rothstein, J. (2012). Evaluating teacher evaluation. Phi Delta Kappan, 93(6), 8–15.

  • Datnow, A., & Hubbard, L. (2015). Teachers' use of assessment data to inform instruction: Lessons from the past and prospects for the future. Teachers College Record, 117(4).

  • Datnow, A., & Park, V. (2014). Data-driven leadership. San Francisco: Jossey-Bass.

  • Datnow, A., Greene, J. C., & Gannon-Slater, N. (2017). Data use for equity: Implications for teaching, leadership, and policy. Journal of Educational Administration, 55(4), 354–360.

  • Deci, E. L., & Ryan, R. M. (2000). The “what” and the “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268.

  • Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin, 125, 627–668.

  • Delvaux, E., Vanhoof, J., Tuytens, M., Vekeman, E., Devos, G., & Van Petegem, P. (2013). How may teacher evaluation have an impact on professional development? A multilevel analysis. Teaching and Teacher Education, 36, 1–11.

  • Denzin, N. K. (2001). Interpretive interactionism (2nd ed.). Thousand Oaks, CA: Sage.

  • Doherty, K. M., & Jacobs, S. (2015). State of the states 2015: Evaluating teaching, leading, and learning. Washington, DC: National Council on Teacher Quality.

  • Dynarski, M. (2016, December 8). Teacher observations have been a waste of time and money. The Brookings Institution. Retrieved from https://www.brookings.edu/research/teacher-observations-have-been-a-waste-of-time-and-money/

  • Eccles, J. S., Adler, T. F., Futterman, R., Goff, S. B., Kaczala, C. M., Meece, J. L., & Midgley, C. (1983). Expectancies, values, and academic behaviors. In J. T. Spence (Ed.), Achievement and achievement motivation (pp. 75–146). San Francisco, CA: W. H. Freeman.

  • Farley-Ripple, E. N., & Buttram, J. L. (2014). Developing collaborative data use through professional learning communities: Early lessons from Delaware. Studies in Educational Evaluation, 42, 41–53.

  • Farrell, C. C. (2015). Designing school systems to encourage data use and instructional improvement: A comparison of school districts and charter management organizations. Educational Administration Quarterly, 51(3), 438–471.

  • Farrell, C. C., & Marsh, J. A. (2016a). Metrics matter: How properties and perceptions of data shape teachers’ instructional responses. Educational Administration Quarterly, 52(3), 423–462.

  • Farrell, C. C., & Marsh, J. A. (2016b). Contributing conditions: A qualitative comparative analysis of teachers’ instructional responses to data. Teaching and Teacher Education, 60, 398–412.

  • Ford, T. G., Van Sickle, M. E., & Fazio-Brunson, M. (2016). The role of “informational significance” in shaping Louisiana elementary teachers’ use of high-stakes teacher evaluation data for instructional decision making. In K. K. Hewitt & A. Amrein-Beardsley (Eds.), Student growth measures in policy and practice: Intended and unintended consequences of high-stakes teacher evaluations (pp. 117–135). New York: Palgrave Macmillan.

  • Ford, T. G., Van Sickle, M. E., Clark, L. V., Fazio-Brunson, M., & Schween, D. C. (2017). Teacher self-efficacy, professional commitment and high-stakes teacher evaluation (HSTE) policy in Louisiana. Educational Policy, 31(2), 202–248.

  • Glover, T. A., Reddy, L. A., Kettler, R. J., Kurz, A., & Lekwa, A. J. (2016). Improving high-stakes decisions via formative assessment, professional development, and comprehensive educator evaluation: The school system improvement project. Teachers College Record, 118(14), 1–26.

  • Grissom, J. A., & Youngs, P. A. (2016). Improving teacher evaluation systems: Making the most of multiple measures. New York: Teachers College Press.

  • Haertel, E. H. (2013). Reliability and validity of inferences about teachers based on student test scores. Princeton, NJ: Educational Testing Service. Retrieved from http://www.ets.org/Media/Research/pdf/PICANG14.pdf.

  • Hallinger, P., Heck, R. H., & Murphy, J. (2014). Teacher evaluation and school improvement: An analysis of the evidence. Educational Assessment, Evaluation and Accountability, 26(1), 5–28.

  • Harris, D. N., & Herrington, C. D. (Eds.). (2015). Value added meets the schools: The effects of using test-based teacher evaluation on the work of teachers and leaders [special issue]. Educational Researcher, 44(2), 71–141.

  • Herlihy, C., Karger, E., Pollard, C., Hill, H. C., Kraft, M. A., Williams, M., & Howard, S. (2014). State and local efforts to investigate the validity and reliability of scores from teacher evaluation systems. Teachers College Record, 116(1). Retrieved from http://www.tcrecord.org/Content.asp?ContentId=17292. Accessed 15 Oct 2017.

  • Hewitt, K. (2015). Educator evaluation policy that incorporates EVAAS value-added measures: Undermined intentions and exacerbated inequities. Education Policy Analysis Archives, 23(76). Retrieved from https://doi.org/10.14507/epaa.v23.1968.

  • Hewitt, K., & Amrein-Beardsley, A. (2016). Introduction: The use of growth measures for educator accountability at the intersection of policy and practice. In K. Hewitt & A. Amrein-Beardsley (Eds.), Student growth measures in policy and practice: Intended and unintended consequences of high-stakes teacher evaluations (pp. 1–25). New York: Palgrave Macmillan.

  • Honig, M. I., & Venkateswaran, N. (2012). School–central office relationships in evidence use: Understanding evidence use as a systems problem. American Journal of Education, 118(2), 199–222.

  • Huguet, A., Farrell, C. C., & Marsh, J. A. (2017). Light touch, heavy hand: Principals and data-use PLCs. Journal of Educational Administration, 55(4), 376–389.

  • Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the “data-driven” mantra: Different conceptions of data-driven decision making. Yearbook of the National Society for the Study of Education, 106(1), 105–131.

  • Ingram, D., Louis, K. S., & Schroeder, R. (2004). Accountability policies and teacher decision making: Barriers to the use of data to improve practice. Teachers College Record, 106, 1258–1287. Retrieved from: https://www.tcrecord.org/content.asp?contentid=11573. Accessed 15 Oct 2017.

  • Jiang, J. Y., Sporte, S. E., & Luppescu, S. (2015). Teacher perspectives on evaluation reform: Chicago’s REACH students. Educational Researcher, 44, 105–116.

  • Jones, N. D. (2016). Special education teacher evaluation: An examination of critical issues and recommendations for practice. In J. A. Grissom & P. Youngs (Eds.), Improving teacher evaluation systems: Making the most of multiple measures (pp. 63–76). New York: Teachers College Press.

  • Kelly, K. O., Ang, S. Y. A., Chong, W. L., & Hu, W. S. (2008). Teacher appraisal and its outcomes in Singapore primary schools. Journal of Educational Administration, 46(1), 39–54.

  • Kerr, K. A., Marsh, J. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to promote data use for instructional improvement: Actions, outcomes and lessons from three urban districts. American Journal of Education, 112, 496–520.

  • Kraft, M. A., & Gilmour, A. F. (2017). Revisiting the widget effect: Teacher evaluation reforms and the distribution of teacher effectiveness. Educational Researcher, 46(5), 234–249.

  • Larkin, D., & Oluwole, J. O. (2014, March). The opportunity costs of teacher evaluation: A labor and equity analysis of the TEACHNJ legislation. New Brunswick, NJ: New Jersey Educational Policy Forum. Retrieved from https://njedpolicy.files.wordpress.com/2014/03/douglarkinjosepholuwole-opportunitycostpolicybrief.pdf

  • Lavigne, A. L. (2014). Exploring the intended and unintended consequences of high-stakes teacher evaluation on schools, teachers, and students. Teachers College Record, 116(1). Retrieved from https://www.tcrecord.org/Content.asp?ContentId=17294. Accessed 15 Oct 2017.

  • Lavigne, A. L., & Good, T. L. (2014). Teacher and student evaluation: Moving beyond the failure of school reform. New York: Routledge.

  • Lavigne, A. L., & Good, T. L. (2015). Improving teaching through observation and feedback: Beyond state and federal mandates. New York: Routledge.

  • Lipsky, M. (2010). Street-level bureaucracy: Dilemmas of the individual in public service (2nd ed.). New York: Russell Sage Foundation.

  • Little, J. W. (2012). Understanding data use practice among teachers: The contribution of micro-process studies. American Journal of Education, 118(2), 143–166.

  • Longo-Schmid, J. (2016). Teachers’ voices: Where policy meets practice. In K. Kappler Hewitt & A. Amrein-Beardsley (Eds.), Student growth measures in policy and practice (pp. 49–71). New York: Palgrave Macmillan.

  • Louisiana Department of Education. (2012). Compass: Louisiana’s path to excellence—Teacher evaluation guidebook. Baton Rouge, LA: Author.

  • Louisiana Department of Education (2013). 2013 Compass final report. Baton Rouge, LA: Author. Retrieved from: https://www.louisianabelieves.com/resources/library/compass. Accessed 30 April 2018.

  • Louisiana Department of Education (2014). 2013–2014 Compass annual report. Baton Rouge, LA: Author. Retrieved from: https://www.louisianabelieves.com/resources/library/compass. Accessed 30 April 2018.

  • Louisiana Department of Education (2015a). Teacher student learning targets. Retrieved from: https://www.louisianabelieves.com/resources/classroom-support-toolbox/teacher-support-toolbox/student-learning-targets. Accessed 30 April 2018.

  • Louisiana Department of Education (2015b). 2014–2015 Compass teacher results by LEA. Retrieved from: https://www.louisianabelieves.com/resources/library/compass. Accessed 30 April 2018.

  • Louisiana Department of Education (2016). 2015–2016 Compass teacher results by district. Retrieved from: https://www.louisianabelieves.com/resources/library/compass. Accessed 30 April 2018.

  • Louisiana House Bill 1033. (2010). Evaluation and Assessment Programs.

  • Lortie, D. (1975). Schoolteacher: A sociological analysis. Chicago: University of Chicago Press.

  • Mandinach, E. B. (2012). A perfect time for data-use: Using data-driven decision making to inform practice. Educational Psychologist, 47(2), 71–85. https://doi.org/10.1080/00461520.2012.667064.

  • Mandinach, E. B., Honey, M., Light, D., & Brunner, C. (2008). A conceptual framework for data driven decision making. In E. B. Mandinach & M. Honey (Eds.), Data-driven school improvement: Linking data and learning (pp. 13–31). New York: Teachers College Press.

  • Marques, J. F., & McCall, C. (2005). The application of interrater reliability as a solidification instrument in a phenomenological study. The Qualitative Report, 10(3), 439–462.

  • Marsh, J. A. (2012). Interventions promoting educators’ use of data: Research insights and gaps. Teachers College Record, 114(11), 1–48.

  • Marsh, J. A., & Farrell, C. C. (2015). How leaders can support teachers with data-driven decision making: A framework for understanding capacity building. Educational Management Administration & Leadership, 43(2), 269–289.

  • Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision making in education (RAND occasional paper #OP-170-EDU). Santa Monica, CA: RAND. Retrieved from: http://www.rand.org/pubs/occasional_papers/OP170.html

  • Marsh, J. A., McCombs, J. S., & Martorell, F. (2010). How instructional coaches support data-driven decision making: Policy implementation and effects in Florida middle schools. Educational Policy, 24, 872–907. https://doi.org/10.1177/0895904809341467.

  • Master, B. (2014). Staffing for success: Linking teacher evaluation and school personnel management in practice. Educational Evaluation and Policy Analysis, 36(2), 207–227. https://doi.org/10.3102/0162373713506552.

  • McLaughlin, M. W. (1987). Learning from experience: Lessons from policy implementation. Educational Evaluation and Policy Analysis, 9, 171–178.

  • Means, B., Padilla, C., & Gallagher, L. (2010). Use of education data at the local level: From accountability to instructional improvement. Washington, DC: U.S. Department of Education. Retrieved from https://www2.ed.gov/rschstat/eval/tech/use-of-education-data/use-of-education-data.pdf

  • Milanowski, A. T., & Heneman, H. G. (2001). Assessment of teacher reactions to a standards-based teacher evaluation system: A pilot study. Journal of Personnel Evaluation in Education, 15(3), 193–212.

  • Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: A methods sourcebook (3rd ed.). Thousand Oaks, CA: Sage.

  • Murphy, J., Hallinger, P., & Heck, R. H. (2013). Leading via teacher evaluation: The case of the missing clothes? Educational Researcher, 42, 349–354. https://doi.org/10.3102/0013189X13499625.

  • Niemiec, C. P., & Ryan, R. M. (2009). Autonomy, competence, and relatedness in the classroom: Applying self-determination theory to educational practice. Theory and Research in Education, 7, 133–144.

  • Organization for Economic Co-Operation and Development. (2009). Teacher evaluation. A conceptual framework and examples of country practices. Retrieved from: http://www.oecd.org/edu/school/44568106.pdf

  • Papay, J. P. (2011). Different tests, different answers: The stability of teacher value-added estimates across outcome measures. American Educational Research Journal, 48, 163–193.

  • Park, V., Daly, A. J., & Guerra, A. W. (2013). Strategic framing: How leaders craft the meaning of data use for equity and learning. Educational Policy, 27(4), 645–675.

  • Reddy, L. A., Dudek, C. M., Peters, S., Alperin, A., Kettler, R. J., & Kurz, A. (2018). Teachers’ and school administrators’ attitudes and beliefs of teacher evaluation: A preliminary investigation of high poverty school districts. Educational Assessment, Evaluation and Accountability, 30, 47–70.

  • Rice, J. K., & Malen, B. (2016). When theoretical models meet school realities: Educator responses to student growth measures in an incentive pay program. In K. Kappler Hewitt & A. Amrein-Beardsley (Eds.), Student growth measures in policy and practice (pp. 29–47). New York: Palgrave Macmillan.

  • Rosenholtz, S. J. (1991). Teachers’ workplace: The social organization of schools. New York: Teachers College Press.

  • Ryan, R. M., & Brown, K. W. (2005). Legislating competence: The motivational impact of high-stakes testing as an educational reform. In C. Dweck & A. Elliot (Eds.), Handbook of competence and motivation (pp. 354–372). New York: Guilford Press.

  • Ryan, R. M., & Deci, E. L. (2002). An overview of self-determination theory: An organismic dialectical perspective. In E. L. Deci & R. M. Ryan (Eds.), Handbook of self-determination research (pp. 3–33). Rochester, NY: University of Rochester Press.

  • Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. New York: Guilford Press.

  • Ryan, R. M., & Weinstein, N. (2009). Undermining quality teaching and learning: A self-determination theory perspective on high-stakes testing. Theory and Research in Education, 7(2), 224–233. https://doi.org/10.1177/1477878509104327.

  • Schildkamp, K., & Visscher, A. (2010). The use of performance feedback in school improvement in Louisiana. Teaching and Teacher Education, 26(7), 1389–1403.

  • Schildkamp, K., Poortman, C., Luyten, H., & Ebbeler, J. (2017). Factors promoting and hindering data-based decision making in schools. School Effectiveness and School Improvement, 28(2), 242–258.

  • Schneider, A., & Ingram, H. (1990). Behavioral assumptions of policy tools. The Journal of Politics, 52(2), 510–529.

  • Skrla, L., Scheurich, J. J., Garcia, J., & Nolly, G. (2004). Equity audits: A practical leadership tool for developing equitable and excellent schools. Educational Administration Quarterly, 40(1), 133–161.

  • Sun, M., Mutcheson, R. B., & Kim, J. (2016). Teachers' use of evaluation for instructional improvement and school supports for such use. In J. A. Grissom & P. Youngs (Eds.), Improving teacher evaluation systems: Making the most of multiple measures (pp. 169–183). New York: Teachers College Press.

  • The Joint Committee on Standards for Educational Evaluation [JCSEE]. (2009). The personnel evaluation standards: How to assess systems for evaluating educators. Thousand Oaks, CA: Corwin.

  • The New Teacher Project. (2010). Teacher evaluation 2.0. New York: Author.

  • Tuytens, M., & Devos, G. (2011). Stimulating professional learning through teacher evaluation: An impossible task for the school leader? Teaching and Teacher Education, 27(5), 891–899.

  • U. S. Department of Education. (2009). Race to the top program executive summary. Washington, DC: U.S. Department of Education. Retrieved from: http://www2.ed.gov/programs/racetothetop/executive-summary.pdf

  • Van Gasse, R., Vanlommel, K., Vanhoof, J., & Van Petegem, P. (2017). The impact of collaboration on teachers’ individual data use. School Effectiveness and School Improvement, 28, 1–16.

  • Vansteenkiste, M., Lens, W., De Witte, H., & Feather, N. T. (2005). Understanding unemployed people’s job search behavior, unemployment experience and well-being. A comparison of expectancy-value theory and self-determination theory. British Journal of Social Psychology, 44, 269–287.

  • Vansteenkiste, M., Lens, W., & Deci, E. L. (2006). Intrinsic versus extrinsic goal contents in self-determination theory: Another look at the quality of academic motivation. Educational Psychologist, 41(1), 19–31.

  • Watt, H. M. G., & Richardson, P. W. (2014). Why people choose teaching as a career: An expectancy-value approach to understanding teacher motivation. In P. W. Richardson, S. A. Karabenick, & H. M. G. Watt (Eds.), Teacher motivation: theory and practice (pp. 3–19). London: Routledge.

  • Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. Brooklyn, NY: The New Teacher Project.

  • Wigfield, A., & Eccles, J. (1992). The development of achievement task values: A theoretical analysis. Developmental Review, 12, 265–310.

  • Yin, R. K. (2017). Case study research and applications: Design and methods (5th ed.). Thousand Oaks, CA: Sage.

  • Young, V. M. (2006). Teachers’ use of data: Loose coupling, agenda setting, and team norms. American Journal of Education, 112, 521–548.

Author information

Corresponding author

Correspondence to Timothy G. Ford.

Teacher Interview Protocols

1.1 Data Collection Wave I

  1. TELL ME ABOUT YOURSELF: Could you describe your background in education and your current responsibilities? Probes: Years of experience teaching, certification, grade level(s) taught, content area expertise, outside school-related involvement, school committees.

  2. WHAT HAS IT BEEN LIKE TRANSITIONING TO THE CCSS AT YOUR SITE? Tell me a little bit about the transition of your school/district to the Common Core State Standards (CCSS). What has been good about the transition? Where have the challenges been? Probes: Opinions/orientations to the idea of CCSS (centralization of curricula across states); impact on students; developmentally appropriate practices; training/preparation/PD.

  3. HOW PREPARED ARE YOU TO IMPLEMENT THE CCSS? What is the aspect of the CCSS that you feel most equipped to deal with/handle in the classroom? The least? Do you feel as though there is a system in place to support you in addressing this area of concern? Probe the nature of their feelings of support more deeply if necessary.

  4. HOW DO YOU FEEL ABOUT Compass? Moving from CCSS to Compass, the new teacher evaluation system, what have been your initial impressions of this evaluation tool? (Reword: What do you see as the benefits of this evaluation system? What are the drawbacks of such an evaluation system in your opinion?) Probes: Student Learning Targets; Value-Added Measures; observation rubric and scaling (modified Danielson rubric); training and support.

  5. Can you give an example of when you feel the most control over your teaching in terms of who you are as an individual and a professional? Can you give an example of when you feel the least control over your teaching?

  6. Since the beginning of these two initiatives, have you ever found yourself questioning your identity either professionally or personally? Would you mind describing a good example of one of these instances?

1.2 Data Collection Wave II

  1. (Overall Background) Tell me a little bit about how this school year is going for you right now. What are some successes? Some challenges? (Verify same teaching position as last year/same responsibilities. Probe any new responsibilities.)

  2. (District/School Context) Now that the Common Core State Standards are being fully implemented across the state, what has your school/district done differently this year to incorporate these standards into your curriculum? (Probes: adopt an already-developed system (e.g., Engage NY) or develop your own.)

  3. (CCSS Feelings activity) Have your feelings changed towards CCSS now that you have had some time to work with the standards? How would you describe your feelings about the CCSS using an image or a metaphor? Would you mind drawing/writing your feelings for me? (Give them a separate sheet of paper with space to draw/write.)

  4. (CCSS Support activity) Directions: Provide the participant with the Support activity instrument. In the center of the circle, the participant is to label “self” or “teacher.” In each of the boxes with arrows pointing toward the center, participants are to write one type of support they are receiving in implementing the CCSS. In boxes that have arrows pointing away from the circle, participants are to write in feelings about the support they are receiving. Finally, the interviewer is to ask the teacher to fill in the blank/answer the question “What do you feel is missing in this picture of support for CCSS?”

  5. (Compass Follow-up) Can you tell me about your initial reaction to your overall Compass score? How are you dealing with your rating personally? How well do you feel your Compass evaluation and SLT/VAM score reflect your teaching? Will you share your overall rating with me? (Probes: Student Learning Targets; Value-Added Measures; observation rubric and scaling (modified Danielson rubric); training and support. Collect their COMPASS evaluation score from last year (ineffective, effective emerging, etc.), both overall and the subscores, if the teacher is willing to share.)

1.3 Data Collection Wave III

  1. (Overall Background) Tell me a little bit about how this school year is going for you right now. What are some successes? Some challenges? (Verify same teaching position as last year/same responsibilities. Probe any new responsibilities.)

  2. (District/School Context) How has your district/school responded to the PARCC test from a curriculum standpoint? (Probes: Are new programs being implemented to better align with PARCC and/or are curriculum materials the same as last year?) Do you feel the conversation this year in your district/school has moved toward PARCC preparation or continued with curriculum alignment to the CCSS?

  3. (CCSS Feelings) Last time we met we discussed your feelings about the CCSS. Let us look at what you said then (present the written metaphor activity to the teacher and use the following as guiding questions for unpacking the image). Are you still feeling this way? (Probes: Has the easing of VAM-based accountability changed your attitude toward the standards this year or the way you are teaching? Has the PARCC test added to or taken away from your feelings (positive or negative) towards the CCSS?)

  4. (CCSS Support activity) Last time we met we also discussed the type of support you are getting from your school/district. (Present the prior written support activity image to the teacher, and use the following as guiding questions for unpacking the image.) How would you modify this image to add or take away support items? Have you received more/less support this year? Has the support focus changed as a result of the new testing guidelines?

  5. (COMPASS Follow-up) Would you discuss with me your informal evaluation from the fall? If you would, share with me an example of how you have used the information from your COMPASS evaluation last year to prepare for your evaluation/teaching this year. (Probes: feelings about using data for improvement; training/support for improvement; collaboration around using data.) (Also collect their COMPASS evaluation score from last year (ineffective, effective emerging, etc.), both overall and the subscores, if the teacher is willing to share.)

  6. (SLT Follow-up) As they have been emphasized at the state level this year, would you talk a little bit about your experience now with SLTs? How has this increased focus shaped your preparation and teaching this year? (Probes: Have your feelings towards your SLTs (if applicable) changed? In what ways?; professional development/support; follow-up on results.)

Cite this article

Ford, T.G. Pointing teachers in the wrong direction: understanding Louisiana elementary teachers’ use of Compass high-stakes teacher evaluation data. Educ Asse Eval Acc 30, 251–283 (2018). https://doi.org/10.1007/s11092-018-9280-x
