Abstract
Informal reading inventories (IRIs) and curriculum-based measures of reading (CBM-R) have continued importance in instructional planning, but raters have exhibited difficulty in accurately identifying students’ miscues. To identify and tabulate scorers’ mismarkings, this study employed examiners and raters who scored 15,051 words from 108 passage readings by students in Grades 5 and 6. Word-by-word scoring from these individuals was compared with a consensus score obtained from the first author and two graduate students after repeated replaying of the audio from the passage readings. Microanalysis conducted on all discrepancies identified a cumulative total of 929 mismarkings (range = 1–37 per passage) that we categorized into 37 unique types. Examiners scoring live made significantly more mismarkings than raters scoring audio recordings, t(214) = 4.35, p = .0001, with an effect size of d = 0.59. In 98% of the passages, scorers disagreed on the number of words read correctly—the score used for screening and progress monitoring decisions. Results suggest that IRIs and CBM-Rs may not be accurate as diagnostic tools for determining students’ particular word-level difficulties.
Acknowledgements
This research was supported in part by Project HiFi: Promoting High Fidelity of Screening and Progress Monitoring (U.S. Department of Education, Institute of Education Sciences [IES], SBIR Phase I, EDIES-13-C-0038). The opinions expressed are those of the authors and do not represent the views of the U.S. Department of Education or IES.
Appendix
Description of mismarking types
Type | Mismarking | Description of mismarking
---|---|---
1 | Student restarts | The student begins reading but then stops to read the title or asks the examiner if he/she should read the title before restarting the passage at the beginning. Some examiners inappropriately attempt to stop and restart the timer (rather than treating this as a repetition), thus contributing to misidentifying the total words read (TWR) |
2 | Read title | When the student reads the title before reading the passage, some examiners start the timer at the title and some start at the first word of the passage. If the title and the first words of the passage are the same, it is unclear which the student is reading until additional words are read. The confusion over the title contributes to misidentifying the TWR |
3 | Repeated words around SC | The student makes a mistake in a sentence with words that are used multiple times. It is not always clear to which of the repeated words the student returns when self-correcting. This can create multiple possibilities for how the reading should be scored or can cause the examiner to add or miss mistakes |
4 | Rare word | The passage contains a low frequency word or word with multiple pronunciations, and the examiner does not know the correct pronunciation or all acceptable pronunciations. Therefore, the student’s reading is incorrectly scored |
5 | Hyphenated word | The hyphen rule for scoring is to count the hyphenated word as one if the parts cannot stand alone (e.g., t-shirt), but as separate words for scoring if each part can stand alone (e.g., mother-in-law). Mismarkings can occur if the rater counts multiple parts of the hyphenation as incorrect when the student makes a mistake on only one part |
6 | Last words | The examiner may add or omit marks due to preoccupation with the timer at the end of the 1-min period, the chime obscuring the student’s reading, and students speeding up around the 1-min mark. These issues make it difficult to accurately record the last word said within the time limit |
7 | Unclear when timer stops | An examiner may not start the timer, may not use a timer that chimes at 1 min, may allow the student to continue reading after the minute time limit, or may experience a timer malfunction—all of which result in uncertainty regarding the correct stopping point |
8 | Examiner stops early | The examiner stops the student’s reading at any point before 1 min has lapsed. Because the examiner stopped the student early, the TWR would have to be manually calculated (but was not) |
9 | Miscue on SC | The student re-reads a word or portion of the passage in an attempt to correct a perceived mistake, but still reads the word(s) incorrectly. The examiner does not mark the mistake on the self-correct |
10 | Miscue then SC | The student re-reads a word or portion of the passage and corrects a mistake, but the examiner still marks it as incorrect |
11 | Student clarifying word after reading | The student reads a word correctly but then asks the examiner to confirm that was correct. The examiner marks the correctly read word as incorrect |
12 | Examiner gives the word | The student asks the examiner to say a word or has hesitated for 3 s, thus prompting the examiner to tell the student the word. The examiner does not mark this word as incorrect |
13 | Dialect/colloquialism | The student is a native English speaker but uses a regional dialect. The examiner mistakenly marks words pronounced in the regional dialect as incorrect |
14 | Accent | The student is not a native English speaker and pronounces words in a way that is influenced by the student’s first language. The examiner mistakenly marks these as incorrect (either due to lack of knowledge or lack of consistency in applying the scoring procedure). [Additional note: The word says is often pronounced with a long /ā/ sound by students whose first language is Spanish. Because says is a high frequency sight word, this pronunciation was considered incorrect. When examiners did not mark it as incorrect, it was counted as an omit mismarking.] |
15 | Stutter | Student stutters on the first letter(s) of a word but otherwise reads the word correctly |
16 | Drawing out pronunciation | The student draws out a sound within a word. The examiner either mistakenly marks this as incorrect if the student ultimately reads the word correctly or does not mark as incorrect the word that ultimately was mispronounced |
17 | Pause during word | Student pauses after starting to say a word, not making any sound as in a hesitation, but resumes reading the word correctly within the 3 s limit. The examiner mistakenly marks these as incorrect |
18 | Unclear pronunciation | The student does not enunciate the word, so the examiner marks what is perceived incorrectly (either adding or omitting mistakes) |
19 | Words read quickly | The student reads very quickly and may seem to be reading multiple words as though they were only one word (i.e., unclear where one word ended and the other began). The examiner mistakenly marks these as incorrect |
20 | Adding -s ending | The student incorrectly adds an -s to the end of a word, and the examiner does not mark it as incorrect |
21 | Dropping -s ending | The student reads a word without saying the -s at the end of the word, but the examiner does not mark it as incorrect |
22 | Dropping -ed ending | The student reads the word without the -ed ending, but the examiner does not mark it as incorrect |
23 | Derivation of word | The student says another form of the word in the passage, but the examiner does not mark it as incorrect |
24 | Formed contraction | The student reads two words in their contracted form, and the examiner does not mark it as incorrect |
25 | Replaced with other word | The student reads a word other than the word in the passage, and the examiner does not mark it as incorrect |
26 | Letter reversal | The student reverses letters within a word when reading it, but the examiner does not mark it as incorrect |
27 | Replaced syllable | The student reads a word with a change at the syllable level, but the examiner does not mark it as incorrect |
28 | Add phoneme | The student adds a sound when reading a word, but the examiner does not mark it as incorrect. Note: This does not include adding -s to the end of a word, which is considered a separate category |
29 | Left out phoneme/syllable | The student read a word without saying one or more of its sounds, but the examiner does not mark it as incorrect |
30 | Saying the silent consonant | The student pronounces the silent letter in a word, but the examiner does not mark it as incorrect |
31 | Alterations of numbers | The student reads a different number than the one in the passage, but the examiner does not mark it as incorrect. The student also might say the correct number written in the passage, but the examiner mistakenly marks it as incorrect |
32 | Changed emphasis | The student places emphasis on the wrong part of a word, thus reading it incorrectly, but the examiner does not mark it as incorrect |
33 | Insert word | The student inserts a word, which is a time penalty only, but the examiner marks a word before or after the insertion as incorrect |
34 | Skipped word | The student skips a word while reading, but the examiner does not mark it as incorrect |
35 | Changing order | The student reverses the order of two words, but the examiner does not mark this as an insertion of the first word (time penalty only) and a skip of the second (incorrect) |
36 | Unclassifiable | The examiner marks a word as read incorrectly, but there is no apparent reason why the examiner thought a mistake was made |
37 | Around miscue | The student reads a word incorrectly, and the examiner marks one or more words around the original miscue as incorrect. This can be in place of the actual miscue or in addition to the actual miscue |
Cite this article
Reed, D.K., Cummings, K.D., Schaper, A. et al. Accuracy in identifying students’ miscues during oral reading: a taxonomy of scorers’ mismarkings. Read Writ 32, 1009–1035 (2019). https://doi.org/10.1007/s11145-018-9899-5