Using Response Time and Accuracy Data to Inform the Measurement of Fluency

  • John J. Prindle
  • Alison M. Mitchell
  • Yaacov Petscher
Chapter

Abstract

Reading fluency reflects children's ability to read presented passages accurately and with comprehension, and tasks of this kind inherently involve components related to both ability and response latency. Children with higher fluency will, in theory, show higher ability and lower response latencies. Traditional methods for analyzing performance have focused on the ability to respond correctly, ignoring response latency information, while theoretical models of response latency have related item difficulty to response time, ignoring response correctness. More recent work by van der Linden (2007) proposed a joint framework for responses and response latencies, with simultaneous estimation of ability and speed parameters. We provide an overview of traditional ability-modeling approaches and present evidence in favor of including response latency in the estimation of ability. An applied example from reading fluency illustrates the combined response and response-latency model and shows how to interpret its results relative to traditional response-only models. Our findings show that more accurate parameter estimates are obtained when response latency is modeled than when responses alone are modeled. Researchers and educators are encouraged to gather data efficiently and to embrace modern modeling methods that more closely reflect theoretical frameworks.
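To make the joint framework concrete: van der Linden's (2007) hierarchical model pairs a conventional IRT model for response accuracy with a lognormal model for response times. The response-time component gives the density of time $t_{ij}$ for person $j$ on item $i$ as

    f(t_{ij};\, \tau_j, \alpha_i, \beta_i) = \frac{\alpha_i}{t_{ij}\sqrt{2\pi}} \exp\left\{ -\frac{1}{2}\left[\alpha_i\left(\ln t_{ij} - (\beta_i - \tau_j)\right)\right]^2 \right\},

where $\tau_j$ is person speed, $\beta_i$ is item time intensity, and $\alpha_i$ is item time discrimination; a second, person-level layer links ability $\theta_j$ and speed $\tau_j$ through their covariance. The short R sketch below is our own illustration of this structure (it is not code from the chapter, nor from the cirt package of Fox et al., 2007): it simulates accuracy under a 2PL model and times under the lognormal model with correlated ability and speed, then recovers speed with a simple moment estimate.

    # Minimal simulation sketch of van der Linden's (2007) joint structure.
    # All parameter values are illustrative assumptions, not the chapter's data.
    library(MASS)                       # for mvrnorm()
    set.seed(1)
    n_person <- 500; n_item <- 20

    # Person layer: ability (theta) and speed (tau) with an assumed .4 correlation
    person <- mvrnorm(n_person, mu = c(0, 0),
                      Sigma = matrix(c(1, 0.4, 0.4, 1), 2))
    theta <- person[, 1]; tau <- person[, 2]

    # Item parameters: discrimination/difficulty for accuracy;
    # time discrimination (alpha) and time intensity (beta) for latency
    a <- runif(n_item, 0.8, 2.0); b <- rnorm(n_item)
    alpha <- runif(n_item, 1.5, 2.5); beta <- rnorm(n_item, mean = 1)

    # Accuracy: 2PL response model, P(correct) = logistic(a_i * (theta_j - b_i))
    p <- plogis(sweep(outer(theta, b, "-"), 2, a, "*"))
    u <- matrix(rbinom(n_person * n_item, 1, p), n_person, n_item)

    # Latency: log T_ij ~ Normal(beta_i - tau_j, 1 / alpha_i^2)
    eps  <- matrix(rnorm(n_person * n_item), n_person, n_item)
    logt <- outer(-tau, beta, "+") + sweep(eps, 2, alpha, "/")

    # Crude speed recovery: average beta_i - log t_ij across items;
    # faster persons (larger tau) produce systematically shorter times
    tau_hat <- rowMeans(sweep(-logt, 2, beta, "+"))
    cor(tau, tau_hat)                   # high correlation: speed is recoverable

In a full analysis, the accuracy matrix u and the time matrix logt would be modeled jointly (e.g., with the cirt package cited below), so that each source of information sharpens the estimates drawn from the other.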

Keywords

Item response theory · Speeded assessment · Conditional item response theory · Measurement · Psychometrics · Reliability

References

  1. Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107, 238–246.
  2. Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88, 588–606.
  3. Blackwell, C. K., Lauricella, A. R., Wartella, E., Robb, M., & Schomburg, R. (2013). Adoption and use of technology in early education: The interplay of extrinsic barriers and teacher attitudes. Computers & Education, 69, 310–319. doi:10.1016/j.compedu.2013.07.024.
  4. Browne, M. W., & Cudeck, R. (1992). Alternative ways of assessing model fit. Sociological Methods and Research, 21, 230–258.
  5. Casella, G., & Berger, R. L. (2002). Statistical inference (2nd ed.). Pacific Grove, CA: Duxbury.
  6. Cattell, R. B. (1948). Concepts and methods in the measurement of group syntality. Psychological Review, 55, 48–63. doi:10.1037/h0055921.
  7. Chard, D. J., Vaughn, S., & Tyler, B. (2002). A synthesis of research on effective interventions for building reading fluency with elementary students with learning disabilities. Journal of Learning Disabilities, 35(5), 386–406. http://search.proquest.com/docview/619935634?accountid=4840.
  8. Christ, T. J., & Silberglitt, B. (2007). Estimates of the standard error of measurement for curriculum-based measures of oral reading fluency. School Psychology Review, 36, 130–146.
  9. Cummings, K. D., Atkins, T., Allison, R., & Cole, C. (2008). Response to intervention. Teaching Exceptional Children, 40, 24–31.
  10. Cummings, K. D., Park, Y., & Schaper, H. A. B. (2012). Form effects on DIBELS Next oral reading fluency progress-monitoring passages. Assessment for Effective Intervention, 38, 91–104.
  11. Cunningham, A. E., & Stanovich, K. E. (1997). Early reading acquisition and its relation to reading experience and ability 10 years later. Developmental Psychology, 33(6), 934–945.
  12. de Ayala, R. J. (2009). The theory and practice of item response theory. New York: Guilford.
  13. Deno, S. L. (2003). Developments in curriculum-based measurement. The Journal of Special Education, 37(3), 184–192.
  14. Divgi, D. R. (1980). Dimensionality of binary items: Use of a mixed model. Paper presented at the annual meeting of the National Council on Measurement in Education, Boston.
  15. Dunn, T. J., Baguley, T., & Brunsden, V. (2014). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology, 105(3), 399–412.
  16. Educational Testing Service. (2007). Test and score data summary for TOEFL internet-based test. Princeton: Author.
  17. Embretson, S. E., & Reise, S. (2000). Item response theory for psychologists. Mahwah: Erlbaum.
  18. Ferrando, P., & Lorenzo-Seva, U. (2007). An item response theory model for incorporating response time data in binary personality items. Applied Psychological Measurement, 31, 525–543. doi:10.1177/0146621606295197.
  19. Foorman, B. R., Petscher, Y., & Bishop, M. D. (2012). The incremental variance of morphological knowledge to reading comprehension in grades 3–10 beyond prior reading comprehension, spelling, and text reading efficiency. Learning and Individual Differences, 22, 792–798. doi:10.1016/j.lindif.2012.07.009.
  20. Fox, J.-P., Klein Entink, R. H., & van der Linden, W. J. (2007). Modeling of responses and response times with the package cirt. Journal of Statistical Software, 20, 1–14.
  21. Francis, D. J., Santi, K. S., Barr, C., Fletcher, J. M., Varisco, A., & Foorman, B. R. (2008). Form effects on the estimation of students' oral reading fluency using DIBELS. Journal of School Psychology, 46, 315–342. doi:10.1016/j.jsp.2007.06.003.
  22. Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). The effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement, and student awareness of learning. American Educational Research Journal, 21(2), 449–460.
  23. Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256. doi:10.1207/S1532799XSSR0503_3.
  24. Goodglass, H., Theurkauf, J. C., & Wingfield, A. (1984). Naming latencies as evidence for two modes of lexical retrieval. Applied Psycholinguistics, 5, 135–146.
  25. Gray, L., Thomas, N., & Lewis, L. (2010). Teachers' use of educational technology in U.S. public schools: 2009 (NCES 2010-040). Retrieved from the U.S. Department of Education, National Center for Education Statistics, Institute of Education Sciences. http://nces.ed.gov/pubs2010/2010040.pdf.
  26. Jang, E. E., & Roussos, L. (2007). An investigation into the dimensionality of TOEFL using conditional covariance-based nonparametric approach. Journal of Educational Measurement, 44, 1–21.
  27. Kamil, M. L. (2004). Vocabulary and comprehension instruction: Summary and implications of the National Reading Panel findings. In P. McCardle & V. Chhabra (Eds.), The voice of evidence in reading research (pp. 213–234). Baltimore: Paul H. Brookes.
  28. Klein Entink, R. H., Kuhn, J.-T., Hornke, L. F., & Fox, J.-P. (2009). Evaluating cognitive theory: A joint modeling approach using responses and response times. Psychological Methods, 14, 54–75. doi:10.1037/a0014877.
  29. Lord, F. M. (1980). Applications of item response theory to practical testing problems. New York: Erlbaum Associates.
  30. McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah: Lawrence Erlbaum Associates.
  31. Mercer, S. H., Dufrene, B. A., Zoder-Martell, K., Harpole, L. L., Mitchell, R. R., & Blaze, J. T. (2012). Generalizability theory analysis of CBM maze reliability in third- through fifth-grade students. Assessment for Effective Intervention, 37, 183–190. doi:10.1177/1534508411430319.
  32. Miranda, H., & Russell, M. (2011). Predictors of teacher-directed student use of technology in elementary classrooms: A multilevel SEM approach using data from the USEIT study. Journal of Research on Technology in Education, 43, 301–323.
  33. Muthén, L. K., & Muthén, B. O. (1998–2012). Mplus user's guide (7th ed.). Los Angeles: Muthén & Muthén.
  34. National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769).
  35. Orlando, M., & Thissen, D. (2000). Likelihood-based item-fit indices for dichotomous item response theory models. Applied Psychological Measurement, 24(1), 50–64.
  36. Perfetti, C. A., & Hogaboam, T. (1975). Relationship between single word decoding and reading comprehension skill. Journal of Educational Psychology, 67, 461–469.
  37. Petscher, Y., & Kim, Y. S. (2011). The utility and accuracy of oral reading fluency score types in predicting reading comprehension. Journal of School Psychology, 49, 107–129. doi:10.1016/j.jsp.2010.09.004.
  38. Petscher, Y., Mitchell, A. M., & Foorman, B. R. (2015). Improving the reliability of student scores from speeded assessments: An illustration of conditional item response theory using a computer-administered measure of vocabulary. Reading and Writing, 1–26.
  39. Poncy, B. C., Skinner, C. H., & Axtell, P. K. (2005). An investigation of the reliability and standard error of measurement of words read correctly per minute using curriculum-based measurement. Journal of Psychoeducational Assessment, 23, 326–338. doi:10.1177/073428290502300403.
  40. Pressey, B. (2013). Comparative analysis of national teacher surveys. http://www.joanganzcooneycenter.org/wp-content/uploads/2013/10/jgcc_teacher_survey_analysis_final.pdf/.
  41. Prindle, J. J. (2012). A functional use of response time data in cognitive assessment (Doctoral dissertation). Retrieved from USC Digital Library.
  42. R Core Team. (2014). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. https://www.R-project.org/.
  43. SAS Institute Inc. (2011). Base SAS® 9.3 procedures guide. Cary: SAS Institute.
  44. Scarborough, H. S. (2001). Connecting early language and literacy to later reading (dis)abilities: Evidence, theory, and practice. In S. Neumann & D. Dickinson (Eds.), Handbook for research in early literacy (pp. 97–110). New York: Guilford.
  45. Scheiblechner, H. (1985). Psychometric models for speed-test construction: The linear exponential model. In S. E. Embretson (Ed.), Test design: Developments in psychology and psychometrics (pp. 219–244). New York: Academic Press.
  46. Schnipke, D. L., & Scrams, D. J. (2002). Exploring issues of examinee behavior: Insights gained from response-time analyses. In C. N. Mills, M. T. Potenza, J. J. Fremer, & W. C. Ward (Eds.), Computer-based testing: Building the foundation for future assessments. Mahwah: Lawrence Erlbaum Associates.
  47. Sireci, S. G., Thissen, D., & Wainer, H. (1991). On the reliability of testlet-based tests. Journal of Educational Measurement, 28, 237–247. doi:10.1111/j.1745-3984.1991.tb00356.x.
  48. Sternberg, S. (1969). Memory-scanning: Mental processes revealed by reaction-time experiments. American Scientist, 57(4), 421–457.
  49. Stout, W. F. (1987). A nonparametric approach for assessing latent trait dimensionality. Psychometrika, 52, 589–617.
  50. Tate, R. (2003). A comparison of selected empirical methods for assessing the structure of responses to test items. Applied Psychological Measurement, 27, 159–203.
  51. van der Linden, W. J. (2007). A hierarchical framework for modeling speed and accuracy on test items. Psychometrika, 72, 287–308. doi:10.1007/s11336-006-1478-z.
  52. van der Linden, W. J. (2011). Modeling response times with latent variables: Principles and applications. Psychological Test and Assessment Modeling, 53, 334–358.
  53. van der Linden, W. J., & van Krimpen-Stoop, E. M. L. A. (2003). Using response times to detect aberrant responses in computerized adaptive testing. Psychometrika, 68, 251–265.
  54. Wainer, H., Bradlow, E. T., & Wang, X. (2007). Testlet response theory and its applications. New York: Cambridge University Press.
  55. Zeno, S. M., Ivens, S. H., Millard, R. T., & Duvvuri, R. (1995). The educator's word frequency guide. New York: Touchstone Applied Science Associates.
  56. Zhang, J., & Stout, W. (1999). The theoretical DETECT index of dimensionality and its application to approximate simple structure. Psychometrika, 64, 213–249.

Copyright information

© Springer Science+Business Media, LLC 2016

Authors and Affiliations

  • John J. Prindle (1)
  • Alison M. Mitchell (2)
  • Yaacov Petscher (3)
  1. Max Planck Institute, Berlin, Germany
  2. Lexia Learning, Concord, USA
  3. Florida Center for Reading Research, Florida State University, Tallahassee, USA