Abstract
This chapter presents previous and near-term applications and innovations in the assessment of rate-based measures such as fluency. Historical and future developments are discussed within the context of idiographic behavioral and nomothetic psychometric paradigms of assessment. These paradigms are described and contrasted through discussions of classical test theory (CTT), generalizability theory (GT), and item response theory (IRT). The interpretation and use argument (IUA) frames the contemporary view of unified validity. These theoretical models are combined with an applied perspective to contextualize and encourage future developments in the measurement of fluency.
Copyright information
© 2016 Springer Science+Business Media, LLC
Cite this chapter
Christ, T., Van Norman, E., Nelson, P. (2016). Foundations of Fluency-Based Assessments in Behavioral and Psychometric Paradigms. In: Cummings, K., Petscher, Y. (eds) The Fluency Construct. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-2803-3_6
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4939-2802-6
Online ISBN: 978-1-4939-2803-3
eBook Packages: Behavioral Science and Psychology (R0)