
Effects of Discontinue Rules on Psychometric Properties of Test Scores

  • Matthias von Davier
  • Youngmi Cho
  • Tianshu Pan

Abstract

This paper provides results on a form of adaptive testing that is used frequently in intelligence testing. In these tests, items are presented in order of increasing difficulty. The presentation of items is adaptive in the sense that a session is discontinued once a test taker produces a certain number of incorrect responses in sequence; the subsequent (not observed) responses are commonly scored as wrong. The Stanford-Binet Intelligence Scales (SB5; Riverside Publishing Company, 2003), the Kaufman Assessment Battery for Children (KABC-II; Kaufman and Kaufman, 2004), the Kaufman Adolescent and Adult Intelligence Test (Kaufman and Kaufman, 2014), and the Universal Nonverbal Intelligence Test, 2nd ed. (Bracken and McCallum, 2015) are some of the many examples using this rule. He and Wolfe (Educ Psychol Meas 72(5):808–826, 2012, https://doi.org/10.1177/0013164412441937) compared different ability estimation methods in a simulation study for this discontinue-rule adaptation of test length. However, there has been, to our knowledge, no study based on analytic arguments drawing on probability theory of the underlying distributional properties of what these authors call stochastic censoring of responses. The results obtained by He and Wolfe (2012) agree with those presented by DeAyala et al. (J Educ Meas 38:213–234, 2001), Rose et al. (Modeling non-ignorable missing data with item response theory, ETS RR-10-11, Educational Testing Service, Princeton, 2010), and Rose et al. (Psychometrika 82:795–819, 2017, https://doi.org/10.1007/s11336-016-9544-7) in that ability estimates are most biased when the not-observed responses are scored as wrong. Because this scoring is used operationally, more research is needed to improve practice in this field. The paper extends existing research on adaptivity through discontinue rules in intelligence tests in two ways: first, an analytical study of the distributional properties of discontinue-rule-scored items is presented; second, a simulation is presented that includes additional scoring rules and uses ability estimators that may be suitable for reducing bias in discontinue-rule-scored intelligence tests.
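
To make the mechanism concrete, the following sketch simulates a single test session under a discontinue rule. It is illustrative only and not taken from the paper: it assumes a Rasch response model, a discontinue criterion of four consecutive incorrect responses, and the operational score-as-wrong treatment of all items beyond the discontinue point; the function name administer_with_discontinue and the parameter stop_after are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=12345)

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def administer_with_discontinue(theta, difficulties, stop_after=4):
    """Present items in order of increasing difficulty and stop the session
    once `stop_after` consecutive incorrect responses occur.

    Returns the full response vector, with items after the discontinue point
    left scored as wrong (0), and the number of items actually administered.
    """
    difficulties = np.sort(difficulties)            # increasing difficulty
    responses = np.zeros(len(difficulties), dtype=int)
    consecutive_wrong = 0
    administered = 0
    for i, b in enumerate(difficulties):
        x = int(rng.random() < rasch_prob(theta, b))
        responses[i] = x
        administered += 1
        consecutive_wrong = 0 if x == 1 else consecutive_wrong + 1
        if consecutive_wrong >= stop_after:
            break                                   # remaining items stay 0
    return responses, administered

# Example: one simulated test taker on 30 items of increasing difficulty
item_difficulties = np.linspace(-2.5, 2.5, 30)
resp, n_admin = administer_with_discontinue(theta=0.0, difficulties=item_difficulties)
print(f"items administered: {n_admin}, raw score (score-as-wrong): {resp.sum()}")
```

In such a simulation, test takers who trigger the criterion early receive zeros for every remaining item, which is the scoring that He and Wolfe (2012), DeAyala et al. (2001), and Rose et al. (2010, 2017) found to produce the largest bias in ability estimates.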

Keywords

discontinue rule, ignorability, bias, local dependency, DIF

References

  1. Bolt, D. M., Cohen, A. S., & Wollack, J. A. (2002). Item parameter estimation under conditions of test speededness: Application of a mixture Rasch model with ordinal constraints. Journal of Educational Measurement, 39, 331–348.
  2. Bracken, B. A., & McCallum, R. S. (2015). Universal nonverbal intelligence test (2nd ed.). Itasca, IL: Riverside Publishers.
  3. Chen, H., Yamamoto, K., & von Davier, M. (2014). Controlling multistage testing exposure rates in international large-scale assessments. In D. L. Yan, A. A. von Davier, & C. Lewis (Eds.), Computerized multistage testing: Theory and applications. New York: CRC Press.
  4. DeAyala, R. J., Plake, B. S., & Impara, J. C. (2001). The impact of omitted responses on the accuracy of ability estimation in item response theory. Journal of Educational Measurement, 38, 213–234.
  5. Firth, D. (1993). Bias reduction of maximum likelihood estimates. Biometrika, 80(1), 27–38.
  6. Glas, C. A. W. (2010). Item parameter estimation and item fit analysis. In W. J. van der Linden & C. A. W. Glas (Eds.), Elements of adaptive testing (pp. 269–288). New York: Springer.
  7. He, W., & Wolfe, E. W. (2012). Treatment of not-administered items on individually administered intelligence tests. Educational and Psychological Measurement, 72(5), 808–826. https://doi.org/10.1177/0013164412441937.
  8. Holland, P. W., & Rosenbaum, P. R. (1986). Conditional association and unidimensionality in monotone latent variable models. The Annals of Statistics, 14(4), 1523–1543.
  9. Holland, P. W., & Thayer, D. T. (1986). Differential item functioning and the Mantel–Haenszel procedure. ETS Research Report Series. https://doi.org/10.1002/j.2330-8516.1986.tb00186.x.
  10. Homack, S. R., & Reynolds, C. R. (2007). Essentials of assessment with brief intelligence tests. Hoboken, NJ: Wiley. ISBN: 978-0-471-26412-5.
  11. Kaufman, A. S., & Kaufman, N. L. (2004). Manual: Kaufman assessment battery for children (2nd ed.). Circle Pines, MN: AGS Publishing.
  12. Kaufman, A. S., & Kaufman, N. L. (2014). Kaufman adolescent and adult intelligence test. Encyclopedia of Special Education. https://doi.org/10.1002/9781118660584.ese1323.
  13. Little, R. J. A. (1988). Missing-data adjustments in large surveys. Journal of Business and Economic Statistics, 6, 287–296.
  14. Little, R. J. A., & Rubin, D. B. (2002). Statistical analysis with missing data (2nd ed.). Hoboken, NJ: Wiley.
  15. Little, R. J., & Zhang, N. (2011). Subsample ignorable likelihood for regression analysis with missing data. Journal of the Royal Statistical Society: Series C (Applied Statistics), 60(4), 591–605. https://doi.org/10.1111/j.1467-9876.2011.00763.x.
  16. Little, R. J., Rubin, D. B., & Zangeneh, S. Z. (2017). Conditions for ignoring the missing-data mechanism in likelihood inferences for parameter subsets. Journal of the American Statistical Association, 112(517), 314–320.
  17. Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum.
  18. Mantel, N., & Haenszel, W. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22, 719–748.
  19. Mislevy, R. J., & Wu, P.-K. (1996). Missing responses and IRT ability estimation: Omits, choice, time limits, and adaptive testing. ETS Research Report Series, 1996, i–36. https://doi.org/10.1002/j.2333-8504.1996.tb01708.x.
  20. Morris, T. P., White, I. R., & Royston, P. (2014). Tuning multiple imputation by predictive mean matching and local residual draws. BMC Medical Research Methodology, 14, 75–87.
  21. Riverside Publishing Company. (2003). Stanford-Binet intelligence scales (SB5) (5th ed.). Itasca, IL.
  22. Rose, N., von Davier, M., & Xu, X. (2010). Modeling non-ignorable missing data with item response theory (IRT; ETS RR-10-11). Princeton, NJ: Educational Testing Service.
  23. Rose, N., von Davier, M., & Nagengast, B. (2017). Modeling omitted and not-reached items in IRT models. Psychometrika, 82, 795–819. https://doi.org/10.1007/s11336-016-9544-7.
  24. Reichenbach, H. (1956). The direction of time. Berkeley, LA: University of California Press.
  25. Rubin, D. B. (1976). Inference and missing data. Biometrika, 63, 581–592.
  26. Rubin, D. B. (1986). Statistical matching using file concatenation with adjusted weights and multiple imputations. Journal of Business and Economic Statistics, 4, 87–94.
  27. Suppes, P. (1970). A probabilistic theory of causality. Amsterdam: North-Holland Publishing Company.
  28. Suppes, P., & Zanotti, M. (1981). When are probabilistic explanations possible? Synthese, 48, 191–199.
  29. van der Linden, W. (Ed.). (2016). Handbook of item response theory (Vol. 1, 2nd ed.). Boca Raton: CRC Press.
  30. von Davier, M. (2005). A general diagnostic model applied to language testing data (Research Report RR-05-16). Princeton, NJ: ETS.
  31. von Davier, M. (2016a). The Rasch model. In W. van der Linden (Ed.), Handbook of item response theory (2nd ed., Vol. 1, pp. 31–48). Boca Raton: CRC Press. https://doi.org/10.1201/9781315374512-4.
  32. von Davier, M. (2016b). CTT and No-DIF and ? = (almost) Rasch model. In M. Rosen, K. Y. Hansen, & U. Wolff (Eds.), Cognitive abilities and educational outcomes: A festschrift in honour of Jan-Eric Gustafsson (pp. 249–272). Springer book series Methodology of Educational Measurement and Assessment.
  33. von Davier, M., & Rost, J. (1995). Polytomous mixed Rasch models. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Foundations, recent developments, and applications (pp. 371–379). New York: Springer.
  34. Verhelst, N. D., & Glas, C. A. W. (1995). The one parameter logistic model. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models. New York, NY: Springer. https://doi.org/10.1007/978-1-4612-4230-7_12.
  35. Warm, T. (1989). Weighted likelihood estimation of ability in item response theory. Psychometrika, 54(3), 427–450.
  36. Yamamoto, K., & Everson, H. (1997). Modeling the effects of test length and test time on parameter estimation using the HYBRID model. In J. Rost & R. Langeheine (Eds.), Applications of latent trait and latent class models in the social sciences (pp. 89–98). New York: Waxmann.

Copyright information

© The Psychometric Society 2019

Authors and Affiliations

  1. National Board of Medical Examiners, Philadelphia, USA
  2. American Institutes for Research, Washington, D.C., USA
  3. Pearson, San Antonio, USA
