Using the discontinuation rule to reduce the effect of random guessing on parameter estimation in the item response theory

  • Tianshu Pan
  • Youngmi Cho
Original Paper

Abstract

The discontinuation rule is often used to reduce the effect of random guessing in psychological tests, and it may play a similar role in item response theory. In this article, a Monte Carlo study was conducted to explore the feasibility of four- and six-consecutive-zero discontinuation rules for reducing the effect of random guessing on parameter estimation in the Rasch model. The results showed that random guessing inflated estimation errors. The discontinuation rules reduced this effect on item-parameter estimation under both the joint and marginal maximum likelihood methods, but reduced it for person-parameter estimation only under the marginal maximum likelihood and expected a posteriori methods.
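The design described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not the authors' actual simulation code): it generates dichotomous Rasch responses with an assumed random-guessing mixture, then applies a k-consecutive-zero discontinuation rule that treats all items after the stopping point as not administered. The function names, the guessing probability of 0.25, and the sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rasch(theta, b, guess_prob=0.0, rng=rng):
    """Simulate 0/1 responses under the Rasch model, optionally mixing in
    random guessing (hypothetical setup, not the paper's exact design)."""
    # Rasch success probability: logistic of (ability - difficulty)
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    # With guessing, an examinee who would otherwise fail an item
    # still succeeds with probability guess_prob.
    p = p + guess_prob * (1.0 - p)
    return (rng.random(p.shape) < p).astype(int)

def apply_discontinuation(responses, k=4):
    """Apply a k-consecutive-zero discontinuation rule: once a run of k
    incorrect responses occurs, code all later items as -1
    (not administered)."""
    out = responses.copy()
    for row in out:
        run = 0
        for j, x in enumerate(row):
            run = run + 1 if x == 0 else 0
            if run >= k:
                row[j + 1:] = -1  # items beyond the stopping point
                break
    return out

theta = rng.normal(size=500)        # person abilities
b = np.sort(rng.normal(size=40))    # item difficulties, ordered easy -> hard
resp = simulate_rasch(theta, b, guess_prob=0.25)
trimmed = apply_discontinuation(resp, k=4)
```

Ordering the items from easy to hard matters here: the discontinuation rule assumes that a run of failures signals that remaining, harder items would also be failed, so any correct responses beyond the stopping point are most plausibly guesses.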

Keywords

Discontinuation rules · Maximum likelihood · The Rasch model

Notes

Compliance with ethical standards

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.


Copyright information

© The Behaviormetric Society 2019

Authors and Affiliations

  1. NCS Pearson, Inc., San Antonio, USA
  2. American Institute for Research, Washington, USA
