
Comparison of Two Item Preknowledge Detection Approaches Using Response Time

  • Chunyan Liu
Conference paper
Part of the Springer Proceedings in Mathematics & Statistics book series (PROMS, volume 265)

Abstract

Response time (RT) has been demonstrated to be effective in identifying compromised items and test takers with item preknowledge. This study compared the performance of the effective response time (ERT) approach and the residual approach based on the lognormal response time model (RES) in detecting examinees with item preknowledge from item response times in a linear test. Three factors were manipulated: the percentage of examinees with item preknowledge, the percentage of breached items, and the percentage decrease in response time on the breached items. The results suggest that the RES approach not only keeps the Type I error rate below 0.05 under all investigated conditions but also flags examinees with item preknowledge with high sensitivity.
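The RES approach builds on van der Linden's (2006) lognormal response time model, under which the log response time of examinee i on item j is normally distributed with mean beta_j − tau_i and standard deviation 1/alpha_j; an examinee with preknowledge of breached items produces unusually large negative standardized residuals on those items. The sketch below is a minimal Python illustration of this idea, assuming the model parameters are already known (in practice they would be estimated, for example with the LNIRT package cited in reference 1); the parameter names, the residual aggregation rule, and the 0.05 flagging threshold are illustrative assumptions, not the paper's exact procedure.

import numpy as np
from scipy import stats

# Hypothetical RES-style flagging under the lognormal response time model:
#   log T_ij ~ Normal(beta_j - tau_i, 1 / alpha_j**2)
# Parameter values, the set of breached items, and the flagging rule below
# are illustrative assumptions only.

rng = np.random.default_rng(0)

n_persons, n_items = 200, 40
beta = rng.normal(4.0, 0.4, n_items)      # item time-intensity parameters
alpha = rng.uniform(1.5, 2.5, n_items)    # item time-precision parameters
tau = rng.normal(0.0, 0.3, n_persons)     # examinee speed parameters

# Simulate log response times; a small group with preknowledge responds
# faster on a set of breached items (subtracting 0.5 on the log scale is
# roughly a 40% reduction in response time).
log_t = rng.normal(beta[None, :] - tau[:, None], 1.0 / alpha[None, :])
breached = np.arange(0, 8)                # assumed compromised items
cheaters = np.arange(0, 20)               # assumed examinees with preknowledge
log_t[np.ix_(cheaters, breached)] -= 0.5

# Standardized residuals under the (assumed known) model parameters; these
# are approximately standard normal for examinees without preknowledge.
resid = alpha[None, :] * (log_t - (beta[None, :] - tau[:, None]))

# Average the residuals over the breached items; the mean of k standard
# normal residuals has standard deviation 1/sqrt(k), so a one-sided z test
# flags examinees who are unexpectedly fast on those items.
k = len(breached)
z = resid[:, breached].mean(axis=1) * np.sqrt(k)
p_values = stats.norm.cdf(z)              # small p => suspiciously fast
flagged = np.where(p_values < 0.05)[0]
print(f"Flagged {len(flagged)} examinees; first few: {flagged[:10]}")

Under this simple rule, honest examinees are flagged at roughly the nominal 5% rate, while examinees who respond systematically faster on the breached items receive large negative z statistics and small p-values.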

Keywords

Response time · Compromised items · Item preknowledge

References

  1. Fox, J.-P., Klein Entink, R. H., & Klotzke, K. (2017). LNIRT: LogNormal response time item response theory models. R package version 0.2.0. Retrieved from http://CRAN.R-project.org/package=LNIRT
  2. Meijer, R. R., & Sotaridona, L. S. (2006). Detection of advance item knowledge using response times in computer adaptive testing (LSAC Computerized Testing Report No. 03-03). Newtown, PA: Law School Admission Council.
  3. Qian, H., Staniewska, D., Reckase, M., & Woo, A. (2016). Using response time to detect item preknowledge in computer-based licensure examinations. Educational Measurement: Issues and Practice, 35, 38–47.
  4. Shao, C., Li, J., & Cheng, Y. (2016). Detection of test speededness using change-point analysis. Psychometrika, 81, 1118–1141.
  5. van der Linden, W. J., & van Krimpen-Stoop, E. M. L. A. (2003). Using response times to detect aberrant responses in computerized adaptive testing. Psychometrika, 68, 251–265.
  6. van der Linden, W. J. (2006). A lognormal model for response times on test items. Journal of Educational and Behavioral Statistics, 31, 181–204.
  7. van der Linden, W. J., & Guo, F. (2008). Bayesian procedures for identifying aberrant response-time patterns in adaptive testing. Psychometrika, 73, 365–384.
  8. Wise, S. L. (2006). An investigation of the differential effort received by items on a low-stakes, computer-based test. Applied Measurement in Education, 19, 93–112.
  9. Wise, S. L., & Kong, X. (2005). Response time effort: A new measure of examinee motivation in computer-based tests. Applied Measurement in Education, 18, 163–183.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. National Board of Medical Examiners®, Philadelphia, USA
