
Cognitive diagnosis models for estimation of misconceptions analyzing multiple-choice data

  • Koken Ozaki
  • Shingo Sugawara
  • Noriko Arai
Original Paper

Abstract

Incorrect options for multiple-choice questions are often included intentionally so that they will be selected by examinees who hold particular misconceptions. Determining whether an examinee possesses a misconception is therefore useful for educational purposes. In the present paper, two statistical models are developed that estimate examinees’ possession of misconceptions by analyzing multiple-choice data, which are unscored data. The Bug-DINO model can also estimate examinees’ possession of misconceptions, but only after the multiple-choice data are converted to binary (scored) data (\(1=\) correct, \(0=\) incorrect). This conversion loses information, because which incorrect option an examinee chooses is itself informative about the examinee’s knowledge state. The three models (the two developed models and the Bug-DINO model) are compared in a simulation study, and the developed models are applied to Reading Skill Test data.
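For context, the DINO model (Templin and Henson 2006), on which the Bug-DINO model builds, specifies for examinee \(i\) and item \(j\)

\[
\omega_{ij} = 1 - \prod_{k=1}^{K} (1 - \alpha_{ik})^{q_{jk}}, \qquad
P(X_{ij} = 1 \mid \omega_{ij}) = (1 - s_j)^{\omega_{ij}}\, g_j^{1 - \omega_{ij}},
\]

where \(\alpha_{ik}\) indicates whether examinee \(i\) possesses attribute \(k\) (here interpreted as a misconception), \(q_{jk}\) is the Q-matrix entry for item \(j\) and attribute \(k\), and \(s_j\) and \(g_j\) are the slip and guessing parameters; the exact Bug-DINO parameterization is given in Kuo et al. (2018).

The following minimal Python sketch (not the authors’ code; the items, answer key, and distractor-to-misconception map are all hypothetical) illustrates the information loss the abstract describes: two examinees whose option choices point to different misconceptions become indistinguishable once the responses are scored.

    # Minimal sketch: dichotomizing multiple-choice responses discards the
    # identity of the chosen distractor. Items, answer key, and the mapping
    # from distractors to misconceptions ("bugs") are hypothetical.

    key = {"item_1": "A", "item_2": "B"}  # correct option per item

    # Hypothetical design: each distractor targets a specific misconception.
    distractor_bugs = {
        "item_1": {"B": "bug_1", "C": "bug_2"},
        "item_2": {"A": "bug_1", "C": "bug_3"},
    }

    responses = {
        "examinee_1": {"item_1": "B", "item_2": "B"},  # distractor B: bug_1
        "examinee_2": {"item_1": "C", "item_2": "B"},  # distractor C: bug_2
    }

    def dichotomize(resp):
        """Score responses as 1 = correct, 0 = incorrect."""
        return {item: int(choice == key[item]) for item, choice in resp.items()}

    for person, resp in responses.items():
        binary = dichotomize(resp)
        bugs = [distractor_bugs[item][choice]
                for item, choice in resp.items() if choice != key[item]]
        print(person, binary, "suggested misconceptions:", bugs)

    # Both examinees share the scored pattern {'item_1': 0, 'item_2': 1},
    # so a model fit to the binary data alone cannot separate bug_1 from
    # bug_2 on item_1; the unscored responses preserve that distinction.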

Keywords

Multiple-choice item · Cognitive diagnosis model · Misconception · DINO model

Notes

Acknowledgements

This research was funded by Grant-in-Aid for Scientific Research (C) 18K03057.

Supplementary material

Supplementary material 1: 41237_2019_100_MOESM1_ESM.pdf (PDF 200 KB)

References

  1. Arai NH, Todo N, Arai T, Bunji K, Sugawara S, Inuzuka M, Matsuzaki T, Ozaki K (2017) Reading skill test to diagnose basic language skills in comparison to machines. In: Proceedings of the 39th annual cognitive science society meeting (CogSci 2017), pp 1556–1561
  2. Chen J (2017) A residual-based approach to validate Q-matrix specifications. Appl Psychol Meas 41(4):277–293
  3. de la Torre J, Douglas J (2004) Higher-order latent trait models for cognitive diagnosis. Psychometrika 69(3):333–353
  4. de la Torre J (2009) A cognitive diagnosis model for cognitively based multiple-choice options. Appl Psychol Meas 33(3):163–183
  5. de la Torre J (2011) The generalized DINA model framework. Psychometrika 76(2):179–199
  6. de la Torre J, Chiu C-Y (2016) A general method of empirical Q-matrix validation. Psychometrika 81(2):253–273
  7. DiBello LV, Henson RA, Stout WF (2015) A family of generalized diagnostic classification models for multiple choice option-based scoring. Appl Psychol Meas 39(1):62–79
  8. Gelman A, Rubin DB (1992) Inference from iterative simulation using multiple sequences. Stat Sci 7(4):457–472
  9. Hartz S (2002) A Bayesian framework for the unified model for assessing cognitive abilities: blending theory with practicality (Doctoral dissertation). University of Illinois, Urbana-Champaign
  10. Hastings WK (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57(1):97–109
  11. Im S, Corter JE (2011) Statistical consequences of attribute misspecification in the rule space method. Educ Psychol Meas 71(4):712–731
  12. Junker BW, Sijtsma K (2001) Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Appl Psychol Meas 25(3):258–272
  13. Köhn HF, Chiu C-Y (2017) A procedure for assessing the completeness of the Q-matrices of cognitively diagnostic tests. Psychometrika 82(1):112–132
  14. Kuo B-C, Chen C-H, Yang C-W, Mok MMC (2016) Cognitive diagnostic models for tests with multiple-choice and constructed-response items. Educ Psychol 36(6):1115–1133
  15. Kuo B-C, Chen C-H, de la Torre J (2018) A cognitive diagnosis model for identifying coexisting skills and misconceptions. Appl Psychol Meas 42(3):179–191
  16. Maris E (1999) Estimating multiple classification latent class models. Psychometrika 64(2):187–212
  17. Minchen ND, de la Torre J, Liu Y (2017) A cognitive diagnosis model for continuous response. J Educ Behav Stat 42(6):651–677
  18. Ozaki K (2015) DINA models for multiple-choice items with few parameters: considering incorrect answers. Appl Psychol Meas 39(6):431–447
  19. Richards JC, Schmidt R (2002) Dictionary of language teaching and applied linguistics, 3rd edn. Longman, London
  20. Rupp AA, Templin JL (2008) The effects of Q-matrix misspecification on parameter estimates and classification accuracy in the DINA model. Educ Psychol Meas 68(1):78–96
  21. Templin J, Henson R (2006) Measurement of psychological disorders using cognitive diagnosis models. Psychol Methods 11(3):287–305

Copyright information

© The Behaviormetric Society 2019

Authors and Affiliations

  1. Graduate School of Business Sciences, University of Tsukuba, Tokyo, Japan
  2. National Institute of Informatics, Tokyo, Japan
