Mastery Classification of Diagnostic Classification Models

  • Yuehmei Chien
  • Ning Yan
  • Chingwei D. Shin
Part of the Springer Proceedings in Mathematics & Statistics book series (PROMS, volume 140)

Abstract

The purpose of diagnostic classification models (DCMs) is to determine mastery or non-mastery of a set of attributes or skills. Two statistics obtained directly from DCMs can be used for mastery classification: the posterior marginal probabilities for the attributes and the posterior probability for the attribute profile.
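
The two statistics are directly related: the posterior marginal probability that an examinee has mastered a given attribute is the sum of the posterior probabilities of all attribute profiles in which that attribute is mastered. In standard DCM notation (the notation here is generic, not taken from the chapter),

    P(\alpha_k = 1 \mid \mathbf{x}) = \sum_{\boldsymbol{\alpha} : \alpha_k = 1} P(\boldsymbol{\alpha} \mid \mathbf{x}),

where \mathbf{x} is the observed response vector, \boldsymbol{\alpha} ranges over the attribute profiles, and \alpha_k is the k-th entry of a profile.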

When using the posterior marginal probabilities for mastery classification, a probability threshold is required to determine the mastery or non-mastery status for each attribute. It is not uncommon for a 0.5 threshold to be adopted in operational assessments for binary classification. However, 0.5 might not be the best choice in some cases. Therefore, a simulation-based threshold approach is proposed to evaluate several candidate thresholds and even to determine the optimal threshold. In addition to mastery and non-mastery, a third category, an indifference region for probabilities near 0.5, seems justifiable. However, the indifference-region category should be used with caution because, given the item parameters of the test, there may be no response vector that falls in the indifference region.
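
As a rough sketch of how such a rule operates, the following illustrative Python assumes the posterior marginal probabilities have already been obtained from a fitted DCM; the threshold values, attribute patterns, and probabilities are hypothetical and not taken from the chapter.

    def classify(p, lower=0.5, upper=0.5):
        # 'mastery' if p >= upper, 'non-mastery' if p < lower, otherwise 'indifference'.
        # With lower == upper == 0.5 this reduces to the plain binary 0.5 rule;
        # widening the interval (e.g., 0.4-0.6) creates an indifference region.
        if p >= upper:
            return "mastery"
        if p < lower:
            return "non-mastery"
        return "indifference"

    def agreement(true_attributes, marginal_probs, threshold):
        # Proportion of simulated attribute values recovered when a binary rule
        # is applied at the given candidate threshold.
        hits = total = 0
        for truth, probs in zip(true_attributes, marginal_probs):
            for t, p in zip(truth, probs):
                predicted = 1 if classify(p, threshold, threshold) == "mastery" else 0
                hits += (predicted == t)
                total += 1
        return hits / total

    # Hypothetical simulated examinees: true attribute patterns, and the posterior
    # marginals obtained after fitting the DCM to their simulated responses.
    true_attributes = [(1, 0), (1, 1), (0, 0)]
    marginal_probs  = [(0.81, 0.46), (0.93, 0.58), (0.12, 0.35)]
    for threshold in (0.4, 0.5, 0.6):
        print(threshold, agreement(true_attributes, marginal_probs, threshold))

Scoring candidate thresholds against simulated examinees with known attribute patterns, as in the final loop, is one simple way to compare thresholds by agreement rate; the simulation-based approach proposed in the chapter may differ in its details.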

Another statistic used for mastery classification is the posterior probability for the attribute profile, which is more straightforward than the posterior marginal probabilities. However, it also has a potential issue, multiple maxima, when a test is not well designed. Practitioners and stakeholders of testing programs should be aware of both potential issues when DCMs are used for mastery classification.
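
The following minimal sketch shows profile-level (MAP) classification and a check for the multiple-maximum problem. It assumes the profile prior and the likelihood of the observed response vector under each profile come from a fitted DCM; the numbers are invented for illustration.

    import numpy as np

    def profile_posterior(prior, likelihood):
        # Posterior over attribute profiles: prior times likelihood, renormalized.
        post = np.asarray(prior, dtype=float) * np.asarray(likelihood, dtype=float)
        return post / post.sum()

    def map_profile(posterior, tol=1e-6):
        # Index of the modal profile, plus a flag for (near-)ties at the maximum.
        posterior = np.asarray(posterior)
        best = int(posterior.argmax())
        ties = np.flatnonzero(posterior >= posterior[best] - tol)
        return best, len(ties) > 1   # True means the classification is ambiguous

    # Two attributes give four profiles: (0,0), (1,0), (0,1), (1,1).
    prior = [0.25, 0.25, 0.25, 0.25]            # hypothetical flat prior over profiles
    likelihood = [0.02, 0.30, 0.30, 0.05]       # the items cannot separate (1,0) from (0,1)
    post = profile_posterior(prior, likelihood)
    print(np.round(post, 3))                    # approximately [0.03 0.448 0.448 0.075]
    print(map_profile(post))                    # (1, True): two profiles tie at the maximum

When two or more profiles tie at the maximum posterior, the observed responses cannot distinguish between them, so the modal profile is not a trustworthy classification.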

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Pearson, Iowa City, USA
  2. Tianjin, China
