Bifactor MIRT as an Appealing and Related Alternative to CDMs in the Presence of Skill Attribute Continuity

  • Daniel M. Bolt
Part of the Methodology of Educational Measurement and Assessment book series (MEMA)


For virtually all tests analyzed using CDMs, low-dimensional compensatory item response theory (IRT) models with continuous abilities appear to provide an equivalent or better statistical fit, as noted in a recent commentary by von Davier and Haberman (Psychometrika, 79:340–346, 2014). We examine these issues using both simulation and real data analyses. We suggest that the results motivate consideration of bifactor MIRT models as an attractive alternative for diagnostic measurement, particularly in cases where skill attribute continuity is suspected or can be confirmed. The potential usefulness of bifactor MIRT for diagnostic scoring rests on other considerations as well. For example, bifactor MIRT reflects a tendency for items to measure primarily one of the required conjunctively interacting skill attributes (the most difficult of the required attributes), and it also makes it possible to address the estimation limitations of MIRT models of high dimensionality (Cai L, Psychometrika, 75(4):581–612, 2010). Additionally, the bifactor MIRT model uses orthogonal statistical dimensions, making it easier to quantify the incremental contribution of the specific factors that provide the foundation for diagnosis.
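The bifactor structure described above — each item loading on a general dimension plus at most one specific (diagnostic) dimension, with the dimensions kept orthogonal — can be sketched as an item response function. The following is a minimal illustration, not the chapter's estimation procedure; the parameter values are hypothetical:

```python
import math

def bifactor_irf(theta_g, theta_s, a_g, a_s, d):
    """Probability of a correct response under a compensatory bifactor
    MIRT (2PL-type) model: a general ability theta_g plus the single
    specific ability theta_s the item loads on. The two dimensions are
    specified as orthogonal, so a_s indexes the incremental (diagnostic)
    contribution of the specific factor beyond the general factor."""
    z = a_g * theta_g + a_s * theta_s + d
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative item: strong general-factor loading, moderate loading
# on its one specific factor (values chosen for demonstration only).
p = bifactor_irf(theta_g=0.5, theta_s=-0.2, a_g=1.5, a_s=0.7, d=-0.3)
```

Setting a_s = 0 for every item on a given specific factor reduces the model to one without that diagnostic dimension, which is one way the incremental contribution of a specific factor can be quantified under the orthogonal parameterization.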



The author would like to thank Nana Kim and the two assigned reviewers for their review and comments on an earlier version of this chapter.


  1. Bolt, D. M. (2017). Parameter invariance and skill attribute continuity in the DINA model. Unpublished manuscript.
  2. Bolt, D. M., & Lall, V. F. (2003). Estimation of compensatory and noncompensatory multidimensional item response models using Markov chain Monte Carlo. Applied Psychological Measurement, 27(6), 395–414.
  3. Bradshaw, L. P., & Madison, M. J. (2016). Invariance properties for general diagnostic classification models. International Journal of Testing, 16(2), 99–118.
  4. Cai, L. (2010). A two-tier full-information item factor analysis model with applications. Psychometrika, 75(4), 581–612.
  5. Chalmers, R. P. (2012). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1–29.
  6. de la Torre, J., & Douglas, J. A. (2004). Higher-order latent trait models for cognitive diagnosis. Psychometrika, 69(3), 333–353.
  7. de la Torre, J., & Lee, Y.-S. (2010). A note on the invariance of the DINA model parameters. Journal of Educational Measurement, 47(1), 115–127.
  8. DeCarlo, L. T. (2011). On the analysis of fraction subtraction data: The DINA model, classification, latent class sizes, and the Q-matrix. Applied Psychological Measurement, 35, 8–26.
  9. Embretson, S. E. (1984). A general multicomponent latent trait model for response processes. Psychometrika, 49, 175–186.
  10. Gibbons, R. D., & Hedeker, D. R. (1992). Full-information item bifactor analysis. Psychometrika, 57, 423–436.
  11. Holzinger, K. J., & Swineford, F. (1937). The bi-factor method. Psychometrika, 2, 41–54.
  12. Hooker, G., & Finkelman, M. (2010). Paradoxical results and item bundles. Psychometrika, 75(2), 249–271.
  13. Hooker, G., Finkelman, M., & Schwartzman, A. (2009). Paradoxical results in multidimensional item response theory. Psychometrika, 74(3), 419–442.
  14. Reise, S. P. (2012). The rediscovery of bifactor measurement models. Multivariate Behavioral Research, 47(5), 667–696.
  15. Rijmen, F. (2009). Efficient full information maximum likelihood estimation for multidimensional IRT models. ETS Research Report Series, i–31.
  16. Robitzsch, A., Kiefer, T., George, A. C., & Ünlü, A. (2017). CDM: Cognitive diagnosis modeling. R package version 3-1.
  17. Rupp, A. A., & Templin, J. L. (2008). Unique characteristics of diagnostic classification models: A comprehensive review of the current state-of-the-art. Measurement, 6(4), 219–262.
  18. Sympson, J. B. (1978). A model for testing with multidimensional items. In D. J. Weiss (Ed.), Proceedings of the 1977 computerized adaptive testing conference (pp. 82–98). Minneapolis, MN: University of Minnesota, Department of Psychology, Psychometric Methods Program.
  19. Tatsuoka, K. K. (1990). Toward an integration of item-response theory and cognitive error diagnosis. In N. Frederiksen, R. Glaser, A. Lesgold, & M. Safto (Eds.), Monitoring skills and knowledge acquisition (pp. 453–488). Hillsdale, NJ: Erlbaum.
  20. van der Linden, W. J. (2012). On compensation in multidimensional response modeling. Psychometrika, 77(1), 21–30.
  21. van Rijn, P., & Rijmen, F. (2015). On the explaining-away phenomenon in multivariate latent variable models. British Journal of Mathematical and Statistical Psychology, 68(1), 1–22.
  22. von Davier, M., & Haberman, S. (2014). Hierarchical diagnostic classification models morphing into unidimensional ‘diagnostic’ classification models – A commentary. Psychometrika, 79, 340–346.
  23. Wainer, H., Bradlow, E. T., & Wang, X. (2007). Testlet response theory and its applications. Cambridge, UK: Cambridge University Press.
  24. Wang, C., Zheng, C., & Chang, H.-H. (2017). An enhanced approach to combine item response theory with cognitive diagnosis in adaptive testing. Journal of Educational Measurement, 51, 358–380.
  25. Weeks, J. P. (2015). Multidimensional test linking. In S. P. Reise & D. A. Revicki (Eds.), Handbook of item response theory modeling: Applications to typical performance assessment (pp. 406–434). New York, NY: Routledge.
  26. Whitely, S. E. (1980). Multicomponent latent trait models for ability tests. Psychometrika, 45, 479–494.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Educational Psychology, University of Wisconsin–Madison, Madison, USA