
The Reparameterized Unified Model System: A Diagnostic Assessment Modeling Approach

Chapter in: Handbook of Diagnostic Classification Models

Part of the book series: Methodology of Educational Measurement and Assessment ((MEMA))

Abstract

This chapter considers the Reparameterized Unified Model (RUM). The RUM is a refinement of the DINA model in which the particular required skills that are lacking influence the probability of a correct response (Hartz, A Bayesian framework for the Unified Model for assessing cognitive abilities: Blending theory with practicality. Dissertation, University of Illinois at Urbana-Champaign, 2001; Roussos, DiBello, Stout, Hartz, Henson, & Templin, The fusion model skills diagnosis system. In: J. P. Leighton & M. J. Gierl (Eds.), Cognitive diagnostic assessment for education: Theory and applications. New York, Cambridge University Press, pp. 275–318, 2007). The RUM is a diagnostic classification model (DCM) that models binary (right/wrong scored) items as the basis for a skills diagnostic classification system for scoring quizzes or tests. Refined DCMs developed from the RUM are discussed in some detail. Specifically, the commonly used "Reduced" RUM and an extension of the RUM to option-scored items, referred to as the Extended RUM model (ERUM; DiBello, Henson, & Stout, Appl Psychol Measur 39:62–79, 2015), are also considered. For the ERUM, the latent skills space is augmented by the inclusion of misconceptions, whose possession reduces the probability of a correct response and increases the probability of certain incorrect responses, thus providing increased classification accuracy. In addition to a discussion of the foundational identifiability issue that arises for option-scored DCMs, available software built with the SHINY package in R, including various appropriate "model checking" fit and discrimination indices, is discussed and is available for users.
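As background for the models treated in the chapter, the item response function of the commonly used Reduced RUM can be sketched in a few lines. This is a minimal sketch with illustrative parameter names and hypothetical values; the full RUM additionally includes a residual ability term omitted here. The key idea the abstract describes is that *which* required skills are lacking matters: each unmastered required skill k applies its own penalty r*_jk < 1 to the mastery-level correct-response probability π*_j.

```python
# Illustrative sketch of the Reduced RUM item response function:
#   P(X_j = 1 | alpha) = pi_star * prod_k r_star[k] ** (q[k] * (1 - alpha[k]))
# pi_star : P(correct) for an examinee mastering all skills item j requires.
# r_star  : per-skill penalties (r_star[k] < 1) for lacking required skill k.
# q       : the item's Q-matrix row (1 = skill required).
# alpha   : the examinee's skill profile (1 = skill mastered).

def reduced_rum_prob(pi_star, r_star, q, alpha):
    """Probability of a correct response given skill profile alpha."""
    p = pi_star
    for r_k, q_k, a_k in zip(r_star, q, alpha):
        if q_k == 1 and a_k == 0:  # skill required but not mastered
            p *= r_k
    return p

# Hypothetical item requiring skills 1 and 3 (out of 3 skills):
pi_star, r_star, q = 0.9, [0.5, 0.6, 0.4], [1, 0, 1]
print(reduced_rum_prob(pi_star, r_star, q, [1, 1, 1]))  # full mastery: 0.9
print(reduced_rum_prob(pi_star, r_star, q, [0, 1, 1]))  # lacks skill 1: 0.45
```

Note that different skill deficits produce different probabilities (lacking skill 1 here is penalized more heavily than lacking skill 3), which is precisely the refinement over the DINA model, where any deficit collapses to a single guessing probability.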

This research is supported by IES Grant R305D140023.

This book chapter is dedicated to Lou DiBello, who died in March 2017 and who contributed seminally to the research and methodology reported herein.


Notes

  1.

    The original RUM (DiBello et al., 1995) also allowed compensatorily for mixing in the possible influence of non-Q-based alternative cognitive strategies, but this added complexity has never been implemented and thus is not discussed herein. Indeed, this multi-strategy generalization seems unrealistically complex and, in most cases, unnecessary for effective use of the RUM. Further, the original RUM had a slightly richer nonidentifiable parameterization than the current RUM, with π_k in Eq. (3.2) replaced by a product of D substantively meaningful but nonidentifiable multiplicands π_kd.

  2.

    It is perhaps mathematically equivalent to let misconceptions be represented by the negation of skills. We avoid this because it seems cumbersome for users.

  3.

    Restricted options guessing has been modeled but is not discussed here.

  4.

    This somewhat technical section can be skipped over, but with the caveat that nonidentifiability is endemic to option-scored multiple-choice DCM modeling, and this section helps explain this perhaps surprising claim.

  5.

    DF has a mathematical meaning: the number of elements that need to be constrained to uniquely determine the full vector. That is how DF is used herein. Note, however, that here DF > 0 is a problem that must be reduced to DF = 0. This is contrary to many statistical applications in which DF > 0 is beneficial, such as in linear models, where DF > 0 allows the residual variance to be estimated, and estimated well when DF is large.
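The nonidentifiability mentioned in Note 1 and the DF notion of Note 5 can be illustrated together with hypothetical values: D multiplicands π_kd whose product π_k is identifiable are individually nonidentifiable, and constraining DF = D − 1 of them uniquely determines the rest. A minimal sketch:

```python
# Hypothetical illustration of Notes 1 and 5: multiplicands pi_kd with a
# fixed, identifiable product pi_k are individually nonidentifiable.
def product(xs):
    p = 1.0
    for x in xs:
        p *= x
    return p

split_a = [0.9, 0.8]   # pi_k1 * pi_k2 = 0.72
split_b = [0.8, 0.9]   # a different split with the same product
# The likelihood depends only on the product, so the two splits are
# indistinguishable from data:
assert abs(product(split_a) - product(split_b)) < 1e-12

# DF in the sense of Note 5: with D = 2 multiplicands, constraining
# DF = D - 1 = 1 of them uniquely determines the other from the product.
def remaining_multiplicand(pi_k, fixed):
    p = pi_k
    for f in fixed:
        p /= f
    return p

print(remaining_multiplicand(0.72, [0.9]))  # ≈ 0.8, now uniquely determined
```

This is why DF > 0 is a problem here: until the DF constraints are imposed, the estimation procedure has no basis for preferring one split over another.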

References

  • Beeley, C. (2013). Web application development with R using SHINY. Birmingham, UK: PACKT Publishing.


  • Bradshaw, L., & Templin, J. (2014). Combining scaling and classification: A psychometric model for scaling ability and diagnosing misconceptions. Psychometrika, 79, 403–425.


  • Briggs, D. C., Alonzo, A. C., Schwab, C., & Wilson, M. (2006). Diagnostic assessment with ordered multiple-choice items. Educational Assessment, 11(1), 33–63.


  • Chen, Y., Culpepper, S., Chen, Y., & Douglas, J. (2018). Bayesian estimation of the DINA Q matrix. Psychometrika, 83, 89–108.


  • Chiu, C., Köhn, H., & Wu, H. (2016). Fitting the reduced RUM with Mplus: A tutorial. International Journal of Testing, 16(4), 331–351.


  • Chung, M., & Johnson, M. (2017). An MCMC algorithm for estimating the Reduced RUM. https://arxiv.org/abs/1710.08412

  • de la Torre, J. (2009). A cognitive diagnosis model for cognitively-based multiple choice options. Applied Psychological Measurement, 33, 163–183.


  • DiBello, L., Stout, W., & Roussos, L. (1995). Unified cognitive/psychometric diagnostic assessment likelihood-based classification techniques. In P. Nichols, S. Chipman, & R. Brennan (Eds.), Cognitively diagnostic assessment (pp. 361–389). Hillsdale, NJ: Lawrence Erlbaum Associates.


  • DiBello, L. V., Henson, R., & Stout, W. (2015). A family of generalized diagnostic classification models for multiple choice option-based scoring. Applied Psychological Measurement, 39, 62–79.


  • DiBello, L. V., & Stout, W. (2008). Arpeggio documentation and analyst manual. Department of Statistics, University of Illinois, Champaign-Urbana IL (Contact W. Stout).


  • Feng, Y., Habing, B., & Huebner, A. (2014). Parameter estimation of the reduced RUM using the EM algorithm. Applied Psychological Measurement, 38, 137–150.


  • Gelman, A., & Rubin, D. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457–511.


  • Gilks, W. R., Richardson, S., & Spiegelhalter, D. J. (1996). Markov chain Monte Carlo. London, UK: Chapman and Hall/CRC.


  • Haertel, E. H. (1989). Using restricted latent class models to map the skill structure of achievement items. Journal of Educational Measurement, 26, 301–321.


  • Han, Z., & Johnson, M. (this volume). Global model and item-level fit indices. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.


  • Hartz, S. M. (2001). A Bayesian framework for the Unified Model for assessing cognitive abilities: Blending theory with practicality. Dissertation. University of Illinois at Urbana-Champaign.


  • Henson, R., DiBello, L., & Stout, W. (2018). A generalized approach to defining item discrimination for DCMs. Measurement: Interdisciplinary Research and Perspectives, 16(1), 18–29. https://doi.org/10.1080/15366367.2018.1436855


  • Jang, E. E. (2005). A validity narrative: Effects of reading skills diagnosis on teaching and learning in the context of NG TOEFL. Dissertation. University of Illinois at Urbana-Champaign.


  • Jang, E. E. (2009). Cognitive diagnostic assessment of L2 reading comprehension ability: Validity arguments for fusion model application to language assessment. Language Testing, 26(1), 31–73. https://doi.org/10.1177/0265532208097336


  • Jorion, N., Gane, B. D., James, K., Schroeder, L., DiBello, L. V., & Pellegrino, J. W. (2015). An analytic framework for evaluating the validity of concept inventory claims: Framework for evaluating validity of concept inventories. Journal of Engineering Education, 104(4), 454–496. https://doi.org/10.1002/jee.20104


  • Kim, A.-Y. (Alicia). (2015). Exploring ways to provide diagnostic feedback with an ESL placement test: Cognitive diagnostic assessment of L2 reading ability. Language Testing, 32(2), 227–258. https://doi.org/10.1177/0265532214558457


  • Kim, Y.-H. (2011). Diagnosing EAP writing ability using the Reduced Reparameterized Unified Model. Language Testing, 28(4), 509–541. https://doi.org/10.1177/0265532211400860


  • Kunina-Habenicht, O., Rupp, A., & Wilhelm, O. (2009). A practical illustration of multidimensional diagnostic skills profiling: Comparing results from confirmatory factor analysis and diagnostic classification models. Studies in Educational Evaluation, 35, 64–70.


  • Kuo, B., Chen, C., & de la Torre, J. (2017). A cognitive diagnosis model for identifying coexisting skills and misconceptions. Applied Psychological Measurement, 42(3), 179–191.


  • Lee, Y.-W., & Sawaki, Y. (2009a). Application of three cognitive diagnosis models to ESL reading and listening assessments. Language Assessment Quarterly, 6(3), 239–263. https://doi.org/10.1080/15434300903079562


  • Lee, Y.-W., & Sawaki, Y. (2009b). Cognitive diagnosis and Q-matrices in language assessment. Language Assessment Quarterly, 6(3), 169–171. https://doi.org/10.1080/15434300903059598


  • Li, H., & Suen, H. K. (2013a). Constructing and validating a Q-matrix for cognitive diagnostic analyses of a reading test. Educational Assessment, 18(1), 1–25. https://doi.org/10.1080/10627197.2013.761522


  • Li, H., & Suen, H. K. (2013b). Detecting native language group differences at the subskills level of reading: A differential skill functioning approach. Language Testing, 30(2), 273–298. https://doi.org/10.1177/0265532212459031


  • Liu, J., & Johnson, M. (this volume). Estimating CDMs Using MCMC. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.


  • Liu, X., & Kang, H. (this volume). Q matrix learning via latent variable selection and identifiability. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.


  • Ma, W. (this volume). The GDINA R-package. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.


  • Masters, J. (2012). Diagnostic geometry assessment project: Validity evidence (Technical Report). Measured Progress Innovation Laboratory.


  • Plummer, M., Best, N., Cowles, K., & Vines, K. (2006). CODA: Convergence diagnosis and output analysis for MCMC. R News, 6, 7–11.


  • Ranjbaran, F., & Alavi, M. (2017). Developing a reading comprehension test for cognitive diagnostic assessment: A RUM analysis. Studies in Educational Evaluation, 55, 167–179.


  • Robitzsch, A., & George, A. (this volume). The R package CDM for diagnostic modeling. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.


  • Roussos, L. A., DiBello, L. V., Stout, W., Hartz, S. M., Henson, R. A., & Templin, J. L. (2007). The fusion model skills diagnosis system. In J. P. Leighton & M. J. Gierl (Eds.), Cognitive diagnostic assessment for education: Theory and applications (pp. 275–318). New York, NY: Cambridge University Press.


  • Santiago-Román, A. I., Streveler, R. A., & DiBello, L. V. (2010a). The development of estimated cognitive attribute profiles for the concept assessment tool for statics. Presented at the 40th ASEE/IEEE Frontiers in Education Conference, Washington, DC.


  • Santiago-Román, A. I., Streveler, R. A., Steif, P. S., & DiBello, L. V. (2010b). The development of a Q-matrix for the concept assessment tool for statics. Presented at the ERM division of the ASEE annual conference and exposition, Louisville, KY.


  • Shear, B. R., & Roussos, L. A. (2017). Validating a distractor-driven geometry test using a generalized diagnostic classification model. In I. B. D. Zumbo & A. M. Hubley (Eds.), Understanding and investigating response processes in validation research (Vol. 69, pp. 277–304). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-56129-5_15


  • Steif, P. S., & Dantzler, J. A. (2005). A statics concept inventory: Development and psychometric analysis. Journal of Engineering Education, 94(4), 363–371. https://doi.org/10.1002/j.2168-9830.2005.tb00864.x


  • Sullivan, M., Pace, J., & Templin, J. (this volume). Using Mplus to estimate the Log-Linear Cognitive Diagnosis model. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.


  • Templin, J., & Hoffman, L. (2013). Obtaining diagnostic classification model estimates using Mplus. Educational Measurement: Issues and Practice, 32(2), 37–50.


  • von Davier, M., & Lee, Y.-S. (this volume). Introduction: From latent class analysis to DINA and beyond. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.


  • Zhang, S., Douglas, J., Wang, S., & Culpepper, S. (this volume). Reduced Reparameterized Unified Model applied to learning system spatial reasoning. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.



Author information

Correspondence to William Stout.



Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Stout, W., Henson, R., DiBello, L., Shear, B. (2019). The Reparameterized Unified Model System: A Diagnostic Assessment Modeling Approach. In: von Davier, M., Lee, YS. (eds) Handbook of Diagnostic Classification Models. Methodology of Educational Measurement and Assessment. Springer, Cham. https://doi.org/10.1007/978-3-030-05584-4_3

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-05584-4_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-05583-7

  • Online ISBN: 978-3-030-05584-4

  • eBook Packages: Education, Education (R0)
