
Application of Cognitive Diagnostic Models to Learning and Assessment Systems

  • Benjamin Deonovic
  • Pravin Chopade
  • Michael Yudelson
  • Jimmy de la Torre
  • Alina A. von Davier
Chapter
Part of the Methodology of Educational Measurement and Assessment book series (MEMA)

Abstract

Over the past few decades, cognitive diagnostic models have attracted considerable interest, due in large part to the call made by the No Child Left Behind Act of 2001 (Public Law No. 107–110, 115 Stat. 1425, 2002) for more formative assessment in learning systems. In this chapter, we provide an overview of learning and assessment systems, including the rise in popularity of online and personalized learning systems; we contrast the roles of summative and formative assessments in learning systems; and we review cognitive diagnostic models and the challenges of retrofitting them to data from assessments that were not designed with cognitive diagnosis in mind.

Notes

Acknowledgements

The authors wish to thank Terry Ackerman, former Lindquist Research Chair, ACT Inc., and Yu Fang, Principal Psychometrician, Psychometric Research, ACT Inc., for providing insightful comments and feedback on this chapter. We sincerely acknowledge Melanie Rainbow-Harel, former Assessment Designer, and David Carmody, Principal Assessment Specialist, ACT Inc., for their contributions to the design of attributes for the three math domains. We thank Andrew Cantine, Communications and Publications Manager, ACTNext, for editing this work. We are also grateful to ACT, Inc. for its ongoing support as this chapter took shape.

Glossary

A-CDM

The additive-CDM

ADA-MAT-BOM

Algebra-Matrices-Basic operations on matrices

AIC

Akaike Information Criterion

BIC

Bayesian Information Criterion

BKT

Bayesian Knowledge Tracing

CCSS

Common Core State Standards

CDM

Cognitive Diagnostic Models

CP

Computational Psychometrics

DINA

The deterministic inputs, noisy “and” gate

DINO

The deterministic input, noisy “or” gate

DM

Data Mining

EAR

Element-wise agreement rate

ECA

Educational Companion App

EDM

Educational Data Mining

EM

Expectation Maximization

G

Geometry

G-DINA

The generalized DINA

GDI

G-DINA discrimination index

GDM

The general diagnostic model

HF

The ACT Holistic Framework

HMM

Hidden Markov Model

IRT

Item Response Theory

ITS

Intelligent Tutoring System

KC

Knowledge Component

KLI

The knowledge-learning-instruction framework

LAK

Learning Analytics & Knowledge

LAS

Learning at Scale

LCDM

The log-linear CDM

LEAP

The Learning Analytics Platform

LLM

The log-linear model

LLTM

Linear Logistic Test Model

ML

Machine Learning

NGSS

Next Generation Science Standards

NRC

The National Research Council

OAF

Operations, Algebra, & Functions

OCW

Open Courseware

OER

Open Education Resources

OpenEd

Open educational resources (OER) from ACT's OpenEd that allow students to practice the skills they have yet to master

PFA

Performance Factors Analysis

PVAF

The proportion of variance accounted for

Q-matrix

A Q-matrix is a mapping that identifies which skills or attributes an item tests, that is, which skills or attributes are required to successfully complete the item on an assessment (see the illustration at the end of this glossary).

rRUM

The reduced reparametrized unified model

SRL

Self-regulated Learning

VAR

Vector-wise agreement rate

1PL

One-parameter Logistic Model
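
To make the Q-matrix definition above concrete, the following is a minimal sketch in R, the language of the GDINA package cited in this chapter (Ma & de la Torre, 2017). The item names, attribute labels, and response matrix resp are hypothetical illustrations, not data from the chapter; the commented lines show how such a Q-matrix could, under those assumptions, be passed to the package's GDINA function.

    # Toy Q-matrix: 4 items (rows) by 2 attributes (columns);
    # Q[j, k] = 1 means that item j requires attribute k.
    Q <- matrix(c(1, 0,
                  0, 1,
                  1, 1,
                  1, 0),
                nrow = 4, byrow = TRUE,
                dimnames = list(paste0("item", 1:4),
                                c("fractions", "linear_equations")))

    # Item 3 loads on both attributes: under a conjunctive model such as
    # DINA, only examinees who have mastered both are expected to answer
    # it correctly, apart from slipping and guessing.

    # Hypothetical fit, assuming resp is an N x 4 binary response matrix:
    # library(GDINA)
    # fit <- GDINA(dat = resp, Q = Q, model = "DINA")
    # personparm(fit)  # estimated attribute-mastery profiles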

References

  1. Abelson, H. (2008). The creation of OpenCourseWare at MIT. Journal of Science Education and Technology, 17(2), 164–174.
  2. Atkins, D., Brown, J., & Hammond, A. (2007). A review of the open educational resources (OER) movement: Achievements, challenges, and new opportunities. San Francisco, CA: Creative Commons, The William and Flora Hewlett Foundation.
  3. Bischl, B., Lang, M., Kotthoff, L., Schiffner, J., Richter, J., Studerus, E., … Jones, Z. (2016). mlr: Machine learning in R. Journal of Machine Learning Research, 17(170), 1–5. http://jmlr.org/papers/v17/15-066.html
  4. Camara, W., O’Connor, R., Mattern, K., & Hanson, M. A. (2015). Beyond academics: A holistic framework for enhancing education and workplace success. Iowa City, IA: ACT, Inc.
  5. Carmona, C., Millán, E., Pérez-de-la-Cruz, J., Trella, M., & Conejo, R. (2005). Introducing prerequisite relations in a multi-layered Bayesian student model. In International conference on user modeling (pp. 347–356). Berlin/Heidelberg, Germany: Springer.
  6. Chopade, P., von Davier, A., Polyak, S., Peterschmidt, K., Yudelson, M., Greene, J., & Blum, A. (2017). Introducing the ACTNext educational companion: An intelligent, personalized guide for mobile learning. Poster session presented at the Educational Technology and Computational Psychometrics Symposium (ETCPS), organized by ACTNext, ACT Inc., at The Englert Theatre, November 15–16, 2017 (p. 21). Iowa City, IA: ACTNext, ACT Inc.
  7. Conati, C., Gertner, A., & VanLehn, K. (2002). Using Bayesian networks to manage uncertainty in student modeling. User Modeling and User-Adapted Interaction, 12(4), 371–417.
  8. Corbett, A., & Anderson, J. (1995). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction, 4(4), 253–278.
  9. Corcoran, T., Mosher, F., & Rogat, A. (2009). Learning progressions in science: An evidence-based approach to reform. Philadelphia, PA: Consortium for Policy Research in Education.
  10. Cronbach, L. (1957). The two disciplines of scientific psychology. American Psychologist, 12(11), 671–684.
  11. de la Torre, J. (2008). An empirically based method of Q-matrix validation for the DINA model: Development and applications. Journal of Educational Measurement, 45, 343–362.
  12. de la Torre, J. (2009, March). DINA model and parameter estimation: A didactic. Journal of Educational and Behavioral Statistics, 34(1), 115–130.
  13. de la Torre, J. (2011, April). The generalized DINA model framework. Psychometrika, 76(2), 179–199.
  14. de la Torre, J., & Chiu, C.-Y. (2016). A general method of empirical Q-matrix validation. Psychometrika, 81, 253–273.
  15. de la Torre, J., & Lee, Y.-S. (2013). Evaluating the Wald test for item-level comparison of saturated and reduced models in cognitive diagnosis. Journal of Educational Measurement, 50(4), 355–373.
  16. de la Torre, J., & Ma, W. (2016). Cognitive diagnosis modeling: A general framework approach and its implementation in R. In A short course at the fourth conference on statistical methods in psychometrics. New York, NY: Columbia University.
  17. de la Torre, J., & Minchen, N. (2014). Cognitively diagnostic assessments and the cognitive diagnosis model framework. Psicología Educativa, 20, 89–97.
  18. DiBello, L., & Stout, W. (2007). Guest editors’ introduction and overview: IRT-based cognitive diagnostic models and related methods. Journal of Educational Measurement, 44(4), 285–291.
  19. DiBello, L. V., Roussos, L. A., & Stout, W. (2007). Review of cognitively diagnostic assessment and a summary of psychometric models. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics, volume 26, psychometrics (pp. 979–1030). Amsterdam, The Netherlands: Elsevier.
  20. Dijksman, J., & Khan, S. (2011). Khan Academy: The world’s free virtual school. In APS meeting abstracts.
  21. Doignon, J., & Falmagne, J. (2012). Knowledge spaces. Berlin, Germany: Springer.
  22. Embretson, S. (1984). A general latent trait model for response processes. Psychometrika, 49, 175–186.
  23. Embretson, S., & Gorin, J. (2001). Improving construct validity with cognitive psychology principles. Journal of Educational Measurement, 38(4), 343–368.
  24. Gorin, J. (2006). Test design with cognition in mind. Educational Measurement: Issues and Practice, 25(4), 21–35.
  25. Haberman, S., & von Davier, M. (2006). Some notes on models for cognitively based skills diagnosis. In C. Rao & S. Sinharay (Eds.), Handbook of statistics (pp. 1031–1038). Amsterdam, The Netherlands: Elsevier.
  26. Haertel, E. H. (1989). Using restricted latent class models to map the skill structure of achievement items. Journal of Educational Measurement, 26(4), 301–321.
  27. Hagenaars, J. (1993). Loglinear models with latent variables. Thousand Oaks, CA: Sage.
  28. Hartz, S. (2002). A Bayesian framework for the unified model for assessing cognitive abilities: Blending theory with practicality. Unpublished doctoral dissertation.
  29. Henson, R., Templin, J., & Willse, J. (2009). Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika, 74, 191–210.
  30. Huebner, A. (2010). An overview of recent developments in cognitive diagnostic computer adaptive assessments. Practical Assessment, Research & Evaluation, 15(3), 1.
  31. Journell, W., McFadyen, B., Miller, M., & Brown, K. (2014). K-12 online education: Issues and future research directions. In Handbook of research on emerging priorities and trends in distance education: Communication, pedagogy, and technology (p. 385). Hershey, PA: Information Science Reference.
  32. Junker, B., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25, 258–272.
  33. Käser, T., Klingler, S., Schwing, A., & Gross, M. (2014). Beyond knowledge tracing: Modeling skill topologies with Bayesian networks. In International conference on intelligent tutoring systems (pp. 188–198). Cham, Switzerland: Springer.
  34. Koedinger, K., Corbett, A., & Perfetti, C. (2012). The knowledge-learning-instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science, 36(5), 757–798.
  35. Leighton, J. (2004). Avoiding misconception, misuse, and missed opportunities: The collection of verbal reports in educational achievement testing. Educational Measurement: Issues and Practice, 23(4), 6–15.
  36. Leighton, J., & Gierl, M. (Eds.). (2007). Cognitive diagnostic assessment for education: Theory and applications. Cambridge, UK/New York, NY: Cambridge University Press.
  37. Levinson, S., Rabiner, L., & Sondhi, M. (1983). An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition. Bell System Technical Journal, 62(4), 1035–1074.
  38. Ma, W., & de la Torre, J. (2017). The generalized DINA model framework, package ‘GDINA’. Retrieved February 12.
  39. Ma, W., Iaconangelo, C., & de la Torre, J. (2016). Model similarity, model selection, and attribute classification. Applied Psychological Measurement, 40, 200–217.
  40. Maris, E. (1999). Estimating multiple classification latent class models. Psychometrika, 64, 187–212.
  41. Millán, E., Loboda, T., & Pérez-de-la-Cruz, J. (2010). Bayesian networks for student model engineering. Computers & Education, 55(4), 1663–1683.
  42. No Child Left Behind Act of 2001. (2002). Public Law No. 107–110, 115 Stat. 1425.
  43. NRC. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Research Council/National Academies Press.
  44. NRC. (2005). In M. Wilson & M. Bertenthal (Eds.), Systems for state science assessments. Committee on test design for K-12 science achievement. Washington, DC: National Research Council, National Academies Press.
  45. NRC. (2007). In R. Duschl, H. Schweingruber, & A. Shouse (Eds.), Taking science to school: Learning and teaching science in grades K-8. Committee on science learning, kindergarten through eighth grade. Washington, DC: National Research Council, National Academies Press.
  46. OpenEd. (n.d.). Driving blended learning from classroom assessments. https://www.opened.com/
  47. Open Learning Initiative. (2018). Retrieved from https://oli.cmu.edu/learn-more-about-oli/
  48. OpenStax. (2018). Retrieved from https://openstax.org/about
  49. Palmisano, S. (2008). A smarter planet: The next leadership agenda. New York, NY: IBM.
  50. Pelánek, R. (2017, December). Bayesian knowledge tracing, logistic models, and beyond: An overview of learner modeling techniques. User Modeling and User-Adapted Interaction, 27(3–5), 313–350.
  51. Pellegrino, J., Baxter, G., & Glaser, R. (1999). Addressing the “two disciplines” problem: Linking theories of cognition and learning with assessment and instructional practice. Review of Research in Education, 24, 307–353.
  52. R Core Team. (2017). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/
  53. Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (Studies in mathematical psychology, Vol. 1). Copenhagen, Denmark: Danmarks Paedagogiske Institut.
  54. Rudd, J., Davia, C., & Sullivan, P. (2009). Education for a smarter planet: The future of learning. IBM Redbooks.
  55. Rupp, A. A., & Templin, J. (2008). The effects of Q-matrix misspecification on parameter estimates and classification accuracy in the DINA model. Educational and Psychological Measurement, 68(1), 78–96. https://doi.org/10.1177/0013164407301545
  56. Tatsuoka, K. (1985). A probabilistic model for diagnosing misconceptions in the pattern classification approach. Journal of Educational Statistics, 12, 55–73.
  57. Templin, J. (2016). Diagnostic measurement: Theory, methods, applications, and software. NCME training session, Washington, DC. Retrieved April 8.
  58. Templin, J., & Henson, R. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11, 287–305.
  59. Tsoumakas, G., & Katakis, I. (2006). Multi-label classification: An overview. Thessaloniki, Greece: Department of Informatics, Aristotle University of Thessaloniki.
  60. von Davier, A. (2017). Computational psychometrics in support of collaborative educational assessments. Journal of Educational Measurement, 54, 3–11.
  61. von Davier, A., Polyak, S., Peterschmidt, K., Chopade, P., Yudelson, M., de la Torre, J., & Paek, P. (2017, November). Systems and methods for interactive dynamic learning diagnostics and feedback. U.S. Patent Application No. 15/802,404.
  62. von Davier, M. (2005). A general diagnostic model applied to language testing data (ETS research report RR-05-16). Educational Testing Service. https://onlinelibrary.wiley.com/doi/pdf/10.1002/j.2333-8504.2005.tb01993.x
  63. von Davier, M., & Haberman, S. (2014). Hierarchical diagnostic classification models morphing into unidimensional ‘diagnostic’ classification models—A commentary. Psychometrika, 340–346.
  64. Wilmot, D., Schoenfeld, A., Wilson, M., Champney, D., & Zahner, W. (2011). Validating a learning progression in mathematical functions for college readiness. Mathematical Thinking and Learning, 13(4), 259–291.
  65. Wilson, M. (2005). Constructing measures: An item response theory approach. Mahwah, NJ: Lawrence Erlbaum.
  66. Zhang, S., & Chang, H.-H. (2016). From smart testing to smart learning: How testing technology can assist the new generation of education. International Journal of Smart Technology and Learning, 67–92.
  67. Zhang, S. S. (2014). Statistical inference and experimental design for Q-matrix based cognitive diagnosis models. Doctoral dissertation, Columbia University.
  68. Zhu, Z., & Shen, D. (2013). Learning analytics: The scientific engine for smart education. E-Education Research, 241(1), 5–12.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Benjamin Deonovic (1)
  • Pravin Chopade (1)
  • Michael Yudelson (1)
  • Jimmy de la Torre (2)
  • Alina A. von Davier (1)

  1. ACTNext, ACT Inc., Iowa City, USA
  2. Division of Learning, Development and Diversity, University of Hong Kong, Hong Kong, China
