Small-Variance Priors Can Prevent Detecting Important Misspecifications in Bayesian Confirmatory Factor Analysis

  • Terrence D. Jorgensen
  • Mauricio Garnier-Villarreal
  • Sunthud Pornprasertmanit
  • Jaehoon Lee
Conference paper
Part of the Springer Proceedings in Mathematics & Statistics book series (PROMS, volume 265)

Abstract

We simulated Bayesian confirmatory factor analysis (CFA) models to investigate the power of the posterior predictive p value (PPP) to detect model misspecification, manipulating sample size, the informativeness (strong vs. weak) of priors for nontarget parameters, the degree of misspecification, and whether data were generated and analyzed as normal or ordinal. Rejection rates indicate that PPP lacks power to reject an inappropriate model unless priors are unrealistically restrictive (essentially equivalent to fixing nontarget parameters to zero) and both sample size and misspecification are quite large. We suggest that researchers evaluate global fit without priors for nontarget parameters, then search for neglected parameters only if PPP indicates poor fit.

Keywords

Structural equation modeling · Confirmatory factor analysis · Bayesian inference · Model evaluation · Model modification


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Terrence D. Jorgensen (1)
  • Mauricio Garnier-Villarreal (2)
  • Sunthud Pornprasertmanit (3)
  • Jaehoon Lee (3)

  1. University of Amsterdam, Amsterdam, The Netherlands
  2. Marquette University, Milwaukee, USA
  3. Texas Tech University, Lubbock, USA
