Comparison of Construct Mean Scores Across Populations: A Conceptual Framework

  • Alain De Beuckelaer
Conference paper


Applied researchers who wish to compare groups of people (e.g. different cultures) on the basis of a theoretically meaningful concept (e.g. individualism/collectivism) face a number of methodological problems, notably measurement and cross-group comparability issues. Unfortunately, the available coping strategies depend largely on the nature of the underlying concept, which may be represented by either an ‘emergent’ or a ‘latent’ construct. If the concept is an emergent construct (i.e. one measured by formative indicators), both measurement and cross-group comparability of construct quantifications are problematic. With latent constructs (measured by reflective indicators), however, these methodological problems are much easier to overcome. Using multigroup Confirmatory Factor Analysis (CFA) models, measurement invariance across groups can be tested explicitly, and cross-group comparisons can be made, provided sufficient evidence exists for claiming cross-group comparability of construct scores.
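As a rough illustration of the distinction drawn above, the two measurement models and the invariance tests can be sketched in standard structural-equation notation; the symbols below are generic and are not taken from the paper itself. For a group g with observed indicator vector x_g:

Reflective (latent construct):  x_g = \tau_g + \Lambda_g \eta_g + \delta_g
Formative (emergent construct):  \eta_g = \gamma_g' x_g + \zeta_g

In the reflective case, multigroup CFA tests an increasingly restrictive hierarchy of cross-group equality constraints:

  • Configural invariance: the same pattern of fixed and free loadings in every \Lambda_g
  • Metric invariance: \Lambda_1 = \Lambda_2 = \dots = \Lambda_G
  • Scalar invariance: metric invariance plus \tau_1 = \tau_2 = \dots = \tau_G

Once (at least partial) scalar invariance is supported, latent means \kappa_g = E(\eta_g) are identified and can be compared across groups. For the formative case no analogous test of measurement invariance is available, which is why cross-group comparability is problematic for emergent constructs.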


Keywords: Measurement invariance; Measurement model; Latent construct; Formative indicator; Meaningful concept





Copyright information

© Springer Japan 2002

Authors and Affiliations

  • Alain De Beuckelaer
  1. Unilever Research Vlaardingen (URV), Catholic University of Brussels, Vlaardingen, The Netherlands
