
Non-Response and Measurement Error

Handbook of Survey Methodology for the Social Sciences

Abstract

This chapter deals with some aspects of two sources of systematic error in surveys: non-response and measurement error. The quality of the obtained response is discussed first, with a focus on estimating and reducing non-response bias. Measurement error is then studied by evaluating the quality of registered responses through question wording, question order, and response scale effects, and the different approaches to measurement error are discussed. Practical examples of dealing with bias and measurement error are offered, including the evaluation of (non-)response rates and response enhancement strategies, and a comparison of cooperative and reluctant respondents based on analyses of the European Social Survey. Concerning measurement error, the split-ballot approach and the multitrait-multimethod (MTMM) design are discussed extensively, covering theoretical concepts, methods and models, and an example of acquiescence when a balanced set of items is available. The chapter also presents some debates concerning theoretical constructs and model developments in the respective fields, and emphasizes that the two error sources are strongly related: response distributions, correlations, and regression parameters can all be seriously affected.


Notes

  1.

    See for example the discussion on so-called ‘non-attitudes’ in Saris and Sniderman (2004).

  2.

    “Observed” in the meaning of “measured” in a broad sense by recording a response or by observing an event.

  3.

    The ESS Central Coordination Team received the 2005 European Descartes Prize for excellence in collaborative scientific research.

  4.

    A discussion of this can be found in the ESS Round 4 specification document for participating countries.

  5.

    These research activities were carried out under the ESSi (infrastructure) programme as the funded project ‘Joint Research Activities (JRA2): improving representativeness of the samples’, implemented in 2006–2011.

  6.

    This project is financed by the EU 7th Framework Programme and involves a number of European institutions.

  7.

    Schouten et al. (2009) discuss the mathematical features of R-indicators; see also the RISQ project (http://www.risq-project.eu/indicators.html).
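
    As a minimal sketch (assuming scikit-learn; not the RISQ implementation itself), the R-indicator can be computed as R = 1 − 2·S(ρ̂), where S(ρ̂) is the standard deviation of response propensities estimated from frame covariates. The data below are synthetic and the variable names hypothetical:

```python
# Minimal sketch of the R-indicator of Schouten et al. (2009):
# R = 1 - 2 * S(rho), where S(rho) is the standard deviation of
# estimated response propensities over the full sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

def r_indicator(X, responded):
    """X: frame/register covariates for all sampled units;
    responded: 0/1 response outcome per unit."""
    model = LogisticRegression().fit(X, responded)
    rho = model.predict_proba(X)[:, 1]   # estimated response propensities
    return 1.0 - 2.0 * rho.std(ddof=0)   # R = 1 would mean fully representative response

# Synthetic illustration: response depends on two (hypothetical) frame variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2))
p = 1 / (1 + np.exp(-(0.3 + 0.8 * X[:, 0] - 0.5 * X[:, 1])))
responded = rng.binomial(1, p)
print(round(r_indicator(X, responded), 3))
```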

  8.

    BG, CY, CZ, DK, GR, LV, LT, PT, RO and UK.

  9.

    CH, DE, ES, GB, NL, NO, PL and SK.

  10.

    CH, DE, ES, GB, HR, NL, NO, PL, SI and SK.

  11.

    BE, FI, FR, HU, IE and SE.

  12.

    Slovenia was excluded from the analysis due to a data quality issue, although its number of reluctant respondents was high.

  13.

    CASM: Cognitive Aspects of Survey Methodology.

  14.

    The challenge of the cross-cultural equivalence of measurement models is not considered here. For this, we refer to Cross-Cultural Analysis: Methods and Applications (Davidov et al. 2011).

  15.

    A revised edition appeared in 1996, Thousand Oaks, CA: Sage Publications.

  16.

    The standard interview situation, where it is assumed that the interviewer does not ask the same question twice (the ‘given-new contract’ rule), is a favorable context for this kind of context effect.

  17.

    In a survey in the 1980s, we observed that some respondents gave numbers between 20 and 30 in response to the question “How many children do you have?”. These respondents were school teachers who interpreted the question as referring to the children in their classes, the meaning that was most accessible in their memories (cited in Bradburn 1992, p. 317). The question appeared in the final section, on household characteristics, of a self-administered questionnaire about teachers’ attitudes toward mixed (qua gender) classes.

  18.

    Because of a strong correlation between the responses to the question on dismissal and the question on promotion.

  19.

    Congeneric measures are defined as measures of the same underlying construct, possibly in different units of measurement or with different degrees of precision (Lord and Novick 1968, pp. 47–50). Congeneric measures are indicators of different characteristics that are all simple linear functions of a single construct (Groves 1989, p. 18).
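
    In model form (a sketch in the standard notation of Jöreskog 1971, not taken from the chapter itself), a set of k congeneric measures can be written as

$$ y_{i} = \mu_{i} + \lambda_{i} F + \varepsilon_{i}, \quad i = 1, \ldots, k $$

    where each indicator has its own intercept μi (unit of measurement), loading λi (degree of precision), and error variance. Constraining the λi to be equal gives tau-equivalent measures; additionally equating the error variances gives parallel measures.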

  20.

    See footnote 7.

  21.

    Take, for example, an IQ test in Dutch intended to measure the IQ of French-speaking students. The results may be very stable, but the test most probably measures the students’ knowledge of Dutch, not their IQ.

  22.

    The symbols τ, η, and ε are changed to F, M, and e, and the slope parameters λ are replaced by v and m, in order to be consistent with Saris’ true score model shown in the appendix (Saris 1990b, p. 119).

  23.

    Contrary to standard linear regression, the quantity on the left side of the equation is measured and is expressed as a function of unmeasured variables on the right side. For that reason, a set of equations needs to be solved in order to obtain values of the regression parameters in covariance structure analysis. The number of unknowns is reduced by introducing constraints between the parameters (Bollen 1989, pp. 16–23). These constraints are based on theoretical ideas, e.g., equality of slopes between indicators, or a common method factor.
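
    As a sketch of how such constraints work (a standard textbook illustration, not an example from the chapter): for three standardized congeneric indicators of one construct, the model implies

$$ \rho_{12} = \lambda_{1} \lambda_{2}, \quad \rho_{13} = \lambda_{1} \lambda_{3}, \quad \rho_{23} = \lambda_{2} \lambda_{3} $$

    three equations in three unknowns, so that, e.g., λ1 = √(ρ12 ρ13 / ρ23). With only two indicators there is one equation and two unknowns, and a constraint such as λ1 = λ2 must be imposed before the system can be solved.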

  24.

    In 1989, Saris established the International Research Group on Measurement Error in Comparative Survey research (IRMCS).

  25.

    See http://www.sqp.nl/media/sqp-1.0/.

  26.

    E.g., the elimination of content in a multi-functional measure, or the randomization of content in multi-functional items that are intended to measure both content and style. If one eliminates the content, the measure is no longer multi-functional, and the selection of items to measure content is by definition not random.

  27.

    The model contains two content factors, Threat and Distrust, with balanced sets of six and four observed indicators, respectively, and with identical slopes for the style factor (denoted 1). For actual parameter values, see Billiet and McClendon (2000, pp. 622–625).

  28.

    Models with two positive and two negative factors, or models with a separate style factor per content.

  29.

    In order to obtain validation, Billiet and McClendon (2000, pp. 623–626) investigated the relation between the style factor and a variable measuring the sum of agreements (named “scoring for acquiescence”) across a balanced set of 14 positively and negatively worded items. This procedure yielded a strong correlation of 0.90 (t = 22.26) between “scoring for acquiescence” and the style factor in a random sample of Flemish voters in Belgium. This finding was replicated several times in other samples.
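
    A minimal sketch of this “scoring for acquiescence” count (assuming pandas; the style factor itself was estimated in a structural equation model, which is not reproduced here). Item names and data are hypothetical:

```python
# "Scoring for acquiescence" over a balanced item set (cf. Billiet &
# McClendon 2000): count agreements across ALL items, regardless of
# wording direction. On a balanced set, substantive positions cancel
# out and the count mainly reflects an agreeing response style.
import numpy as np
import pandas as pd

def acquiescence_score(df, pos_items, neg_items, agree_codes=(4, 5)):
    """5-point Likert items coded 1-5; 4/5 taken as agreement."""
    items = df[list(pos_items) + list(neg_items)]
    return items.isin(agree_codes).sum(axis=1)

# Synthetic illustration: 1000 respondents, 7 positive + 7 negative items,
# answers driven only by a latent agreeing tendency (content ignored here).
rng = np.random.default_rng(1)
style = rng.normal(0, 1, 1000)
data = {f"item{j+1}": np.clip(np.round(3 + style + rng.normal(0, 1, 1000)), 1, 5).astype(int)
        for j in range(14)}
df = pd.DataFrame(data)
score = acquiescence_score(df, [f"item{j}" for j in range(1, 8)],
                           [f"item{j}" for j in range(8, 15)])
print(score.describe())
```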

  30.

    One can model MRS and ERS with latent class analysis by specifying a latent class of respondents who are very likely to choose the middle of the scale or the extreme response categories.
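
    As a sketch of the observed tendencies such a latent class would capture (not the latent class model itself; column names are hypothetical):

```python
# Per-respondent midpoint (MRS) and extreme (ERS) response rates on
# 5-point items: the observable inputs a style-oriented latent class
# analysis would pick up.
import pandas as pd

def response_style_rates(df, items, scale_min=1, scale_max=5):
    midpoint = (scale_min + scale_max) / 2
    answers = df[items]
    mrs = (answers == midpoint).mean(axis=1)                 # share of midpoint choices
    ers = answers.isin([scale_min, scale_max]).mean(axis=1)  # share of extreme choices
    return pd.DataFrame({"MRS": mrs, "ERS": ers})
```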

References

  • AAPOR. (2006). Standard definitions: Final dispositions of case codes and outcome rates for surveys (p. 46). American Association for Public Opinion Research: http://www.aapor.org/pdfs/standarddefs_4.pdf

  • Alwin, D. F. (1989). Problems in the estimation and interpretation of the reliability of survey data. Quality & Quantity, 23(3–4), 277–331.

  • Alwin, D. F. (1992). Information transmission in the survey interview: Number of response categories and the reliability of attitude measurement. In P. V. Marsden (Ed.), Sociological methodology 1992 (pp. 83–118). San Francisco: Jossey-Bass.

  • Alwin, D. F. (1997). Feeling thermometers versus 7-point scales: Which are better? Sociological Methods & Research, 25(3), 318–340.

  • Andrews, F. M. (1990). Some observations on meta-analysis of MTMM studies. In W. E. Saris & A. van Meurs (Eds.), Evaluation of measurement instruments by meta-analysis of multitrait-multimethod studies (pp. 15–51). Amsterdam: North Holland.

  • Andrews, F. M. (1984). Construct validity and error components of survey measures: A structural modeling approach. Public Opinion Quarterly, 48(2), 409–442.

  • Andrews, F. M., & Crandall, R. (1976). The validity of measures of self-reported well-being. Social Indicators Research, 3(1), 1–19.

  • Andrews, F. M., & Withey, S. B. (1976). Social indicators of well-being: Americans’ perceptions of life quality. New York: Plenum Press.

  • Baumgartner, H., & Steenkamp, J.-B. E. M. (2001). Response styles in marketing research: A cross-national investigation. Journal of Marketing Research, 38(2), 143–156.

  • Belson, W. A. (1981). Design and understanding of survey questions. Aldershot: Gower.

  • Beckers, M., & Billiet, J. (2010). Uphold or revoke? A study of question wording in twelve municipal plebiscites in Flanders. World Political Science Review, 6(1), article 5, p. 29: http://www.bepress.com/wpsr/vol6/iss1/art5/

  • Biderman, A. (1980). Report of a workshop on applying cognitive psychology to recall problems of the National Crime Survey. Washington, DC: Bureau of Social Research.

  • Biemer, P., & Caspar, R. (1994). Continuous quality improvement for survey operations: Some general principles and applications. Journal of Official Statistics, 10(3), 307–326.

  • Biemer, P. P. (2010). Total survey error: Design, implementation and evaluation. Public Opinion Quarterly, 74(5), 817–848.

  • Biemer, P. P., Groves, R. M., Lyberg, L. E., Mathiowetz, N. A., & Sudman, S. (Eds.). (1991). Measurement errors in surveys. New York: Wiley.

  • Billiet, J. (2011). Godsdienstige betrokkenheid en de houding tegenover vreemden: Een verdwijnend verband? [Religious involvement and attitudes toward foreigners: A disappearing relationship?]. In K. Abts, K. Dobbelaere & L. Voyé (Eds.), Nieuwe tijden, nieuwe mensen. Belgen over arbeid, gezin, ethiek, religie en politiek [New times, new people: Belgians on work, family, ethics, religion and politics], Chap. X (pp. 215–243). Leuven: Lannoo Campus & Koning Boudewijnstichting.

  • Billiet, J., Waterplas, L., & Loosveldt, G. (1992). Context effects as substantive data in social surveys. In N. Schwarz & S. Sudman (Eds.), Context effects in social and psychological research (pp. 131–147). New York: Springer.

  • Billiet, J., & McClendon, J. M. (2000). Modeling acquiescence in measurement models for two balanced sets of items. Structural Equation Modeling: A Multidisciplinary Journal, 7(4), 608–628.

  • Billiet, J., Cambré, B., & Welkenhuysen-Gybels, J. (2002). Equivalence of measurement instruments for attitude variables in comparative surveys taking method effects into account: The case of ethnocentrism. In A. Ferligoj & A. Mrvar (Eds.), Developments in social science methodology (pp. 73–96). Ljubljana: FDV.

  • Billiet, J., Swyngedouw, M., & Waege, H. (2004). Attitude strength and response stability of a quasi-balanced political alienation scale in a panel study. In W. E. Saris & P. M. Sniderman (Eds.), Studies in public opinion: Attitudes, nonattitudes, measurement error and change (pp. 268–292). Princeton: Princeton University Press.

  • Billiet, J., Koch, A., & Philippens, M. (2007a). Understanding and improving response rates. In R. Jowell, C. Roberts, R. Fitzgerald, & G. Eva (Eds.), Measuring attitudes cross-nationally: Lessons from the European Social Survey (pp. 113–137). London: Sage.

  • Billiet, J., Philippens, M., Fitzgerald, R., & Stoop, I. (2007b). Estimation of nonresponse bias in the European Social Survey: Using information from reluctant respondents. Journal of Official Statistics, 23(2), 135–162.

  • Billiet, J., Matsuo, H., Beullens, K., & Vehovar, V. (2009). Non-response bias in cross-national surveys: Designs for detection and adjustment in the ESS. ASK. Research and Methods, 18, 3–43.

  • Blom, A., Jäckle, A., & Lynn, P. (2010). The use of contact data in understanding cross-national differences in unit nonresponse. In J. A. Harkness, M. Braun, B. Edwards, T. P. Johnson, L. E. Lyberg, P. P. Mohler, B.-E. Pennell, & T. W. Smith (Eds.), Survey methods in multicultural, multinational, and multiregional contexts (pp. 335–354). Hoboken: Wiley.

  • Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley.

  • Bradburn, N. M. (1992). What have we learned? In N. Schwarz & S. Sudman (Eds.), Context effects in social and psychological research (pp. 315–324). New York: Springer.

  • Bohrnstedt, G. W. (1983). Measurement. In P. H. Rossi, J. D. Wright, & A. B. Anderson (Eds.), Handbook of survey research: Quantitative studies in social relations (pp. 70–122). New York: Academic Press.

  • Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105.

  • Cantril, H. (Ed.). (1944). Gauging public opinion. Princeton: Princeton University Press.

  • Converse, P. E. (1964). The nature of belief systems in mass publics. In D. Apter (Ed.), Ideology and discontent (pp. 206–261). New York: The Free Press.

  • Converse, J. M., & Presser, S. (1986). Survey questions: Handcrafting the standardized questionnaire. Sage University Series on Quantitative Applications in the Social Sciences. Beverly Hills: Sage Publications.

  • Couper, M. P., & De Leeuw, E. (2003). Nonresponse in cross-cultural and cross-national surveys. In J. A. Harkness, F. J. R. van de Vijver, & P. Ph. Mohler (Eds.), Cross-cultural survey methods (pp. 157–178). New Jersey: Wiley.

  • Couper, M. P. (2001). Measurement error in web surveys. Methodology & Statistics.

  • Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302.

  • Davidov, E., Schmidt, P., & Billiet, J. (Eds.). (2011). Cross-cultural analysis: Methods and applications (p. 503). New York: Taylor & Francis.

  • De Wit, H. (1994). Cijfers en hun achterliggende realiteit: De MTMM-kwaliteitsparameters op hun kwaliteit onderzocht [Figures and their underlying reality: The MTMM quality parameters assessed on their quality]. Leuven: K.U.Leuven, Departement Sociologie.

  • De Wit, H., & Billiet, J. (1995). The MTMM design: Back to the founding fathers. In W. E. Saris & A. Münnich (Eds.), The multitrait-multimethod approach to evaluate measurement instruments (pp. 39–60). Budapest: Eötvös University Press.

  • de Heer, W. (1999). International response trends: Results of an international survey. Journal of Official Statistics, 15(2), 129–142.

  • Dillman, D. A. (1978). Mail and telephone surveys: The total design method. New York: Wiley.

  • Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. New York: Wiley.

  • Dillman, D. A., Phelps, G., Tortora, R., Swift, K., Kohrell, J., Berck, J., & Messer, B. L. (2009). Response rate and measurement differences in mixed-mode surveys using mail, telephone, interactive voice response (IVR) and the Internet. Social Science Research, 38(1), 1–18.

  • Foddy, W. (1993). Constructing questions for interviews and questionnaires: Theory and practice in social research. Cambridge: Cambridge University Press.

  • Forsyth, B. H., & Lessler, J. T. (1991). Cognitive laboratory methods: A taxonomy. In P. P. Biemer, et al. (Eds.), Measurement errors in surveys (pp. 393–418). New York: Wiley.

  • Greenleaf, E. A. (1992a). Improving rating scale measures by detecting and correcting bias components in some response styles. Journal of Marketing Research, 29(2), 176–188.

  • Greenleaf, E. A. (1992b). Measuring extreme response style. Public Opinion Quarterly, 56(3), 328–351.

  • Groves, R. M. (1989). Survey errors and survey costs. New York: Wiley.

  • Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 646–675.

  • Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey methodology. New York: Wiley.

  • Groves, R., & Heeringa, S. G. (2006). Responsive design for household surveys: Tools for actively controlling survey errors and costs. Journal of the Royal Statistical Society Series A, 169(3), 439–457.

  • Groves, R., & Lyberg, L. (2010). Total survey error: Past, present, and future. Public Opinion Quarterly, 74(5), 849–879.

  • Heise, D. R. (1969). Separating reliability and stability in test-retest correlation. American Sociological Review, 34(1), 93–101.

  • Heerwegh, D. (2003). Explaining response latencies and changing answers using client-side paradata from a web survey. Social Science Computer Review, 21(3), 360–373.

  • Hippler, H.-J., Schwarz, N., & Sudman, S. (Eds.). (1987). Social information processing and survey methodology. New York: Springer.

  • Holleman, B. (2000). The forbid/allow asymmetry: On the cognitive mechanisms underlying wording effects in surveys. Amsterdam: Rodopi.

  • Jabine, T. B., Straf, M. L., Tanur, J. M., & Tourangeau, R. (Eds.). (1984). Cognitive aspects of survey methodology: Building a bridge between disciplines. Washington, DC: National Academy Press.

  • Jowell, R., Roberts, C., Fitzgerald, R., & Eva, G. (Eds.). (2007). Measuring attitudes cross-nationally: Lessons from the European Social Survey. London: Sage.

  • Jöreskog, K. G. (1971). Statistical analysis of sets of congeneric tests. Psychometrika, 36(2), 109–133.

  • Kalleberg, A. L., & Kluegel, J. R. (1975). Analysis of the multitrait-multimethod matrix: Some limitations and an alternative. Journal of Applied Psychology, 60, 1–9.

  • Kaminska, O., McCutcheon, A., & Billiet, J. (2011). Satisficing among reluctant respondents in a cross-national context. Public Opinion Quarterly, 74(5), 956–984.

  • Kalton, G., & Schuman, H. (1982). The effect of the question on survey responses: A review. Journal of the Royal Statistical Society Series A, 145(1), 42–73.

  • Kreuter, F., & Kohler, U. (2009). Analysing contact sequences in call record data: Potential and limitations of sequence indicators for nonresponse adjustment in the European Social Survey. Journal of Official Statistics, 25, 203–226.

  • Kreuter, F., Olson, K., Wagner, J., Yan, T., Ezzati-Rice, T., Casas-Cordero, C., et al. (2010). Using proxy measures and other correlates of survey outcomes to adjust for nonresponse: Examples from multiple surveys. Journal of the Royal Statistical Society Series A, 173(2), 389–407.

  • Krosnick, J. A., Narayan, S., & Smith, W. R. (1996). Satisficing in surveys: Initial evidence. In M. T. Braverman & J. K. Slater (Eds.), Advances in survey research (pp. 29–44). San Francisco: Jossey-Bass.

  • Loosveldt, G., Carton, A., & Billiet, J. (2004). Assessment of survey data quality: A pragmatic approach focused on interviewer tasks. International Journal of Market Research, 46(1), 65–82.

  • Loosveldt, G., & Beullens, K. (2009). Fieldwork monitoring. Work package 6, Deliverable 5 (Version 2): http://www.risq-project.eu

  • Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading: Addison-Wesley.

  • Lyberg, L., Biemer, P., Collins, M., de Leeuw, E., Dippo, C., Schwarz, N., & Trewin, D. (Eds.). (1997). Survey measurement and process quality. New York: Wiley.

  • Lynn, P. J., Beerten, R., Laiho, J., & Martin, J. (2001). Recommended standard final outcome categories and standard definitions of response rate for social surveys. Working Paper 2001-23, October. Colchester: Institute for Social and Economic Research.

  • Lynn, P. (2003). PEDAKSI: Methodology for collecting data about survey non-respondents. Quality & Quantity, 37, 239–261.

  • Lynn, P., & Kaminska, O. (2011). The impact of mobile phones on survey measurement error (p. 45). Working Paper Series 2011-07. Essex: Institute for Social & Economic Research.

  • Matsuo, H., Billiet, J., Loosveldt, G., & Malnar, B. (2010a). Response-based quality assessment of ESS Round 4: Results for 30 countries based on contact files (p. 80). Working Paper CeSO/SM/2010-2. Leuven: CeSO.

  • Matsuo, H., Billiet, J., Loosveldt, G., Berglund, F., & Kleven, Ø. (2010b). Measurement and adjustment of non-response bias based on non-response surveys: The case of Belgium and Norway in European Social Survey Round 3. Survey Research Methods, 4(3), 165–178.

  • McClendon, M. J. (1991). Acquiescence and recency response-order effects in interview surveys. Sociological Methods and Research, 20(1), 60–103.

  • McClendon, M. J. (1992). On the measurement and control of acquiescence in latent variable models. Paper presented at the annual meeting of the American Sociological Association, August 24–29, Pittsburgh.

  • Mirowsky, J., & Ross, C. E. (1991). Eliminating defense and agreement bias from measures of the sense of control: A 2×2 index. Social Psychology Quarterly, 54(2), 127–145.

  • Molenaar, N. J. (1986). Formuleringseffecten in survey-interviews: Een nonexperimenteel onderzoek [Wording effects in survey interviews: A non-experimental study]. Amsterdam: Vrije Universiteit.

  • Moors, G. (2004). Facts and artefacts in the comparison of attitudes among ethnic minorities: A multigroup latent class structure model with adjustment for response style behavior. European Sociological Review, 20(4), 303–320.

  • Olson, K. (2006). Survey participation, nonresponse bias, measurement error bias, and total bias. Public Opinion Quarterly, 70(5), 737–758.

  • O’Muircheartaigh, C., Gaskell, G., & Wright, D. B. (1995). Weighing anchors: Verbal and numeric labels for response scales. Journal of Official Statistics, 11(3), 295–307.

  • Paulhus, D. L. (1991). Measurement and control of response bias. In J. P. Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of personality and social psychological attitudes (pp. 17–59). San Diego: Academic Press.

  • Payne, S. L. (1951). The art of asking questions. Princeton: Princeton University Press.

  • Rorer, L. G. (1965). The great response-style myth. Psychological Bulletin, 63(3), 129–156.

  • Rossi, P. H., Wright, J. D., & Anderson, A. B. (1983). Handbook of survey research: Quantitative studies in social relations. New York: Academic Press.

  • Rugg, D. (1941). Experiments in wording questions: II. Public Opinion Quarterly, 5(1), 91–92.

  • Saris, W. E. (1990a). Models for evaluation of measurement instruments. In W. E. Saris & A. van Meurs (Eds.), Evaluation of measurement instruments by meta-analysis of multitrait-multimethod studies (pp. 52–80). Amsterdam: North Holland.

  • Saris, W. E. (1990b). The choice of a model for evaluation of measurement instruments. In W. E. Saris & A. van Meurs (Eds.), Evaluation of measurement instruments by meta-analysis of multitrait-multimethod studies (pp. 118–132). Amsterdam: North Holland.

  • Saris, W. (1995). Designs and models for quality assessment of survey measures. In W. Saris & A. Münnich (Eds.), The multitrait-multimethod approach to evaluate measurement instruments. Budapest: Eötvös University Press.

  • Saris, W. E., & Andrews, F. M. (1991). Evaluation of measurement instruments using a structural modeling approach. In P. P. Biemer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz, & S. Sudman (Eds.), Measurement errors in surveys (pp. 575–598). New York: Wiley.

  • Saris, W. E., Satorra, A., & Coenders, G. (2004). A new approach to evaluating the quality of measurement instruments: The split-ballot MTMM design. Sociological Methodology, 34(1), 311–347.

  • Saris, W. E., & Gallhofer, I. N. (2007). Design, evaluation, and analysis of questionnaires for survey research. Hoboken: Wiley.

  • Saris, W. E., & Sniderman, P. M. (Eds.). (2004). Studies in public opinion: Attitudes, nonattitudes, measurement error and change. Princeton: Princeton University Press.

  • Scherpenzeel, A. (1995). Misspecification effects. In W. E. Saris & A. Münnich (Eds.), The multitrait-multimethod approach to evaluate measurement instruments (pp. 61–70). Budapest: Eötvös University Press.

  • Scherpenzeel, A. C., & Saris, W. E. (1997). The validity and reliability of survey questions: A meta-analysis of MTMM studies. Sociological Methods & Research, 25(3), 341–383.

  • Schmitt, N., & Stults, D. N. (1986). Methodology review: Analysis of multitrait-multimethod matrices. Applied Psychological Measurement, 10, 1–22.

  • Schouten, B., Cobben, F., & Bethlehem, J. (2009). Indicators for the representativeness of survey response. Survey Methodology, 35(1), 101–113.

  • Schuman, H., & Presser, S. (1979). The open and closed question. American Sociological Review, 44, 692–712.

  • Schuman, H. (1982). Artifacts are in the mind of the beholder. The American Sociologist, 17, 21–28.

  • Schuman, H. (1984). Response effects with subjective survey questions. In C. F. Turner & E. Martin (Eds.), Surveying subjective phenomena (Vol. 1, pp. 129–147). New York: Russell Sage Foundation.

  • Schuman, H., & Presser, S. (1981). Questions and answers in attitude surveys: Experiments on question form, wording and context. New York: Academic Press.

  • Schuman, H., & Ludwig, J. (1983). The norm of even-handedness in surveys as in life. American Sociological Review, 48(1), 112–120.

  • Schwarz, N., & Hippler, H. J. (1987). What response scales may tell your respondents. In H. Hippler, N. Schwarz, & S. Sudman (Eds.), Social information processing and survey methodology (pp. 163–178). New York: Springer.

  • Schwarz, N., Strack, F., Mueller, G., & Chassein, B. (1988). The range of response alternatives may determine the meaning of questions: Further evidence on informative functions of response alternatives. Social Cognition, 6(2), 107–117.

  • Schwarz, N., & Hippler, H. J. (1991). Response alternatives: The impact of their choice and presentation order. In P. P. Biemer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz, & S. Sudman (Eds.), Measurement errors in surveys (pp. 163–178). New York: Wiley.

  • Schwarz, N., & Sudman, S. (Eds.). (1992). Context effects in social and psychological research. New York: Springer.

  • Schwarz, N., Knäuper, B., Oyserman, D., & Stich, C. (2008). The psychology of asking questions. In E. de Leeuw, J. J. Hox & D. A. Dillman (Eds.), International handbook of survey methodology (pp. 18–34). New York: Lawrence Erlbaum Associates.

  • Smith, T. W. (1984). Nonattitudes: A review and evaluation. In C. F. Turner & E. Martin (Eds.), Surveying subjective phenomena (Vol. 2, pp. 215–255). New York: Russell Sage Foundation.

  • Sniderman, P. M., & Grob, D. B. (1996). Innovations in experimental design in attitude surveys. Annual Review of Sociology, 22, 377–399.

  • Sniderman, P. M., & Hagendoorn, L. (2009). When ways of life collide: Multiculturalism and its discontents in the Netherlands. Princeton: Princeton University Press.

  • Strack, F., & Martin, L. L. (1987). Thinking, judging, and communicating: A process account of context effects in attitude surveys. In H. Hippler, N. Schwarz & S. Sudman (Eds.), Social information processing and survey methodology (pp. 123–148). New York: Springer.

  • Strack, F., Schwarz, N., & Wänke, M. (1991). Semantic and pragmatic aspects of context effects in social and psychological research. Social Cognition, 9(1), 111–125.

  • Sudman, S., & Bradburn, N. M. (1974). Response effects in surveys: A review and synthesis. Chicago: Aldine.

  • Sudman, S., & Bradburn, N. M. (1982). Asking questions: A practical guide to questionnaire design. San Francisco: Jossey-Bass.

  • Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco: Jossey-Bass.

  • Stoop, I., Billiet, J., Koch, A., & Fitzgerald, R. (2010). Improving survey response: Lessons learned from the European Social Survey. Chichester: Wiley.

  • Tourangeau, R. (1992). Context effects on responses to attitude questions: Attitudes as memory structures. In N. Schwarz & S. Sudman (Eds.), Context effects in social and psychological research (pp. 35–48). New York: Springer.

  • Tourangeau, R., & Rasinski, K. A. (1988). Cognitive processes underlying context effects in attitude measurement. Psychological Bulletin, 103(3), 299–314.

  • Tourangeau, R., Rips, L. J., & Rasinski, K. A. (2000). The psychology of survey response. Cambridge: Cambridge University Press.

  • Vehovar, V. (2007). Non-response bias in the European Social Survey. In G. Loosveldt, M. Swyngedouw & B. Cambré (Eds.), Measuring meaningful data in social research (pp. 335–356). Leuven: Acco.

  • Weijters, B. (2006). Response styles in consumer research. Belgium: Vlerick Leuven Gent Management School.

  • Zaller, J. R. (1992). The nature and origins of mass opinion. Cambridge: Cambridge University Press.


Correspondence to Jaak Billiet.


Appendix


The MTMM true-score model for two correlated variables measured with two different methods (response scales), and with unique components and random error

  • yi1 and yi2: observed indicators of trait i measured with methods 1 and 2;

  • F1 and F2: the latent (intended) variables 1 and 2, which are correlated;

  • M1 and M2: latent method factors 1 and 2, which are assumed to be uncorrelated;

  • Ti1 and Ti2: true score of variable Fi repeatedly measured with methods 1 and 2;

  • ei1 and ei2: random measurement error for item i measured with methods 1 and 2;

  • ri1 and ri2: the reliability coefficients for item i measured with methods 1 and 2 (the squares of these are estimates of the test–retest reliability in the case of repeated identical methods);

  • vi1 and vi2: the true score validity coefficients for item i measured with methods 1 and 2; vi1² (or vi2²) is the variance in Ti1 (or Ti2) explained by the variable Fi that one intends to measure;

  • mi1 and mi2: the method effects on the true scores for methods 1 and 2; mi1² (or mi2²), together with ui1 (or ui2), is the part of Ti1 (or Ti2) not explained by the intended variable, and thus invalidity;

  • ui1 and ui2: the residual variance of the true score of item i measured with methods 1 and 2;

  • ρ(F1F2): the correlation between the two latent (intended) variables.

$$ y_{ij} = r_{ij} T_{ij} + e_{ij} $$
(10.1)
$$ T_{ij} = v_{ij} F_{i} + m_{ij} M_{j} + u_{ij} $$
(10.2)

This model is a simplified version of the true score MTMM model that is actually used. In reality it is not possible to estimate a ‘2 (traits) × 2 (methods)’ model: one needs at least three traits and three methods. But even then it is not possible to estimate the parameters unless certain assumptions are made. The simplest (and acceptable) assumption is that the unique residual variances of the true scores (uij) are zero (Saris 1990b, p. 119; Scherpenzeel and Saris 1997, pp. 344–347).

In line with Eqs. (10.1) and (10.2), and assuming zero unique variance uij, the true score model is written as:

$$ y_{ij} = r_{ij} v_{ij} F_{i} + r_{ij} m_{ij} M_{j} + e_{ij} $$
(10.3)

Path analysis suggests that the observed correlation between the measures of two traits, both measured with method M1, is (Scherpenzeel and Saris 1997, pp. 372–373):

$$ \mathrm{Corr}(y_{11}, y_{21}) = v_{11} r_{11} \, \rho(F_{1}, F_{2}) \, v_{21} r_{21} + r_{11} m_{11} m_{21} r_{21} $$
(10.4)

From which one can derive:

$$ \rho(F_{1}, F_{2}) = \left[ \mathrm{Corr}(y_{11}, y_{21}) - r_{11} m_{11} m_{21} r_{21} \right] / \left( v_{11} r_{11} v_{21} r_{21} \right) $$
(10.5)

This expression is useful for adjusting observed correlations between variables when one can rely on validity, reliability, and invalidity estimates (e.g. based on meta-analysis of MTMM data).
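
As a worked illustration of Eq. (10.5), the following sketch adjusts an observed same-method correlation using hypothetical reliability, validity, and method-effect estimates of the kind one might take from an MTMM meta-analysis; the numbers are illustrative only:

```python
# Eq. (10.5): remove the shared-method component from an observed
# correlation and correct for attenuation by reliability and validity.
def adjusted_correlation(corr_y, r11, r21, v11, v21, m11, m21):
    method_part = r11 * m11 * m21 * r21   # covariance induced by the common method
    attenuation = v11 * r11 * v21 * r21   # product of reliability and validity coefficients
    return (corr_y - method_part) / attenuation

# Observed corr of .40, reliability coefficients .90/.85, validity
# coefficients .95/.92, method effects .30/.30 (all hypothetical):
print(round(adjusted_correlation(0.40, 0.90, 0.85, 0.95, 0.92, 0.30, 0.30), 3))
# -> about 0.495: here attenuation outweighs the shared-method inflation,
#    so the adjusted correlation is higher than the observed one.
```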


Copyright information

© 2012 Springer Science+Business Media New York


Cite this chapter

Billiet, J., & Matsuo, H. (2012). Non-Response and Measurement Error. In L. Gideon (Ed.), Handbook of Survey Methodology for the Social Sciences. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-3876-2_10
