Unit Non-Response Due to Refusal

Abstract

Non-response is a key threat to survey quality when participation is related to the topic of a survey. The higher the non-response and the more respondents differ from non-respondents, the larger the non-response bias and the lower the validity of survey results. Refusal is usually the major cause of non-response; it can result in non-response bias when the reasons for refusal are related to the topic of the survey. Face-to-face surveys can provide some information about refusals and refusers. Based on empirical evidence from a wide range of studies, this chapter shows why people cooperate and why they refuse, which survey design features can influence cooperation, and what impact interviewers can have. It also discusses the extent to which refusal conversion can help to enhance response rates and minimise bias, and how follow-up surveys or doorstep questionnaires can provide information on the survey questions that are central to the topic.
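The relationship stated above between the level of non-response and the difference between respondents and non-respondents can be made explicit with the familiar deterministic approximation of non-response bias for a sample mean. This is a sketch of the standard textbook formula from the survey methodology literature, not a result reported in the chapter itself; the symbols below are introduced here for illustration.

\[
\operatorname{Bias}(\bar{y}_{R}) \;\approx\; \frac{N_{NR}}{N}\,\bigl(\bar{Y}_{R} - \bar{Y}_{NR}\bigr)
\]

Here \(\bar{y}_{R}\) is the estimate based on respondents only, \(N_{NR}/N\) is the non-response rate, and \(\bar{Y}_{R}\) and \(\bar{Y}_{NR}\) are the population means of respondents and non-respondents. The bias is small only when the non-response rate is low or when respondents and non-respondents hardly differ on the variable of interest, which is exactly why refusal that is related to the survey topic is so damaging.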

Keywords

Non-response · Refusal · Survey questions · Validity · Bias · Survey cooperation



Copyright information

© Springer Science+Business Media New York 2012

Authors and Affiliations

The Netherlands Institute for Social Research/SCP, The Hague, The Netherlands