Outcome Rates and Effect on Sample Size

  • Richard Valliant
  • Jill A. Dever
  • Frauke Kreuter
Chapter
Part of the Statistics for Social and Behavioral Sciences book series (SSBS)

Abstract

Outcome rates, such as the percentage of sample units that refuse to participate in a survey, generally have three uses. The first is to measure study performance; used in this way, outcome rates are often referred to as process indicators. The second is to inflate a calculated sample size to account for the loss of sample units, so that the planned analyses remain viable. Third, the rates can be incorporated into the design weights as adjustment factors to create the final analysis weights. Outcome rates are not necessarily a measure of data quality, but they can guide field decisions, and the logic behind them is helpful in the planning stages of a survey. The disposition codes needed to define the outcome rates are also discussed.
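
As a rough illustration of these three uses, the sketch below (in Python, with invented disposition counts; the variable names and the simplified RR1-style formula are assumptions for this example, not the chapter's own notation) computes a response rate from final dispositions, inflates a target number of completes for anticipated ineligibility and nonresponse, and applies a crude nonresponse adjustment to a design weight.

```python
import math

# Illustrative sketch only: the disposition counts and the simplified
# RR1-style formula are invented for this example, not taken from the chapter.

# Hypothetical final disposition counts for a fielded sample
completes     = 620   # completed interviews (I)
refusals      = 210   # refusals and break-offs (R)
noncontacts   = 150   # eligible but never contacted (NC)
other_nonresp = 20    # other eligible nonresponse (O)
unknown_elig  = 300   # unknown eligibility (U)
# Confirmed ineligibles are excluded from the denominator entirely.

# Use 1: an outcome rate as a process indicator, here a simplified
# AAPOR RR1-style response rate (unknowns treated as eligible).
rr1 = completes / (completes + refusals + noncontacts + other_nonresp + unknown_elig)
print(f"RR1-style response rate: {rr1:.3f}")            # 0.477

# Use 2: inflate a target respondent count for anticipated losses.
# Suppose the analysis needs 500 completes, and we expect an 80%
# eligibility rate and a 50% response rate among eligibles.
n_target, elig_rate, resp_rate = 500, 0.80, 0.50
n_to_field = math.ceil(n_target / (elig_rate * resp_rate))
print(f"Units to sample initially: {n_to_field}")       # 1250

# Use 3: fold the response rate into the weights. A crude single-class
# nonresponse adjustment divides each respondent's design weight by the
# response rate; in practice the adjustment is done within weighting
# classes or via response-propensity models.
base_weight  = 25.0
final_weight = base_weight / rr1
print(f"Adjusted analysis weight: {final_weight:.1f}")  # 52.4
```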

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Richard Valliant (1, 2)
  • Jill A. Dever (3)
  • Frauke Kreuter (2, 4)

  1. University of Michigan, Ann Arbor, USA
  2. University of Maryland, College Park, USA
  3. RTI International, Washington, DC, USA
  4. University of Mannheim, Mannheim, Germany
