Abstract
Outcome rates, such as the percentage of sample units refusing to participate in a survey, generally have three uses. The first is to measure study performance; rates used this way are often referred to as process indicators. The second is to inflate a calculated sample size to account for the loss of sample units and ensure the viability of planned analyses. Third, outcome rates can be incorporated into the design weights as adjustment factors to create final analysis weights. Outcome rates are not necessarily a measure of data quality, but they can be used to guide field decisions, and the logic behind them helps in the planning stages of a survey. The chapter also discusses the disposition codes needed to define the outcome rates.
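The second use above, inflating a sample size for anticipated losses, can be sketched with the standard textbook adjustment: divide the number of completed cases needed by the expected response and eligibility rates. The function name and the rates below are illustrative, not taken from the chapter.

```python
import math

def inflate_sample_size(n_needed, response_rate, eligibility_rate=1.0):
    """Return the number of units to sample so that, after expected
    losses to ineligibility and nonresponse, about n_needed responding
    eligible units remain."""
    if not (0 < response_rate <= 1 and 0 < eligibility_rate <= 1):
        raise ValueError("rates must be in (0, 1]")
    # Divide by the product of the anticipated rates and round up.
    return math.ceil(n_needed / (response_rate * eligibility_rate))

# Example: 500 completes needed, 60% expected response,
# 90% of sampled units expected to be eligible.
print(inflate_sample_size(500, 0.60, 0.90))  # -> 926
```

Rounding up (rather than to the nearest integer) keeps the expected yield at or above the target.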
Notes
- 3. Alternatively, the units could be worked in a random order, in which case data collection could be stopped partway through a replicate. Working cases in a random order is typically impractical, however.
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Valliant, R., Dever, J.A., Kreuter, F. (2018). Outcome Rates and Effect on Sample Size. In: Practical Tools for Designing and Weighting Survey Samples. Statistics for Social and Behavioral Sciences. Springer, Cham. https://doi.org/10.1007/978-3-319-93632-1_6
DOI: https://doi.org/10.1007/978-3-319-93632-1_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-93631-4
Online ISBN: 978-3-319-93632-1
eBook Packages: Mathematics and Statistics (R0)