Respondent Burden: A First Measurement Effort
Since the 1930s, the sample survey has become the major tool of social and behavioral research in the Western world. Most investigations that seek to establish how people live, what kind of work they do, how they spend their time and money, how they fare with respect to health and safety, and how they feel about private and public issues rely on sample surveys. Many important decisions made by government agencies and in the business community are based, to a greater or lesser extent, on survey data, whether decisions to launch major employment programs (when sample surveys show increases in the incidence of unemployment), to select political candidates (depending on their showing in public opinion polls), or to introduce new products or services (on the basis of market research findings).
- 1. Brooks, Camilla A., and Barbara A. Bailar, “An Error Profile: Employment as Measured by the Current Population Survey.” Statistical Policy Working Paper 3. U.S. Department of Commerce, Office of Federal Statistical Policy and Standards, 1978.
- 2. Marquis, Kent H., “Survey Response Rates: Some Trends, Causes and Correlates.” Background paper, The Rand Corporation, Santa Monica, 1977.
- 3. Goldfield, E. D., A. G. Turner, C. D. Cowan, and J. D. Scott, “Privacy and Confidentiality as Factors in Survey Response.” Public Data Use, Vol. 6, 1978, pp. 3–16.
- 5. See “Discussion of Response Rates” (Charles Cannell, University of Michigan, Chair) in Health Survey Research Methods, Second Biennial Conference, Williamsburg, Virginia, 1977. (National Center for Health Services Research, Research Proceedings Series), DHEW Publication No. (PHS) 79-3207.
- 6. Bradburn, Norman, “Respondent Burden,” in Health Survey Research Methods, op. cit., pp. 49–53.
- 7. Malmuth, M., “The Advantages and Disadvantages of Rotating the Annual Housing Survey National Sample from a Non-Response Point of View.” Paper presented at the American Statistical Association Annual Meeting, August 1978, p. 26.
- 8. To our surprise, the interviewers were not greatly disturbed by the experimental nature of the study, i.e., having to work with different questionnaire models. What disturbed them most was the possibility that the actual data collected in the study would not be analyzed and used as policy inputs. They had made a commitment to the study because they felt that it dealt with important public issues (housing, energy use, transportation) and were relatively uninterested in its methodological aspects. “Good” interviewers clearly believe that surveys are important and influential; in this respect, as shown in the “findings” section, they are like the most cooperative respondents.
- 9. “No contact” occurred if there was no one at home after four attempts, the eligible respondent was away during the interviewing period, or the dwelling unit was unoccupied or not an eligible housing unit.
- 10. Wiseman, F., The Nonresponse Problem in Consumer Surveys. Unpublished paper presented at the annual meeting of the American Association for Public Opinion Research, Cincinnati, Ohio, 1980.
- 11. Jones, C., et al., Dakota Farmers and Ranchers Evaluate Crop and Livestock Surveys, National Opinion Research Center, Chicago, 1979, p. 69.
- 12. This statement should be qualified: the research reported here was limited to predominantly white, middle-class respondents. It should be replicated with inner-city, minority, and rural populations before broad generalizations are made.
- 13. Frankel, Joanne, Measurement of Respondent Burden: Study Design and Early Findings, Bureau of Social Science Research, Washington, D.C., 1980.