Survey Interviewing: Departures from the Script

Abstract

Most standardized survey interviews are designed so that the interviewer delivers a specific script and follows a set of predefined paths through the interview. Standardized interviewing practice dictates that interviewers read each question exactly as worded and avoid any deviation, at least on the first reading. However, as this chapter discusses, interviewers sometimes depart from the standardized script, for example by modifying the wording of a question or, occasionally, by providing a definition to respondents. Research suggests that when interviewers are well trained and monitored (for example, in centralized phone rooms), most deviations from standardized practice are minor and are occasioned by the wording or structure of the question, or by a lack of fit between the assumptions of the question and the respondent's situation. This chapter reviews several types of these deviations and discusses the positive and negative effects they can have on survey data quality.



Correspondence to Nora Cate Schaeffer.


Copyright information

© 2018 The Author(s)

Cite this chapter

Schaeffer, N.C. (2018). Survey Interviewing: Departures from the Script. In: Vannette, D., Krosnick, J. (eds) The Palgrave Handbook of Survey Research. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-54395-6_15
