Abstract
Surveys, now commonplace on the Internet, allow researchers to make inferences about an entire population by gathering information from a small subset of that larger group. Surveys can capture people’s attitudes, perceptions, intentions, habits, awareness, experiences, and characteristics, both at significant moments in time and over time. Although surveys are easy to administer, there is a wide gap between quick-and-dirty surveys and surveys that are properly planned, constructed, and analyzed.
References
Overview Books
Couper, M. (2008). Designing effective Web surveys. Cambridge, UK: Cambridge University Press.
Fowler, F. J., Jr. (1995). Improving survey questions: Design and evaluation (Vol. 38). Thousand Oaks, CA: Sage.
Groves, R. M. (1989). Survey errors and survey costs. Hoboken, NJ: Wiley.
Groves, R. M. (2004). Survey errors and survey costs (Vol. 536). Hoboken, NJ: Wiley-Interscience.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey methodology. Hoboken, NJ: Wiley.
Marsden, P. V., & Wright, J. (Eds.). (2010). Handbook of survey research (2nd ed.). Bingley, UK: Emerald Group Publishing Limited.
Sampling Methods
Aquilino, W. S. (1994). Interview mode effects in surveys of drug and alcohol use: A field experiment. Public Opinion Quarterly, 58(2), 210–240.
Cochran, W. G. (1977). Sampling techniques (3rd ed.). New York, NY: Wiley.
Couper, M. P. (2000). Web surveys: A review of issues and approaches. Public Opinion Quarterly, 64, 464–494.
Kish, L. (1965). Survey sampling. New York, NY: Wiley.
Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30, 607–610.
Lohr, S. L. (1999). Sampling: Design and analysis. Pacific Grove, CA: Duxbury Press.
Questionnaire Design
Bradburn, N. M., Sudman, S., & Wansink, B. (2004). Asking questions: The definitive guide to questionnaire design – for market research, political polls, and social and health questionnaires (Rev. ed.). San Francisco, CA: Jossey-Bass.
Cannell, C. F., & Kahn, R. L. (1968). Interviewing. The Handbook of Social Psychology, 2, 526–595.
Chan, J. C. (1991). Response-order effects in Likert-type scales. Educational and Psychological Measurement, 51(3), 531–540.
Costa, P. T., & McCrae, R. R. (1988). From catalog to classification: Murray’s needs and the five-factor model. Journal of Personality and Social Psychology, 55(2), 258.
Couper, M. P., Tourangeau, R., Conrad, F. G., & Crawford, S. D. (2004). What they see is what we get: Response options for web surveys. Social Science Computer Review, 22(1), 111–127.
Edwards, A. L., & Kenney, K. C. (1946). A comparison of the Thurstone and Likert techniques of attitudes scale construction. Journal of Applied Psychology, 30, 72–83.
Goffman, E. (1959). The presentation of self in everyday life (pp. 1–17). Garden City, NY: Doubleday.
Goldberg, L. R. (1990). An alternative description of personality: The big-five factor structure. Journal of Personality and Social Psychology, 59(6), 1216.
Herzog, A. R., & Bachman, J. G. (1981). Effects of questionnaire length on response quality. Public Opinion Quarterly, 45(4), 549–559.
Holbrook, A. L., & Krosnick, J. A. (2010). Social desirability bias in voter turnout reports: Tests using the item count technique. Public Opinion Quarterly, 74(1), 37–67.
Kinder, D. R., & Iyengar, S. (1987). News That Matters: Television and American Opinion. Chicago: University of Chicago Press.
Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5, 213–236.
Krosnick, J. A. (1999). Survey research. Annual Review of Psychology, 50(1), 537–567.
Krosnick, J. A. (2002). The causes of no-opinion responses to attitude measures in surveys: They are rarely what they appear to be. In R. Groves, D. Dillman, J. Eltinge, & R. Little (Eds.), Survey non-response (pp. 87–100). New York: Wiley.
Krosnick, J. A., & Alwin, D. F. (1987). Satisficing: A strategy for dealing with the demands of survey questions. Columbus, OH: Ohio State University.
Krosnick, J. A., & Alwin, D. F. (1988). A test of the form-resistant correlation hypothesis: Ratings, rankings, and the measurement of values. Public Opinion Quarterly, 52(4), 526–538.
Krosnick, J. A., & Fabrigar, L. A. (1997). Designing rating scales for effective measurement in surveys. In L. Lyberg et al. (Eds.), Survey measurement and process quality (pp. 141–164). New York: Wiley.
Krosnick, J. A., Narayan, S., & Smith, W. R. (1996). Satisficing in surveys: Initial evidence. New Directions for Evaluation, 1996(70), 29–44.
Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In P. V. Marsden & J. D. Wright (Eds.), Handbook of survey research (pp. 263–314). Bingley, UK: Emerald Group Publishing Limited.
Landon, E. L. (1971). Order bias, the ideal rating, and the semantic differential. Journal of Marketing Research, 8(3), 375–378.
O’Muircheartaigh, C. A., Krosnick, J. A., & Helic, A. (2001). Middle alternatives, acquiescence, and the quality of questionnaire data. In B. Irving (Ed.), Harris Graduate School of Public Policy Studies. Chicago, IL: University of Chicago.
Paulhus, D. L. (1984). Two-component models of socially desirable responding. Journal of Personality and Social Psychology, 46(3), 598.
Payne, S. L. (1951). The art of asking questions. Princeton, NJ: Princeton University Press.
Payne, J. D. (1971). The effects of reversing the order of verbal rating scales in a postal survey. Journal of the Market Research Society, 14, 30–44.
Rohrmann, B. (2003). Verbal qualifiers for rating scales: Sociolinguistic considerations and psychometric data. Project Report. Melbourne, Australia: University of Melbourne.
Saris, W. E., Revilla, M., Krosnick, J. A., & Shaeffer, E. M. (2010). Comparing questions with agree/disagree response options to questions with construct-specific response options. Survey Research Methods, 4(1), 61–79.
Schaeffer, N. C., & Presser, S. (2003). The science of asking questions. Annual Review of Sociology, 29, 65–88.
Schlenker, B. R., & Weigold, M. F. (1989). Goals and the self-identification process: Constructing desired identities. In L. Pervin (Ed.), Goal concepts in personality and social psychology (pp. 243–290). Hillsdale, NJ: Erlbaum.
Schuman, H., & Presser, S. (1981). Questions and answers in attitude surveys. New York: Academic Press.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129–138.
Smith, D. H. (1967). Correcting for social desirability response sets in opinion-attitude survey research. Public Opinion Quarterly, 31, 87–94.
Stone, G. C., Gage, N. L., & Leavitt, G. S. (1957). Two kinds of accuracy in predicting another’s responses. The Journal of Social Psychology, 45(2), 245–254.
Tourangeau, R. (1984). Cognitive science and survey methods. Cognitive aspects of survey methodology: Building a bridge between disciplines (pp. 73–100). Washington, DC: National Academy Press.
Tourangeau, R., Couper, M. P., & Conrad, F. (2004). Spacing, position, and order: Interpretive heuristics for visual features of survey questions. Public Opinion Quarterly, 68(3), 368–393.
Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge, UK: Cambridge University Press.
Tourangeau, R., & Smith, T. W. (1996). Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly, 60(2), 275–304.
Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859.
Villar, A., & Krosnick, J. A. (2011). Global warming vs. climate change, taxes vs. prices: Does word choice matter? Climatic Change, 105(1), 1–12.
Visual Survey Design
Callegaro, M., Villar, A., & Yang, Y. (2011). A meta-analysis of experiments manipulating progress indicators in Web surveys. Annual Meeting of the American Association for Public Opinion Research, Phoenix, AZ.
Couper, M. (2011). Web survey methodology: Interface design, sampling and statistical inference. Presentation at EUSTAT–The Basque Statistics Institute, Vitoria-Gasteiz.
Couper, M. P., Conrad, F. G., & Tourangeau, R. (2007). Visual context effects in Web surveys. Public Opinion Quarterly, 71(4), 623–634.
Peytchev, A., Couper, M. P., McCabe, S. E., & Crawford, S. D. (2006). Web survey design: Paging versus scrolling. Public Opinion Quarterly, 70(4), 596–607.
Yan, T., Conrad, F. G., Tourangeau, R., & Couper, M. P. (2011). Should I stay or should I go: The effects of progress feedback, promised task duration, and length of questionnaire on completing Web surveys. International Journal of Public Opinion Research, 23(2), 131–147.
Established Questionnaire Instruments
Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In Usability evaluation in industry (pp. 189–194). London: Taylor & Francis.
Chin, J. P., Diehl, V. A., & Norman, K. L. (1988, May). Development of an instrument measuring user satisfaction of the human-computer interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 213–218). New York, NY: ACM.
Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Human Mental Workload, 1, 139–183.
Kirakowski, J., & Corbett, M. (1993). SUMI: The software usability measurement inventory. British Journal of Educational Technology, 24(3), 210–212.
Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human‐Computer Interaction, 7(1), 57–78.
Moshagen, M., & Thielsch, M. T. (2010). Facets of visual aesthetics. International Journal of Human-Computer Studies, 68(10), 689–709.
Questionnaire Evaluation
Bolton, R. N., & Bronkhorst, T. M. (1995). Questionnaire pretesting: Computer assisted coding of concurrent protocols. In N. Schwarz & S. Sudman (Eds.), Answering questions (pp. 37–64). San Francisco: Jossey-Bass.
Collins, D. (2003). Pretesting survey instruments: An overview of cognitive methods. Quality of Life Research, 12(3), 229–238.
Drennan, J. (2003). Cognitive interviewing: Verbal data in the design and pretesting of questionnaires. Journal of Advanced Nursing, 42(1), 57–63.
Presser, S., Couper, M. P., Lessler, J. T., Martin, E., Martin, J., Rothgeb, J. M., et al. (2004). Methods for testing and evaluating survey questions. Public Opinion Quarterly, 68(1), 109–130.
Survey Response Rates and Non-response
American Association for Public Opinion Research, AAPOR. (2011). Standard definitions: Final dispositions of case codes and outcome rates for surveys (7th ed.). http://aapor.org/Content/NavigationMenu/AboutAAPOR/StandardsampEthics/StandardDefinitions/StandardDefinitions2011.pdf
Baruch, Y. (1999). Response rates in academic studies: A comparative analysis. Human Relations, 52, 421–434.
Baruch, Y., & Holtom, B. C. (2008). Survey response rate levels and trends in organizational research. Human Relations, 61(8), 1139–1160.
Church, A. H. (1993). Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly, 57, 62–79.
Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in Web- or Internet-based surveys. Educational and Psychological Measurement, 60(6), 821–836.
Dillman, D. A. (1978). Mail and telephone surveys: The total design method. New York: Wiley.
Dillman, D. A. (1991). The design and administration of mail surveys. Annual Review of Sociology, 17, 225–249.
Dillman, D. A. (2007). Mail and Internet surveys: The tailored design method (2nd ed.). Hoboken, NJ: Wiley.
Fan, W., & Yan, Z. (2010). Factors affecting response rates of the web survey: A systematic review. Computers in Human Behavior, 26(2), 132–139.
Groves, R. M. (2006). Non-response rates and non-response bias in household surveys. Public Opinion Quarterly, 70, 646–675.
Groves, R. M., Presser, S., & Dipko, S. (2004). The role of topic interest in survey participation decisions. Public Opinion Quarterly, 68(1), 2–31.
Kaplowitz, M. D., Hadlock, T. D., & Levine, R. (2004). A comparison of web and mail survey response rates. Public Opinion Quarterly, 68(1), 94–101.
Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). New York: Holt, Rinehart & Winston.
Kiesler, S., & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402–413.
Lavrakas, P. J. (2011). The use of incentives in survey research. 66th Annual Conference of the American Association for Public Opinion Research.
Lin, I., & Schaeffer, N. C. (1995). Using survey participants to estimate the impact of nonparticipation. Public Opinion Quarterly, 59(2), 236–258.
Lu, H., & Gelman, A. (2003). A method for estimating design-based sampling variances for surveys with weighting, poststratification, and raking. Journal of Official Statistics, 19(2), 133–152.
Manfreda, K. L., Bosnjak, M., Berzelak, J., Haas, I., Vehovar, V., & Berzelak, N. (2008). Web surveys versus other survey modes: A meta-analysis comparing response rates. Journal of the Market Research Society, 50(1), 79.
Olson, K. (2006). Survey participation, non-response bias, measurement error bias, and total bias. Public Opinion Quarterly, 70(5), 737–758.
Peytchev, A. (2009). Survey breakoff. Public Opinion Quarterly, 73(1), 74–97.
Schonlau, M., Van Soest, A., Kapteyn, A., & Couper, M. (2009). Selection bias in web surveys and the use of propensity scores. Sociological Methods & Research, 37(3), 291–318.
Sheehan, K. B. (2001). E-mail survey response rates: A review. Journal of Computer Mediated Communication, 6(2), 1–16.
Singer, E. (2002). The use of incentives to reduce non-response in household surveys. In R. Groves, D. Dillman, J. Eltinge, & R. Little (Eds.), Survey non-response (pp. 163–177). New York: Wiley.
Stevenson, J., Dykema, J., Cyffka, C., Klein, L., & Goldrick-Rab, S. (2012). What are the odds? Lotteries versus cash incentives: Response rates, cost and data quality for a Web survey of low-income former and current college students. 67th Annual Conference of the American Association for Public Opinion Research.
Survey Analysis
Armstrong, D., Gosling, A., Weinman, J., & Marteau, T. (1997). The place of inter-rater reliability in qualitative research: An empirical study. Sociology, 31(3), 597–606.
Böhm, A. (2004). Theoretical coding: Text analysis in grounded theory. In A companion to qualitative research (pp. 270–275). London: Sage.
De Leeuw, E. D., Hox, J. J., & Huisman, M. (2003). Prevention and treatment of item nonresponse. Journal of Official Statistics, 19(2), 153–176.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Hawthorne, NY: Aldine de Gruyter.
Gwet, K. L. (2001). Handbook of inter-rater reliability. Gaithersburg, MD: Advanced Analytics, LLC.
Heeringa, S. G., West, B. T., & Berglund, P. A. (2010). Applied survey data analysis. Boca Raton, FL: Chapman & Hall/CRC.
Lee, E. S., Forthofer, R. N., & Lorimor, R. J. (1989). Analyzing complex survey data. Newbury Park, CA: Sage.
Saldaña, J. (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage Publications Limited.
Other References
Abran, A., Khelifi, A., Suryn, W., & Seffah, A. (2003). Usability meanings and interpretations in ISO standards. Software Quality Journal, 11(4), 325–338.
Anandarajan, M., Zaman, M., Dai, Q., & Arinze, B. (2010). Generation Y adoption of instant messaging: An examination of the impact of social usefulness and media richness on use richness. IEEE Transactions on Professional Communication, 53(2), 132–143.
Archambault, A., & Grudin, J. (2012). A longitudinal study of Facebook, LinkedIn, & Twitter use. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (CHI '12) (pp. 2741–2750). New York: ACM.
Auter, P. J. (2007). Portable social groups: Willingness to communicate, interpersonal communication gratifications, and cell phone use among young adults. International Journal of Mobile Communications, 5(2), 139–156.
Calfee, J. E., & Ringold, D. J. (1994). The 70% majority: Enduring consumer beliefs about advertising. Journal of Public Policy & Marketing, 13(2).
Chen, J., Geyer, W., Dugan, C., Muller, M., & Guy, I. (2009). Make new friends, but keep the old: Recommending people on social networking sites. In Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI '09) (pp. 201–210). New York: ACM.
Clauser, B. E. (2007). The life and labors of Francis Galton: A review of four recent books about the father of behavioral statistics. Journal of Educational and Behavioral Statistics, 32(4), 440–444.
Converse, J. (1987). Survey research in the United States: Roots and emergence 1890–1960. Berkeley, CA: University of California Press.
Drouin, M., & Landgraff, C. (2012). Texting, sexting, and attachment in college students’ romantic relationships. Computers in Human Behavior, 28, 444–449.
Feng, J., Lazar, J., Kumin, L., & Ozok, A. (2010). Computer usage by children with down syndrome: Challenges and future research. ACM Transactions on Accessible Computing, 2(3), 35–41.
Froehlich, J., Findlater, L., Ostergren, M., Ramanathan, S., Peterson, J., Wragg, I., et al. (2012). The design and evaluation of prototype eco-feedback displays for fixture-level water usage data. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (CHI '12) (pp. 2367–2376). New York: ACM.
Harrison, M. A. (2011). College students’ prevalence and perceptions of text messaging while driving. Accident Analysis and Prevention, 43, 1516–1520.
Junco, R., & Cotten, S. R. (2011). Perceived academic effects of instant messaging use. Computers & Education, 56, 370–378.
Katosh, J. P., & Traugott, M. W. (1981). The consequences of validated and self-reported voting measures. Public Opinion Quarterly, 45(4), 519–535.
Nacke, L. E., Grimshaw, M. N., & Lindley, C. A. (2010). More than a feeling: Measurement of sonic user experience and psychophysiology in a first-person shooter game. Interacting with Computers, 22(5), 336–343.
Obermiller, C., & Spangenberg, E. R. (1998). Development of a scale to measure consumer skepticism toward advertising. Journal of Consumer Psychology, 7(2), 159–186.
Obermiller, C., & Spangenberg, E. R. (2000). On the origin and distinctiveness of skepticism toward advertising. Marketing Letters, 11, 311–322.
Person, A. K., Blain, M. L. M., Jiang, H., Rasmussen, P. W., & Stout, J. E. (2011). Text messaging for enhancement of testing and treatment for tuberculosis, human immunodeficiency virus, and syphilis: A survey of attitudes toward cellular phones and healthcare. Telemedicine Journal and e-Health, 17(3), 189–195.
Pitkow, J. E., & Recker, M. (1994). Results from the first World-Wide web user survey. Computer Networks and ISDN Systems, 27(2), 243–254.
Rodden, K., Hutchinson, H., & Fu, X. (2010). Measuring the user experience on a large scale: User-centered metrics for web applications. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI '10) (pp. 2395–2398). New York: ACM.
Schild, J., LaViola, J., & Masuch, M. (2012). Understanding user experience in stereoscopic 3D games. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (CHI '12) (pp. 89–98). New York: ACM.
Shklovski, I., Kraut, R., & Cummings, J. (2008). Keeping in touch by technology: Maintaining friendships after a residential move. In Proceedings of the 26th Annual SIGCHI Conference on Human Factors in Computing Systems (CHI '08) (pp. 807–816). New York: ACM.
Turner, M., Love, S., & Howell, M. (2008). Understanding emotions experienced when using a mobile phone in public: The social usability of mobile (cellular) telephones. Telematics and Informatics, 25, 201–215.
Weisskirch, R. S., & Delevi, R. (2011). “Sexting” and adult romantic attachment. Computers in Human Behavior, 27, 1697–1701.
Wright, P. J., & Randall, A. K. (2012). Internet pornography exposure and risky sexual behavior among adult males in the United States. Computers in Human Behavior, 28, 1410–1416.
Yew, J., Shamma, D. A., & Churchill, E. F. (2011). Knowing funny: Genre perception and categorization in social video sharing. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (CHI '11) (pp. 297–306). New York: ACM.
Zaman, M., Rajan, M. A., & Dai, Q. (2010). Experiencing flow with instant messaging and its facilitating role on creative behaviors. Computers in Human Behavior, 26, 1009–1018.
Acknowledgements
We would like to thank our employers, Google, Inc. and Twitter, Inc., for making it possible for us to work on this chapter. Many people contributed to this effort, and we would like to acknowledge the most significant contributions: Carolyn Wei for identifying published papers that used survey methodology in their work, Sandra Lozano for her insights on analysis, Mario Callegaro for inspiration, Ed Chi and Robin Jeffries for reviewing several drafts of this document, and Professors Jon Krosnick of Stanford University and Mick Couper of the University of Michigan for laying the foundation of our survey knowledge and connecting us to the broader survey research community.
© 2014 Springer Science+Business Media New York
Cite this chapter
Müller, H., Sedley, A., Ferrall-Nunge, E. (2014). Survey Research in HCI. In: Olson, J., Kellogg, W. (eds) Ways of Knowing in HCI. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-0378-8_10
DOI: https://doi.org/10.1007/978-1-4939-0378-8_10
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4939-0377-1
Online ISBN: 978-1-4939-0378-8