Promoting Participation in a Culture of Sustainability Web Survey

  • Heather M. Schroeder
  • Andrew L. Hupp
  • Andrew D. Piskorowski
Chapter
Part of the World Sustainability Series book series (WSUSE)

Abstract

The Sustainability Cultural Indicators Program (SCIP) at the University of Michigan is designed to measure and track the university’s progress in moving the campus community towards a culture of sustainability (Callewaert and Marans 2017). SCIP gathers these data through an annual web survey. Web surveys generally attain lower response rates than other modes of data collection, and they are also at risk of other forms of nonresponse, such as breakoffs, which happen less frequently in other modes. Breakoffs commonly occur very early in a web survey, often on the informed consent screens required by Institutional Review Boards (IRBs), before respondents have a chance to reach the survey content. Many methods (e.g., prenotification, incentives) are used to increase participation and reduce breakoffs. This paper investigates the efficacy of two experiments designed to increase participation and reduce breakoffs in two SCIP surveys. The first experiment examines the effect of “celebrity endorsement”: for the final email reminder, nonrespondents were randomized to receive either a reminder with a link to the survey or a reminder that also contained a link to a video of a head coach from the U-M Department of Athletics encouraging them to participate. The second experiment investigates informed consent screen design: one group was presented a screen appearing as a traditional informed consent form, while the other group was presented a screen with the most important items visible and the remaining information available via a series of accordion menus.

Keywords

Email · Paradata · Sustainability survey · Consent screen design · Breakoff · Celebrity endorsement

Notes

Acknowledgements

The authors thank the Office of the Provost at the University of Michigan for funding the SCIP. Thanks to John Callewaert and Robert Marans for allowing the research to be conducted on the SCIP and to the reviewers for their useful suggestions.

References

  1. Bosnjak, M., & Tuten, T. L. (2003). Prepaid and promised incentives in web surveys: An experiment. Social Science Computer Review, 21(2), 208–217.
  2. Bosnjak, M., Neubarth, W., Couper, M. P., Bandilla, W., & Kaczmirek, L. (2008). Prenotification in web surveys: The influence of mobile text messaging versus e-mail on response rates and sample composition. Social Science Computer Review, 26(2), 213–223.
  3. Callegaro, M. (2010). Do you know which device your respondent has used to take your online survey? Survey Practice, 3(6), 1–12.
  4. Callegaro, M. (2013). Paradata in web surveys. In F. Kreuter (Ed.), Improving surveys with paradata: Analytic uses of process information (pp. 259–279). Hoboken: Wiley.
  5. Callegaro, M., Manfreda, K. L., & Vehovar, V. (2015). Web survey methodology. Thousand Oaks: Sage.
  6. Callewaert, J., & Marans, R. W. (2017). Measuring progress over time: The sustainability cultural indicators program at the University of Michigan. In W. Leal (Ed.), Handbook of theory and practice of sustainable development in higher education (Vol. 2). New York: Springer.
  7. Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in web- or internet-based surveys. Educational and Psychological Measurement, 60(6), 821–836.
  8. Couper, M. P. (1998). Measuring survey quality in a CASIC environment. In Proceedings of the Survey Research Methods Section, American Statistical Association (pp. 41–49).
  9. Couper, M. P. (2000). Web surveys: A review of issues and approaches. Public Opinion Quarterly, 64(4), 464–494.
  10. Couper, M. P. (2008). Designing effective web surveys. New York: Cambridge University Press.
  11. Crawford, S. D., McCabe, S. E., Saltz, B., Boyd, C. J., Freisthler, B., & Paschall, M. J. (2004). Gaining respondent cooperation in college web-based alcohol surveys: Findings from experiments at two universities. Paper presented at the 59th annual conference of the American Association for Public Opinion Research, Phoenix, AZ, May.
  12. Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys. Hoboken: Wiley.
  13. Harmon, M. A., Westin, E. C., & Levin, K. Y. (2005). Does type of pre-notification affect web survey response rates? Paper presented at the 60th annual conference of the American Association for Public Opinion Research, Miami Beach, FL, May.
  14. Holland, L., Couper, M. P., & Schroeder, H. (2014). Pre-notification strategies for mixed-mode data collection. Paper presented at the 69th annual conference of the American Association for Public Opinion Research, Anaheim, CA, May.
  15. Kaplowitz, M. D., Hadlock, T. D., & Levine, R. (2004). A comparison of web and mail survey response rates. Public Opinion Quarterly, 68(1), 94–101.
  16. Lozar Manfreda, K., Bosnjak, M., Berzelak, J., Haas, I., & Vehovar, V. (2008). Web surveys versus other survey modes: A meta-analysis comparing response rates. International Journal of Market Research, 50(1), 79–104.
  17. Pew Research Center. (2015, April). The smartphone difference. http://www.pewinternet.org/2015/04/01/us-smartphone-use-in-2015/.
  18. Peytchev, A. (2009). Survey breakoff. Public Opinion Quarterly, 73(1), 74–97.
  19. Schonlau, M., Asch, B. J., & Du, C. (2003). Web surveys as part of a mixed-mode strategy for populations that cannot be contacted by e-mail. Social Science Computer Review, 21(2), 218–222.
  20. The American Association for Public Opinion Research. (2016). Standard definitions: Final dispositions of case codes and outcome rates for surveys (9th ed.). AAPOR.
  21. Trouteaud, A. R. (2004). How you ask counts: A test of internet-related components of response rates to a web-based survey. Social Science Computer Review, 22(3), 385–392.
  22. Tuten, T. L. (2005). Do reminders encourage response or affect response behaviors? Reminders in web-based surveys. Paper presented at the ESF workshop on internet survey methodology, Dubrovnik, Croatia, September.
  23. Vehovar, V., Batagelj, Z., Lozar Manfreda, K., & Zaletel, M. (2002). Nonresponse in web surveys. In R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 229–242). New York: Wiley.

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Heather M. Schroeder (1)
  • Andrew L. Hupp (1)
  • Andrew D. Piskorowski (1)

  1. Survey Research Center, University of Michigan, Ann Arbor, USA