
A Turning Point in Marketing Research – Current Opportunities and Risks of Web Surveys

  • Claudia Becker
  • Kristin Dombrowski

Abstract

Effective marketing is essential for any company seeking to position itself successfully in a constantly shifting market. Market research generates the market data that form the basis of many marketing decisions. These market data are frequently collected in the form of survey data.



Copyright information

© Springer Fachmedien Wiesbaden 2013

Authors and Affiliations

  1. Halle, Germany
