Questions on honest responding

  • Vaka Vésteinsdóttir
  • Adam Joinson
  • Ulf-Dietrich Reips
  • Hilda Bjork Danielsdottir
  • Elin Astros Thorarinsdottir
  • Fanney Thorsdottir

Abstract

This article presents a new method for reducing socially desirable responding in Internet self-reports of desirable and undesirable behavior. The method moves the request for honest responding, often included in the introduction to surveys, into the questioning phase of the survey. Because over a quarter of Internet survey participants do not read survey instructions, respondents were not asked to answer honestly; instead, they were asked whether they had responded honestly. Posing the honesty message in the form of questions draws attention to the message, increases the processing of it, and places subsequent questions in the context of the questions on honest responding. In three studies (Study I: n = 475; Study II: n = 1,015; Study III: n = 899), we tested whether presenting the questions on honest responding before questions on desirable and undesirable behavior increases the honesty of responses, under the assumption that attributing less desirable behavior to oneself and/or admitting to more undesirable behavior indicates more honest responding. In all three studies, participants who received the questions on honest responding before the questions on the target behavior gave, on average, significantly fewer socially desirable responses, although the effect sizes were small in all cases (Cohen's d between 0.02 and 0.28 for single items, and between 0.17 and 0.34 for sum scores). The overall findings and the possible mechanisms by which the questions on honest responding influence subsequent questions are discussed, and suggestions are made for future research.
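The studies quantify the manipulation's effect as Cohen's d for independent groups (honesty questions presented first vs. not). As a rough illustration of that arithmetic only, and not the authors' analysis code, here is a minimal Python sketch using the pooled-standard-deviation form of d; the group sizes, means, and the "sum score" framing are hypothetical.

```python
# Illustrative sketch: Cohen's d for two independent groups,
# as in a between-subjects comparison of survey conditions.
# All data below are simulated, not from the reported studies.
import numpy as np

def cohens_d(treatment, control):
    """Cohen's d using the pooled standard deviation."""
    nt, nc = len(treatment), len(control)
    pooled_var = (
        (nt - 1) * np.var(treatment, ddof=1)
        + (nc - 1) * np.var(control, ddof=1)
    ) / (nt + nc - 2)
    return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)

# Hypothetical sum scores on an undesirable-behavior scale: higher
# scores mean more admissions, taken here to indicate more honesty.
rng = np.random.default_rng(0)
honesty_first = rng.normal(10.5, 3.0, 500)  # honesty questions shown first
control = rng.normal(9.5, 3.0, 500)         # no preceding honesty questions

print(f"Cohen's d = {cohens_d(honesty_first, control):.2f}")
```

With these hypothetical parameters the expected effect is (10.5 − 9.5)/3 ≈ 0.33, comparable in magnitude to the sum-score effects reported in the abstract.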

Keywords

Questions on honest responding · Socially desirable responding · Sensitive questions · Self-reports · Internet surveys

Notes

Author note

This work was funded in part by the Eimskip Fund of the University of Iceland (Háskólasjóður Eimskipafélags Íslands). The funding source had no role in study design; in the collection, analysis, or interpretation of the data; in the writing of the report; or in the decision to submit the article for publication. The authors acknowledge networking support from COST Action IS1004, WEBDATANET: www.webdatanet.eu.


Copyright information

© Psychonomic Society, Inc. 2018

Authors and Affiliations

  • Vaka Vésteinsdóttir 1, 2 (corresponding author)
  • Adam Joinson 3
  • Ulf-Dietrich Reips 2
  • Hilda Bjork Danielsdottir 1
  • Elin Astros Thorarinsdottir 1
  • Fanney Thorsdottir 1

  1. Department of Psychology, University of Iceland, Reykjavik, Iceland
  2. Department of Psychology, University of Konstanz, Konstanz, Germany
  3. School of Management, University of Bath, Bath, UK