Doing the Survey Two-Step: The Effects of Reticence on Estimates of Corruption in Two-Stage Survey Questions

Chapter
Part of the International Economic Association Series book series (IEA)

Abstract

This paper develops a structural approach for modeling how respondents answer survey questions and uses it to estimate the proportion of respondents who are reticent in answering corruption questions, as well as the extent to which reticent behavior biases conventional estimates of corruption downwards. The context is a common two-step question, first inquiring whether a government official visited a business, and then asking about bribery if a visit was acknowledged. Reticence is a concern for both steps, since denying a visit side-steps the bribe question. This paper considers two alternative models of how reticence affects responses to two-step questions, with differing assumptions on how reticence affects the first question about visits. Maximum-likelihood estimates are obtained for seven countries using data on interactions with tax officials. Different models work best in different countries, but cross-country comparisons are still valid because both models use the same structural parameters. On average 40% of corruption questions are answered reticently, with much variation across countries. A statistic reflecting how much standard measures underestimate the proportion of all respondents who had a bribe interaction is developed. The downward bias in standard measures is highly statistically significant in all countries, varying from 12% in Nigeria to 90% in Turkey. The source of bias varies widely across countries, between denying a visit and denying a bribe after admitting a visit.
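The two-step logic described in the abstract can be sketched numerically. The snippet below is a simplified illustration, not the authors' specification: the parameter names (`p_visit`, `p_bribe`, `p_ret`) and the assumption that reticent respondents conceal a bribe with certainty are mine. It shows the observed answer probabilities under the two alternative models (reticent bribe-payers deny the visit itself, versus admitting the visit but denying the bribe) and the resulting downward bias in the conventional estimator.

```python
# Hypothetical sketch of the two-step reticence logic; a simplified
# parameterization for illustration, not the paper's structural model.

def cell_probs(p_visit, p_bribe, p_ret, deny_at_visit):
    """Observed answer probabilities for one two-step question.

    p_visit : true probability a tax official visited the firm
    p_bribe : true probability of a bribe, conditional on a visit
    p_ret   : probability a bribe-paying respondent answers reticently
    deny_at_visit : if True, reticent bribe-payers deny the visit itself
        (side-stepping the bribe question); if False, they admit the
        visit but deny the bribe.
    Returns (P(no visit), P(visit, no bribe), P(visit, bribe)).
    """
    hidden = p_visit * p_bribe * p_ret          # concealed bribe interactions
    admit_bribe = p_visit * p_bribe * (1 - p_ret)
    if deny_at_visit:
        no_visit = (1 - p_visit) + hidden       # concealment inflates "no visit"
        visit_no_bribe = p_visit * (1 - p_bribe)
    else:
        no_visit = 1 - p_visit
        visit_no_bribe = p_visit * (1 - p_bribe) + hidden  # inflates "no bribe"
    return no_visit, visit_no_bribe, admit_bribe

def downward_bias(p_visit, p_bribe, p_ret, deny_at_visit):
    """Fraction of true bribe interactions missed by the naive estimator:
    1 - (observed bribe rate) / (true bribe rate)."""
    true_rate = p_visit * p_bribe
    _, _, observed = cell_probs(p_visit, p_bribe, p_ret, deny_at_visit)
    return 1 - observed / true_rate
```

Under either variant the naive estimator misses the reticent fraction of bribe interactions; the two models differ only in where the concealment surfaces — an inflated "no visit" cell in one, an inflated "visit, no bribe" cell in the other — which mirrors the paper's point that the *source* of bias varies across countries even when both models share the same structural parameters.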

Keywords

Corruption measures · Structural estimation · Reticence · World Bank Enterprise Surveys


Copyright information

© The Author(s) 2018

Authors and Affiliations

  1. The World Bank, Washington, DC, USA
  2. University of Maryland, College Park, USA
