
Doing the Survey Two-Step: The Effects of Reticence on Estimates of Corruption in Two-Stage Survey Questions

Chapter in Institutions, Governance and the Control of Corruption

Abstract

This paper develops a structural approach for modeling how respondents answer survey questions and uses it to estimate the proportion of respondents who are reticent in answering corruption questions, as well as the extent to which reticent behavior biases conventional estimates of corruption downwards. The context is a common two-step question, first inquiring whether a government official visited a business, and then asking about bribery if a visit was acknowledged. Reticence is a concern for both steps, since denying a visit sidesteps the bribe question. This paper considers two alternative models of how reticence affects responses to two-step questions, with differing assumptions on how reticence affects the first question about visits. Maximum-likelihood estimates are obtained for seven countries using data on interactions with tax officials. Different models work best in different countries, but cross-country comparisons are still valid because both models use the same structural parameters. On average, 40% of corruption questions are answered reticently, with much variation across countries. A statistic is developed that reflects how much standard measures underestimate the proportion of all respondents who had a bribe interaction. The downward bias in standard measures is highly statistically significant in all countries, varying from 12% in Nigeria to 90% in Turkey. The source of bias varies widely across countries, between denying a visit and denying a bribe after admitting a visit.

Economics studies facts, and seeks to arrange the facts in such ways as make it possible to draw conclusions from them. As always, it is the arrangement which is the delicate operation. Facts, arranged in the right way, speak for themselves; unarranged they are as dead as mutton.

John Hicks, The Social Framework, 1950

The authors can be contacted at nkaralashvili@worldbank.org, akraay@worldbank.org, or murrell@econ.umd.edu. We thank discussants of a first version of this paper, presented at the Conference on “Ethics and Corporate Malfeasance: Interdisciplinary Perspectives”, organized by the Center for the Study of Business Ethics, Regulation, and Crime (C-BERC), University of Maryland, September 12, 2014. We are grateful to our discussant, Joao de Mello, and to participants in the IEA Roundtable in Montevideo, for helpful comments, and to the Enterprise Survey Team at the World Bank for their collaboration. We are grateful to Patricia Funk for providing data on Swiss referenda. Financial support from the Knowledge for Change Program of the World Bank is gratefully acknowledged. The views expressed here are the authors’, and do not reflect those of the World Bank, its Executive Directors, or the countries they represent.


Notes

  1. See “Counting Calories”, The Economist, August 13, 2016.

  2. “Corruption still alive and well in post-bailout Greece”, The Guardian, December 3, 2014. http://www.theguardian.com/world/2014/dec/03/greece-corruption-alive-and-well.

  3. In Appendix B, we argue that this is a reasonable assumption. We also summarize simulation results showing that our estimates of the degree of downward bias in standard measures of corruption are lower than those that would be obtained if the RRQ worked as designed.

  4. Kraay and Murrell (2016a) used two different data sets in their implementation. For brevity, we focus on their implementation that uses data from the World Bank’s Enterprise Surveys project (World Bank 2015), since this project also provides the data for the set of countries on which this paper focuses.

  5. Full details of the methodology can be found at http://www.enterprisesurveys.org/methodology. Stratified random sampling was used, with strata based on firm size, geographical location, and economic sector. Given the small sample size and the oversampling of some industries, the pattern of sampling weights is highly skewed. To prevent a small number of firms with very high weights from dominating our results, we report unweighted results throughout the paper. As a result, our results should be interpreted as representative only of the sample of firms in the data.

  6. See, for example, Transparency International’s survey for the Global Corruption Barometer http://files.transparency.org/content/download/604/2549/file/2013_GlobalCorruptionBarometer_EN.pdf, the questions on gambling in the Fed’s Survey of Household Economics and Decision making http://www.federalreserve.gov/econresdata/2014-economic-well-being-of-us-households-in-2013-appendix-2.htm, the questions on health behavior in the World Health Survey of the WHO http://www.who.int/healthinfo/survey/en/, the World Justice Project’s questions on corruption http://worldjusticeproject.org/sites/default/files/gpp_2013_final.pdf, the questions on unwanted sexual acts in the National Crime Victimization Survey http://www.bjs.gov/index.cfm?ty=dcdetail&iid=245, and the questions on sexual behavior in the Demographic and Health Surveys of USAID http://dhsprogram.com/What-We-Do/Survey-Types/DHS-Questionnaires.cfm.

  7. Several measures of corruption produced and used by the World Bank are available at http://www.enterprisesurveys.org/data/exploretopics/corruption.

  8. With one caveat: refusals to answer are sometimes treated as admissions of guilt, as, for example, in analysis conducted by the World Bank’s Enterprise Surveys unit.

  9. Because these RRQs are not part of the core questionnaire for the World Bank’s Enterprise Surveys, we placed them in selected Enterprise Surveys over the past several years, in collaboration with the Enterprise Survey team at the World Bank. We are particularly grateful to Giuseppe Iarossi, Jorge Rodríguez Meza, Veselin Kuntchev, and Arvind Jain for their cooperation in placing these questions.

  10. Specifically, under the assumption that the respondent has done none of the sensitive acts, the probability of observing seven “No” responses is \( 0.5^7 \approx 0.008 < 0.01 \).

  11. Specifically, under the assumption that the respondent has done none of the sensitive acts, the probability of observing one or two “Yes” responses can readily be calculated from the binomial distribution with seven trials and a success probability of 0.5: \( P(X=1)=7\times 0.5^7\approx 0.055 \) and \( P(X=2)=21\times 0.5^7\approx 0.164 \).

  12. It is of course plausible that businesses with different characteristics (size, activity, etc.) have different probabilities of receiving a visit or inspection by tax officials. However, we do not model tax officials’ choices here.

  13. On the WBES questionnaires, this tax question appears in the middle (or, in Nigeria, at the end) of a series of two-part questions, each of which is identical in structure to the question on taxes but refers to a different government agency. Thus, by the time respondents reach the tax question, they should know that acknowledging a visit will be followed by a question about a bribe request.

  14. We also considered a third model, similar to Model B except that all reticent respondents who chose to behave reticently did so on the visit question, whether or not a visit had resulted in a bribe request. When evaluating the performance of each model—see section “Results: Preferred Estimates and Analysis”—this third model was the least preferred for all countries, and we therefore report no results for it.

  15. Since the assumption that rates of reticence on the CQ and the RRQ are the same is so critical to our procedures, and is non-standard in the context of the existing literature on RRQs, we elaborate on this point in Appendix B. Importantly, in that appendix we report the results of simulations showing that if our assumption is incorrect, in the sense that reticent behavior is less prevalent on the RRQ, then we underestimate the degree of downward bias in standard estimates of corruption.

  16. This point is trivial to show using equations (1) and (2) and the information appearing in the paragraphs immediately above.

  17. See Kraay and Murrell (2016a) for more details.

  18. Our test statistics for the parameters are based on heteroskedasticity-consistent standard errors clustered at the strata level. Following Cameron and Miller (2010, pp. 19–20), our coefficient estimates would be consistent even in the presence of significant intra-cluster correlation of observations, although they would no longer be the ML estimates. If we instead used test statistics based on unclustered standard errors, there would be no substantive differences in our main conclusions.

  19. This composite parameter differs from the measure of corruption in dealing with tax officials that is usually publicized (e.g., by the WBES) in that it takes into account the fact that not every business is necessarily involved in the contexts that potentially involve corruption. Average guilt, estimates of which are provided in Tables 11.4, 11.5, 11.6, 11.7, 11.8, 11.9, and 11.10, is the concept that is usually reported.

  20. This general point is strongly reinforced when we examine the set of preferred estimates in the ensuing section. It is also a strong conclusion in Kraay and Murrell (2016a).

  21. Formally, we perform the Vuong (1989) model selection test for non-nested models estimated by ML. The test is very simple: form the difference in the maximized value of the likelihood function between the two models for each observation, and then perform a standard t-test of the null hypothesis that the mean difference is zero (a minimal code sketch follows these notes). The difference in the maximized values of the log likelihoods for the two models is statistically significant at the 5% level in Ukraine.

  22. As noted above, the data for all countries reject a model in which reticent respondents treat the visit question as a sensitive issue in exactly the same way that they would confront a question on bribes.

  23. In producing the information that follows, we made a number of assumptions on how to treat non-responses, whether voters accurately reported whether they voted, and so on. While the precise numbers would change if we varied these assumptions, the overall conclusion of this paragraph would be unaltered.

  24. The questionnaire contains one other question, asked between these two questions if the respondent answers “yes” to the first: “If visited or inspected by tax officials, over the last year, how many times was this establishment either inspected by tax officials or required to meet with them?” Information from this subquestion is not used here.

  25. The World Bank constructs the numerator of the variable “Percent of firms expected to give gifts in meetings with tax officials” by including both those who answer “yes” and those who refuse to answer, effectively assuming that a refusal means “yes”. In contrast, we drop from the sample those who refuse to answer.

  26. The exceptions are India, where this information is available only for half of the sample, and Nigeria, where the interviewer code is missing for 2387 out of 5544 interviews. The procedure described below is therefore not applicable to these observations in India and Nigeria.

  27. Specifically, if interviewer i in country c carried out \( n_{ic} \) interviews, of which a proportion \( p_{ic} \) answered “No” to all seven sensitive questions, we dropped all of this interviewer’s interviews if \( p_{ic}-5\sqrt{\frac{p_{ic}\left(1-p_{ic}\right)}{n_{ic}}}>\frac{\sum_i n_{ic}p_{ic}}{\sum_i n_{ic}} \).

  28. One very recent study seems to fly in the face of these judgments. Rosenfeld et al. (2016, henceforth RIS) surveyed Mississippi citizens on how they had voted in a controversial ballot initiative and found that the use of an RRQ “recovers the truth well” (RIS, p. 794), a judgment made possible because the outcome of the initiative was known. However, Kraay and Murrell (2016b) show that RIS rely on a very unorthodox RRQ, one that has none of the properties usually deemed necessary to encourage candid behavior in respondents. Using a model of individual behavior similar to the one employed in this paper, Kraay and Murrell (2016b) also show that the survey results obtained by RIS are internally inconsistent. While there is no doubt that RIS recovered the truth well in this instance, why this was the case is something of a mystery, casting doubt on the external validity of their methodology.

  29. The version of our model with a one-step question and k = 1 is analyzed in Kraay and Murrell (2016a). This result follows transparently from the formulae for population moments in that paper.
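
For concreteness, here is a minimal sketch of the model-selection computation described in Note 21. It is our illustration, not the authors’ code: `ll_A` and `ll_B` are hypothetical arrays holding each observation’s maximized log likelihood under the two models, and the sketch ignores the strata-level clustering mentioned in Note 18.

```python
# A sketch of the Vuong (1989) test as described in Note 21: a standard
# t-test that the mean per-observation log-likelihood difference is zero.
import numpy as np
from scipy import stats

def vuong_test(ll_A: np.ndarray, ll_B: np.ndarray):
    d = ll_A - ll_B                                    # per-observation differences
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))   # t-statistic for mean(d) = 0
    p = 2 * stats.t.sf(abs(t), df=len(d) - 1)          # two-sided p-value
    return t, p
```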

References

  • Azfar, Omar, and Peter Murrell. 2009. Identifying Reticent Respondents: Assessing the Quality of Survey Data on Corruption and Values. Economic Development and Cultural Change 57: 387–412.

  • Boruch, Robert F. 1971. Assuring Confidentiality of Responses in Social Research: A Note on Strategies. The American Sociologist 6 (4): 308–311.

  • Cameron, A. Colin, and Douglas L. Miller. 2010. Robust Inference with Clustered Data. In Handbook of Empirical Economics and Finance, ed. Aman Ullah and David E.A. Giles. Boca Raton: Chapman and Hall.

  • Clausen, Bianca, Aart Kraay, and Peter Murrell. 2011. Does Respondent Reticence Affect the Results of Corruption Surveys? Evidence from the World Bank Enterprise Survey for Nigeria. In International Handbook on the Economics of Corruption, ed. Susan Rose-Ackerman and Tina Søreide, vol. 2. Cheltenham: Edward Elgar.

  • Coutts, Elisabeth, and Ben Jann. 2011. Sensitive Questions in Online Surveys: Experimental Results for the Randomized Response Technique (RRT) and the Unmatched Count Technique (UCT). Sociological Methods & Research 40: 169–193.

  • Funk, Patricia. 2016. How Accurate are Surveyed Preferences for Public Policies? Evidence from a Unique Institutional Setup. Review of Economics and Statistics 98: 442–454.

  • Gong, Erick. 2015. HIV Testing and Risky Sexual Behavior. Economic Journal 125: 32–60.

  • Holbrook, Allyson L., and Jon A. Krosnick. 2010. Measuring Voter Turnout by Using the Randomized Response Technique: Evidence Calling into Question the Method’s Validity. Public Opinion Quarterly 74: 328–343.

  • Kraay, Aart, and Peter Murrell. 2016a. Misunderestimating Corruption. Review of Economics and Statistics 98: 455–466.

  • ———. 2016b. Comment on “The Use of Random Response Questions” by Rosenfeld, Imai, and Shapiro. Working Paper, University of Maryland.

  • Lensvelt-Mulders, Gerty J.L.M., and Hennie R. Boeije. 2007. Evaluating Compliance with a Computer Assisted Randomized Response Technique: A Qualitative Study into the Origins of Lying and Cheating. Computers in Human Behavior 23: 591–608.

  • Lensvelt-Mulders, Gerty J.L.M., Joop J. Hox, Peter G.M. van der Heijden, and Cora J.M. Maas. 2005. Meta-Analysis of Randomized Response Research: Thirty-five Years of Validation. Sociological Methods & Research 33: 319–348.

  • Locander, William, Seymour Sudman, and Norman Bradburn. 1976. An Investigation of Interview Method, Threat and Response Distortion. Journal of the American Statistical Association 71: 269–275.

  • OECD. 2015. The ABC of Gender Equality in Education: Aptitude, Behaviour, Confidence. Paris: PISA, OECD Publishing.

  • Olken, Benjamin. 2009. Corruption Perceptions vs. Corruption Reality. Journal of Public Economics 93: 950–964.

  • Reinikka, Ritva, and Jakob Svensson. 2004. Local Capture: Evidence from a Central Government Transfer Program in Uganda. Quarterly Journal of Economics 119: 679–705.

  • Rose, Richard, and Caryn Peiffer. 2015. Paying Bribes for Public Services. Basingstoke, UK: Palgrave Macmillan.

  • Rosenfeld, Bryn, Kosuke Imai, and Jacob N. Shapiro. 2016. An Empirical Validation Study of Popular Survey Methodologies for Sensitive Questions. American Journal of Political Science 60: 783–802.

  • Shipton, D., D.M. Tappin, T. Vadiveloo, J.A. Crossley, D. Aitken, and J. Chalmers. 2009. Reliability of Self Reported Smoking Status by Pregnant Women for Estimating Smoking Prevalence: A Retrospective, Cross Sectional Study. British Medical Journal 339: b4347.

  • Tourangeau, Roger, and Ting Yan. 2007. Sensitive Questions in Surveys. Psychological Bulletin 133: 859–883.

  • Trappmann, Mark, Ivar Krumpal, Antje Kirchner, and Ben Jann. 2014. Item Sum: A New Technique for Asking Quantitative Sensitive Questions. Journal of Survey Statistics and Methodology 2: 58–77.

  • Vuong, Quang H. 1989. Likelihood Ratio Tests for Model Selection and Non-nested Hypotheses. Econometrica 57: 307–333.

  • Warner, Stanley L. 1965. Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias. Journal of the American Statistical Association 60: 63–69.

  • Wojcik, Sean P., Arpine Hovasapian, Jesse Graham, Matt Motyl, and Peter H. Ditto. 2015. Conservatives Report, But Liberals Display, Greater Happiness. Science 347: 1243–1247.

  • Wolter, Felix, and Peter Preisendörfer. 2013. Asking Sensitive Questions: An Evaluation of the Randomized Response Technique Versus Direct Questioning Using Individual Validation Data. Sociological Methods & Research 42: 321–353.

  • World Bank. 2015. Enterprise Surveys (WBES). http://www.enterprisesurveys.org/


Appendices

Appendix A: The Survey Questions and the Data

The Two-Step Conventional Question

  1. A professional surveyor read the following to the respondent: “Over the last year, was this establishment visited or inspected by tax officials?” Respondents could answer “Yes” or “No”, say “Don’t know” (DK), or refuse (R) to answer. Respondents answering DK or R were dropped from the sample. The incidence of DK and R responses to these questions is given in Table 11.12.

  2. We set D = 1 if the respondent did not acknowledge that the visit occurred, that is, if the respondent answered “No”.

  3. If the above question was answered with a “Yes”, the interviewer read the following to the respondent (Note 24): “In any of these inspections or meetings was a gift or informal payment expected or requested?” Respondents could answer “Yes” or “No”, say DK, or refuse (Note 25). Respondents who answered DK or R were dropped from the sample. The incidence of DK and R responses to these questions is given in Table 11.12.

  4. We set D = 2 if the respondent said “No” to the inquiry about the bribe request, and D = 3 if the bribe request was acknowledged. (A small code sketch of this coding follows the list.)
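
The following is a minimal sketch of the coding rule in items 2–4. It is our illustration, not part of the WBES materials; the function name and answer strings are hypothetical, and DK/R responses are assumed to have been dropped already.

```python
# Hypothetical helper mapping the two-step answers to the variable D:
# 1 = visit denied, 2 = visit acknowledged but bribe request denied,
# 3 = bribe request acknowledged.
from typing import Optional

def code_response(visit: str, bribe: Optional[str]) -> int:
    if visit == "No":
        return 1  # D = 1: the bribe question is never asked
    if bribe == "No":
        return 2  # D = 2
    if bribe == "Yes":
        return 3  # D = 3
    raise ValueError("DK/refused responses should already have been dropped")

assert code_response("No", None) == 1
assert code_response("Yes", "No") == 2
assert code_response("Yes", "Yes") == 3
```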

The Random-Response Questions

  1. A professional surveyor read the following to the respondent: “We have designed an alternative experiment which provides the opportunity to answer questions based on the outcome of a coin toss. Before you answer each question, please toss this coin and do not show me the result. If the coin comes up heads, please answer ‘yes’ to the question regardless of the question asked. If the coin comes up tails, please answer in accordance with your experience. Since I do not know the result of the coin toss, I cannot know whether your response is based on your experience or by chance.”

  2. The ten sensitive questions used in this battery are given in Table 11.2. Respondents who refused to respond or responded “Don’t know” were dropped from the sample.

  3. The variable X used in the analysis is equal to the number of the seven bolded questions in Table 11.2 for which the respondent answered “Yes”. (A simulation sketch of this mechanism follows below.)

The incidence of DK and R responses to these questions is given in Table 11.12.
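
The sketch below, referenced in item 3, simulates the forced-response mechanism for a single respondent. The behavioral rule for reticent respondents (a false “No” with probability q whenever the protocol calls for a “Yes”) is our illustrative assumption rather than the paper’s exact model, and guilt is treated, for simplicity, as identical across the seven questions.

```python
# A minimal simulation of the forced-response RRQ mechanism (assumed
# behavioral rule, not the authors' exact model).
import random

def rrq_answer(guilty: bool, reticent: bool, q: float) -> bool:
    """One sensitive RRQ item: True means the respondent says 'Yes'."""
    heads = random.random() < 0.5      # coin toss, unseen by the interviewer
    protocol = heads or guilty         # forced 'Yes' on heads, truth on tails
    if protocol and reticent and random.random() < q:
        return False                   # reticent deviation: a false 'No'
    return protocol

def simulate_X(guilty: bool, reticent: bool, q: float) -> int:
    """X = number of 'Yes' answers on the seven sensitive questions."""
    return sum(rrq_answer(guilty, reticent, q) for _ in range(7))

# A candid, innocent respondent says 'Yes' about 3.5 times on average;
# seven 'No' answers occur with probability 0.5**7 < 0.01 (see Note 10).
print(simulate_X(guilty=False, reticent=False, q=0.0))
```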

Cleaning the Data for Interviewer Effects

The RRQ battery is a key ingredient in our methodology, and therefore it is important to ensure that this unusual and cumbersome-to-administer procedure was implemented as designed. Enumerators received specific training on the RRQ methodology. As part of this training, they learned how the RRQ methodology is supposed to provide greater anonymity for respondents. However, they were not briefed on our intention to use the RRQ battery to make inferences about reticence.

Despite these precautions, we do find some evidence of interviewer effects in the data that might indicate variation across interviewers in the implementation of the RRQs. In all countries we have information on the identity of the interviewer for each respondent (Note 26). For each interviewer, we calculated the proportion of respondents with seven “No” responses on the RRQs. For most interviewers in most countries, we found rates of seven “No” responses that were not too different from the corresponding country averages. However, we did find some interviewers with implausibly high rates of seven “No” responses. We speculate that this may reflect differences across interviewers in how the RRQ was implemented. One possibility is that the interviewer incorrectly had the respondent toss the coin only once and let that single outcome govern the responses to all questions in the RRQ battery. This could lead to an upward bias in our estimates of the prevalence of reticent behavior. To avoid such a possibility, we drop all interviews performed by interviewers whose interviewer-specific rate of seven “No” responses on the sensitive RRQ questions was more than five standard deviations above the corresponding country average (Note 27). Combining all surveys except India, we drop 2% of interviewers, who accounted for just under one tenth of all respondents who answered “No” seven times on the RRQ battery. Including India, we drop a total of 4% of interviewers, together accounting for over a third of the respondents who answered “No” seven times.
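
As an illustration of the screen in Note 27, here is a minimal pandas sketch. The DataFrame layout and the column names ('country', 'interviewer', 'all_no') are hypothetical stand-ins for the survey data, not the authors' actual variables.

```python
# Drop interviewers whose rate of all-'No' RRQ batteries is more than five
# standard deviations above their country's interview-weighted average.
import numpy as np
import pandas as pd

def drop_suspect_interviewers(df: pd.DataFrame) -> pd.DataFrame:
    kept = []
    for _, g in df.groupby("country"):
        # p = share of an interviewer's respondents answering 'No' seven
        # times; n = number of interviews carried out by that interviewer
        by_int = g.groupby("interviewer")["all_no"].agg(p="mean", n="size")
        country_avg = (by_int["p"] * by_int["n"]).sum() / by_int["n"].sum()
        se = np.sqrt(by_int["p"] * (1 - by_int["p"]) / by_int["n"])
        suspect = by_int[by_int["p"] - 5 * se > country_avg].index
        kept.append(g[~g["interviewer"].isin(suspect)])
    return pd.concat(kept)
```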

This process is necessary to pursue our objective of focusing solely on the effects of respondent reticence. Our goal is not to evaluate the properties of survey data as a whole, but rather to investigate the effect of reticent behavior on the CQ as a possible source of bias in estimates of corruption. That goal is furthered by focusing on a subset of the data where one can be most sure that interview procedures were followed faithfully. We also note that while dropping these interviewers naturally increases the rate of “Yes” responses on the RRQ, it has only small effects on the rate of “Yes” responses on the CQ. This treatment of interviewers did not result in any changes in the data from Turkey and Nigeria. It had the biggest effect on the rate of “Yes” responses on the second part of the CQ in the data from India, where the rate increased from 9.1% to 11.9%. In Peru, Bangladesh 2011, and Bangladesh 2013, this rate increased by less than one percentage point, and in Ukraine it decreased by less than 0.1 percentage point. This suggests that our concerns about the dropped interviewers apply only to their administration of the RRQs, except perhaps in India.

Appendix B: The Assumption That Reticence on the CQ and the RRQs Is the Same

Our methodology assumes that rates of reticence on the CQ are the same as on the RRQs. This appears to be a strong assumption, because the RRQ was developed precisely to reduce respondent reticence relative to CQs. In this appendix we justify our assumption in two ways. First, we show that the assumption is reasonable given current evidence in the survey-research literature. Second, we show that one of our major conclusions—the underestimation of corruption—is robust when this assumption is relaxed, that is, when the RRQ is assumed to reduce respondent reticence.

In a meta-analysis, Lensvelt-Mulders et al. (2005) examined the few studies where RRQs and CQs were used and external validation of survey responses was possible. They found that on average RRQs had 90% of the reticence of conventional face-to-face interview questions (CQs). Holbrook and Krosnick (2010) and Wolter and Preisendörfer (2013) cite a large number of studies of the effects of RRQs versus CQs, and both conclude that there are reasons to doubt the efficacy of RRQs. After conducting their own study showing that the use of RRQs actually increased estimates of voter turnout to impossible levels, Holbrook and Krosnick (2010, p. 336) conclude that “… among the few studies that have compared RRT and direct self-report estimates of socially admirable attributes, none yielded consistent evidence that the RRT significantly reduced reported rates … This calls into question interpretations of all past RRT studies and raises serious questions about whether the RRT has practical value for increasing survey reporting accuracy.” (Note 28)

Coutts and Jann (2011) used exactly the technique that we used in our study—a forced-response, manual coin-toss RRQ—to examine six sensitive topics. They find that admission rates for RRQs are much lower than for CQs for not buying a ticket on public transport, shoplifting, marijuana use, driving under the influence, and infidelity, and higher only for keeping extra change when too much was given in a transaction. They attribute their RRQ results to the fact that a forced “Yes” response can feel like an admission of guilt. (Indeed, for anybody but the respondent, such as a judgmental interviewer, a “Yes” response raises the Bayesian posterior probability of guilt above the prior.) Wolter and Preisendörfer (2013) also compared a CQ to a forced-response RRQ, questioning a sample of known convicted criminals on whether they had committed an offense. Whereas 100% of the sample were guilty, 57.5% acknowledged this in a CQ and 59.6% in an RRQ, a trivial increase in candor. As the qualitative study of Lensvelt-Mulders and Boeije (2007) shows, the forced “Yes” response after a coin toss is highly unpopular among respondents, suggesting why RRQs might not produce their desired effect.

One can also address the issue analytically. Suppose that the world is such that reticence on the RRQ is less than on the CQ, that is, the RRQ has some of the effect that its proponents hoped for. In terms of the parameters of our models, there are now two values of q, one for the CQ and one for the RRQs, with \( q^{CQ}>q^{RRQ} \). One can then ask what the biases in our estimation procedure would be, given that our procedure embodies the assumption that reticence is the same on the two types of questions. This is easy to answer analytically in one case, when there is a one-step CQ and k = 1. (This is equivalent to Model A with k = 1, since in that model respondents are always candid about visits.) Suppose that our ML estimate of average guilt is denoted \( g^e \) and the true value of average guilt is \( g^a \). Then it is straightforward to show that \( \frac{g^e}{1+{g}^e} \) consistently estimates \( \frac{g^a}{1+{g}^a}\cdot \frac{1-r{q}^{CQ}}{1-r{q}^{RRQ}} \), a quantity smaller than \( \frac{g^a}{1+{g}^a} \) because \( q^{CQ}>q^{RRQ} \) (Note 29). Thus our procedures underestimate the actual rate of guilt in this special case.
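
To make the direction and size of this bias concrete, the short computation below plugs illustrative parameter values (our choices, not estimates from the paper) into the consistency result just stated.

```python
# Illustrative values only: fraction reticent r = 0.4, q_cq = 0.5, and an
# RRQ that works partially, so q_rrq = 0.8 * q_cq; true average guilt 0.3.
r, q_cq, g_a = 0.4, 0.5, 0.3
q_rrq = 0.8 * q_cq

factor = (1 - r * q_cq) / (1 - r * q_rrq)   # < 1 whenever q_cq > q_rrq
odds_e = (g_a / (1 + g_a)) * factor         # what g_e / (1 + g_e) converges to
g_e = odds_e / (1 - odds_e)                 # implied estimate of average guilt
print(f"factor = {factor:.3f}, g_e = {g_e:.3f} < g_a = {g_a}")
# factor = 0.952, g_e = 0.282: the procedure understates guilt in this case.
```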

When we turn to Model B, or to instances where k < 1, or both, the analytics are not as straightforward. Thus, we use simulations for the analysis. A single simulation proceeds as follows. We generate a data set of 10,000 observations using one of our models, for example Model B, with the parameter values that appear in Table 11.11 for a particular country for which that model is preferred, for example India. However, when generating the observations we make one variation on the model: we assume reticence on the RRQ is less than on the CQ. That is, \( q^{CQ} \) is set at the value of q in Table 11.11, but \( q^{RRQ}=0.8\times q^{CQ} \). Then, when we estimate the model, we incorrectly assume that the simulated world is one where \( q^{CQ}=q^{RRQ} \); that is, estimation is as described in section “Estimation and Definitions of Composite Parameters”.

Table 11.13 Simulation results

The results of the simulations appear in Table 11.13. Because the results are so consistent, and consistent with the analytics for the simple case above, six simulations are sufficient, each one matching our preferred model for a country. In all cases, our procedures severely underestimate the true rates of effective corruption when the world is one in which \( q^{RRQ}=0.8\times q^{CQ} \) and estimation incorrectly imposes the assumption that \( q^{CQ}=q^{RRQ} \). The degree of underestimation varies between 3 standard deviations (Peru) and 43 (Ukraine). Thus, our conclusion that standard estimates of corruption are significantly underestimated is robust to the criticism that we have incorrectly assumed that the RRQ has no effect in diminishing respondent reticence.

Copyright information

© 2018 The Author(s)

Cite this chapter

Karalashvili, N., Kraay, A., Murrell, P. (2018). Doing the Survey Two-Step: The Effects of Reticence on Estimates of Corruption in Two-Stage Survey Questions. In: Basu, K., Cordella, T. (eds) Institutions, Governance and the Control of Corruption. International Economic Association Series. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-65684-7_11
