Public resource allocation, strategic behavior, and status quo bias in choice experiments

Abstract

Choice experiments are a survey methodology in which consumers face a series of choice tasks, each requiring them to indicate their most preferred option from a choice set containing two or more options. They are used to generate estimates of consumer preferences in order to determine the appropriate allocation of public resources to competing projects or programs. The analysis of choice-experiment data typically relies on the assumptions that choices of a non-status quo option are demand-revealing and that choices of the status quo option are not demand-revealing but instead reflect an underlying behavioral bias in favor of the status quo. This paper reports the results of an experiment demonstrating that both of those assumptions are likely to be invalid. We demonstrate that choice experiments for a public good are vulnerable to the same types of strategic voting that affect other multiple-choice voting mechanisms. We show that, owing to the mathematics of choice-set design, what actually is strategic voting often is misinterpreted as a behavioral bias in favor of the status quo option. Therefore, we caution against using current choice-experiment methodologies to inform policy making about public goods.


Notes

  1.

    See, for example, List et al. (2006), Taylor et al. (2010), Carlsson et al. (2007), Bateman et al. (2008), Collins and Vossler (2009), Day and Pinto Prades (2010), Day et al. (2012) and Aravena et al. (2014).

  2.

    By the “mathematics of combinatorial choice set design”, we mean the method by which individual choice options with different levels of attributes are combined into groups of options, termed choice sets. During a choice-experiment survey, respondents are presented with a series of tasks in which they are asked to choose one option from each choice set. Their selection usually is interpreted to indicate the most preferred option in the set.
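    The combination step described in this note can be sketched as follows. The attributes, levels, and pairing rule below are invented for illustration only; they do not reproduce the designs used in the experiment.

    ```python
    from itertools import product

    # Hypothetical attributes and levels (invented for this sketch).
    attributes = {
        "cost":    [0, 5, 10],
        "quality": ["low", "high"],
        "delay":   ["none", "1 week"],
    }

    # The full factorial: every possible option is one combination of levels.
    options = [dict(zip(attributes, combo)) for combo in product(*attributes.values())]

    # One simple (non-optimal) way to build choice sets: pair each
    # non-status-quo option with a fixed status quo option, giving
    # two-alternative choice tasks.
    status_quo = {"cost": 0, "quality": "low", "delay": "none"}
    choice_sets = [[status_quo, opt] for opt in options if opt != status_quo]

    print(len(options))      # 3 * 2 * 2 = 12 options in the full factorial
    print(len(choice_sets))  # 11 choice tasks
    ```

    Fractional factorial designs, as discussed in the paper, present respondents with only a subset of such combinations, chosen according to statistical efficiency criteria.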

  3.

    A proof of the generalized result is available from the authors.

  4.

    If the number of subjects who showed up was not divisible by nine, the unassigned subjects were invited to participate in a different experimental session at a later time. Prior to starting the experiment, all subjects completed an informed consent process. Subjects were free to leave at any time.

  5.

    The experiment was conducted under the oversight of the university’s Institutional Review Board (IRB). All experimental instructions are available in the online supplementary materials. Experimental data are available from the corresponding author upon request.

  6.

    As a check, we re-calculated all standard errors and p-values by controlling for clustering at the group level instead of at the subject level. Generally, standard errors and p-values when controlling for clustering at the group level are the same as or smaller than when controlling for non-independence at the individual subject level. Controlling for non-independence at the group level does not change any of the conclusions reported below.
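    The idea behind this robustness check can be illustrated with a minimal sketch of a cluster-robust standard error for a sample mean: residuals are summed within each cluster before squaring, so positively correlated responses within a cluster inflate the variance estimate. The data and cluster assignments below are invented; the paper's estimates apply the analogous sandwich correction to regression coefficients.

    ```python
    from collections import defaultdict
    from math import sqrt

    def clustered_se_of_mean(values, clusters):
        """Cluster-robust standard error of a sample mean (sandwich form).

        Illustrative only: sums residuals within each cluster before
        squaring, so within-cluster correlation is reflected in the SE.
        """
        n = len(values)
        mean = sum(values) / n
        cluster_sums = defaultdict(float)
        for y, c in zip(values, clusters):
            cluster_sums[c] += y - mean  # accumulate residuals per cluster
        return sqrt(sum(s * s for s in cluster_sums.values())) / n

    # Invented data: 6 subjects in 2 groups of 3.
    y = [1, 1, 0, 0, 1, 0]
    subject_ids = [1, 2, 3, 4, 5, 6]              # clustering at subject level
    group_ids = ["A", "A", "A", "B", "B", "B"]    # clustering at group level

    se_subject = clustered_se_of_mean(y, subject_ids)
    se_group = clustered_se_of_mean(y, group_ids)
    ```

    In this toy example the group-level SE happens to be smaller than the subject-level SE, consistent with the pattern described in the note; in general the two can differ in either direction.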

  7.

    The overall rate of demand revelation is significantly lower in the UBAL treatment than in the OOD treatment (p = 0.02). No other significant differences in rates of demand revelation are evident across methods of creating fractional factorial choice experiment designs.
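    A difference in demand-revelation rates between two treatments could be tested, for example, with a pooled two-proportion z-test, sketched below. The counts are invented and the note does not specify which test statistic the paper used (it may account for within-subject clustering).

    ```python
    from math import erf, sqrt

    def two_proportion_z(successes1, n1, successes2, n2):
        """Two-sided, pooled two-proportion z-test.

        Illustrative only; invented counts, and the paper's actual test
        may differ (e.g., clustering-adjusted).
        """
        p1, p2 = successes1 / n1, successes2 / n2
        pooled = (successes1 + successes2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        # Two-sided p-value from the standard normal CDF via erf.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Hypothetical counts of demand-revealing choices in two treatments.
    z, p = two_proportion_z(60, 100, 40, 100)
    ```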

References

  1. Adamowicz, W., Boxall, P., Williams, M., & Louviere, J. (1998). Stated preference approaches for measuring passive use values: Choice experiments and contingent valuation. American Journal of Agricultural Economics, 80(1), 64–75.

  2. Adamowicz, W., Dupont, D., Krupnick, A., & Zhang, J. (2011). Valuation of cancer and microbial disease risk reductions in municipal drinking water: An analysis of risk context using multiple valuation methods. Journal of Environmental Economics and Management, 61(2), 213–226.

  3. Aravena, C., Martinsson, P., & Scarpa, R. (2014). Does money talk?—The effect of a monetary attribute on the marginal values in a choice experiment. Energy Economics, 44, 483–491.

  4. Australian Energy Market Operator. (2014). Value of customer reliability review: final report. http://www.aemo.com.au/-/media/Files/PDF/VCR-final-report–PDF-update-27-Nov-14.pdf.

  5. Bateman, I., Munro, A., & Poe, G. (2008). Decoy effects in choice experiments and contingent valuation: Asymmetric dominance. Land Economics, 84(1), 115–127.

  6. Brownstone, D., & Train, K. (1999). Forecasting new product penetration with flexible substitution patterns. Journal of Econometrics, 89, 109–129.

  7. Cameron, T., & DeShazo, J. (2013). Demand for health risk reductions. Journal of Environmental Economics and Management, 65(1), 87–109.

  8. Carlsson, F., Frykblom, P., & Lagerkvist, C. (2007). Preferences with and without prices—Does the price attribute affect behavior in stated preference surveys? Environmental & Resource Economics, 38, 155–164.

  9. Collins, J., & Vossler, C. (2009). Incentive compatibility tests of choice experiment value elicitation questions. Journal of Environmental Economics and Management, 58(2), 226–235.

  10. Day, B., Bateman, I., Carson, R., Dupont, D., Louviere, J., Morimoto, S., et al. (2012). Ordering effects and choice set awareness in repeat-response stated preference studies. Journal of Environmental Economics and Management, 63, 73–91.

  11. Day, B., & Pinto Prades, J. (2010). Ordering anomalies in choice experiments. Journal of Environmental Economics and Management, 59(3), 271–285.

  12. Emmerson, C., & Metcalfe, P. (2013). Southern water customer engagement (economic)—willingness to pay, Report prepared for Southern Water UK, https://www.southernwater.co.uk/Media/Default/PDFs/A05_WillingnessToPay.pdf. Retrieved 14 Feb 2019.

  13. Farquharson, R. (1969). Theory of voting. New Haven: Yale University Press.

  14. Felsenthal, D., Rapoport, A., & Maoz, Z. (1988). Tacit cooperation in three alternative non-cooperative voting games: A new model of sophisticated behavior under plurality procedure. Electoral Studies, 7, 143–161.

  15. Ferrini, S., & Scarpa, R. (2007). Designs with a priori information for nonmarket valuation with choice experiments: A Monte Carlo study. Journal of Environmental Economics and Management, 53, 342–363.

  16. Forsythe, R., Myerson, R., Rietz, T., & Weber, R. (1993). An experiment on coordination in multi-candidate elections: The importance of polls and election histories. Social Choice and Welfare, 10(3), 223–247.

  17. Fujiwara, T. (2011). A regression discontinuity test of strategic voting and Duverger’s law. Quarterly Journal of Political Science, 6, 197–233.

  18. Hensher, D., Rose, J., & Greene, W. (2015). Applied choice analysis (2nd ed.). Cambridge: Cambridge University Press.

  19. Huber, J., & Zwerina, K. (1996). The importance of utility balance in efficient choice designs. Journal of Marketing Research, 33(3), 307–317.

  20. Kanninen, B. (2002). Optimal design for multinomial choice experiments. Journal of Marketing Research, 39(2), 214–227.

  21. Kawai, K., & Watanabe, Y. (2013). Inferring strategic voting. American Economic Review, 103(2), 624–662. https://doi.org/10.1257/aer.103.2.624.

  22. List, J., Sinha, P., & Taylor, M. (2006). Using choice experiments to value non-market goods and services: Evidence from field experiments. Advances in Economic Analysis and Policy, 5(2), 1132. https://doi.org/10.2202/1538-0637.1132.

  23. Louviere, J. (1984). Using discrete choice experiments and multinomial logit choice models to forecast trial in a competitive retail environment: A fast food restaurant illustration. Journal of Retailing, 60(4), 81–108.

  24. Louviere, J. (1988). Conjoint analysis modeling of stated preferences: A review of theory, methods, recent developments and external validity. Journal of Transport Economics and Policy, 10, 93–119.

  25. Louviere, J., & Woodworth, G. (1983). Design and analysis of simulated consumer choice or allocation experiments: An approach based on aggregate data. Journal of Marketing Research, 20, 350–367.

  26. Lusk, J., & Schroeder, T. (2004). Are choice experiments incentive compatible? A test with quality-differentiated beef steaks. American Journal of Agricultural Economics, 86(2), 467–482.

  27. McFadden, D. (1986). The choice theory approach to market research. Marketing Science, 5(4), 275–297.

  28. McFadden, D. (2001). Economic choices. American Economic Review, 91(3), 351–378.

  29. Meginnis, K., Burton, M., Chan, R., & Rigby, D. (2018). Strategic bias in discrete choice experiments. Journal of Environmental Economics and Management. https://doi.org/10.1016/j.jeem.2018.08.010.

  30. Queen’s University Belfast and Perceptive Insight. (2015). Discrete choice experiments for valuing the benefits of improved NIE services. Report prepared for Northern Ireland Electricity Networks. https://www.nienetworks.co.uk/documents/consultations/nie-wtp-report-5nov2015.

  31. Revelt, D., & Train, K. (1998). Mixed logit with repeated choices: Households’ choices of appliance efficiency level. Review of Economics and Statistics, 80(4), 647–657.

  32. Ryan, M., & Wordsworth, S. (2000). Sensitivity of willingness to pay estimates to the level of attributes in discrete choice experiments. Scottish Journal of Political Economy, 47(5), 504–524.

  33. Smith, J., & McKee, M. (2007). ‘People or prairie chickens’ revisited: Stated preferences with explicit non-market trade-offs. Defence and Peace Economics, 18(3), 223–244.

  34. Street, D., Burgess, L., & Louviere, J. (2005). Quick and easy choice sets: Constructing optimal and nearly optimal stated choice experiments. International Journal of Research in Marketing, 22, 459–470. https://doi.org/10.1016/j.ijresmar.2005.09.003.

  35. Taylor, L., Morrison, M., & Boyle, K. (2010). Exchange rules and incentive compatibility of choice experiments. Environmental & Resource Economics, 47(2), 197–220.

Author information

Corresponding author

Correspondence to Katherine Silz Carson.

Ethics declarations

Conflict of interest

Hutchinson: Employed as a consultant by Northern Ireland Electricity Networks Ltd in connection with the preparation of the 6th Price Control Agreement (RP6) with The Office of the Utility Regulator NI (Final Determination 30th June 2017). Study is cited herein as Queen’s University Belfast and Perceptive Insight (2015). Scarpa: Served as lead consultant in the design of the survey instruments and the choice data analysis for the study for the Australian Energy Market Operator cited herein.

Human and animal rights

This research involves human participants. The experiments reported herein were conducted under the oversight of the United States Air Force Academy Institutional Review Board, protocol number FAC20130036H.

Informed consent

All participants provided signed informed consent prior to participating in this research.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This research is funded by Queen’s University, Belfast, under a cooperative research and development agreement with the United States Air Force Academy. The funding was for payment of experimental subjects only. Sponsor had no role in study design; collection, analysis, and interpretation of data; writing of the report; or decision to submit the article for publication. The opinions expressed herein are solely those of the authors and do not necessarily reflect the views of the authors’ respective institutions.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 125 kb)


About this article

Cite this article

Carson, K.S., Chilton, S.M., Hutchinson, W.G. et al. Public resource allocation, strategic behavior, and status quo bias in choice experiments. Public Choice 185, 1–19 (2020). https://doi.org/10.1007/s11127-019-00735-y

Keywords

  • Choice experiment
  • Strategic voting
  • Status quo bias
  • Public goods experiment

JEL Classification

  • H41
  • C91
  • C92