Choice experiments, a survey methodology in which consumers face a series of choice tasks, each requiring them to indicate their most preferred option from a choice set containing two or more options, are used to generate estimates of consumer preferences and to determine the appropriate allocation of public resources among competing projects or programs. The analysis of choice-experiment data typically relies on the assumptions that choices of a non-status quo option are demand-revealing and that choices of the status quo option are not demand-revealing, but rather reflect an underlying behavioral bias in favor of the status quo. This paper reports the results of an experiment demonstrating that both assumptions are likely to be invalid. We demonstrate that choice experiments for a public good are vulnerable to the same types of strategic voting that affect other multiple-choice voting mechanisms. We show that, owing to the mathematics of choice-set design, what actually is strategic voting often is misinterpreted as a behavioral bias in favor of the status quo option. Therefore, we caution against using current choice-experiment methodologies to inform policymaking about public goods.
By the “mathematics of combinatorial choice set design”, we mean the method by which individual choice options with different levels of attributes are combined into groups of options, termed choice sets. During a choice-experiment survey, respondents are presented with a series of tasks in which they are asked to choose one option from each choice set. Their selection usually is interpreted as indicating the most preferred option in the set.
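The construction described above can be sketched as follows. The attributes, levels, and choice-set size here are purely hypothetical illustrations (the paper's actual design is not reproduced): options are the full factorial of attribute levels, and choice sets pair a status quo with alternatives drawn from that factorial, as a stand-in for a fractional factorial design.

```python
from itertools import product

# Hypothetical attributes and levels (illustrative only; not the
# design used in the study).
attributes = {
    "price": [5, 10, 15],
    "reliability": ["low", "high"],
    "duration": ["short", "long"],
}

# Full factorial: every combination of attribute levels is one option.
options = [dict(zip(attributes, levels))
           for levels in product(*attributes.values())]
assert len(options) == 3 * 2 * 2  # 12 candidate options

# Group options into choice sets of three: a fixed status quo plus two
# alternatives. Respondents would face one task per choice set.
status_quo = {"price": 0, "reliability": "low", "duration": "short"}
choice_sets = [[status_quo] + options[i:i + 2]
               for i in range(0, len(options), 2)]
```

A fractional factorial design would instead select a structured subset of the 12 options; the grouping step is the same.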
A proof of the generalized result is available from the authors.
If the number of subjects who showed up was not divisible by nine, the unassigned subjects were invited to participate in a different experimental session at a later time. Prior to starting the experiment, all subjects completed an informed consent process. Subjects were free to leave at any time.
The experiment was conducted under the oversight of the university’s Institutional Review Board (IRB). All experimental instructions are available in the online supplementary materials. Experimental data are available from the corresponding author upon request.
As a check, we re-calculated all standard errors and p-values by controlling for clustering at the group level instead of at the subject level. Generally speaking, standard errors and p-values when controlling for clustering at the group level are the same or smaller than when controlling for non-independence at the individual subject level. Controlling for non-independence at the group level does not change any of the conclusions reported below.
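The logic of this robustness check can be illustrated with a minimal sketch for the simplest case, the standard error of a sample mean (the paper's actual estimators are not reproduced here, and the data below are made up): clustering sums residuals within each group before squaring, so offsetting within-group residuals can make the group-level standard error smaller than the subject-level one.

```python
import math

def cluster_se(groups):
    # Cluster-robust SE of the mean: square the *group totals* of the
    # residuals, then scale by 1/n^2.
    data = [x for g in groups for x in g]
    n = len(data)
    mean = sum(data) / n
    var = sum(sum(x - mean for x in g) ** 2 for g in groups) / n ** 2
    return math.sqrt(var)

def subject_se(data):
    # Heteroskedasticity-robust SE treating each subject as independent.
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((x - mean) ** 2 for x in data) / n ** 2)

# Hypothetical data in which within-group residuals offset one another,
# so the group-clustered SE is smaller than the subject-level SE.
groups = [[1, 5], [2, 4], [0, 6]]
flat = [x for g in groups for x in g]
assert cluster_se(groups) <= subject_se(flat)
```

With positively correlated residuals within groups, the inequality would typically run the other way; which case obtains is an empirical matter.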
The overall rate of demand revelation is significantly lower in the UBAL treatment than in the OOD treatment (p = 0.02). No other significant differences in rates of demand revelation are evident across methods of creating fractional factorial choice-experiment designs.
Adamowicz, W., Boxall, P., Williams, M., & Louviere, J. (1998). Stated preference approaches for measuring passive use values: Choice experiments and contingent valuation. American Journal of Agricultural Economics, 80(1), 64–75.
Adamowicz, W., Dupont, D., Krupnick, A., & Zhang, J. (2011). Valuation of cancer and microbial disease risk reductions in municipal drinking water: An analysis of risk context using multiple valuation methods. Journal of Environmental Economics and Management, 61(2), 213–226.
Aravena, C., Martinsson, P., & Scarpa, R. (2014). Does money talk?—The effect of a monetary attribute on the marginal values in a choice experiment. Energy Economics, 44, 483–491.
Australian Energy Market Operator. (2014). Value of customer reliability review: final report. http://www.aemo.com.au/-/media/Files/PDF/VCR-final-report–PDF-update-27-Nov-14.pdf.
Bateman, I., Munro, A., & Poe, G. (2008). Decoy effects in choice experiments and contingent valuation: Asymmetric dominance. Land Economics, 84(1), 115–127.
Brownstone, D., & Train, K. (1999). Forecasting new product penetration with flexible substitution patterns. Journal of Econometrics, 89, 109–129.
Cameron, T., & DeShazo, J. (2013). Demand for health risk reductions. Journal of Environmental Economics and Management, 65(1), 87–109.
Carlsson, F., Frykblom, P., & Lagerkvist, C. (2007). Preferences with and without prices—Does the price attribute affect behavior in stated preference surveys? Environmental & Resource Economics, 38, 155–164.
Collins, J., & Vossler, C. (2009). Incentive compatibility tests of choice experiment value elicitation questions. Journal of Environmental Economics and Management, 58(2), 226–235.
Day, B., Bateman, I., Carson, R., Dupont, D., Louviere, J., Morimoto, S., et al. (2012). Ordering effects and choice set awareness in repeat-response stated preference studies. Journal of Environmental Economics and Management, 63, 73–91.
Day, B., & Pinto Prades, J. (2010). Ordering anomalies in choice experiments. Journal of Environmental Economics and Management, 59(3), 271–285.
Emmerson, C., & Metcalfe, P. (2013). Southern Water customer engagement (economic)—Willingness to pay. Report prepared for Southern Water UK. https://www.southernwater.co.uk/Media/Default/PDFs/A05_WillingnessToPay.pdf. Retrieved 14 Feb 2019.
Farquharson, R. (1969). Theory of voting. New Haven: Yale University Press.
Felsenthal, D., Rapoport, A., & Maoz, Z. (1988). Tacit cooperation in three alternative non-cooperative voting games: A new model of sophisticated behavior under plurality procedure. Electoral Studies, 7, 143–161.
Ferrini, S., & Scarpa, R. (2007). Designs with a priori information for nonmarket valuation with choice experiments: A Monte Carlo study. Journal of Environmental Economics and Management, 53, 342–363.
Forsythe, R., Myerson, R., Rietz, T., & Weber, R. (1993). An experiment on coordination in multi-candidate elections: The importance of polls and election histories. Social Choice and Welfare, 10(3), 223–247.
Fujiwara, T. (2011). A regression discontinuity test of strategic voting and Duverger’s law. Quarterly Journal of Political Science, 6, 197–233.
Hensher, D., Rose, J., & Greene, W. (2015). Applied choice analysis (2nd ed.). Cambridge: Cambridge University Press.
Huber, J., & Zwerina, K. (1996). The importance of utility balance in efficient choice designs. Journal of Marketing Research, 33(3), 307–317.
Kanninen, B. (2002). Optimal design for multinomial choice experiments. Journal of Marketing Research, 39(2), 214–227.
Kawai, K., & Watanabe, Y. (2013). Inferring strategic voting. American Economic Review, 103(2), 624–662. https://doi.org/10.1257/aer.103.2.624.
List, J., Sinha, P., & Taylor, M. (2006). Using choice experiments to value non-market goods and services: Evidence from field experiments. Advances in Economic Analysis and Policy, 5(2), 1132. https://doi.org/10.2202/1538-0637.1132.
Louviere, J. (1984). Using discrete choice experiments and multinomial logit choice models to forecast trial in a competitive retail environment: A fast food restaurant illustration. Journal of Retailing, 60(4), 81–108.
Louviere, J. (1988). Conjoint analysis modeling of stated preferences: A review of theory, methods, recent developments and external validity. Journal of Transport Economics and Policy, 10, 93–119.
Louviere, J., & Woodworth, G. (1983). Design and analysis of simulated consumer choice or allocation experiments: An approach based on aggregate data. Journal of Marketing Research, 20, 350–367.
Lusk, J., & Schroeder, T. (2004). Are choice experiments incentive compatible? A test with quality-differentiated beef steaks. American Journal of Agricultural Economics, 86(2), 467–482.
McFadden, D. (1986). The choice theory approach to market research. Marketing Science, 5(4), 275–297.
McFadden, D. (2001). Economic choices. American Economic Review, 91(3), 351–378.
Meginnis, K., Burton, M., Chan, R., & Rigby, D. (2018). Strategic bias in discrete choice experiments. Journal of Environmental Economics and Management. https://doi.org/10.1016/j.jeem.2018.08.010.
Queen’s University Belfast and Perceptive Insight. (2015). Discrete choice experiments for valuing the benefits of improved NIE services. Report prepared for Northern Ireland Electricity Networks. https://www.nienetworks.co.uk/documents/consultations/nie-wtp-report-5nov2015.
Revelt, D., & Train, K. (1998). Mixed logit with repeated choices: Households’ choices of appliance efficiency level. Review of Economics and Statistics, 80(4), 647–657.
Ryan, M., & Wordsworth, S. (2000). Sensitivity of willingness to pay estimates to the level of attributes in discrete choice experiments. Scottish Journal of Political Economy, 47(5), 504–524.
Smith, J., & McKee, M. (2007). ‘People or prairie chickens’ revisited: Stated preferences with explicit non-market trade-offs. Defence and Peace Economics, 18(3), 223–244.
Street, D., Burgess, L., & Louviere, J. (2005). Quick and easy choice sets: Constructing optimal and nearly optimal stated choice experiments. International Journal of Research in Marketing, 22, 459–470. https://doi.org/10.1016/j.ijresmar.2005.09.003.
Taylor, L., Morrison, M., & Boyle, K. (2010). Exchange rules and incentive compatibility of choice experiments. Environmental & Resource Economics, 47(2), 197–220.
Conflict of interest
Hutchinson: Employed as a consultant by Northern Ireland Electricity Networks Ltd in connection with the preparation of the 6th Price Control Agreement (RP6) with The Office of the Utility Regulator NI (Final Determination 30th June 2017). Study is cited herein as Queen’s University Belfast and Perceptive Insight (2015). Scarpa: Served as lead consultant in the design of the survey instruments and the choice data analysis for the study for the Australian Energy Market Operator cited herein.
Human and animal rights
This research involves human participants. The experiments reported herein were conducted under the oversight of the United States Air Force Academy Institutional Review Board, protocol number FAC20130036H.
All participants provided signed informed consent prior to participating in this research.
This research was funded by Queen’s University Belfast under a cooperative research and development agreement with the United States Air Force Academy. The funding covered payment of experimental subjects only. The sponsor had no role in study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. The opinions expressed herein are solely those of the authors and do not necessarily reflect the views of the authors’ respective institutions.
Cite this article
Carson, K.S., Chilton, S.M., Hutchinson, W.G. et al. Public resource allocation, strategic behavior, and status quo bias in choice experiments. Public Choice 185, 1–19 (2020). https://doi.org/10.1007/s11127-019-00735-y
Keywords
- Choice experiment
- Strategic voting
- Status quo bias
- Public goods experiment