Experimental Design Criteria and Their Behavioural Efficiency: An Evaluation in the Field

Environmental and Resource Economics

Abstract

Comparative results from an evaluation of inferred attribute non-attendance are provided for experimental designs optimised for three commonly employed statistical criteria: orthogonality, Bayesian D-efficiency and optimal orthogonality in the difference. Survey data come from a choice experiment used to value the conservation of threatened native species in New Zealand’s production forests. In line with recent literature, we argue that attribute non-attendance can be taken as an important measure of behavioural efficiency, and we focus on how it varies when alternative design criteria are used. Attribute non-attendance is inferred using an approach based on constrained latent classes. Under our proposed criterion for evaluating behavioural efficiency, the data indicate that the Bayesian D-efficiency criterion produces more behaviourally efficient choice tasks than the other two criteria.
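
To make the inference approach concrete, here is a minimal Python sketch of the log-likelihood of a panel latent class logit in which the classes differ only by which attributes are attended: non-attended attributes have their coefficients restricted to zero. The data, candidate attendance patterns and parameter values below are made up for illustration, and the sketch only evaluates the likelihood at fixed values; it is not the authors’ exact specification.

```python
# A minimal sketch (illustrative, not the authors' specification) of a panel
# latent class logit in which classes are defined by attendance patterns:
# class c uses coefficients beta * mask_c, with 0 entries meaning "ignored".
import numpy as np

rng = np.random.default_rng(0)

N, T, J, K = 50, 6, 3, 4                    # respondents, tasks, alternatives, attributes
X = rng.normal(size=(N, T, J, K))           # hypothetical attribute levels
choices = rng.integers(0, J, size=(N, T))   # hypothetical observed choices

# Candidate attendance patterns: full attendance, ignore attribute 0 (e.g. cost),
# ignore attribute 3. These patterns are assumptions for the sketch.
masks = np.array([[1, 1, 1, 1],
                  [0, 1, 1, 1],
                  [1, 1, 1, 0]], dtype=float)

def panel_lc_loglik(beta, shares, X, choices, masks):
    """Mixture log-likelihood over attendance classes for a panel of respondents."""
    ll = 0.0
    for n in range(X.shape[0]):
        class_probs = []
        for m in masks:                       # class-specific zero restrictions
            v = X[n] @ (beta * m)             # utilities, shape (T, J)
            v -= v.max(axis=1, keepdims=True)
            p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
            chosen = p[np.arange(X.shape[1]), choices[n]]
            class_probs.append(chosen.prod())  # prob. of the full choice sequence
        ll += np.log(np.dot(shares, class_probs))
    return ll

beta = np.array([-1.0, 0.5, 0.8, 0.3])   # illustrative taste parameters
shares = np.array([0.6, 0.25, 0.15])     # illustrative class shares
print(panel_lc_loglik(beta, shares, X, choices, masks))
```

In an actual application the taste parameters and class shares would be estimated by maximum likelihood, and the fitted class shares indicate how prevalent each non-attendance pattern is.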

Notes

  1. As this study focuses on “serial ANA”, we asked each respondent which attribute or attributes he or she ignored after evaluating all the choice tasks. Other CE studies have also examined “choice task-specific ANA”, in which each respondent was asked which attribute(s) they ignored after evaluating each choice task (e.g. Hensher 2006; Puckett and Hensher 2009; Scarpa et al. 2010).

  2. We are thankful to an anonymous reviewer for suggesting that we elaborate on other forms of behavioural inefficiency worth investigating.

  3. It is also possible that accounting for ANA may result in a poorer model fit. If, for example, a respondent is observed to always select the highest-priced alternative over repeated choice tasks, then under maximum likelihood estimation the model will naïvely assume that the respondent prefers higher-priced products and assign that individual a positive price parameter. If, in accounting for ANA, the respondent is instead assigned a parameter of zero (under the assumption that they ignored price), a poorer model fit is likely to be observed. Mathematically, a better log-likelihood is obtained when the parameter is allowed to be positive rather than constrained to zero, because a positive parameter better matches the observed data. As such, care is required when selecting specifications based only on model fit criteria.

  4. We note that self-reported statements of ANA can be implemented directly in choice models in a much simpler way, although we do not do so here. If respondent n self-reported ANA for attribute k, then the corresponding coefficient \(\beta_{kn}\) is restricted to zero. This implementation is discussed in Hensher et al. (2005a) and Campbell et al. (2008), amongst others. As in many previous studies employing self-reported ANA, we identified it with a single debriefing question posed to the respondent after the evaluation of all choice situations.

  5. Note that, even though we have excluded observations here to help facilitate the statistical tests to be performed, we do not recommend doing this in practice, particularly when using orthogonal designs. Orthogonality requires that each task in the design is equally replicated in a data set; removing observations induces correlations and hence destroys the properties of the design (a small numerical illustration of this point appears after these notes).

  6. Care should be taken, however, in putting excessive reliance on such comparisons because the log-likelihood function is data-specific. The concept of model fit provides little information in this context, as the data, and hence models, are non-nested.

  7. While we also estimated specifications with other sets of classes (e.g., ignoring the cost attribute, ignoring all attributes, and attending to only one attribute), our analysis indicates that this set of latent classes is the best suited to our pooled data set, as it yields the lowest normalised AIC (AIC/n) value among the 10 other model specifications we employed in the grid search exercise (see Table 10 in “Appendix”; a sketch of the AIC/n comparison also appears after these notes).
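
As mentioned in note 5, removing observations destroys the orthogonality of a design. The following minimal sketch, using a made-up two-level factorial rather than the designs employed in this study, shows that the attribute columns of the complete design are uncorrelated, while dropping even a single row induces non-zero correlations:

```python
# A small numerical illustration (made-up design) of the point in note 5:
# dropping observations from an orthogonal design induces correlation
# between attribute columns.
import numpy as np

# A 2^3 full factorial in effects coding: all columns are mutually orthogonal.
design = np.array([[a, b, c]
                   for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)], dtype=float)
print(np.corrcoef(design, rowvar=False).round(3))   # identity matrix: orthogonal

# Drop a single row (e.g. a removed observation): the columns are no longer
# orthogonal, so the statistical properties of the design are lost.
reduced = design[1:]
print(np.corrcoef(reduced, rowvar=False).round(3))  # off-diagonals are now -0.167
```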

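The normalised AIC (AIC/n) comparison described in note 7 can be sketched as follows; the specification labels, log-likelihoods, parameter counts and sample size are hypothetical and do not correspond to the values reported in Table 10.

```python
# A minimal sketch, with made-up numbers, of selecting a latent class
# specification by normalised AIC (AIC/n): keep the candidate with the
# lowest value. All figures below are hypothetical.
def normalised_aic(loglik, n_params, n_obs):
    return (2 * n_params - 2 * loglik) / n_obs

candidates = {
    # label: (log-likelihood at convergence, number of estimated parameters)
    "2 classes": (-2150.4, 9),
    "3 classes": (-2101.7, 14),
    "3 classes + ANA class": (-2089.2, 15),
}
n_obs = 1800  # hypothetical number of choice observations in the pooled data

scores = {name: normalised_aic(ll, k, n_obs) for name, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
for name, score in scores.items():
    print(f"{name:>22s}  AIC/n = {score:.4f}")
print("selected:", best)
```
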
References

  • Balcombe K, Burton M, Rigby D (2011) Skew and attribute non-attendance within the Bayesian mixed logit model. J Environ Econ Manag 62(3):446–461

  • Balcombe K, Fraser I (2011) A general treatment of ‘don’t know’ responses from choice experiments. Eur Rev Agric Econ 38(2):171–191

  • Bliemer MCJ, Rose JM (2009) Designing stated choice experiments: the state of the art. In: Kitamura R, Yoshi T, Yamamoto T (eds) The expanding sphere of travel behavior research. Selected papers from the 11th International Conference on Travel Behavior Research, Kyoto, 16–20 August 2009

  • Bliemer MCJ, Rose JM (2010) Construction of experimental designs for mixed logit models allowing for correlation across choice observations. Transp Res B Methodol 44(6):720–734

  • Bliemer MCJ, Rose JM (2011) Experimental design influences on stated choice outputs: an empirical study in air travel choice. Transp Res A Policy 45:63–79

  • Boxall PC, Adamowicz WL (2002) Understanding heterogeneous preferences in random utility models: a latent class approach. Environ Resour Econ 23(4):421–446

  • Campbell D, Hutchinson WG, Scarpa R (2008) Incorporating discontinuous preferences into the analysis of discrete choice experiments. Environ Resour Econ 41(3):401–417

  • Campbell D, Hensher DA, Scarpa R (2012a) Cost thresholds, cut-offs and sensitivities in stated choice analysis: identification and implications. Resour Energy Econ 34(3):396–411

  • Campbell D, Mørkbak MR, Olsen SB (2012b) Response latency in stated choice experiments: impact on preference, variance and processing heterogeneity. Paper presented at the European Association of Environmental and Resource Economists 19th Annual Conference in Prague, Czech Republic, 27–30 June 2012

  • Cantillo V, Ortúzar JD (2006) Implications of thresholds in discrete choice modelling. Transp Rev 26:667–691

  • Cantillo V, Heydecker B, Ortúzar JD (2006) A discrete choice model incorporating thresholds for perception in attribute values. Transp Res B Methodol 40:807–825

  • Carlsson F, Kataria M, Lampi E (2010) Dealing with ignored attributes in choice experiments on valuation of Sweden’s environmental quality objectives. Environ Resour Econ 47:65–89

  • ChoiceMetrics (2012) Ngene 1.1.1 user manual and reference guide. http://www.choice-metrics.com

  • Chou HY, Lu JL, Fu C (2008) The study of price accept threshold for the “blue highway” tour of the North-East Region in Taiwan. J Mar Sci Technol 16:255–264

  • Econometric Software, Inc. (2012) NLOGIT 5. Plainview, New York

  • Fasolo B, McClelland GH, Todd PM (2007) Escaping the tyranny of choice: when fewer attributes make choice easier. Mark Theory 7:13–26

  • Ferrini S, Scarpa R (2007) Designs with a priori information for nonmarket valuation with choice experiments: a Monte Carlo study. J Environ Econ Manag 53:342–363

  • Fiske ST, Taylor SE (1984) Social cognition. Addison-Wesley, Massachusetts

  • Greene WH, Hensher DA (2003) A latent class model for discrete choice analysis: contrasts with mixed logit. Transp Res B Methodol 37:681–698

  • Greene WH, Hensher DA (2010) Ordered choices and heterogeneity in attribute processing. J Transp Econ Policy 44:331–364

  • Han S, Gupta S, Lehmann DR (2001) Consumer price sensitivity and price thresholds. J Retail 77:435–456

  • Hensher DA (2006) How do respondents process stated choice experiments? Attribute consideration under varying information load. J Appl Econom 21:861–878

  • Hensher DA (2008) Joint estimation of process and outcome in choice experiments and implications for willingness to pay. J Transp Econ Policy 42:297–322

  • Hensher DA (2010) Hypothetical bias, stated choice studies and willingness to pay. Transp Res B Methodol 44:735–752

  • Hensher DA, Layton D (2008) Attribute referencing, cognitive rationalisation and implications for willingness to pay. Working paper, Institute of Transport and Logistics Studies, The University of Sydney

  • Hensher DA, Greene WH (2010) Non-attendance and dual processing of common-metric attributes in choice analysis: a latent class specification. Empir Econ 39:413–426

  • Hensher DA, Rose JM, Greene WH (2005a) Applied choice analysis: a primer. Cambridge University Press, Cambridge

  • Hensher DA, Rose JM, Greene WH (2005b) The implications on willingness to pay of respondents ignoring specific attributes. Transportation 32:203–222

  • Hensher DA, Rose JM, Greene WH (2012) Inferring attribute non-attendance from stated choice data: implications for willingness to pay estimates and a warning for stated choice experiment design. Transportation 39:235–245

  • Hess S, Smith C, Falzarano S, Stubits J (2008) Measuring the effects of different experimental designs and survey administration methods using an Atlanta managed lanes stated preference survey. Transp Res Rec 2049:144–152

  • Hole A (2011) A discrete choice model with endogenous attribute attendance. Econ Lett 110(3):203–205

  • Islam T, Louviere JJ, Burke PF (2007) Modeling the effects of including/excluding attributes in choice experiments on systematic and random components. Int J Res Mark 24:289–300

  • Kennedy P (2008) A guide to econometrics, 6th edn. Wiley-Blackwell, London

  • Kerr GN, Sharp BMH (2010) Choice experiment adaptive design benefits: a case study. Aust J Agric Resour Econ 54:407–420

  • Kessels R, Goos P, Vandebroek M (2006) A comparison of criteria to design efficient choice experiments. J Mark Res 43:409–419

  • Kessels R, Goos P, Vandebroek M (2008) Optimal designs for conjoint experiments. Comput Stat Data Anal 52(5):2369–2387

  • Kessels R, Jones B, Goos P, Vandebroek M (2011) The usefulness of Bayesian optimal designs for discrete choice experiments. Appl Stoch Model Bus 27(3):173–188

  • Kinter ET, Prior TJ, Carswell CI, Bridges JFP (2012) A comparison of two experimental design approaches in applying conjoint analysis in patient-centered outcomes research: a randomized trial. Patient 5(4):279–294

  • Lancsar E, Louviere JJ (2006) Deleting ‘irrational’ responses from discrete choice experiments: a case of investigating or imposing preferences? Health Econ 15:797–811

  • Louviere JJ, Hensher DA (1983) Using discrete choice models with experimental design data to forecast consumer demand for a unique cultural event. J Consum Res 10:348–361

  • Louviere JJ, Woodworth G (1983) Design and analysis of simulated consumer choice or allocation experiments: an approach based on aggregate data. J Mark Res 20:350–367

  • Louviere JJ, Hensher DA, Swait JD (2000) Stated choice methods: analysis and applications. Cambridge University Press, Cambridge

  • Louviere JJ, Islam T, Wasi N, Street D, Burgess L (2008) Designing discrete choice experiments: Do optimal designs come at a price? J Consum Res 35:360–375

  • Manski C (1977) The structure of random utility models. Theor Decis 8:229–254

  • McFadden D (1974) Conditional logit analysis of qualitative choice behavior. In: Zarembka P (ed) Frontiers in econometrics. Academic Press, New York

  • McIntosh E, Ryan M (2002) Using discrete choice experiments to derive welfare estimates for the provision of elective surgery: implications of discontinuous preferences. J Econ Psychol 23:367–382

  • Meyerhoff J, Liebe U (2009) Discontinuous preferences in choice experiments: evidence at the choice task level. Paper presented at the 17th EAERE conference, Amsterdam, 24–27 June 2009

  • Mørkbak MR, Christensen T, Gyrd-Hansen D (2010) Choke price bias in choice experiments. Environ Resour Econ 45:537–551

  • Puckett SM, Hensher DA (2009) Revealing the extent of process heterogeneity in choice analysis: an empirical assessment. Transp Res A Policy 43:117–126

  • Rose JM, Black I (2006) Means matter, but variance matter too: decomposing response latency influences on variance heterogeneity in stated preference experiments. Mark Lett 17(4):295–310

  • Rose JM, Bliemer MCJ (2008) Stated preference experimental design strategies. In: Hensher DA, Button KJ (eds) Handbook of transport modelling. Elsevier, Oxford

  • Rose JM, Bliemer MCJ (2013) Sample size requirements for stated choice experiments. Transportation 40:1021–1041

  • Sandor Z, Wedel M (2001) Designing conjoint choice experiments using managers’ prior beliefs. J Mark Res 38:430–444

  • Sandor Z, Wedel M (2002) Profile construction in experimental choice designs for mixed logit models. Mark Sci 21:455–475

  • Sandor Z, Wedel M (2005) Heterogeneous conjoint choice designs. J Mark Res 42:210–218

  • Scarpa R, Rose J (2008) Design efficiency for non-market valuation with choice modelling: how to measure it, what to report and why. Aust J Agric Resour Econ 52:253–282

  • Scarpa R, Campbell D, Hutchinson GW (2007) Benefit estimates for landscape improvements: sequential Bayesian design and respondents’ rationality in a choice experiment. Land Econ 83:617–634

  • Scarpa R, Ferrini S, Willis K (2005) Performance of error component models for status quo-effects in choice experiments. In: Scarpa R, Alberini A (eds) Applications of simulation methods in environmental and resource economics. Springer, Dordrecht

  • Scarpa R, Gilbride TJ, Campbell D, Hensher DA (2009) Modelling attribute non-attendance in choice experiments for rural landscape valuation. Eur Rev Agric Econ 36:151–174

  • Scarpa R, Thiene M, Hensher DA (2010) Monitoring choice task attribute attendance in non-market valuation of multiple park management services: Does it matter? Land Econ 86(4):817–839

  • Scarpa R, Zanoli R, Bruschi V, Naspetti S (2013) Inferred and stated attribute non-attendance in food choice experiments. Am J Agric Econ 95(1):165–180

  • Severin V (2001) Comparing statistical and respondent efficiency in choice experiments. Ph.D. Dissertation, Discipline of Marketing, Faculty of Economics and Business, University of Sydney

  • Street DJ, Burgess L (2004) Optimal and near-optimal pairs for the estimation of effects in 2-level choice experiments. J Stat Plan Inference 118:185–199

  • Street DJ, Burgess L (2007) The construction of optimal stated choice experiments: theory and methods. Wiley, New Jersey

  • Street DJ, Burgess L, Louviere JJ (2005) Quick and easy choice tasks: constructing optimal and nearly optimal stated choice experiments. Int J Res Mark 22:459–470

  • Swait J (1994) A structural equation model of latent segmentation and product choice for cross-sectional revealed preference choice data. J Retail Consum Serv 1:77–89

  • Swait J (2001) A non-compensatory choice model incorporating attribute cutoffs. Transp Res B Methodol 35:903–928

  • Thurstone L (1931) The indifference function. J Soc Psychol 2:139–167

  • Train KE (2009) Discrete choice methods with simulation, 2nd edn. Cambridge University Press, Cambridge

  • Vermeulen B, Goos P, Scarpa R, Vandebroek M (2011) Bayesian conjoint choice designs for measuring willingness to pay. Environ Resour Econ 48:129–149

  • Vermunt JK, Magidson J (2005) Technical guide for Latent GOLD Choice 4.0: basic and advanced. Statistical Innovations Inc., Massachusetts

  • Viney R, Savage E, Louviere JJ (2005) Empirical investigation of experimental design properties of discrete choice experiments in health care. Health Econ 14:349–362

  • Weber B, Aholt A, Neuhaus C, Trautner P, Elger CE, Teichert T (2007) Neural evidence for reference-dependence in real-market-transactions. NeuroImage 35:441–447

  • Yao RT, Scarpa R, Turner JA, Barnard TD, Rose JM, Palma JHN, Harrison DR (2014) Valuing biodiversity enhancement in New Zealand’s planted forests: socioeconomic and spatial determinants of willingness-to-pay. Ecol Econ 98:90–101

  • Yu J, Goos P, Vandebroek M (2012) A comparison of different Bayesian design criteria for setting up stated preference studies. Transp Res B Methodol 46:789–807

Author information

Corresponding author

Correspondence to Richard T. Yao.

Appendix

See Tables 9 and 10.

Table 9 Conditional logit model estimates using the pilot survey
Table 10 Estimates of normalised AICs of panel latent class logit models using the three design samples

About this article

Cite this article

Yao, R.T., Scarpa, R., Rose, J.M. et al. Experimental Design Criteria and Their Behavioural Efficiency: An Evaluation in the Field. Environ Resource Econ 62, 433–455 (2015). https://doi.org/10.1007/s10640-014-9823-7
