Economists are interested in eliciting values at the level of the individual because market values do not provide the information needed to measure consumer surplus, value new products, or value goods that have no market. Direct and indirect procedures have been developed to elicit values, and each has some strengths and weaknesses. The evidence points to several recommendations for best practice in the reliable elicitation of values, trading off transparency and rigour.
Keywords: Auctions; Bias correction; Cheap talk; Conjoint choice; Contingent valuation; Cost–benefit analysis; English auction; Latent choice models; Maximum likelihood; Multiple price lists; Revealed preference theory; Second-price auction; Value elicitation; Vickrey auction
Why elicit values?
The prices observed in a market reflect, on a good competitive day, the equilibrium of marginal valuations and costs. They do not quantitatively reflect infra-marginal or extra-marginal values, other than in a severely censored sense. We know that infra-marginal values are weakly higher and extra-marginal values weakly lower, but beyond that one must rely on functional forms to extrapolate. For policy purposes this is generally insufficient to undertake cost–benefit calculations.
When producers are contemplating a new product or innovation they have to make some judgement about the value that will be placed on it. New drugs, and the R&D underlying them, provide an important example. Unless one can heroically tie the new product to existing products in terms of shared characteristics, and somehow elicit values on those characteristics, there is no way to know what price the market will bear. Value elicitation experiments can help fill that void, complementing traditional marketing techniques (see Hoffman et al. 1993).
Many goods and services effectively have no market, either because they exhibit characteristics of public goods or because it is impossible to credibly deliver them on an individual basis. These non-market goods have traditionally been valued using surveys, where people are asked to state a valuation ‘contingent on a market existing for the good’. The problem is that these surveys are hypothetical in terms of both the deliverability of the good and the economic consequences of the response, which understandably generates controversy about their reliability (Harrison, 2006).
Direct methods for value elicitation include auctions, auction-like procedures and ‘multiple price lists’.
Sealed-bid auctions require the individual to state a valuation for the product in a private manner, and then award the product following certain rules. For single-object auctions, the second-price (or Vickrey) auction awards the product to the highest bidder but sets the price equal to the highest rejected bid. It is easy to show, to students of economics at least, that the bidder has a dominant strategy to bid his true value: any bid higher or lower can only end up hurting the bidder in expectation. But these incentives are not obvious to inexperienced subjects. A real-time counterpart of the second-price auction is the English (or ascending bid) auction, in which an auctioneer starts the price out low and then bidders increase the price to become the winner of the product. Bidders seem to realize the dominant strategy property of the English auction more quickly than in comparable second-price sealed-bid auctions, no doubt due to the real-time feedback on the opportunity costs of deviations from that strategy (see Rutström, 1998; Harstad, 2000). Familiarity with the institution is also surely a factor in the superior performance of the English auction: first encounters with the second-price auction rules lead many non-economists to assume that there must be some ‘trick’.
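The dominant-strategy logic of the second-price auction can be made concrete with a small simulation. The sketch below is illustrative and not from the original text: rival bids are assumed, purely for the example, to be drawn uniformly on [0, 100], and the function names are hypothetical. Holding the random draws fixed, a truthful bid weakly dominates both under-bidding and over-bidding.

```python
import random

def second_price_payoff(bid, value, rival_bids):
    """Payoff in a sealed-bid second-price auction: the highest bidder wins
    the product but pays the highest rejected bid; losers pay nothing."""
    high_rival = max(rival_bids)
    if bid > high_rival:
        return value - high_rival
    return 0.0

def expected_payoff(bid, value, n_rivals=3, trials=20000, seed=1):
    """Monte Carlo expected payoff against rivals bidding uniformly on
    [0, 100] (an illustrative assumption, not part of the mechanism)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        rivals = [rng.uniform(0, 100) for _ in range(n_rivals)]
        total += second_price_payoff(bid, value, rivals)
    return total / trials

value = 60.0
truthful = expected_payoff(value, value)
# With the same random draws, deviating in either direction can only
# lower the expected payoff: under-bidding forgoes profitable wins,
# over-bidding adds wins at prices above the bidder's value.
assert truthful >= expected_payoff(40.0, value)   # under-bidding
assert truthful >= expected_payoff(80.0, value)   # over-bidding
```

The real-time feedback of the English auction conveys exactly this opportunity-cost logic, which is why inexperienced subjects discover it there more quickly.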
Related schemes collapse the logic of the second-price auction into an auction-like procedure due to Becker et al. (1964). The basic idea is to endow the subject with the product, typically a lottery, and to ask for a ‘selling price’. The subject is told that a ‘buying price’ will be picked at random and that, if the buying price exceeds the stated selling price, the product will be sold at that randomly drawn buying price, which the subject receives. If the buying price equals or is lower than the selling price, the subject keeps the lottery and plays it out. Again, it is relatively transparent to economists that this procedure provides a formal incentive for the subject to truthfully reveal the certainty-equivalent of the lottery. One must ensure that the buyout range exceeds the highest price that the subject would reasonably state, but this is not normally a major problem. One must also ensure that the subject realizes that the choice of a buying price does not depend on the stated selling price; a surprising number of respondents appear not to understand this independence, even if they are told that a physical randomizing device is being used.
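The incentive property of the Becker–DeGroot–Marschak procedure can be sketched in the same way. The numbers and function names below are illustrative assumptions: the buying price is drawn uniformly, and the kept lottery is valued at its certainty equivalent.

```python
import random

def bdm_outcome(stated_price, certainty_equivalent, buyout_max, rng):
    """One round of the Becker-DeGroot-Marschak procedure: a buying price
    is drawn at random, independently of the stated selling price. If it
    exceeds the stated price the subject is paid that buying price;
    otherwise he keeps the lottery, worth its certainty equivalent to him."""
    buying_price = rng.uniform(0, buyout_max)
    return buying_price if buying_price > stated_price else certainty_equivalent

def expected_outcome(stated, ce, buyout_max=100.0, trials=20000, seed=2):
    rng = random.Random(seed)
    total = sum(bdm_outcome(stated, ce, buyout_max, rng) for _ in range(trials))
    return total / trials

ce = 35.0
truthful = expected_outcome(ce, ce)
# Understating forces sales below the certainty equivalent; overstating
# forgoes sales above it. Truthful reporting is therefore optimal.
assert truthful >= expected_outcome(20.0, ce)
assert truthful >= expected_outcome(55.0, ce)
```

Note that the draw of the buying price never depends on the stated price, which is precisely the independence that respondents often fail to grasp.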
Multiple price lists present individuals with an ordered menu of prices at which they may choose to buy the product or not, akin to the price comparison websites available online for many products. For any given price, the choice is a simple ‘take it or leave it’ posted offer, familiar from retail markets. The set of responses for the entire list is incentivized by picking one at random for implementation, so the subject can readily see that misrepresentation can only hurt, for the usual revealed preference reasons. Refinements to the intervals of prices can be implemented to improve the accuracy of the values elicited (see Andersen et al., 2006). These methods have been widely used to elicit risk preferences and discount rates, as well as values for products (see Holt and Laury, 2002; Harrison et al., 2002; Andersen et al., 2007).
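A minimal sketch of how a multiple price list brackets a latent valuation follows; the price grid and function names are made up for illustration. A subject who follows his valuation switches from buying to not buying at one row, and that switch point bounds the value.

```python
def mpl_responses(value, prices):
    """One 'take it or leave it' choice per listed price: a subject who
    truthfully follows his valuation buys exactly when the posted price
    is at or below that valuation."""
    return [price <= value for price in prices]

def bracket_value(prices, responses):
    """The row at which the subject switches from buying to not buying
    brackets the latent valuation between the highest accepted price
    and the lowest rejected price."""
    accepted = [p for p, r in zip(prices, responses) if r]
    rejected = [p for p, r in zip(prices, responses) if not r]
    lo = max(accepted) if accepted else None
    hi = min(rejected) if rejected else None
    return lo, hi

prices = [2, 4, 6, 8, 10, 12]
responses = mpl_responses(7.0, prices)
# A valuation of 7 is revealed to lie between 6 and 8; a finer grid of
# prices around the switch point tightens the interval.
assert bracket_value(prices, responses) == (6, 8)
```

Refining the price intervals around the observed switch point, as in Andersen et al. (2006), is just a second pass of this bracketing with a finer grid.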
Indirect methods work by presenting individuals with simple choices and using a latent structural model to infer valuations. The canonical example comes from the theory of revealed preference, and confronts the decision-maker with a series of purchase opportunities from a budget line and asks him to pick one. By varying the budget lines one can ‘trap’ latent indifference curves and place nonparametric or parametric bounds on valuations. The same methods extend naturally to variations in the non-price characteristics of products, and merge with the marketing literature on ‘conjoint choice’ (for example, Louviere et al. 2000; Lusk and Schroeder, 2004). Access to scanner data from the massive volume of retail transactions made every day promises rich characterizations of underlying utility functions, particularly when merged with experimental methods that introduce exogenous variation in characteristics in order to statistically condition and ‘enrich’ the data (Hensher et al. 1999). One of the attractions of indirect methods is that one can employ choice tasks which are familiar to the subject, such as binary ‘take it or leave it’ choices or rank orderings. The lack of precision in that type of qualitative data requires some latent structure before one can infer values, but behavioural responses are much easier to explain and motivate for respondents.
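As a stylized sketch of such indirect inference, and not any particular published model, the code below simulates binary ‘buy or not’ choices from a latent logit random-utility model and recovers the underlying valuation by maximum likelihood. The noise parameter, price grid, and grid search are all illustrative assumptions.

```python
import math
import random

def buy_probability(value, price, noise):
    """Logit choice probability from a latent random-utility model:
    the subject buys with probability Lambda((value - price) / noise)."""
    return 1.0 / (1.0 + math.exp(-(value - price) / noise))

def simulate_choices(true_value, prices, noise=2.0, seed=4):
    rng = random.Random(seed)
    return [rng.random() < buy_probability(true_value, p, noise) for p in prices]

def log_likelihood(value, prices, buys, noise=2.0):
    ll = 0.0
    for p, b in zip(prices, buys):
        q = buy_probability(value, p, noise)
        ll += math.log(q if b else 1.0 - q)
    return ll

def estimate_value(prices, buys):
    """Maximum-likelihood estimate of the latent valuation by grid search
    (a numerical optimizer would normally be used instead)."""
    grid = [i / 10.0 for i in range(201)]   # candidate values 0.0 .. 20.0
    return max(grid, key=lambda v: log_likelihood(v, prices, buys))

# Qualitative data: three repetitions of take-it-or-leave-it offers.
prices = [p / 2.0 for p in range(2, 40)] * 3   # 1.0, 1.5, ..., 19.5
buys = simulate_choices(10.0, prices)
estimate = estimate_value(prices, buys)
assert abs(estimate - 10.0) < 3.0   # binary choices recover the value approximately
```

This is the sense in which imprecise qualitative responses, once given a latent structure, yield cardinal valuations.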
One major advantage of undertaking structural estimation of a latent choice model is that valuations can be elicited in a more fundamental manner, explicitly recognizing the decision process underlying a stated valuation. A structural model can control for risk attitudes when choices are being made in a stochastic setting, which is almost always the case in practical settings. Thus one can hope to tease apart the underlying deterministic valuation from the assessment of risk. Likewise, non-standard models of choice posit a myriad of alternative factors that might confound inference about valuation: respondents might distort preferences from their true values, they might exhibit loss aversion in certain frames, and they might bring their own home-grown reference points or aspiration levels to the valuation task. Only with a structural model can one hope to identify these potential confounds to the valuation process. Quite apart from wanting to identify the primitives of the underlying valuation free of confounds, normative applications will often require that some of these distortions be corrected for. That is only possible if one has a complete structural model of the valuation process.
A structural model also provides an antidote to those who claim that valuations are so contextual as to be an unreliable will-o’-the-wisp. If someone is concerned about framing, endowment effects, loss aversion, preference distortions, social preferences, or any number of related behavioural notions, it is impossible to have a scientific dialogue without being able to write out a structural model and jointly estimate it.
Lessons and Concerns
The most important lesson that has been learned from decades of experimental research into the behavioural properties of these procedures to elicit values is: keep it simple. This refers primarily to the nature of the task given to respondents. It can be dangerous to rely on fancy rules that ensure incentives to truthfully reveal valuations only if everyone sees a complete chain of logic, even if that logic is apparent to trained economists. Of course, one can use ‘cheap talk’ and just tell people to reveal the truth since it is in their best interests, but one cannot be sure that such admonitions work reliably. Cultural familiarity with institutions counts for a lot when subjects are otherwise placed in an artefactual valuation task.
The desire to keep it simple has a corollary: the use of more rigorous statistical techniques to infer valuations. This implication follows from the need to make inferences about valuations on a cardinal scale when responses are often between-subject and qualitative. Progress has been made in the use of numerical simulation methods for the maximum likelihood estimation of random utility models that allow extraordinary flexibility (for example, Train, 2003).
We also have a better understanding now of the ways in which valuations may be biased: because the task is hypothetical, because of procedural devices in the institution being employed, or because of field context (for example, Harrison et al. 2004). More constructively, methods have been developed to undertake ex ante ‘instrument calibration’ to remove biases using controlled experiments, and to implement ex post ‘statistical calibration’ to filter out any remaining systematic biases (see Harrison, 2006).
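To fix ideas, ex post statistical calibration can be caricatured as fitting a mapping from hypothetical stated values to real values in a paired calibration sample, then filtering new hypothetical responses through it. The sketch below uses a least-squares slope through the origin and entirely made-up numbers; calibration functions used in practice are richer than this.

```python
def fit_calibration_slope(hypothetical, real):
    """Least-squares slope through the origin mapping hypothetical stated
    values onto the real values observed in a calibration experiment where
    both were elicited from the same subjects. (Illustrative only.)"""
    num = sum(h * r for h, r in zip(hypothetical, real))
    den = sum(h * h for h in hypothetical)
    return num / den

# Hypothetical paired data: stated values systematically overshoot
# real commitments, the usual direction of hypothetical bias.
stated = [10.0, 20.0, 30.0, 40.0]
real   = [6.0, 13.0, 18.0, 25.0]
slope = fit_calibration_slope(stated, real)
assert abs(slope - 0.62) < 1e-9   # 1860 / 3000

# Filter fresh hypothetical responses through the fitted calibration.
corrected = [round(slope * h, 2) for h in [16.0, 28.0]]
assert corrected == [9.92, 17.36]
```

Instrument calibration works upstream of this step, redesigning the elicitation so that less of such filtering is needed.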
Finally, the manner in which valuations change with states of nature is starting to be understood. Insights here again come from thinking about valuation as a latent, structural decision process. If we observe the same person state a different value for the same product at two different times, is it because he has a shift in his utility function, a change in some argument of his utility function, a change in his perceived opportunity set, or something else? If valuation is viewed as a process we can begin to design procedures that can help us identify answers to these questions, and better understand the valuations that are observed.
- Rutström, E.E. 1998. Home-grown values and the design of incentive compatible auctions. International Journal of Game Theory 3: 427–441.