
Experimental Methods in Valuation

A Primer on Nonmarket Valuation

Part of the book series: The Economics of Non-Market Goods and Resources ((ENGO,volume 13))

Abstract

This chapter discusses the role of behavioral experiments in the evaluation of individual economic values. Principles of experimental design play a role in the application and assessment of nonmarket valuation methods. Experiments can be employed to assess the formation of preferences and the influence of personal characteristics, social factors, and economic constraints on economic values. Experiments can also be used to test the efficacy of nonmarket valuation methods and to study the effects of the valuation task, information, and context on valuation responses. We discuss these issues in turn, incorporating pertinent literature, to provide a review and synthesis of experimental methods in valuation.


Notes

  1. Chamberlin taught economics at Harvard from the 1930s to the 1960s, where he conducted what might have been the first economic experiments, initially as classroom exercises, to demonstrate the formation of prices. See Roth (1995) for more on the history of experimental economics.

  2. Examples include the use of field experiments to value water filters in rural Ghana (Berry et al. 2015) and bednets for protection from insects in Uganda (Hoffmann 2009; Hoffmann et al. 2009).

  3. See Davis and Holt (1993, Chap. 1) for more details.

  4. A robust experimental design that employs within-subject comparisons will also provide for between-subject comparisons when the ordering of stimuli is varied. For example, consider an experiment employing two sets of stimuli, A and B, presented in treatments that switch the order of the stimuli sets. Thus we have the following treatments: 1.A-B stimuli ordering and 2.B-A stimuli ordering, which allow for: (i) within comparisons—1.A-B and 2.B-A; (ii) testing of order effects—1.A versus 2.A, and 1.B versus 2.B; and (iii) between comparisons—1.A versus 2.B and 2.A versus 1.B. Under standard microeconomic arguments, the final comparison (iii) generally remains valid even in the presence of ordering effects (in which 1.A (1.B) is judged to be different from 2.A (2.B)), because it holds the position of the stimulus constant across treatments.

  5. Some experiments that use group interactions or collective decision-making can entail some loss of control over the evolution of interpersonal group dynamics, which can introduce an additional dimension of heterogeneity into treatment effects (Duflo et al. 2007). If group, firm, or village interactions are an integral part of the study, clustered experimental designs can be used (Bloom 1995; Spybrook et al. 2006).

  6. For example, if α = 0.05 and κ = 0.8, then z_α = Φ⁻¹(0.05) = −1.645 and z_{1−κ} = Φ⁻¹(0.20) = −0.842, so that |z_α + z_{1−κ}| ≅ 2.487.
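    The arithmetic in this note can be checked with the standard normal quantile function. The `min_sample_size` helper below is a sketch of the textbook two-sample size formula under equal variances; it is written from the note's ingredients, not from the chapter's Eqs. (10.3)-(10.5), which may differ in detail.

    ```python
    from statistics import NormalDist

    def min_sample_size(delta, sigma, alpha=0.05, kappa=0.8):
        """Per-treatment n for detecting a mean difference delta with
        one-sided size alpha and power kappa, common variance sigma^2
        (standard formula, stated here as an illustrative assumption)."""
        z = NormalDist()
        z_alpha = z.inv_cdf(alpha)      # -1.645 for alpha = 0.05
        z_1mk = z.inv_cdf(1 - kappa)    # -0.842 for kappa = 0.8
        return 2 * ((z_alpha + z_1mk) * sigma / delta) ** 2

    # |z_alpha + z_{1-kappa}| for the note's values is about 2.486,
    # matching the note's 2.487 up to rounding of the intermediate quantiles.
    ```
    
    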

  7. This discussion closely follows Duflo et al. (2007).

  8. If control and treatments are applied to grouped data (e.g., villages in a developing country), power analysis must take account of intragroup correlation. See Duflo et al. (2007, pp. 31-33) for details.
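    As a rough sketch of that adjustment, the common design-effect inflation 1 + (m − 1)ρ, where m is the cluster size and ρ the intracluster correlation, can be applied to a simple-random-sample size. The function and its example numbers are illustrative; Duflo et al. (2007) give the full derivation.

    ```python
    from math import ceil

    def clustered_n(n_srs, cluster_size, icc):
        """Inflate a simple-random-sample size n_srs by the design effect
        1 + (m - 1)*rho to account for intracluster correlation.
        Illustrative helper, not a formula from the chapter."""
        deff = 1 + (cluster_size - 1) * icc
        return ceil(n_srs * deff)

    # e.g. 400 subjects under simple random sampling, villages of 20,
    # intracluster correlation 0.05: the required total rises to 780.
    clustered_n(400, 20, 0.05)
    ```
    
    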

  9. Equations (10.3)-(10.5) can be adjusted for unequal variances by employing the weighted-average variance estimate in Eq. (10.7).

  10. “Money illusion” usually refers to a tendency to treat money in nominal rather than real terms, but here it refers more generally to any behavior that does not treat money as actual currency.

  11. For example, a two-stage procedure can be employed to induce risk neutrality by (i) allowing subjects to earn points (n out of a total possible N) in round 1, and (ii) mapping the points into a binomial utility function in round 2, in which the probability of winning a large sum of money (or prize) is n/N and the probability of winning less (or no money, or a small prize) is (N − n)/N. (For more details, see Davis and Holt 1993.)
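    A minimal sketch of the round-2 mapping, under the assumption of a single large and a single small prize (the prize amounts below are hypothetical, not from the chapter): a subject holding n of N possible points wins the large prize with probability n/N.

    ```python
    import random

    def binary_lottery_payoff(points, total, big_prize=20.0, small_prize=0.0):
        """Round-2 payoff of the two-stage procedure: the large prize is
        won with probability points/total, the small prize otherwise.
        Prize amounts are illustrative assumptions."""
        return big_prize if random.random() < points / total else small_prize

    # A subject with all N points wins the large prize for sure;
    # a subject with no points never does.
    ```
    
    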

  12. Such flexibility is typically absent from revealed-preference studies. For example, resource quality might not have adequate variability in a revealed-preference hedonic price or recreation demand dataset, or it might be correlated with one or more existing factors in the study. Of course, the potential downside of stated preference is that the scenarios are usually hypothetical and could lack realism or plausibility.

  13. Revealed-preference studies can also observe the same sets of behaviors under different conditions, but the conditions are typically not controlled by the researcher, or the level of control is considerably lower.

  14. The collection of planned stated preference behavior under current conditions is a method proposed by Whitehead et al. (2000). The stated preference baseline trips may suffer from hypothetical bias (an optimistic assessment of future trips). Since recreation conditions are held constant in the stated preference baseline treatment, the researcher can use the stated preference data under current conditions to control for hypothetical bias or any other expected change in household conditions (an expected increase in income, a reduced opportunity cost of time). Measuring stated preference demand under baseline conditions before proceeding to changed conditions reflects careful experimental design that avoids changing multiple factors (data type and conditions) within a single treatment (Whitehead et al. 2000).

  15. Landry and Liu (2011) provided an overview of econometric models for analyzing these types of data.

  16. The coherent-but-arbitrary perspective is similar to the notion of constructed preferences (Slovic 1995; Kahneman 1996), which models preferences not as existing prior to the decision-making process but as constructed during it. These models have not been embraced by economists because they are antithetical to the axiomatic preference modeling that is the cornerstone of microeconomic theory and welfare analysis (Braga and Starmer 2005).

  17. Corrigan et al. (2012) noted some of the controversies surrounding repeated auctions with price feedback. While this design allows for learning about preferences and auction institutions, it can engender value interdependence, anchoring on posted prices, detachment from the auction, and competition among participants.

  18. To minimize attrition, Shogren et al. (2000) provided an additional incentive payment to subjects who participated in all sessions.

  19. Consistent with the idea of value learning, Holmes and Boyle (2005) and Kingsley and Brown (2010) found that the variability of the error component of the random utility model decreased with the number of choices a subject made; subjects appeared to better discriminate among multiattribute choice sets as they gained experience with evaluating and selecting their preferred option (DeShazo and Fermo 2002). Among an extensive series of pairwise choices, Kingsley and Brown also found that the probability of an inconsistent choice decreased with choice experience.

  20. A number of researchers have produced experimental findings in stated preference analysis that are indicative of anchoring effects (e.g., Boyle et al. 1985; Holmes and Kramer 1995; Green et al. 1998).

  21. Hypothetical bias can be distinguished from strategic bias, which stems from perceived or actual perverse incentives in a stated preference protocol (a fundamental reversal of incentive compatibility). Such perverse incentives for subject response can result from poor protocol design (in which subjects’ optimal response is an untruthful statement), subject misperceptions that lead them to respond strategically to an otherwise well-designed protocol, or other artifacts of stated preference design that could lead to inadvertent strategic response.

  22. In addition, two provision prices—high and low—were evaluated, with sequencing of the prices varied randomly among treatments. The rationale for two prices was threefold: to provide more information on preferences, to increase the domain for inference of willingness to pay, and to test for preferences consistent with the theory of demand. For treatments that involved actual provision, a coin flip determined which price level would be executed.

  23. The language of the cheap talk script followed the original text in Cummings and Taylor (1999), with necessary changes for the nature of the good and provision mechanism.

  24. Potential consequences stemming from survey responses have been shown to produce referendum voting patterns that accord with actual voting patterns in both the lab (Cummings and Taylor 1998; Vossler and Evans 2009) and field (List et al. 2004).

  25. The ticket stubs, dated October 12, 1997, corresponded with the game in which Barry Sanders passed Jim Brown for the No. 2 spot in NFL all-time rushing yardage. One of the study’s authors collected the ticket stubs after the game.

  26. While this is a clever way to simulate a public good, there could be divergence between these quasi-public goods and more typical public goods in the field. In particular, all potential beneficiaries were clearly present in the framed field experiment, and the scope of the public good (in terms of both payment and provision) was clearly defined. The authors are unaware of any research that has explored the relevance of these dimensions in the context of mitigating hypothetical bias.

  27. Field surveys that involve choice experiments often urge respondents to treat choice sets as if they were independent. In practice, however, it is often unclear whether these exhortations are effective.

  28. The six treatments allowed for between-subject tests of price and quantity effects (more of the commodity at the same price, and the same commodity at different prices), as well as within-subject tests of consistency in individual choices; these tests are based on neoclassical theory.

  29. Day and Prades (2010) employed stated preference methods; thus, they were unable to compare their results with behavior in which actual payment or provision occurs. Moreover, their commodity is a private good, unlike the commodity valued by Vossler et al. (2012). The roles of independence across choice sets, provision rules, and the nature of the commodity deserve further exploration.

References

  • Alberini, A., Boyle, K. & Welsh, M. (2003). Analysis of contingent valuation data with multiple bids and response options allowing respondents to express uncertainty. Journal of Environmental Economics and Management, 45, 40-62.

  • Ariely, D., Loewenstein, G. & Prelec, D. (2003). Coherent arbitrariness: Stable demand curves without stable preferences. Quarterly Journal of Economics, 118, 73-105.

  • Bateman, I. J., Langford, I. H., Jones, A. P. & Kerr, G. N. (2001). Bound and path effects in multiple-bound dichotomous choice contingent valuation. Resource and Energy Economics, 23, 191-213.

  • Bateman, I. J., Burgess, D., Hutchinson, W. G. & Matthews, D. I. (2008). Learning design contingent valuation (LDCV): NOAA guidelines, preference learning and coherent arbitrariness. Journal of Environmental Economics and Management, 55, 127-141.

  • Berry, J., Fischer, G. & Guiteras, R. P. (2015). Eliciting and utilizing willingness to pay: Evidence from field trials in northern Ghana. CEPR Discussion Paper No. DP10703. Retrieved from: http://ssrn.com/abstract=2630151.

  • Bin, O. & Landry, C. E. (2013). Changes in implicit flood risk premiums: Empirical evidence from the housing market. Journal of Environmental Economics and Management, 65, 361-376.

  • Bloom, H. S. (1995). Minimum detectable effects: A simple way to report the statistical power of experimental designs. Evaluation Review, 19, 547-556.

  • Boyle, K., Bishop, R. & Welsh, M. (1985). Starting-point bias in contingent valuation bidding games. Land Economics, 61, 188-194.

  • Braga, J. & Starmer, C. (2005). Preference anomalies, preference elicitation and the discovered preference hypothesis. Environmental and Resource Economics, 32, 55-89.

  • Camerer, C. (1995). Individual decision making. In J. H. Kagel & A. E. Roth (Eds.), The handbook of experimental economics (pp. 587-703). Princeton, NJ: Princeton University Press.

  • Carbone, J. C., Hallstrom, D. G. & Smith, V. K. (2006). Can natural experiments measure behavioral responses to environmental risks? Environmental and Resource Economics, 33, 273-297.

  • Carson, R. T., Flores, N. E. & Meade, N. F. (2001). Contingent valuation: Controversies and evidence. Environmental and Resource Economics, 19, 173-210.

  • Carson, R. T. & Groves, T. (2007). Incentive and informational properties of preference questions. Environmental and Resource Economics, 37, 181-210.

  • Carson, R. T., Groves, T. & Machina, M. J. (1997). Stated preference questions: Context and optimal response. Paper presented at the National Science Foundation Preference Elicitation Symposium, University of California, Berkeley.

  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

  • Cook, J., Whittington, D., Canh, D. G., Johnson, F. R. & Nyamete, A. (2007). Reliability of stated preferences for cholera and typhoid vaccines with time to think in Hue, Vietnam. Economic Inquiry, 45, 100-114.

  • Corrigan, J. R., Drichoutis, A. C., Lusk, J. L., Nayga, R. M. Jr. & Rousu, M. C. (2012). Repeated rounds with price feedback in experimental auction valuation: An adversarial collaboration. American Journal of Agricultural Economics, 94, 97-115.

  • Cummings, R. G., Harrison, G. W. & Osborne, L. L. (1995). Can the bias of contingent valuation surveys be reduced? Evidence from the laboratory. Working Paper, Policy Research Center, Georgia State University, Atlanta.

  • Cummings, R. G. & Taylor, L. O. (1998). Does realism matter in contingent valuation surveys? Land Economics, 74, 203-215.

  • Cummings, R. G. & Taylor, L. O. (1999). Unbiased value estimates for environmental goods: A cheap talk design for the contingent valuation method. American Economic Review, 89, 649-665.

  • Davis, D. D. & Holt, C. A. (1993). Experimental economics. Princeton, NJ: Princeton University Press.

  • Day, B. & Prades, J.-L. P. (2010). Ordering anomalies in choice experiments. Journal of Environmental Economics and Management, 59, 271-285.

  • DeShazo, J. R. (2002). Designing transactions without framing effects in iterative question formats. Journal of Environmental Economics and Management, 43, 360-385.

  • DeShazo, J. R. & Fermo, G. (2002). Designing choice sets for stated preference methods: The effects of complexity on choice consistency. Journal of Environmental Economics and Management, 44, 123-143.

  • Duflo, E., Glennerster, R. & Kremer, M. (2007). Using randomization in development economics research: A toolkit. Discussion Paper No. 6059. Centre for Economic Policy Research, London, UK.

  • Gibbard, A. (1973). Manipulation of voting schemes: A general result. Econometrica, 41, 587-601.

  • Green, D., Jacowitz, K. E., Kahneman, D. & McFadden, D. (1998). Referendum contingent valuation, anchoring, and willingness to pay for public goods. Resource and Energy Economics, 20, 85-116.

  • Hallstrom, D. & Smith, V. K. (2005). Market responses to hurricanes. Journal of Environmental Economics and Management, 50, 541-561.

  • Hanemann, M., Loomis, J. & Kanninen, B. (1991). Statistical efficiency of double-bounded dichotomous choice contingent valuation. American Journal of Agricultural Economics, 73, 1255-1263.

  • Harrison, G. W. & List, J. A. (2004). Field experiments. Journal of Economic Literature, 42, 1009-1055.

  • Herriges, J., Kling, C., Liu, C.-C. & Tobias, J. (2010). What are the consequences of consequentiality? Journal of Environmental Economics and Management, 59, 67-81.

  • Hoehn, J. P. & Randall, A. (1987). A satisfactory benefit cost indicator from contingent valuation. Journal of Environmental Economics and Management, 14, 226-247.

  • Hoffmann, V. (2009). Intrahousehold allocation of free and purchased mosquito nets. American Economic Review, 99 (2), 236-241.

  • Hoffmann, V., Barrett, C. B. & Just, D. R. (2009). Do free goods stick to poor households? Experimental evidence on insecticide treated bednets. World Development, 37, 607-617.

  • Holmes, T. P. & Boyle, K. J. (2005). Dynamic learning and context-dependence in sequential, attribute-based, stated-preference valuation questions. Land Economics, 81, 114-126.

  • Holmes, T. P. & Kramer, R. A. (1995). An independent sample test of yea-saying and starting point bias in dichotomous-choice contingent valuation. Journal of Environmental Economics and Management, 29, 121-132.

  • Hurwicz, L. (1972). On informationally decentralized systems. In C. McGuire & R. Radner (Eds.), Decision and organization: A volume in honor of Jacob Marschak (pp. 297-336). Amsterdam, Netherlands: North Holland.

  • Jeuland, M., Lucas, M., Clemens, J. & Whittington, D. (2010). Estimating the private benefits of vaccination against cholera in Beira, Mozambique: A travel cost approach. Journal of Development Economics, 91, 310-322.

  • Kagel, J. H. (1995). Auctions: A survey of experimental research. In J. H. Kagel & A. E. Roth (Eds.), The handbook of experimental economics (pp. 501-585). Princeton, NJ: Princeton University Press.

  • Kagel, J. H. & Roth, A. E. (1995). The handbook of experimental economics. Princeton, NJ: Princeton University Press.

  • Kahneman, D. (1996). Comment on Plott’s rational individual behavior in markets and social choice processes: The discovered preference hypothesis. In K. Arrow, E. Colombatto, M. Perlman & C. Schmidt (Eds.), Rational foundations of economic behavior (pp. 251-254). London: Macmillan.

  • Kapteyn, A., Wansbeek, T. & Buyze, J. (1980). The dynamics of preference formation. Journal of Economic Behavior & Organization, 1, 123-157.

  • Kingsley, D. C. & Brown, T. C. (2010). Preference uncertainty, preference learning, and paired comparison experiments. Land Economics, 86, 530-544.

  • Landry, C. E. & List, J. A. (2007). Using ex ante approaches to obtain credible signals of value in contingent markets: Evidence from the field. American Journal of Agricultural Economics, 89, 420-432.

  • Landry, C. E. & Liu, H. (2011). Econometric models for joint estimation of RP-SP site frequency recreation demand models. In J. Whitehead, T. Haab & J.-C. Huang (Eds.), Preference data for environmental valuation: Combining revealed and stated approaches (pp. 87-100). New York, NY: Routledge.

  • Langford, I. H., Bateman, I. J. & Langford, H. D. (1996). A multilevel modelling approach to triple-bounded dichotomous choice contingent valuation. Environmental and Resource Economics, 7, 197-211.

  • Ledyard, J. O. (1995). Public goods: A survey of experimental research. In J. H. Kagel & A. E. Roth (Eds.), The handbook of experimental economics (pp. 111-193). Princeton, NJ: Princeton University Press.

  • Levitt, S. D. & List, J. A. (2007). What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives, 21 (2), 153-174.

  • Lew, D. K. & Wallmo, K. (2011). External tests of scope and embedding in stated preference choice experiments: An application to endangered species valuation. Environmental and Resource Economics, 48, 1-23.

  • List, J. A., Berrens, R., Bohara, A. & Kerkvliet, J. (2004). Examining the role of social isolation on stated preferences. American Economic Review, 94, 741-752.

  • List, J. A. & Gallet, C. (2001). What experimental protocol influences disparities between actual and hypothetical stated values? Environmental and Resource Economics, 20, 241-254.

  • List, J. A., Sadoff, S. & Wagner, M. (2011). So you want to run an experiment, now what? Some simple rules of thumb for optimal experimental design. Experimental Economics, 14, 439-457.

  • Loomis, J. (2011). What’s to know about hypothetical bias in stated preference valuation studies? Journal of Economic Surveys, 25, 363-370.

  • Loomis, J. & Ekstrand, E. (1998). Alternative approaches for incorporating respondent uncertainty when estimating willingness to pay: The case of the Mexican spotted owl. Ecological Economics, 27, 29-41.

  • McFadden, D. (2009). The human side of mechanism design: A tribute to Leo Hurwicz & Jean-Jacques Laffont. Review of Economic Design, 13, 77-100.

  • Meyer, B. D. (1995). Natural and quasi-experiments in economics. Journal of Business and Economic Statistics, 13, 151-161.

  • Montgomery, D. C. (2005). Design and analysis of experiments. New York, NY: John Wiley & Sons.

  • Murphy, J. J., Allen, P. G., Stevens, T. H. & Weatherhead, D. (2005). A meta-analysis of hypothetical bias in stated preference valuation. Environmental and Resource Economics, 30, 313-325.

  • Parsons, G. R. (1991). A note on choice of residential location in travel cost demand models. Land Economics, 67, 360-364.

  • Plott, C. R. (1996). Rational individual behavior in markets and social choice processes: The discovered preference hypothesis. In K. J. Arrow, E. Colombatto, M. Perlman & C. Schmidt (Eds.), Rational foundations of economic behavior (pp. 225-250). New York, NY: St. Martin’s.

  • Roth, A. E. (1995). Introduction to experimental economics. In J. H. Kagel & A. E. Roth (Eds.), The handbook of experimental economics (pp. 3-111). Princeton, NJ: Princeton University Press.

  • Satterthwaite, M. A. (1975). Strategy-proofness and Arrow’s conditions: Existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10, 187-217.

  • Shogren, J. F., List, J. A. & Hayes, D. J. (2000). Preference learning in consecutive experimental auctions. American Journal of Agricultural Economics, 82, 1016-1021.

  • Slovic, P. (1995). The construction of preferences. American Psychologist, 50, 364-371.

  • Smith, V. L. (1964). Effect of market organization on competitive equilibrium. The Quarterly Journal of Economics, 78, 181-201.

  • Spybrook, J., Raudenbush, S. W., Liu, X. & Congdon, R. (2006). Optimal design for longitudinal and multilevel research: Documentation for the “Optimal Design” software. University of Michigan. Retrieved from: www.rmcs.buu.ac.th/statcenter/HLM.pdf.

  • Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

  • Tversky, A. & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453-458.

  • Vossler, C., Doyon, M. & Rondeau, D. (2012). Truth in consequentiality: Theory and field evidence on discrete choice experiments. American Economic Journal: Microeconomics, 4, 145-171.

  • Vossler, C. & Evans, M. F. (2009). Bridging the gap between the field and the lab: Environmental goods, policymaker input, and consequentiality. Journal of Environmental Economics and Management, 58, 338-345.

  • Vossler, C. & Watson, S. B. (2013). Understanding the consequences of consequentiality: Testing the validity of stated preferences in the field. Journal of Economic Behavior & Organization, 86, 137-147.

  • Whitehead, J. C., Dumas, C. F., Herstine, J., Hill, J. & Buerger, B. (2008). Valuing beach access and width with revealed and stated preference data. Marine Resource Economics, 23, 119-135.

  • Whitehead, J. C., Haab, T. C. & Huang, J.-C. (2000). Measuring recreation benefits of quality improvements with revealed and stated behavior data. Resource and Energy Economics, 22, 339-354.


Author information


Correspondence to Craig E. Landry.


Copyright information

© 2017 Springer Science+Business Media B.V. (outside the USA)

About this chapter

Cite this chapter

Landry, C.E. (2017). Experimental Methods in Valuation. In: Champ, P., Boyle, K., Brown, T. (eds) A Primer on Nonmarket Valuation. The Economics of Non-Market Goods and Resources, vol 13. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-7104-8_10


  • DOI: https://doi.org/10.1007/978-94-007-7104-8_10


  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-94-007-7103-1

  • Online ISBN: 978-94-007-7104-8

  • eBook Packages: Economics and Finance (R0)
