1 Introduction

Giving policy advice solely based on market prices may be misleading when the prices give distorted signals, i.e., diverge from socially efficient prices. Potential reasons for market prices to diverge from efficiency prices include, but are not limited to, controlled prices, externalities, imperfect competition, taxes, and trade controls. Broadly, market failure may occur due to the structure (characteristics) of the market or due to government intervention.

Imperfect competition (in either input or output markets) and externalities are causes of market failure rooted in the structure of the market. Most markets face some form of imperfect competition. For example, in a market with tacitly colluding firms or a natural monopoly, prices deviate from the socially optimal prices. An externality occurs when an economic activity affects other agents and the effect is not transmitted through prices. The externality can be positive, as in the case of training and human capital improvement, or negative, as in the case of environmental damage.

Government interventions that may lead to market distortions include controlled prices, taxes, and trade controls. For example, tariffs on imports raise the prices of the relevant imports and their substitutes above their cost, insurance, and freight (cif) prices. The distortion, however, is not limited to price divergence in the imported goods. Since domestic prices increase relative to world prices, the exchange rate is affected too. Hence, in some cases, in order to determine the efficiency prices, we need to rely on an approach that considers macroeconomic factors as well.

In the presence of market failures, it would be sensible to identify the shadow value (efficient value) of the relevant outputs or inputs. To this end, it is essential to understand the relationship between market prices and shadow prices. This may help policy makers determine the direction in which the mix of outputs or inputs should change in order to enhance social welfare. For example, Grosskopf et al. (1999) compare the market salaries of school district administrators and teachers in Texas with their corresponding shadow prices. This enables them to determine whether the schools are under-utilizing or over-utilizing their administrators and teachers. Similarly, using plant-level data from Wisconsin coal-burning electric utility plants, Coggins and Swinton (1996) compare the prices paid for sulfur dioxide (SO2) permits with the corresponding shadow prices. Swinton (1998) makes a similar comparison using plant-level data from Illinois, Minnesota, and Wisconsin. They find that the shadow prices are close to the permit prices. There are many other similar examples that we discuss briefly later in the review.

It appears that a sensible starting point is an undistorted market in which market prices and shadow prices coincide. However, as Drèze and Stern (1990) argue, even in a perfectly competitive market the prices may be distorted, e.g., when the income distribution is not “optimal.” For this reason, shadow prices may differ even from perfectly competitive market prices. On many occasions, however, this aspect is ignored and the perfectly competitive market is assumed to be socially optimal. In Sect. 4, we present an approach that aims to control for macroeconomic factors and distributional disturbances.

In the case of non-market goods or bads, the price is not observed. Since the utility of a consumer depends not only on market goods but also on non-market goods and bads, a social planner who cares about social welfare should assign some value to the non-market goods and bads. Shadow pricing methods, of which we provide a review, can be used to account for environmental factors or, more generally, non-market goods and bads.

In the next section, we discuss market prices and efficiency prices in the case of imperfect competition and present the welfare effects of pricing with market power. In Sect. 3, we summarize some of the widely used methods for valuing non-market goods, services, or bads, which may cause an externality. In Sect. 4, we introduce a valuation approach for projects that accommodates not only the allocative efficiency viewpoint but also the projects' impact on the growth and redistribution of income. Section 5 discusses identification of shadow prices using different approaches, and the following section concludes the chapter.

2 Imperfect Competition, Market Power, and Welfare

2.1 Measures Related to Market Power

Imperfect competition is one of the most commonly encountered reasons for why market prices diverge from efficiency prices. The antitrust literature relates this divergence to market power, which is the ability of a firm or a group of firms to set prices above the efficiency prices (or competitive prices). The extent of price distortion critically depends on the market structure, i.e., characteristics of the market and firms.

The structure-conduct-performance paradigm generally uses market concentration measures such as the Herfindahl-Hirschman index (HHI) to describe market structures. The HHI, which is defined as the sum of squared market shares, gives some idea about the extent of welfare loss due to price distortions and market power. One particular advantage of this approach is that the HHI can be calculated using market share data only. However, this measure ignores many important characteristics and aspects of the market such as capacity constraints, dynamic factors, durability of the product, price discrimination, and substitutes. For example, a typical market with perishable goods can be modeled in a static setting, whereas a market with durable goods requires a dynamic model. Being a static measure, the HHI may not be suitable in this context. Moreover, the HHI is market-specific and thus does not provide information about firm-specific distortions. Although market share data is relatively easy to find compared with other market data, calculation of market shares involves some conceptual difficulties related to the definition of the market. This, however, is a common problem for market power studies. Finally, the HHI is not always positively related to welfare. For example, let’s start from a situation with two symmetric firms. Now, assume that one of these firms reduces its production costs. This would tend to increase welfare and reduce the prices charged to consumers. However, the HHI will increase. Therefore, changes in the value of the HHI may not always be in line with changes in welfare.
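As a concrete illustration of this last point, the following sketch works through a Cournot duopoly with linear demand, using the textbook closed-form equilibrium expressions; all parameter values are purely illustrative.

```python
import numpy as np

# Cournot duopoly with linear inverse demand P = a - b*(q1 + q2).
# A cost reduction for firm 1 lowers the price and raises total surplus,
# yet the HHI increases. Parameter values are illustrative.
a, b = 10.0, 1.0

def cournot(c1, c2):
    q1 = (a - 2 * c1 + c2) / (3 * b)          # standard Cournot best-response solution
    q2 = (a - 2 * c2 + c1) / (3 * b)
    Q = q1 + q2
    P = a - b * Q
    cs = 0.5 * b * Q ** 2                     # consumer surplus under linear demand
    ps = (P - c1) * q1 + (P - c2) * q2        # producer surplus
    hhi = (q1 / Q) ** 2 + (q2 / Q) ** 2       # HHI on a 0-1 scale
    return P, cs + ps, hhi

print(cournot(4.0, 4.0))   # symmetric baseline: P = 6.00, welfare = 16.00, HHI = 0.50
print(cournot(2.0, 4.0))   # firm 1's cost falls: P ~ 5.33, welfare ~ 23.78, HHI ~ 0.59
```

In this example, firm 1's cost reduction lowers the price and raises total surplus while the HHI rises from 0.50 to about 0.59.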

Another widely used measure of market power is the Lerner (1934) index, which is defined as the ratio of the price-marginal cost markup to the price:

$$LI = \frac{P - MC}{P}$$
(1)

where P is the market price and MC is the marginal cost. The benchmark scenario for the Lerner index is perfect competition, where price equals marginal cost, and thus the Lerner index equals zero. As the output price diverges from the efficiency price, i.e., marginal cost, the Lerner index increases; for a profit-maximizing firm, it reaches its maximum value at the inverse of the price elasticity of demand (in absolute value). Unlike the HHI, the Lerner index directly measures the price distortion that stems from imperfect competition. Moreover, it can be calculated as either a firm-specific or a market-specific measure of market power. The market-specific Lerner index is usually calculated as the market share weighted average of the firm-specific Lerner index values.

Estimation of market power is an important issue for public policy makers and empirical industrial organization economists. The Lerner index provides a simple way to address this issue as long as marginal costs can be calculated. However, the usual assumption that price is at least as great as marginal cost may not hold in certain market situations. Prices may be lower than marginal costs if firms engage in price wars, intentionally lower the price of one product to promote sales of other similar products, practice price discrimination, or offer coupon discounts. Weiher et al. (2002) adopt a novel approach to overcome problems associated with estimating the Lerner index for US airlines, where prices can be lower than marginal costs. Since prices can possibly be zero for customers buying air tickets with frequent flyer miles, Weiher et al. (2002) use \(\left( {\frac{p - MC}{MC}} \right)\) as a measure of market power instead of the usual Lerner index. This formulation allows them to put less weight on below-marginal-cost prices, and averages of these normalized indices lead to more reasonable results in their study of US airlines.
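A minimal sketch of these calculations with hypothetical firm-level data; it computes firm-specific Lerner indices, the market share weighted market-level index, and the markup-over-marginal-cost variant used by Weiher et al. (2002).

```python
import numpy as np

# hypothetical firm-level prices, marginal costs, and quantities
p = np.array([10.0, 10.0, 9.5])
mc = np.array([6.0, 8.0, 9.0])
q = np.array([50.0, 30.0, 20.0])

li_firm = (p - mc) / p                 # firm-specific Lerner indices, Eq. (1)
shares = q / q.sum()
li_market = np.sum(shares * li_firm)   # market share weighted Lerner index

li_markup = (p - mc) / mc              # markup over marginal cost, as in Weiher et al. (2002)
print(li_firm, li_market, li_markup)
```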

Similar to the HHI, the conventional Lerner index assumes profit maximization in a static setting so that marginal revenue equals marginal cost. However, in a market characterized by dynamic factors, the price and production are determined intertemporally. If the current decisions of a firm involve a stock variable such as goodwill or knowledge or a level of quasi-fixed output, then the Lerner index needs to be adjusted to take these factors into account. Similarly, in the presence of an exhaustible or renewable resource, the conventional Lerner index needs to be adjusted. Pindyck (1985) proposes using what he calls full marginal cost (FMC), which is marginal cost plus competitive user cost, rather than marginal cost:

$$LI = \frac{P - FMC}{P} = \frac{P - (MC + \lambda )}{P}$$
(2)

where \(\lambda\) is the competitive user cost of one extra unit of cumulative production evaluated at the monopoly output path. Note that the user cost may depend on the extent of competition and other aspects of the market. Since a market power measure aims to reflect price distortions relative to competitive prices, the competitive user cost should be used as the correction term when calculating the FMC. Moreover, the competitive user cost must be calculated along the monopolist's output path, just as the marginal cost is evaluated at the monopoly output level when calculating the conventional Lerner index. Pindyck's (1985) market power measure ignores how firms interact with each other, and thus this measure is concerned with the measurement of potential market power.

Another case where the Lerner index needs to be interpreted carefully is when the firms have capacity constraints. With capacity constraints, price exceeds marginal costs (i.e., Lerner index is positive), and this indicates a welfare loss relative to perfect competition (without capacity constraint). But, if the capacity constraints are exogenous, then they are not under the control of the firms. Therefore, the deadweight loss should be calculated compared to perfect competition under capacity constraints, which indicates that the Lerner index needs to be adjusted to reflect this interpretation of deadweight loss. Puller (2007) suggests an adjusted Lerner index for markets where the firms have capacity constraints. In particular, he examines the market power of firms in the California electricity market. In this case, the adjusted Lerner index is the same as Eq. (2) except that \(\lambda\) equals the shadow cost of the capacity constraint. Since this shadow cost is not directly observed, it needs to be estimated along with the marginal cost. We will discuss this issue later in this section.

Even after adjusting for dynamic factors or capacity constraints, the Lerner index may not reflect price distortions precisely if a proper notion of marginal cost is not used. More precisely, the standard approaches for calculating the Lerner index implicitly assume that the firms are fully efficient. However, in reality, imperfect competition may lead to managerial inefficiency in both revenue and cost. Managerial inefficiency arises for a given production technology and can be reduced if the firms avoid wasting resources and make optimal decisions in the production process. In practice, a common approach is to estimate a cost function and calculate the marginal cost from the cost function parameter estimates. Using these marginal cost estimates and observed prices, the Lerner index is calculated. However, this does not reflect the inefficiencies of firms in the Lerner index. Note that here we interpret the Lerner index as a measure of welfare loss for given production technologies in the market. Since inefficiency reflects a suboptimal outcome in the production process for given production technologies, calculation of the Lerner index needs to reflect such inefficiencies. In a static setting, Koetter et al. (2012) propose an efficiency adjusted Lerner index to overcome this issue. In a dynamic strategic framework where firms have repeated interactions, Kutlu and Sickles (2012) propose other efficiency adjusted Lerner index measures, but they concentrate only on inefficiency in cost. The Lerner index measure of Kutlu and Sickles (2012) is given by:

$$LI = \frac{P - EFMC}{P} = \frac{P - (EMC + \lambda )}{P}$$
(3)

where \(EFMC = EMC + \lambda\) is the efficient FMC, EMC is the marginal cost for the full efficiency scenario, and \(\lambda\) is a term that adjusts for dynamic factors. Although they use different approaches, both Koetter et al. (2012) and Kutlu and Sickles (2012) calculate EMC from the stochastic cost frontier estimates. In contrast to these studies, Kutlu and Wang (2018) present a game theoretical model that estimates EMC directly.

All of the Lerner index variations mentioned above require marginal cost information, which is not readily available in most cases and needs to be estimated using a cost function model or other means. However, since total cost data contains sensitive information, firms may be reluctant to share it. Even when total cost data is available, it may not be available for the market of interest. For example, Kutlu and Sickles (2012) and Kutlu and Wang (2018) argue that airline-specific total cost for the US airlines is available for the whole industry, but route-specific total cost data is not. This poses some issues when estimating route-specific marginal costs and Lerner indices for the airlines. Moreover, in the case where the firms have capacity constraints, the shadow cost of capacity is not observed either.

The conduct parameter (conjectural variations) method enables the estimation of marginal cost and an alternative market power measure, which is called conduct parameter, without using the total cost data. The conduct parameter is simply a demand elasticity adjusted counterpart of the Lerner index, and similar to the Lerner index, it can either be firm-specific or market-specific. Bresnahan (1989) and Perloff et al. (2007) are two good surveys on conduct parameter models. Some of the earlier examples of this approach include Gollop and Roberts (1979), Iwata (1974), Appelbaum (1982), Porter (1983), and Spiller and Favaro (1984).

The conduct parameter approach measures the market power of firms “as if” the firms have conjectures about other firms’ strategies, so that the equilibrium outcomes need not be those implied by the standard market structures: perfect competition, Nash equilibrium (in quantity or price), and joint profit maximization. For instance, in a Cournot model, the conjecture is that the firms will have zero reaction, i.e., the conjecture is the Nash assumption in determining the equilibrium. Given the actions (in this case, outputs) of other firms, each firm chooses its output optimally. Basically, the conduct parameter approach assumes that firms may act as if they have more general types of reactions. Note that, in the conduct parameter method, the conjectures of firms refer to what firms do as a result of their expectations about other firms’ behaviors, and they do not necessarily reflect what firms believe will happen if they change their actions, e.g., quantities or prices. Based on this interpretation, one can consider the conduct parameter as an index that takes a continuum of values. Since the existing theories (e.g., perfect competition, Cournot competition, and joint profit maximization) are consistent with only a few of these potential values, some researchers may not be comfortable with the idea of a conduct parameter taking a continuum of values. Hence, they categorize the estimated conduct parameter according to the competitive behavior of firms by using statistical tests (e.g., Bresnahan 1987).

Since the conduct parameter approach is based on game theoretical models, researchers may add structure that describes the market under study. In particular, capacity constraints (e.g., Puller 2007; Kutlu and Wang 2018), dynamic factors (e.g., Corts 1999; Puller 2009; Kutlu and Sickles 2012), managerial inefficiency (e.g., Koetter et al. 2012; Kutlu and Sickles 2012; Kutlu and Wang 2018), multi-output production (e.g., Berg and Kim 1998; O’Donnell et al. 2007; Kutlu and Wang 2018), price discrimination (e.g., Graddy 1995; Kutlu 2017; Kutlu and Sickles 2017), and other characteristics of the market and firms can be incorporated into the game theoretical model that describes the imperfect competition. In the literature, most conduct parameter models assume imperfectly competitive behavior by firms on only one side of the market, e.g., the output market, while the other side, e.g., the input market, is assumed to be perfectly competitive. Hence, in general, these models only consider price distortions in the output market but not in the input market. O’Donnell et al. (2007) present a general conduct parameter model that allows imperfect competition in both input and output markets.

Although the conduct parameter method relaxes the cost data requirement and allows more structural modeling, this does not come without a cost. In order to estimate a conduct parameter model, one needs to estimate a system of equations consisting of a demand function and a supply relation derived from the first-order conditions of the structural game that the firms are playing. Hence, the required variables are the same as one would need for estimating a demand-supply system, with the exception that one needs to be more careful about identification. More precisely, if the researcher is not careful about the functional form choices, the marginal cost and the conduct parameter may not be separately identified. For example, it may be possible to confuse competitive markets with high marginal cost and collusive markets with low marginal cost. Lau (1982) and Bresnahan (1982) present conditions for identification in this setting. As argued by Bresnahan (1982), this identification problem can be solved by using general demand functions in which the exogenous variables not only lead to parallel shifts but also rotate the demand curve, i.e., change its slope. The simplest way to achieve this is to include an interaction term with the quantity variable. However, Perloff and Shen (2012) illustrate that such rotations may cause multicollinearity issues. Another approach that enables identification is assuming a constant marginal cost, which does not depend on quantity but may depend on other variables. For certain commonly used conduct parameter settings (Lau 1982), the conduct parameter and marginal cost can be separately identified if the inverse demand function P(Q,Z), where Q is the quantity and Z is a vector of exogenous variables, is not separable in Z, i.e., it cannot be written as P(Q,Z) = f(Q,h(Z)) for some functions f and h. An alternative possibility is using the non-parametric structural identification approach in Brown (1983), Roehrig (1988), and Brown and Matzkin (1998). Another approach is modeling the conduct as a random variable and achieving identification through distributional assumptions. Orea and Steinbuks (2012) and Karakaplan and Kutlu (2019) achieve identification using such distributional assumptions and econometric tools from the stochastic frontier literature. They use the skewness of the distribution of the conduct parameter in order to identify the marginal cost and the conduct parameter separately. This allows them to relax some of the strong functional form restrictions on the demand and marginal cost functions. Kumbhakar et al. (2012) propose another approach that estimates the market power of firms using stochastic frontier methods.
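To make the demand-rotation argument concrete, here is a minimal simulation sketch under assumed functional forms (linear demand with a Q-by-Z interaction and a constant marginal cost). The parameter names and values are illustrative, and the data are generated without structural errors so that OLS recovers the parameters exactly; with structural errors, quantity is endogenous and instruments (cost shifters for the demand equation, demand shifters for the supply relation) would be required.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# assumed "true" parameters (illustrative)
a0, a1, a2, a3 = 20.0, 1.0, 2.0, 0.3   # inverse demand: P = a0 - (a1 - a3*Z)*Q + a2*Z
c0, c1 = 2.0, 1.5                      # constant marginal cost: MC = c0 + c1*W
theta = 0.5                            # conduct parameter (0: competitive pricing, 1: monopoly/cartel)

Z = rng.uniform(0.5, 2.0, n)           # exogenous demand rotator
W = rng.uniform(0.5, 2.0, n)           # exogenous cost shifter
slope = a1 - a3 * Z                    # -dP/dQ varies with Z because of the rotation
MC = c0 + c1 * W

# equilibrium of demand and supply relation P = MC + theta*slope*Q
Q = (a0 + a2 * Z - MC) / ((1.0 + theta) * slope)
P = a0 - slope * Q + a2 * Z

# Step 1: recover demand parameters (noiseless data, so OLS is exact here)
Xd = sm.add_constant(np.column_stack([Q, Z, Q * Z]))
bd = sm.OLS(P, Xd).fit().params        # [a0, -a1, a2, a3]
a1_hat, a3_hat = -bd[1], bd[3]

# Step 2: supply relation P = c0 + c1*W + theta*(a1 - a3*Z)*Q;
# the rotation makes the markup term vary independently of Q, separating theta from MC
markup_term = (a1_hat - a3_hat * Z) * Q
bs = sm.OLS(P, sm.add_constant(np.column_stack([W, markup_term]))).fit().params
print("conduct parameter:", bs[2])               # ~0.5
print("marginal cost parameters:", bs[0], bs[1]) # ~2.0, 1.5
```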

Conventional models for assessing market power treat either price or quantity as the only endogenous choice variable. In reality, the degree of market power in the product market is likely to be related to input markets such as R&D, advertising, finance, labor, capacity, and so on. A few recent studies investigate the influence of input markets on market power at the product market level. For example, Röller and Sickles (2000) examine whether the degree of market power in the product market is sensitive to capacity. They specify and estimate a two-step structural model in which firms make capacity decisions first and then set product-differentiated prices. In this framework, costs are endogenized through the first stage, which has important implications for the measurement of market power in the product market. In particular, Röller and Sickles (2000) specify a product-differentiated, price-setting game under the duopoly assumption, where each producer faces a demand of the form:

$$q_{i} \left( {p_{i} ,p_{j} ,Z_{i} } \right), i = 1, \ldots ,N ,$$
(4)

where N is the number of producers, \(q_{i}\) is the quantity demanded, \(p_{i}\) is a price index for producer i, \(p_{j}\) is a price index for competitor’s prices, and \(Z_{i}\) is a vector of producer specific, exogenous factors affecting demand. While producers can affect costs only through changes in prices in the short-run, they can change the capital stock in the long run, thereby changing the long-run cost structure. Adopting a conjectural-variation framework, the first-order conditions of the two-stage profit maximization game in which producers purchase capital in stage 1 and decide prices in stage 2 can be written as:

$$\frac{{p_{i} - MC(.)}}{{p_{i} }} = \frac{1}{{\eta_{ii} - \theta \frac{{p_{i} }}{{p_{j} }}\eta_{ij} }}$$
(5)

where \(\eta_{ii}\) is the own-price elasticity, \(\eta_{ij}\) is the cross-price elasticity, MC(.) is the marginal cost based on the short-run cost structure, and the market conduct parameter \(\theta \equiv \partial p_{j} /\partial p_{i}\) represents the degree of coordination in a price-setting game. Based on this framework and the profit-maximizing behavior of firms, Röller and Sickles (2000) discuss estimation of the model and specification tests regarding the relevance of the sequential set-up for measuring market power, and apply their method to analyze the European airline industry.
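As a quick numerical reading of Eq. (5), the fragment below evaluates the implied price-cost margin for a few values of the coordination parameter; the elasticity values and price ratio are hypothetical, and the own-price elasticity is taken in absolute value.

```python
# implied Lerner index from Eq. (5), with the own-price elasticity in absolute value
def implied_lerner(eta_ii, eta_ij, theta, price_ratio=1.0):
    return 1.0 / (eta_ii - theta * price_ratio * eta_ij)

for theta in (0.0, 0.5, 1.0):   # 0: no coordination, 1: full coordination
    print(theta, implied_lerner(eta_ii=2.5, eta_ij=1.2, theta=theta))
```

As expected, the implied margin rises with the degree of coordination in the price-setting game.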

There are some theoretical examples (e.g., Rosenthal 1980; Stiglitz 1989; Bulow and Klemperer 1999) suggesting that more intense competition may lead to higher price-cost margins. Boone (2008a, b) proposes market power measures that are theoretically robust yet can be estimated using data sets similar to the ones used in estimating price-cost margins. In particular, Boone (2008a) proposes the relative profit differences (RPD) measure and Boone (2008b) proposes the relative profits (RP) measure. The RPD measure is defined as follows. Let \(\pi (n)\) denote the profit level of a firm with efficiency level \(n\), where a higher \(n\) value means higher efficiency. For three firms with efficiency levels \(n^{{\prime \prime }} > n^{{\prime }} > n\), let:

$$RPD = \left( {\pi \left( {n^{{\prime \prime }} } \right) - \pi \left( n \right)} \right)/\left( {\pi \left( {n^{{\prime }} } \right) - \pi \left( n \right)} \right)$$
(6)

be a variable representing RPD. Boone (2008a) argues that in models where a higher competition reallocates output from less efficient firms to more efficient firms, RPD increases in the extent of competition. Therefore, this measure covers a broad range of models. The relative profits measure is defined as follows. For two firms with efficiency levels \(n^{{\prime }} > n\), let:

$$RP = \pi \left( {n^{{\prime }} } \right)/\pi \left( n \right)$$
(7)

be a variable representing relative profits. This measure is also a robust measure of market power.
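A trivial numerical illustration of Eqs. (6) and (7), using hypothetical profit levels for three firms ordered by efficiency:

```python
# hypothetical profits of three firms with efficiency levels n'' > n' > n
pi_n, pi_nprime, pi_npp = 2.0, 3.0, 5.0

rpd = (pi_npp - pi_n) / (pi_nprime - pi_n)   # relative profit differences, Eq. (6)
rp = pi_nprime / pi_n                        # relative profits, Eq. (7)
print(rpd, rp)
```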

2.2 Welfare Analysis

Having discussed the market power aspect of pricing inputs and outputs, it is imperative that we look into the welfare effects of such pricing. The conventional argument against market power revolves around the fact that, by charging a price higher than the marginal cost, a firm is able to capture a larger surplus, leaving consumers with a lower surplus compared with the competitive market outcome. However, the gain in producer surplus is generally not big enough to compensate for the loss in consumer surplus, unless the producer employs perfect price discrimination. Thus, in the presence of market power, the market outcome is likely to be inefficient in terms of total surplus maximization and the society will experience a welfare loss. The degree of welfare loss depends not only on the market power, i.e., the extent to which a firm is able to raise price above marginal cost, but also on the elasticity of demand and the size of the market.

Inefficiency of a non-competitive market is rooted in the inequality between price and the marginal cost of production (after factoring out, in a suitable way, the restrictions that firms face). As mentioned in the previous section, one must consider an adjusted benchmark while identifying the inefficiency of a non-competitive market in the presence of exogenous constraints. Otherwise, one may end up with an upward bias in the measured inefficiency. In the absence of any exogenous constraints, however, the difference between price and marginal cost indicates the divergence between the marginal benefit to consumers and the marginal cost to producers. For a given technology (and cost) of production, such divergence leads to inefficient allocation of resources and a static welfare loss for the society. The social cost of misallocation due to the presence of extreme market power such as monopoly can be approximated by the well-known welfare triangle showing the difference between the gain in producer surplus and the loss in consumer surplus when price is higher than marginal cost. Prominent empirical research in this regard includes Harberger (1954) and Rhoades (1982). Using differences among profit rates in the US manufacturing industries, Harberger (1954) measures the possible increase in social welfare from eliminating monopolistic resource allocation. Rhoades (1982) studies the US banking sector and calculates the deadweight loss due to monopoly in the US banking system. However, Formby and Layson (1982) urge caution when analyzing the relationship between market power, as measured by the Lerner index or profit rates, and allocative inefficiency. They find that, for both linear and constant price elasticity demand functions, changes in monopoly power as measured by the Lerner index or profit rates are not adequate to predict changes in allocative inefficiency.

The lack of competitiveness in a market is also likely to be associated with lower productive efficiency through wasted resources and managerial effort, which in turn may have crucial welfare implications. The study by Good et al. (1993) is worth noting in this regard. They discuss welfare implications for the US and European airlines by measuring changes in productive efficiency and market power due to liberalization. Focusing on relative efficiency scores defined by a stochastic production frontier for selected US carriers over the period 1976–1986, they find clear evidence of convergence toward a common efficiency standard under deregulation for US carriers. However, European carriers, which did not enjoy deregulation to a similar extent, suffered from low efficiency and associated costs during the period. To identify the potential welfare gain from deregulation for European airlines, Good et al. (1993) estimate the existing market power and compare it with the simulated effects of increased competition due to deregulation in a product-differentiated, price-setting game framework.

The argument in favor of privatization also stems from the fact that it is likely to increase the operating efficiency and performance of economic units, thereby improving economic welfare. Several countries have implemented privatization in different sectors of the economy over time to improve economic performance. Studying the economic impacts of privatizing an electricity company in Sub-Saharan Africa, Plane (1999) finds substantial evidence of improved productive efficiency, total factor productivity gains, and a reduction in the relative price of electricity, as a result of which consumers are the main beneficiaries of privatization.

The presence of market power may also impose cost inefficiency on production systems. Possible reasons for such inefficiency include lack of managerial effort in cost minimization, pursuit of objectives other than profit maximization, and utilization of resources for unproductive purposes such as maintaining and gaining market power. Hicks (1935) identifies the lack of managerial effort in maximizing operating efficiency in the presence of extreme market power, such as monopoly, as the “quiet life” effect of market power. Empirical evidence suggests that the cost of inefficiency due to slack management may exceed the social loss from mispricing. Studying US commercial banks, Berger and Hannan (1998) find strong evidence of poor cost efficiency of banks in more concentrated markets. They also point out that the efficiency cost of market concentration for US banks may outweigh the loss in social welfare arising from mispricing. On the contrary, in the specific case of the European banking sector, Maudos and Guevara (2007) find the welfare gains associated with a reduction of market power to be greater than the loss of cost efficiency, rejecting the “quiet life” hypothesis.

Finally, it is worth noting that while the presence of market power may be associated with inefficiency and welfare loss, a producer with market power may also provide better-quality products and spread information through advertising, which in turn may contribute to gains in the economic well-being of consumers. A further difficulty in assessing the welfare consequences of market power arises from the fact that, in more globalized economies with segmented markets and differentiated products, it is not straightforward to precisely define a market.

3 Externalities and Non-market Valuation

Besides imperfect competition, externalities are another commonly encountered reason why market prices diverge from efficiency prices. In a market, an externality is present when the production or consumption of a good has an indirect effect on a utility function, a production function, or a consumption set. Here, indirect refers to any effect created by an economic agent that affects another agent and is not transmitted through prices. On many occasions, the indirect effect is due to a produced output or used input that may not have a market value. For example, if a production process involves carbon dioxide (CO2) emissions that contribute to climate change, this causes a negative effect on society. However, no individual producer would try to reduce CO2 levels unless some cost is imposed on emission levels or another mechanism restricts emissions. Another example is the production of public goods and services that provide benefits to the society, i.e., a positive externality. Hence, the externality distorts market prices away from efficiency prices unless it is somehow internalized. One potential way to internalize the externality is to create markets for non-market inputs and outputs, which requires determining their values. The literature on valuation of non-market goods is vast; hence, we only provide a broad summary of it. Additional summaries of the literature on both theory and methods are given by Freeman (1979, 2003). Broadly speaking, two types of general approaches are used in valuing goods in the presence of externalities: approaches based on technical relationships and behavioral (linkage) approaches, which rely on responses or observed behaviors. For the technical relationship approaches, we consider the damage function approach and distance function related approaches. For the behavioral approaches, we consider the travel cost, hedonic pricing, and contingent valuation approaches. While this list is not exhaustive, it covers some of the most widely used approaches in the literature.

3.1 Damage Function Approach

A procedure that belongs to the first group is the expected damage function approach. This method assumes a functional relationship between the good (bad) and the expected social damage from decreasing (increasing) the amount of the good (bad). The approach is commonly used in risk analysis and health economics. Rose (1990) (airline safety performance), Michener and Tighe (1992) (highway fatalities), Olson (2004) (drug safety), and Winkelmann (2003) (incidence of diseases and accident rates) exemplify studies that use this approach in the risk analysis context. In general, the expected damage function approach can be used to measure the value of a good or service (bad) that decreases (increases) the probability and severity of some negative economic effect, through the implied reduction in expected damage. In an early application of this approach in the context of non-market valuation, Farber (1987) estimates the value of gulf coast wetlands from their role in protecting property from the wind damage caused by hurricanes. The wetlands are non-market inputs, and thus we cannot observe their value directly. The methodology estimates a hurricane damage function in which the extent of wetlands is a variable that determines the damage. He then calculates the expected marginal damage from winds due to the loss of wetlands using historic hurricane probabilities. Another example that uses the damage function approach is Barbier (2007), who measures the effects of mangrove forests on tsunami damages in Thailand. While these applications directly model the damage function, the starting point of this approach is the compensating surplus approach used for valuing a quantity or quality change in non-market goods or services. In this setting, the expected damage due to a change in the amount of a non-market good or service is the integral of the marginal willingness to pay for services that protect from the damage (e.g., avoided storm damage). The approach is useful on many occasions, but it concentrates on only one aspect of incremental benefits at a time. Hence, obtaining a full valuation of a non-market good/bad or service would be difficult, as this requires considering all aspects.
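The following is a minimal sketch of the expected damage logic: the marginal value of the protective asset is the reduction in probability-weighted damage from one more unit of it. The damage function, storm probabilities, and wetland acreage below are purely hypothetical.

```python
import numpy as np

# hypothetical damage function: property damage rises with wind speed and
# falls with the size of the wetland buffer
def damage(wind_speed, wetland_acres):
    return 1e6 * (wind_speed / 100.0) ** 2 * np.exp(-0.002 * wetland_acres)

wind_speeds = np.array([80.0, 100.0, 130.0])   # storm categories (hypothetical)
probs = np.array([0.05, 0.02, 0.005])          # annual probabilities of each storm

def expected_damage(acres):
    return np.sum(probs * damage(wind_speeds, acres))

acres = 500.0
# expected damage avoided per additional acre of wetland, i.e., its marginal value
marginal_value = -(expected_damage(acres + 1.0) - expected_damage(acres))
print(marginal_value)
```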

3.2 Distance Function Approach

Another non-market valuation approach based on technical relationships is the distance function approach. When data on inputs and outputs are available, this enables us to construct a production model through a distance function. The properties of distance functions enable us to calculate shadow prices for the inputs or outputs of production, which can be used to assign values to non-market goods or services. Färe and Grosskopf (1990) represent the technology using input distance functions. They use the duality between the input distance function and the cost function to calculate cost normalized shadow prices. Färe et al. (1993) model the technology using an output distance function, which can accommodate multiple outputs and allows weak disposability of undesirable outputs. They obtain normalized shadow prices by applying a dual Shephard's lemma and convert these to absolute shadow prices by assuming that the shadow price of one marketable output equals its market price. Another related approach to calculating shadow values uses the directional distance functions developed by Chambers et al. (1996, 1998). Chung et al. (1997) is the first example that models goods and bads using directional distance functions. Among others, Lee et al. (2002), Färe et al. (2005), Färe et al. (2006), and Cross et al. (2013) are examples that use directional distance function approaches. In contrast to Shephard's (1953, 1970) distance functions, which are defined in terms of radial expansions to the frontier, the directional distance functions are defined in terms of expansions along a specified direction vector. The radial distance functions are special cases of the directional distance functions. The directional distance function approach allows non-proportional changes in outputs (and inputs). Moreover, this approach allows a mixture of expansions and contractions for outputs. That is, while some outputs may be expanded, others can be contracted. Although the choice of direction is left to the researcher, a common choice is the unit vector with negative signs for bads. The trade-off between the good and the bad outputs is not meaningful unless technical efficiency is removed by projecting onto the frontier. The issue is that such projections are not unique, as there are competing projection methods and we need to choose one of them. Moreover, the change in a bad and a good output as we move from one point on the frontier to another depends on the direction and the size of the change. Hence, for an inefficient point, a directional projection may be a more sensible choice as it lies between the bad-oriented and the good-oriented projections. However, this flexibility in the choice of direction vector raises some concerns. In particular, the estimates may be sensitive to the direction choice, as illustrated by Vardanyan and Noh (2006). Moreover, unlike the directional distance functions, the conventional radial distance functions allow unit-free multiplicative changes in their arguments. Therefore, neither approach is a decisive winner, and the choice depends on the particular problem that the researcher wants to answer. Finally, a general concern about distance functions is that modeling goods and by-products within the same technology may not be sensible. Fernández et al. (2002), Førsund (2009), and Murty et al. (2012) raise this concern and suggest separating the technology for goods from that for by-product bads. For this purpose, Fernández et al. (2002) assume that the two technologies are separable and Murty et al. (2012) use distinct technologies. Acknowledging these issues, Bokusheva and Kumbhakar (2014) present an approach that models the technology with two functions. They use a single technology specification but allow good and bad outputs to be related via a hedonic function. They provide the shadow price of the bad (pollutant) under the assumption that the shadow price of the marketed output equals its market price. Another paper that utilizes hedonic functions in this context is Malikov et al. (2016), which models undesirable outputs via a hedonic output index. This ensures that pollutants are treated as outputs with an undesirable nature, as opposed to inputs or frontier shifters. For this purpose, Malikov et al. (2016) use a radial input distance function generalized to allow an unobservable hedonic output index of desirable and undesirable outputs.
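To illustrate the mechanics, here is a small sketch of recovering an absolute shadow price for a bad output from an already estimated output distance function, in the spirit of the derivative ratio used by Färe et al. (1993) combined with the assumption that one marketed output is priced at its market price. The functional form, coefficients, and data point are purely illustrative stand-ins for estimated values.

```python
import numpy as np

# illustrative "estimated" output distance function D_o(x, y, b):
# increasing in the good output y, decreasing in the bad output b
def D_o(x, y, b, alpha=0.6, gamma=0.25, beta=0.8):
    return (y ** alpha) * (b ** (-gamma)) * (x ** (-beta))

def partial(f, args, i, h=1e-6):
    args_up = list(args)
    args_up[i] += h
    return (f(*args_up) - f(*args)) / h

point = (10.0, 5.0, 2.0)      # observed (input, good output, bad output)
p_good = 40.0                 # market price of the good output (assumed)

dD_dy = partial(D_o, point, 1)
dD_db = partial(D_o, point, 2)

# absolute shadow price of the bad: market price of the good times the derivative ratio
p_bad = p_good * dD_db / dD_dy   # negative, reflecting the cost of the undesirable output
print(p_bad)
```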

Finally, we conclude our discussion of the distance function approach with some application examples from a variety of contexts. Färe et al. (1993) (effluents from paper and pulp mills), Coggins and Swinton (1996), Swinton (1998), and Färe et al. (2005) (SO2 emissions), Hetemäki (1996) (sulfate pulp plants), and Aiken and Pasurka (2003) (SO2 and PM-10 emissions) exemplify studies that concentrate on undesirable outputs. Other examples of shadow price estimates include Färe et al. (2001) (characteristics of sites), Aiken (2006) (recycling activity), and Cross et al. (2013) (vineyard acres by quality).

3.3 Travel Cost Approach

The travel cost approach was developed by Trice and Wood (1958) and Clawson (1959); Parsons (2017) provides a good review. This approach belongs to the group of behavioral approaches, which are based on revealed preferences. In the environmental context, the method relies on the complementarity between the quality of a natural resource and its recreational use value (e.g., visiting a national forest or fishing at a lake). The idea is that as the quality of a natural resource (e.g., water quality) changes, the demand for the natural resource shifts. The change in consumer surplus can be used to determine the value associated with the incremental benefit. Hence, individuals' willingness to pay for the recreational activity is revealed by the number of trips that they make and where they choose to visit among the potential options. Two subcategories of travel cost models are single-site models and random utility maximization models. The single-site models treat the travel cost as the price and work like a demand function in which the total number of trips is the quantity demanded. The random utility maximization models, on the other hand, assume that individuals face multiple choices and maximize a random utility over these choices. In the random utility model, the sites are characterized by their attributes and by the travel cost of reaching them. By choosing sites, the individuals reveal their preferences. Prior to the random utility travel cost models, multiple-site models were introduced in a demand system (Burt and Brewer 1971; Cicchetti et al. 1976). The random utility models became popular in the 1980s and 1990s, starting with the works of Bockstael et al. (1984, 1987) on beach use and Carson et al. (1987) on recreational fishing. Parsons and Kealy (1992) and Feather (1994) (choice set formation), Adamowicz (1994) (intertemporal decisions), Train (1998) (simulated probability and mixed logit), and Hauber and Parsons (2000) (nested logit) exemplify some earlier works and developments from this period. Meanwhile, the single-site models concentrated on relaxing other aspects of the problem, such as the continuity assumption on the number of trips. This is achieved by using limited dependent variable and count data models (e.g., Shaw 1988; Hellerstein 1991, 1992; Hellerstein and Mendelsohn 1993). More recently, an instrumental variables approach to handle endogeneity in congestion (Timmins and Murdock 2007) and models for handling on-site sampling have been introduced in the random utility framework.

In a standard single-site model, the demand function is represented as:

$$q_{i} = f\left( {p_{i} ,ps_{i} ,z_{i} ,y_{i} } \right)$$
(8)

where \(q_{i}\) is the number of trips, \(p_{i}\) is the trip cost or price, \(ps_{i}\) is a vector of trip costs or prices for substitute sites, \(z_{i}\) is a vector of individual characteristics, and \(y_{i}\) is the income of individual \(i\). A common choice for the demand function is the log-linear form. Using this demand function, the difference in consumer surplus with and without the quality change can be used as a measure of the value of the quality improvement.
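A minimal single-site sketch with simulated data, assuming a Poisson (semilog) trip demand; under this functional form, the consumer surplus per trip is one over the absolute value of the travel cost coefficient, and the seasonal access value per person is the mean number of trips divided by that absolute value. All variables and coefficient values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
cost = rng.uniform(5, 60, n)             # round-trip travel cost (hypothetical)
income = rng.normal(50, 10, n)           # income in thousands (hypothetical)
mu = np.exp(1.2 - 0.04 * cost + 0.01 * income)
trips = rng.poisson(mu)                  # observed trip counts

X = sm.add_constant(np.column_stack([cost, income]))
fit = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
beta_cost = fit.params[1]

cs_per_trip = 1.0 / (-beta_cost)              # consumer surplus per trip
cs_per_person = trips.mean() / (-beta_cost)   # seasonal access value per person
print(cs_per_trip, cs_per_person)
```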

The random utility models provide a better behavioral explanation than the single-site models, at the expense of being somewhat more complicated. The individuals are assumed to choose among a set of possible sites (e.g., beaches, camping areas, parks, rivers, etc.) for a trip. In its simplest form, the utility from visiting site \(k\) is assumed to be a function of the trip cost, \(p_{ki}\), and a vector of site attributes (quality), \(X_{k}\):

$$U_{ki} = \alpha p_{ki} + \beta X_{k} + \varepsilon_{ki}$$
(9)

where \(\alpha\) and \(\beta\) are parameters and \(\varepsilon_{ki}\) is an error term. The individual picks the site that gives the highest utility:

$$V_{i} = { \hbox{max} }(U_{1i} ,U_{2i} , \ldots ,U_{Ki} )$$
(10)

where \(U_{ki}\) is the utility from site k and \(V_{i}\) is the trip utility of individual \(i\) from visiting their top preference. If the quality level (e.g., cleaner water) of a site, say site 1, changes so that the new trip utility becomes \(V_{i}^{*} = { \hbox{max} }(U_{1i}^{*} ,U_{2i} , \ldots ,U_{Ki} )\), the compensating variation measure for the trip is given by:

$$w_{i} = \frac{{\left( {V_{i}^{*} - V_{i} } \right)}}{ - \alpha }.$$
(11)
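In the familiar conditional logit case, the expected value of this compensating variation over the error terms reduces to the standard log-sum formula, which the short sketch below evaluates for a hypothetical quality improvement at one site; all parameters and site attributes are illustrative.

```python
import numpy as np

# expected per-trip welfare change in a conditional logit site-choice model (log-sum formula)
alpha = -0.05                               # trip-cost coefficient (negative)
beta = 1.2                                  # coefficient on the quality attribute

cost = np.array([10.0, 25.0, 40.0])         # travel costs to sites 1..3 (hypothetical)
quality = np.array([2.0, 3.0, 1.5])         # baseline quality at each site (hypothetical)

V0 = alpha * cost + beta * quality          # baseline deterministic utilities
V1 = V0.copy()
V1[0] += beta * 0.5                         # quality improvement at site 1

logsum0 = np.log(np.exp(V0).sum())
logsum1 = np.log(np.exp(V1).sum())
cv_per_trip = (logsum1 - logsum0) / (-alpha)   # expected compensating variation per choice occasion
print(cv_per_trip)
```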

3.4 Hedonic Pricing Approach

The hedonic pricing method is another approach that belongs to the group of behavioral approaches based on revealed preferences. In this approach, goods are characterized by their attributes or characteristics. Market transactions do not directly reveal the value of each characteristic, and the method aims to derive the values attached to these different characteristics indirectly. Quigley (1982), Freeman (1995), Bockstael and McConnell (2007), Phaneuf and Requate (2016), and Taylor (2017) are some reviews of hedonic pricing. Applications of hedonic methods in a variety of markets include Griliches (1961) (automobile industry), Ridker and Henning (1967) and Boyle et al. (1999) (housing markets), Triplett (1984) (computers), Triplett (2004) (information technology products), Primont and Kokoski (1990) (medical field), Schwartz and Scafidi (2000) (university education), and Good et al. (2008) (airline industry). The hedonic price method goes back at least to Waugh (1928), but the utility-theoretic connection between consumer preferences and the equilibrium price for non-market valuation is provided by Rosen (1974).

The hedonic analysis has two stages. The first stage involves estimation of the hedonic price function. The second stage uses the first stage price estimates and combines them with the individual characteristics to estimate demand or utility function parameters. However, due to data availability limitations, the second stage is not always implemented. We will concentrate on the first stage. A detailed discussion on the second stage is given by Taylor (2017).

In a standard hedonic price analysis, the first stage involves regressing the price on the characteristics variables. Although there is no general rule for the functional form choice, using a linear model requires some compelling reason, as the price and quality variables are likely to have a non-linear relationship. Cropper et al. (1988) provide evidence in support of relatively simple forms such as the semilog functional form, whereas Kuminoff et al. (2010) find evidence supporting more flexible functional forms. Another concern in price function estimation is the identification of the model parameters. In particular, if the price variable is simultaneously determined with a characteristic variable or a relevant variable is omitted, the parameter estimates are inconsistent. The simultaneity problem can be handled by an instrumental variables approach (Irwin and Bockstael 2001). A particular omitted variable problem in the housing market context is omitting a relevant spatial lag variable, which can be addressed by using spatial hedonic price models. Anselin and Lozano-Gracia (2009) and Brady and Irwin (2011) provide extensive reviews of spatial hedonic price models.
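A minimal first-stage sketch with simulated housing data, assuming a semilog specification; in that case the implicit marginal price of an attribute is its coefficient multiplied by the price level. The variables and coefficient values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
sqft = rng.uniform(800, 3000, n)                # house size (hypothetical)
air_quality = rng.uniform(0, 10, n)             # a local air quality index (hypothetical)
log_price = 10.0 + 0.0004 * sqft + 0.03 * air_quality + rng.normal(0, 0.1, n)

X = sm.add_constant(np.column_stack([sqft, air_quality]))
fit = sm.OLS(log_price, X).fit()

# in a semilog hedonic, the implicit marginal price of an attribute is its
# coefficient times the price level; here evaluated at the mean predicted price
mean_price = np.exp(fit.fittedvalues).mean()
implicit_price_air = fit.params[2] * mean_price
print(implicit_price_air)
```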

3.5 Contingent Valuation Approach

The contingent valuation approach is the final behavioral method that we consider, and it is based on stated preferences. This approach estimates the price of a good or service through a contingent valuation question that carefully describes a hypothetical market. The contingent valuation method is useful when market prices are unreliable or unavailable. Mitchell and Carson (1989) is an early book that provides a detailed discussion of designing a contingent valuation study, and Boyle (2017) is a good recent review for practical applications of the method. Although the approach has been widely critiqued, it is used in practice, for example in some legal cases. Kling et al. (2012) argue that having some numbers is likely to be better than having no number, while Hausman (2012) focuses on the issues of hypothetical bias and the discrepancy between willingness to pay and willingness to accept. Therefore, the debate is still not conclusive.

Boyle (2017) identifies the steps in conducting a contingent valuation study as follows: (1) Identifying the change in quantity or quality to be evaluated; (2) identifying whose values to be estimated; (3) selecting data collection mode; (4) deciding about the sample size; (5) designing the information component of the survey instrument; (6) designing the contingent valuation question; (7) designing auxiliary questions; (8) pretesting and implementing survey; (9) analyzing data; and (10) reporting the results.

First, the researcher has to decide not only what needs to be measured but also whether there are risks involved. For example, when there is some uncertainty about the contamination of a water source, the valuation would concentrate on the willingness to pay for a reduction in the probability of contamination. The choice of whether the study is based on individuals or households is also important. Quiggin (1998) argues that if intra-household altruism does not exist or is paternalistic, the aggregate measure of welfare is the same, whereas Munro (2005) argues that this holds when household incomes are pooled. Bateman and Munro (2009) and Lindhjem and Navrud (2011) illustrate that the values for individuals and households differ. Traditionally, the most widely used survey mode has been mail, but Internet surveys have recently become popular due to their cost and convenience advantages. However, response rates for Internet surveys are relatively low compared with other modes. Cost and response rates are not the only concerns when choosing a survey mode: Boyle et al. (2016) find that Internet-based surveys give 8% lower estimates of willingness to pay compared with other survey modes. An important aspect of these surveys is the description of what is being valued. Bergstrom et al. (1990), Poe and Bishop (1999), and MacMillan et al. (2006) exemplify studies that illustrate the sensitivity of the results to the information provided. Another important aspect is the payment mechanism. The response formats for contingent valuation questions include open-ended (direct statement of willingness to pay) (Hammack and Brown 1974), iterative bidding (the bid increases if the respondent says yes and decreases for a no) (Randall et al. 1974), payment card (choosing among possible willingness to pay options) (Mitchell and Carson 1989), and dichotomous choice (yes or no to a specified willingness to pay amount) (Bishop and Heberlein 1979) questions. Among these, dichotomous choice questions are the most commonly used. Carson and Groves (2007) and Carson et al. (2014) present conceptual arguments for the desirable properties of this type of question.
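A minimal sketch of how dichotomous choice responses are commonly analyzed: with a linear utility-difference logit model, mean (and median) willingness to pay is the negative ratio of the intercept to the bid coefficient. The bid design, preference parameters, and sample size below are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 600
bids = rng.choice([5, 10, 20, 40, 80], n)           # hypothetical bid design
wtp = rng.logistic(loc=30, scale=10, size=n)        # latent willingness to pay
yes = (wtp > bids).astype(int)                      # stated yes/no responses

X = sm.add_constant(bids.astype(float))
fit = sm.Logit(yes, X).fit(disp=0)
alpha, beta = fit.params

mean_wtp = -alpha / beta                             # ~30 under the linear model
print(mean_wtp)
```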

4 Macroeconomic Valuation of Projects: LM Methodology

As mentioned in the introduction, even in the case of perfect competition, prices may be distorted if the income distribution is not optimal. The early cost-benefit analysis literature aimed to assess projects not only from an allocative efficiency viewpoint but also on the basis of their impact on the growth and redistribution of income. Both optimal growth and optimal income distribution are important factors that need to be considered when evaluating the value of projects, as suboptimal growth or income distribution leads to welfare loss. Hence, Little and Mirrlees (1969, 1974) (LM) and UNIDO (UN Industrial Development Organization) (1972) developed approaches that aim to address this objective. The approach of LM was subsequently extended by Squire and van der Tak (1975) and the UK Overseas Development Administration (1988). Combining allocative efficiency, growth, and redistribution requires a common unit of account in which these effects may be aggregated into a single measure. The LM approach uses the world price as the numeraire. This method converts domestic prices to world prices by using conversion factors. Note that this does not claim that the world prices are undistorted and reflect perfectly competitive prices. Rather, the world prices are used because they represent the terms on which the economy can participate in world trade and they reflect comparative advantages. On the other hand, UNIDO (1972) uses the domestic price numeraire and converts border prices using the shadow exchange rate. The approaches of LM and UNIDO are similar in spirit, but the LM approach is the more widely adopted methodology for shadow price estimation. Therefore, in this section, we concentrate on the LM approach. Further details can be found in the cited studies as well as in Chowdhury and Kirkpatrick (1994) and Asian Development Bank (2013).

The valuation of public projects requires prices for traded and non-traded goods. The LM approach values traded goods at world prices, which reflect the opportunity costs to the country evaluating the project and hence the net benefit of a traded good. The non-traded goods are not traded internationally, whether because of trade restrictions such as an export ban or for other reasons. Since the traded goods are valued at world prices, the non-traded goods should be valued comparably. This is achieved by first estimating the marginal cost of production and then converting the input costs to world prices. The conversion involves decomposing the inputs into traded inputs and non-traded inputs such as labor and land. Then, the non-traded land and labor prices are converted into world prices. This conversion process involves determining the traded goods that they substitute for in domestic production. The world prices of these goods can then be used to derive shadow prices for the non-tradable goods.

As mentioned earlier, shadow prices for traded goods are based on world prices. In particular, cif prices are used for imports and fob (free on board) prices for exports. The prices can be expressed either in foreign exchange terms or in domestic currency values. The world prices need to be adjusted for the costs of internal transportation and distribution. Since the world price is intrinsically an abstract concept, it must be estimated. One challenging issue is that goods are rarely homogenous. Moreover, the goods may be subject to different price discrimination practices, e.g., different unit prices for different amounts. Hence, it is impossible to avoid the researcher's judgment when calculating the world price estimates.

A common way to calculate a shadow price for a non-tradable good is to use a conversion factor, which is the ratio of the shadow price of the good to its market price. The shadow value of the relevant non-tradable good is then calculated by multiplying the market price by the relevant conversion factor. Whenever the researcher does not have enough information about the non-tradable good, or the amount of the non-tradable good is small, the so-called standard conversion factor is used.

The development of the semi-input-output method facilitated consistent estimation of economy-wide conversion factors. After identifying a set of primary factor inputs, the primary inputs are given (exogenously or endogenously determined) values. Then, the economic price of a sector s (\(EP_{s}\)) is determined as a weighted average of the conversion factors of the primary inputs x used in s:

$$EP_{s} = \sum\nolimits_{x} {v_{xs} CF_{x} }$$
(12)
$$CF_{s} = \frac{{EP_{s} }}{{FP_{s} }}$$
(13)

where \(v_{xs}\) is the value of primary input x used in sector s and \(FP_{s}\) is the financial price of sector s. This approach has the usual disadvantage of input-output systems in that it employs fixed coefficients. However, it has the advantage of picking up both direct and indirect effects. For example, not only the direct employment effects but also the linkage employment effects from the expansion of production are reflected.
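A toy numerical sketch of Eqs. (12) and (13): the sector's cost breakdown into primary inputs, valued at market prices, is revalued with the primary-input conversion factors to obtain the economic price, and the sector conversion factor follows as the ratio to the financial price. The input values and conversion factors are hypothetical, and the financial price is assumed to equal the sum of the input values.

```python
# value of primary inputs (at market prices) per unit of sector output -- hypothetical figures
inputs = {"traded materials": 0.45, "unskilled labor": 0.25, "skilled labor": 0.15, "land": 0.15}
# conversion factors of the primary inputs -- hypothetical figures
cf = {"traded materials": 0.95, "unskilled labor": 0.60, "skilled labor": 1.00, "land": 0.80}

fp = sum(inputs.values())                              # financial price (assumed equal to the cost breakdown)
ep = sum(v * cf[x] for x, v in inputs.items())         # economic price, Eq. (12)
cf_sector = ep / fp                                    # sector conversion factor, Eq. (13)
print(ep, cf_sector)
```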

Especially in economies with a labor surplus, unskilled labor occupies an important place. In the LM approach, the shadow price of unskilled labor is calculated using a separate conversion factor. If the production involves multiple goods, then the weighted mean of the conversion factors for each output produced is applied to the market value of the opportunity cost of unskilled labor. The shadow price of skilled labor is calculated by applying the standard conversion factor to the market wage.

As stated earlier, one aspect of the LM approach is that it also takes distributional issues into account. This is particularly important because the policy maker may be interested not only in allocative efficiency but also in how resources are distributed. The LM approach considers two types of distributional issues. The first concerns the distribution of output among members of society with different incomes. The second concerns the intertemporal distribution of resources, which involves deciding what portion of a project’s output will be saved and what portion will be consumed. These procedures involve assigning distributional weights, which in turn contribute to the calculation of the shadow prices. For example, the poor are given higher weights than the rich. Squire and van der Tak (1975) present this approach more formally and show how distributional weights can be fed into a variety of parameters. Ray (1984) formalizes many of the expressions of Squire and van der Tak (1975) further and explains the underlying welfare theory. In practice, however, the distributional weight approach is not applied without some concerns. The main issue is that the weights are often based on value judgments. Harberger (1978) argues that the weighting scheme gives implausibly high or low weights to some groups. Some even argue that equal weights are themselves subjective (e.g., Brent 2006). Arbitrarily chosen weights may make allocative efficiency less important than the distributional objectives; hence, the analyses of allocative efficiency and distributional impact are sometimes carried out separately. Overall, the LM method provides a macroperspective for evaluating project valuations and is a useful tool alongside the other valuation methods summarized in this short review.

5 Shadow Prices of Inputs and Outputs

Shadow prices are virtual prices that can be calculated, in a constrained optimization framework, as the change in the optimal value of an objective function for a marginal relaxation of a constraint. Inevitably, shadow prices are highly relevant in the constrained output, revenue, and profit maximization and cost minimization problems faced by production units. These prices are primarily theoretical values, and their estimation is useful when market prices do not exist or do not reflect the true value of products. There are several approaches for identifying and estimating measures related to shadow prices in the productivity literature. These approaches differ in their objective functions, in the nature of inputs and outputs, and in their methods of identification.
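
As a reminder of this basic definition, the sketch below solves a generic cost minimization problem subject to an output constraint and recovers the shadow price of the constraint numerically as the change in minimized cost per unit of relaxation. The Cobb-Douglas technology, prices, and output level are purely illustrative and are not tied to any study discussed here.

```python
# Numerical illustration: the shadow price of a constraint equals the marginal
# change in the optimal objective value when the constraint is relaxed.
# Technology and prices are hypothetical (two-input Cobb-Douglas).
import numpy as np
from scipy.optimize import minimize

w = np.array([2.0, 3.0])       # input prices
alpha = np.array([0.4, 0.6])   # Cobb-Douglas exponents

def min_cost(Q):
    """Minimize w'x subject to x1^a1 * x2^a2 >= Q."""
    cons = {"type": "ineq", "fun": lambda x: np.prod(x ** alpha) - Q}
    res = minimize(lambda x: w @ x, x0=np.array([10.0, 10.0]),
                   constraints=[cons], bounds=[(1e-6, None)] * 2)
    return res.fun

Q, dQ = 10.0, 0.1
shadow_price = (min_cost(Q + dQ) - min_cost(Q)) / dQ  # numerical dC/dQ
print(f"shadow price of the output constraint ~ {shadow_price:.3f}")
```

With these illustrative numbers the recovered multiplier is simply the short-run marginal cost of output, which is the interpretation used throughout this section.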

5.1 Shadow Prices Based on the Cost Function

One plausible approach for identifying shadow price measures for inputs is to focus on the dual profit function. Under standard regularity conditions on the production technology, this approach allows one to identify the profit-maximizing output supply and input demand system by virtue of Hotelling’s lemma. The output supply and input demand functions can then be modified to incorporate different types of inefficiency, which in turn reflect the relationship between the perceived and the actual market prices of inputs and outputs. For example, Lovell and Sickles (1983) use the dual profit function to model the technology of a competitive profit-maximizing multi-product firm and estimate the ratio of the perceived to the actual price of inputs. This ratio reflects the systematic component of allocative inefficiency and plays a pivotal role in estimating the forgone profit due to inefficiency. Based on this approach, Sickles et al. (1986) study the US airline industry for allocative distortions during a period of regulatory transition.

In the presence of quasi-fixed inputs, the production technology can be modeled using a dual restricted (variable) cost function that allows for the existence of temporary disequilibrium (Sickles and Streitwieser 1998). Temporary disequilibrium may occur because of unexpected demand shocks or changes in factor prices. Under the assumption of exogenous input and output prices, the short-run variable cost function can be obtained as the solution of the cost minimization problem of a firm operating at full capacity:

$${ \hbox{min} }\sum {W_{i} X_{i} } \,{\text{subject}}\,{\text{to}}\,H\left( {Y,X;T} \right) = 0$$
(14)

where H is the transformation function of the production technology, Y is the output, W represents the variable input prices, and X represents the quantities of the quasi-fixed inputs. The short-run variable cost function is then given by:

$$CV = G\left( {Y,W,X;\,T} \right)$$
(15)

where G is linearly homogeneous, non-decreasing, and concave in input prices; non-decreasing and convex in the levels of the quasi-fixed inputs; and non-negative and non-decreasing in output. For example, G can be a non-homothetic translog function. Then, given exogenous input prices and by Shephard’s lemma, the first-order conditions of the cost minimization problem yield the variable cost share \(\left( {M_{i} } \right)\) for each variable input \(\left( {X_{i} } \right)\). For estimation purposes, the shadow share equation, \(- \frac{{\partial \ln G}}{{\partial \ln X_{k} }} = \frac{{Z_{k} X_{k} }}{CV}\), can be added to the model, where the shadow price, \(Z_{k}\), is the real rate of return or ex-post value of the quasi-fixed input \(X_{k}\). The shadow price can be derived as the residual between revenues and variable costs. Since the effects of economic optimization are incorporated in the shadow value equations, they can be used in the system of estimating equations. The long-run cost function can also be obtained from the restricted cost function as \(C = H\left( {W,Y,Z^{ * } } \right)\), where \(Z_{k}^{*} = - \frac{\partial G}{{\partial X_{k} }}\). Sickles and Streitwieser (1998) apply their model and methodology to study the interstate natural gas transmission industry in the USA.
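
A minimal sketch of the ex-post (residual) calculation described above, assuming a single quasi-fixed input such as capital; the revenue, variable cost, and capital figures are hypothetical.

```python
# Ex-post shadow value of a quasi-fixed input as the residual between revenue
# and variable cost, and the implied shadow share -d ln G / d ln X_k = Z_k X_k / CV.
# All figures are hypothetical.
revenue = 120.0        # total revenue
variable_cost = 90.0   # CV = G(Y, W, X; T)
capital_stock = 50.0   # quantity of the quasi-fixed input X_k

z_k = (revenue - variable_cost) / capital_stock  # residual (ex-post) rate of return
shadow_share = z_k * capital_stock / variable_cost
print(f"shadow price Z_k = {z_k:.3f}, shadow share = {shadow_share:.3f}")
```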

A production technology may be subject to inherent complexities, constraints, and distortions that need to be integrated into the optimization problems of firms. Good et al. (1991) formulate a multiple-output technology in which the choice of production technique is an endogenous decision. They employ the concept of virtual prices in their modeling to estimate the technology that corresponds to efficient resource allocation. They also discuss the estimation of parameters that explain the divergence between virtual and observed prices and apply their method to analyze the US airline industry.

Institutional constraints and the policy environment can substantially affect relative input prices in unobserved ways, resulting in a divergence between the relative market price and the relative shadow price. The extent of this divergence measures relative price efficiency. Getachew and Sickles (2007) estimate the divergence of the relative market price from the relative shadow price using a generalized cost function approach. The first-order conditions for the standard neoclassical problem of cost minimization subject to an output constraint yield the equality between the marginal rate of technical substitution (MRTS) and the ratio of the market prices of inputs. However, in the presence of additional constraints due to the policy environment, the optimal allocation of inputs that minimizes cost requires the equality between the MRTS and the ratio of shadow, or effective, prices. Thus, a firm’s cost minimization problem in the presence of additional restrictions can be written as:

$$\mathop {\hbox{min} }\nolimits_{X} C = P^{{\prime }} X\,{\text{s}} . {\text{t}} .\,f\left( X \right) \ge Q\,{\text{and}}\,R\left( {P,X;\varphi } \right) \le 0$$
(16)

where P and X are \(h \times 1\) vectors of price and quantity of inputs, respectively, f(X) is a well-behaved production function, Q is output, R(.) is an \(R_{C}\)-dimensional function representing additional constraints, and \(\varphi\) is a vector of parameters. The first-order conditions for cost minimization become

$$\frac{{f_{i} }}{{f_{j} }} = \frac{{P_{i} + \sum\nolimits_{r = 1}^{{R_{C} }} {\lambda_{r} \partial R_{r} /\partial X_{i} } }}{{P_{j} + \sum\nolimits_{r = 1}^{{R_{C} }} {\lambda_{r} \partial R_{r} /\partial X_{j} } }} = \frac{{P_{i}^{e} }}{{P_{j}^{e} }},\quad i \ne j = 1 \ldots h$$
(17)

The parameters of the unobservable shadow prices can then be estimated using a first-order Taylor series approximation to a general shadow price function \(g_{i} (P_{i} )\) such that \(g_{i} \left( 0 \right) = 0\) and \(\frac{{\partial g_{i} (P_{i} )}}{{\partial P_{i} }} \ge 0\). One way to approximate these shadow prices (Lau and Yotopoulos 1971; Atkinson and Halvorsen 1984) is to consider:

$$P_{i}^{e} = k_{i} P_{i} , i = 1 \ldots h$$
(18)

where \(k_{i}\) is an input-specific factor of proportionality, the value of which informs us about the price efficiency of inputs. The shadow cost function in this case is given by:

$$C^{S} = C^{S} (kP,Q)$$
(19)

Using logarithmic differentiation and Shephard’s lemma, one can derive the input demand functions from the shadow cost function and hence the actual cost function and share equations. In particular, the demand for factor i is:

$$X_{i} = \frac{{M_{i}^{S} C^{S} }}{{k_{i} P_{i} }}, i = 1 \ldots h$$
(20)

where \(M_{i}^{S}\) is the shadow share of factor i. Thus, the actual cost function \(\left( {C^{A} } \right)\) and the actual share equation for input i are derived as \(C^{A} = C^{S} \sum\nolimits_{i = 1}^{h} {\frac{{M_{i}^{S} }}{{k_{i} }}}\) and \(M_{i}^{A} = \frac{{X_{i} P_{i} }}{{C^{A} }}\), respectively. Getachew and Sickles (2007) use this econometric model to analyze the Egyptian private manufacturing sector.
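
The mapping from shadow to actual magnitudes in Eqs. (18)-(20) can be sketched as follows, assuming for illustration a Cobb-Douglas shadow cost function (so that the shadow shares are constant); the proportionality factors \(k_{i}\), prices, and shares are hypothetical.

```python
# Sketch of Eqs. (18)-(20): from shadow shares and input-specific distortion
# factors k_i to input demands, actual cost, and actual cost shares.
# The Cobb-Douglas shadow cost function and all numbers are hypothetical.
import numpy as np

P = np.array([2.0, 3.0, 5.0])              # observed market input prices
k = np.array([1.0, 1.3, 0.8])              # Eq. (18): shadow price = k_i * P_i
shadow_shares = np.array([0.5, 0.3, 0.2])  # M_i^S implied by the shadow cost function
Q = 10.0

# Eq. (19): Cobb-Douglas shadow cost function evaluated at the shadow prices kP
C_shadow = Q * np.prod((k * P / shadow_shares) ** shadow_shares)

X = shadow_shares * C_shadow / (k * P)           # Eq. (20): input demands
C_actual = C_shadow * np.sum(shadow_shares / k)  # actual cost C^A
actual_shares = X * P / C_actual                 # actual shares M_i^A (sum to one)

print(f"shadow cost = {C_shadow:.2f}, actual cost = {C_actual:.2f}")
print("actual cost shares:", np.round(actual_shares, 3))
```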

Regulatory constraints are likely to have major implications for the productivity and resource costs of production systems. For example, regulations regarding capital requirements affect resource costs in banking systems. Duygun et al. (2015) discuss the measurement of shadow returns on equity associated with regulatory capital constraints in emerging economy banking systems. They model the cost function by incorporating regulatory constraints and measure the productivity cost of changes in the regulatory capital requirements by measuring the shadow price of equity capital over time. In particular, in the presence of a regulated equity-asset ratio in banking systems, they model the parametric frontier dual cost function as:

$$c\left( {y,w,r_{0} ,t} \right) + w_{0} z_{0} = \mathop {\hbox{min} }\nolimits_{x} \left\{ {w^{{\prime }} x + w_{0} z_{0} :F\left( {x,z_{0} ,y,t} \right) = 0,\;z_{0} = r_{0} y} \right\}$$
(21)

where x, w, and y are vectors of variable inputs, input prices, and outputs, respectively, and \(z_{0}\) is a particular input that is either fixed in the short run or required in a fixed ratio to output, but variable in the long run. The price of \(z_{0}\) is \(w_{0}\). The transformation function \(F\left( {x,z_{0} ,y,t} \right)\) is the efficient boundary of the technology set. Assuming weak disposability and applying the envelope theorem relating the long-run and short-run total cost, they derive the shadow price interpretation of the target equity capital ratio in terms of the shadow share of equity costs in total expenses as:

$$- \left[ {\frac{{\partial c\left( {y,w,r_{0}^{*} ,t} \right)}}{{\partial \ln r_{0} }}} \right] = \left( {w_{0} y} \right)\left( {\frac{{r_{0} }}{C}} \right) = \left( {\frac{{w_{0} z_{0} }}{C}} \right).$$
(22)

Applying this model, Duygun et al. (2015) confirm the importance of regulated equity capital as a constraint on cost minimizing behavior of banks in emerging economies.

The literature in this area has expanded to incorporate dynamic production and cost models as well. Captain et al. (2007) introduce a dynamic structural model to simulate the optimal levels of operational variables and identify sources of forgone profit. Using Euler equations derived from the first-order conditions of a dynamic value function maximization problem, along with demand and cost equations, they simulate the operating behavior of production units. They apply their model to data from the European airline industry to identify inefficiency in airlines by comparing the simulation results with the actual data, and they identify several sources of forgone profit, such as suboptimal network size. The methodology and modeling approach used in Captain et al. (2007) can be used to analyze the potential impacts of economic policies in other settings as well.

While shadow cost minimization based on shadow prices is widely used in the literature for identifying shadow values of inputs, an alternative approach is to use a shadow distance system. The shadow distance system can be estimated both in a static framework and in a dynamic framework that accounts for adjustment costs of inputs. Atkinson and Cornwell (2011) discuss the minimization of shadow costs of production in a dynamic framework using an input distance function. Their formulation is based on the idea that shadow input quantities are likely to differ from actual input quantities, resulting in an inequality between the marginal rate of substitution and the input price ratio. Divergence between the shadow and actual input quantities can occur due to policy regulations, contractual obligations, or shortages. Further, the production process may involve adjustment costs in terms of reduced output during the initial testing phase of a new capital good or the training period of a newly hired worker. In this framework, they estimate the shadow costs by estimating a set of equations that includes the first-order conditions from the short-run shadow cost minimization problem for the variable shadow input quantities, a set of Euler equations derived from the subsequent shadow cost minimization with respect to the quasi-fixed inputs, and the input distance function expressed in terms of shadow quantities.

Tsionas et al. (2015) further expand the literature by proposing estimation methods for a flexible system of input distance functions in the presence of endogenous inputs. They discuss the computation of the cost of allocative inefficiency, which is defined as the predicted difference between the actual and the frontier cost and is computed as a fraction of the predicted frontier cost. They apply their model and method to analyze the production of Norwegian dairy firms.

Based on the standard economic model of shadow cost minimizing behavior of firms, it seems natural to use shadow input quantities when analyzing the cost minimizing behavior of firms. However, this approach involves significant estimation challenges. Coelli et al. (2008) propose a model based on shadow input prices in a similar framework and identify allocative inefficiencies in terms of shadow input prices. They also apply their method to a panel of US electricity generation firms.

5.2 Shadow Prices Based on the Directional Distance Function

Many production technologies produce undesirable or “bad” outputs along with desirable or “good” outputs. Examples of undesirable outputs include the environmental degradation associated with pesticide use in farming and greenhouse gas emissions from industrial production. It is logical to adjust producer performance for the shadow values of the undesirable outputs produced as by-products of the desired outputs. However, undesirable outputs are often non-marketable, and thus their valuation is not straightforward. Policy regulations are often imposed to restrict the ability of producers to dispose of undesirable outputs costlessly. These regulations involve the abatement of pollutants, and the abatement process carries an opportunity cost, namely the forgone marketable output. One possible approach to measuring the shadow price of undesirable outputs is to rely on data on abatement costs. The problem with this approach is that abatement cost data are likely to be subject to a wide range of errors.

An alternative approach is to estimate an output distance function, which is dual to the revenue function (Shephard 1970). Then, by the dual Shephard’s lemma, the output distance function yields the revenue-deflated shadow prices of all outputs, including the undesirable outputs. In particular, the output distance function, as introduced by Shephard (1970), is given by:

$$D_{0} \left( {x,u} \right) = \inf \left\{ {\theta :\left( {\frac{u}{\theta }} \right) \in P\left( x \right)} \right\}$$
(23)

where \(x \in {\mathcal{R}}_{ + }^{N}\) is the input vector, \(u \in {\mathcal{R}}_{ + }^{M}\) is the output vector, and \(P\left( x \right) = \left\{ {u \in {\mathcal{R}}_{ + }^{M} :x \,{\text{can}}\,{\text{produce}}\,u} \right\}\) represents the convex output set. Under the assumption that the technology satisfies the standard properties and axioms (Shephard 1970; Färe 1988), \(P\left( x \right)\) satisfies weak disposability of outputs, meaning that a reduction in an undesirable output can be achieved by simultaneously reducing some desirable output(s).

Färe et al. (1993) discuss the process to retrieve output shadow prices from the following duality relationships between the revenue function and the output distance function:

$$R\left( {x,r} \right) = \mathop {\sup }\nolimits_{u} \left\{ {ru:D_{0} \left( {x,u} \right) \le 1} \right\}$$
(24)
$$D_{0} \left( {x,u} \right) = \mathop {\sup }\nolimits_{r} \left\{ {ru:R\left( {x,r} \right) \le 1} \right\},$$
(25)

where ru is the inner product of output price and quantity vectors, \(r \ne 0\). Assuming the revenue and output distance functions are differentiable, and the output distance function is linearly homogeneous in outputs, the first-order conditions of the Lagrange problem can easily be written as:

$$r = R(x,r) \cdot \nabla_{u} D_{0} (x,u)$$
(26)

Further, from the second duality relationship, we have:

$$D_{0} \left( {x,u} \right) = r^{*} \left( {x,u} \right)u$$
(27)

where \(r^{*} \left( {x,u} \right)\) is the revenue maximizing output price vector from the second duality condition. Then, by Shephard’s dual lemma:

$$\nabla_{u} D_{0} \left( {x,u} \right) = r^{*} \left( {x,u} \right)$$
(28)

and therefore:

$$r = R\left( {x,r} \right) \cdot r^{*} \left( {x,u} \right).$$
(29)

The \(r^{*} \left( {x,u} \right)\) term can be interpreted as a vector of normalized, or revenue-deflated, output shadow prices. Since \(R(x,r)\) depends on the vector of shadow prices r, for identification purposes one needs to assume that one observed output price equals its absolute shadow price. This assumption can easily be justified for a desirable output whose price is observable and market-determined. The approach is straightforward to implement with a suitable parameterization of the output distance function. Färe et al. (1993) point out that shadow prices retrieved by this approach “reflect the trade-off between desirable and undesirable outputs at the actual mix of outputs which may or may not be consistent with the maximum allowable under regulation.” They illustrate their method on a sample of pulp and paper mills in the USA.
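
In applied work, the normalization in Eq. (29) is typically combined with the gradient of an estimated (e.g., translog) output distance function: the absolute shadow price of an undesirable output is the observed price of a desirable output scaled by the ratio of the distance-function derivatives. The sketch below illustrates only this final pricing step; the derivatives and the good-output price are hypothetical placeholders, not estimates from Färe et al. (1993).

```python
# Pricing rule implied by Eqs. (26)-(29):
#   r_bad = r_good * (dD0/du_bad) / (dD0/du_good),
# where the observed price of the desirable output anchors the price level.
# The derivatives would come from an estimated output distance function;
# the values used here are hypothetical.
dD_du_good = 0.020   # partial derivative of D0 w.r.t. the desirable output
dD_du_bad = -0.005   # partial derivative of D0 w.r.t. the undesirable output
price_good = 400.0   # observed market price of the desirable output

price_bad = price_good * dD_du_bad / dD_du_good
print(f"shadow price of the undesirable output = {price_bad:.2f}")  # negative by construction
```

The negative sign reflects that, at the observed output mix, producing more of the undesirable output reduces attainable revenue.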

In a recent working paper, Färe et al. (2015) use an input distance function and the dual Shephard’s lemma to derive shadow prices and use them to construct an imputed price index. Assuming that a good is endowed with \(z = (z_{1} , \ldots ,z_{N} )\) characteristics that generate a value \(p \ge 0\), they model the input correspondence as:

$$L\left( p \right) = \left\{ {z \in {\mathcal{R}}_{ + }^{N} :z\,{\text{generates }}\,{\text{value}}\,p} \right\}, p \ge 0.$$
(30)

With the help of Shephard’s (1953) input distance function and some mild assumptions on \(L\left( p \right)\), they discuss a complete characterization of the input correspondence as:

$$D_{i} \left( {p,z} \right) \ge 1 \Leftrightarrow z \in L(p).$$
(31)

Then, the cost function, which is the dual to the input distance, can be given as:

$$C\left( {p,w} \right) = \hbox{min} \left\{ {wz: z \in L\left( p \right)} \right\}$$
(32)

where \(w \in {\mathcal{R}}_{ + }^{N}\) is the vector of unknown prices of the characteristics. Using the duality between \(D_{i} (p,z)\) and \(C\left( {p,w} \right)\), the shadow price vector can be obtained as:

$$w^{s} = \frac{{p \cdot \nabla_{z} D_{i} (p,z)}}{{D_{i} (p,z)}}.$$
(33)

Färe et al. (2015) illustrate this method by constructing property price indices for houses in the Netherlands. They also point out that the method avoids the multicollinearity problem associated with traditional hedonic regressions.
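
Equation (33) can be evaluated numerically once a functional form for \(D_{i} (p,z)\) has been estimated. The sketch below uses a hypothetical log-linear distance function in two characteristics purely to show the mechanics; the functional form, parameters, and data are illustrative and are not taken from Färe et al. (2015).

```python
# Sketch of Eq. (33): shadow prices of characteristics from an input distance function.
# The log-linear functional form and all parameter values are hypothetical.
import numpy as np

def D_i(p, z, beta=np.array([0.6, 0.4]), scale=1.0):
    """Hypothetical input distance function in characteristics z, given value p."""
    return scale * np.prod(z ** beta) / p

p = 300000.0                 # value generated by the good (e.g., a house price)
z = np.array([120.0, 3.0])   # characteristics (e.g., floor area, number of rooms)

eps = 1e-6
grad = np.array([(D_i(p, z + eps * np.eye(2)[j]) - D_i(p, z)) / eps
                 for j in range(2)])        # numerical gradient with respect to z

w_shadow = p * grad / D_i(p, z)             # Eq. (33)
print("shadow prices of the characteristics:", np.round(w_shadow, 2))
```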

Over the last two decades, issues related to productivity growth and environmental quality have drawn a great deal of attention from economists. They are especially important for production processes that generate substantial undesirable outputs, such as carbon dioxide and other greenhouse gases, in the course of producing desirable outputs. Traditional productivity indices assume that undesirable outputs, if any, are freely disposable. However, this is a very strong assumption to impose on the technology and is often violated in reality. When undesirable outputs are produced as by-products of desirable outputs, it is reasonable to assume weak disposability of outputs, which implies that a reduction in undesirable outputs can only be achieved by a reduction in desirable outputs, given fixed input levels.

Several studies (Chung et al. 1997; Boyd et al. 1999) focus on the construction of productivity indices in the presence of both “good” and “bad” outputs. The study by Jeon and Sickles (2004) is notably relevant in this regard. They use the directional distance function method to construct the Malmquist and Malmquist-Luenberger productivity indices under the assumption of weak disposability of undesirable outputs. In a sample of OECD and Asian countries, they discuss the computation of the incremental costs of pollution abatement. More specifically, they choose direction vectors for the pollutant (carbon dioxide), which is not freely disposable, derive the production frontier under specific restrictions on carbon dioxide emissions, and calculate incremental costs by dividing the change in the frontier value of GDP under the assumption of free disposability by the corresponding frontier level of carbon dioxide emissions. The incremental costs of pollution abatement give a fair idea of the shadow values of pollution control and the prices of pollution permits.
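
The incremental-cost calculation described above reduces to a simple ratio once the two frontier values have been computed. The sketch below illustrates only that final step with made-up frontier values, under the interpretation given above; it does not reproduce the directional distance function estimation in Jeon and Sickles (2004).

```python
# Final step of the incremental abatement cost calculation described above:
# change in frontier GDP (free versus weak disposability of CO2) divided by
# the corresponding frontier CO2 emissions. All figures are hypothetical.
frontier_gdp_free = 1050.0   # frontier GDP when CO2 is treated as freely disposable
frontier_gdp_weak = 1020.0   # frontier GDP when CO2 is only weakly disposable
frontier_co2 = 60.0          # corresponding frontier level of CO2 emissions

incremental_cost = (frontier_gdp_free - frontier_gdp_weak) / frontier_co2
print(f"incremental abatement cost per unit of CO2 ~ {incremental_cost:.3f}")
```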

Undesirable outputs may be generated in other production systems as well. For example, banking services produce nonperforming loans, which are not desired. In deriving shadow prices of bank equity capital using parametric forms of directional distance functions, Hasannasab et al. (2018) consider deposits and borrowed funds as inputs used to produce the desirable outputs, loans and leases, along with the undesirable output, nonperforming loans. Since reducing the undesirable output is costly, they assume that undesirable and desirable outputs jointly satisfy weak disposability, while inputs and desirable outputs satisfy strong disposability. Accordingly, they obtain shadow prices from the estimated distance functions via the Lagrangian method. In the process, they use different pricing rules based on differently oriented distance functions that are associated with different economic optimization criteria, such as cost minimization, revenue maximization, and profit maximization.

6 Concluding Remarks

In this chapter, we discuss pricing methods that are adopted when competitive or socially efficient prices are not established because of either market imperfections or externalities. We also discuss several shadow pricing methods and their implications when the market price is not observed or when the commodity is not marketable. However, the degree of price distortion due to market imperfection is not easy to measure, and standard indices such as the Lerner index need adjustment for dynamic factors, capacity constraints, and inefficiency of the production system, all of which can affect the cost of production. Further, the data needed to estimate market power, and hence the degree of price distortion, may not be readily available, and the researcher may need to modify the relevant methodology accordingly. Similarly, in the presence of externalities, market prices are likely to be far from efficient prices unless the external effects are accounted for in the pricing methods. This is particularly relevant when a production system produces undesirable outputs along with the desired ones. There are different methods for identifying and internalizing such external effects. In this chapter, we discuss several directions for dealing with these issues based on the most recent literature.

The literature on the estimation of shadow prices and their efficiency implications has expanded vastly in the last three decades. We discuss several approaches based on different objective functions and on the nature of inputs and outputs. The pricing methods are important not only from microeconomic perspectives but also from macroeconomic perspectives, especially for international trade, growth, and distribution. The welfare implications of pricing under different circumstances are also crucial for policy makers. While there is an apparent conflict between producers’ and consumers’ interests, factors such as advertising or quality improvement aimed at maintaining market power can positively affect both groups. Since total welfare is influenced by both producer and consumer surplus, it is not straightforward to measure the welfare impacts of different pricing policies. Although some researchers have ventured into measuring welfare impacts in this regard, as discussed in the chapter, it remains an open area of research from both microeconomic and macroeconomic perspectives.