Encyclopedia of Law and Economics

Living Edition
| Editors: Alain Marciano, Giovanni Battista Ramello

Optimization Problems

  • Roy Cerqueti
  • Raffaella Coppier
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4614-7883-6_354-1

Abstract

This entry deals with the conceptualization of an optimization problem. In particular, we first provide a formal definition of this mathematical concept. Then, we classify optimization problems on the basis of their main characteristics (the presence of time dependence and of constraints). In so doing, we also outline the standard techniques adopted for seeking the solutions of an optimization problem. Lastly, some examples taken from the classical theory of economics and finance are proposed.

Definition

An optimization problem is a mathematical problem whose solution consists in identifying some quantities (optimal solutions), contained in a predefined set (admissible region), such that a given (objective) function is optimized (maximized or minimized). Optimization problems have remarkable relevance in the context of economics and law, since making sound decisions is associated with the implementation of strategies that are optimal in some sense.

The General Theory of Optimization Problems

In the context of economics and law, decision makers should act under the guidance of optimality criteria. Indeed, it is easy to understand that a poor decision might drastically undermine the outcome of economic policies or lawmaking. For this reason, it is important to develop optimization models and to solve them.

In this entry optimization problems (OP, hereafter) are treated. Specifically, some classifications of OP are proposed on the basis of their relevant features. Several clusterings of the huge set of OP can be proposed; we report here the most important ones from the perspective of applications. Moreover, the solution strategies for each class of OP are also discussed, keeping in mind that an optimization problem might not have solutions. In this respect, a warning is in order: OP can be very difficult to solve, and sometimes standard techniques are not sufficient for achieving a solution. The core of Operations Research is exactly the study of new methods for facing OP that fall outside the main frameworks. Hence, nonstandard techniques are not treated in this entry, even if some of them will be briefly mentioned. Some remarkable economic examples of OP are also proposed.

Classification of the OP

Perhaps the most intuitive classification of OP is the one grounded on the dependence of the problems on time, which leads to the distinction between static and dynamic OP. More precisely, OP are dynamic when the objective function and/or the elements of the admissible region are time dependent; otherwise, they are static.

The static case is the simplest one, and its treatment requires basic mathematical tools. To discuss static OP, a subclustering concerning the presence or absence of constraints on the involved variables is needed.

Static OP are unconstrained if the objective to be optimized is a real-valued function f defined over the n-dimensional Cartesian space \( {\mathbf{R}}^n \), and the optimal variables are searched over the entire set \( {\mathbf{R}}^n \).

Now, assume that all the functions presented here are sufficiently regular, so that the derivatives used in the description of the static OP exist. Then the standard solution algorithm runs in two steps. First, the “candidate” optimal solutions are detected by setting the vector of the first partial derivatives of f (the gradient of f) identically to zero, i.e., by searching for the vectors \( {\boldsymbol{x}}_0 \in {\mathbf{R}}^n \) such that \( \operatorname{grad} f\left({\boldsymbol{x}}_0\right)=0 \) (first order conditions; the \( {\boldsymbol{x}}_0 \)’s are the stationary points). Second, the candidates obtained in the first step are classified through the analysis of the Hessian \( \operatorname{Hess} f\left({\boldsymbol{x}}_0\right) \), the n × n matrix of the second partial derivatives of f, which provides information on the convexity of f. Specifically, if \( \operatorname{Hess} f\left({\boldsymbol{x}}_0\right) \) is positive (negative) definite, then f is convex (concave) in a neighborhood of \( {\boldsymbol{x}}_0 \), which is then a minimum (maximum) for f over \( {\mathbf{R}}^n \) (second order conditions). When \( \operatorname{Hess} f\left({\boldsymbol{x}}_0\right) \) is indefinite, the second order conditions do not lead to the classification of the stationary point \( {\boldsymbol{x}}_0 \). In this case, \( {\boldsymbol{x}}_0 \) can be classified through a local study of the behavior of f in a neighborhood of it.
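As an illustration, the following minimal Python sketch carries out the two steps symbolically. The objective f and the use of the sympy library are assumptions made here for the sake of the example, not material from the entry.

```python
# A minimal sketch of the two-step procedure for a static unconstrained OP,
# assuming the sympy library; f(x, y) = x**2 + x*y + y**2 is an illustrative
# objective chosen for this example.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + x*y + y**2

# Step 1: first order conditions -- set the gradient identically to zero.
grad = [sp.diff(f, v) for v in (x, y)]
stationary_points = sp.solve(grad, (x, y), dict=True)

# Step 2: second order conditions -- classify via the Hessian's definiteness.
H = sp.hessian(f, (x, y))
for pt in stationary_points:
    eigenvalues = H.subs(pt).eigenvals()
    if all(ev > 0 for ev in eigenvalues):
        kind = "local minimum"   # positive definite: f locally convex
    elif all(ev < 0 for ev in eigenvalues):
        kind = "local maximum"   # negative definite: f locally concave
    else:
        kind = "not classified by second order conditions"
    print(pt, kind)
```

Here the unique stationary point is the origin, and the Hessian eigenvalues (1 and 3) are positive, so it is a minimum.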

Static OP are constrained when the optimal variables belong to a proper subset A of \( {\mathbf{R}}^n \), which is identified through a collection of analytical equations and inequalities in the variable \( \boldsymbol{x} \in {\mathbf{R}}^n \). This class of problems is treated similarly to the unconstrained case but with a remarkable difference: the introduction of the so-called Lagrangian H is needed. The Lagrangian is a function including in a unified term the objective f and the equations and inequalities generating A. The optimal solutions are then derived by setting the gradient of H equal to zero and then by classifying the stationary points through the study of the Hessian of H. It is worth noting that H is defined also through the introduction of some ancillary variables (the Lagrange multipliers), so that the stationary points of H are automatically included in A. It is important to recall here Weierstrass’ Theorem, which can be formulated in general as follows: if A is a compact (closed and bounded) subset of \( {\mathbf{R}}^n \) and f is continuous on A, then the constrained optimization problem admits solutions.
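The following Python sketch applies the Lagrangian method to a simple equality-constrained problem; the objective, the constraint, and the use of sympy are illustrative assumptions, not material from the entry.

```python
# A minimal sketch of the Lagrangian method for an equality-constrained static
# OP: minimize x**2 + y**2 subject to x + y = 1 (an illustrative assumption).
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2            # objective
g = x + y - 1              # the constraint g = 0 defines the admissible region A

# The Lagrangian bundles the objective and the constraint via the multiplier lam.
L = f - lam * g

# Stationary points of L: gradient with respect to (x, y, lam) set to zero.
eqs = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(eqs, (x, y, lam), dict=True))   # -> x = y = 1/2, lam = 1
```

Note that differentiating with respect to the multiplier recovers the constraint itself, which is why the stationary points of the Lagrangian automatically lie in A.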

For details and examples on static OP, refer to Simon and Blume (1994).

Dynamic OP are the ones most frequently employed in economic applications, in that they are able to capture the evolving nature of reality. In dynamic OP, a set \( T \subseteq \left[0,+\infty \right) \) collecting the time instants is introduced. The objective functional J depends on the time t \( \in \) T and is optimized with respect to an n-dimensional function of time (the state variable), denoted by {x_t}. Indeed, the functional J is usually defined as the aggregation over t (a sum over t if T is a discrete set; an integral over t if T is a continuous set) of a time-dependent function f which depends also on {x_t}. The way in which the optimization is performed can be indirect, i.e., {x_t} and J are optimally selected by controlling a further set of variables {α_t} included in their definition (the control variables), or direct otherwise. The former case defines the optimal control problems (OCP, henceforth), which form a very relevant class of dynamic OP.

To fix ideas, assume that time is continuous and \( T=\left[{t}_1,{t}_2\right]\subseteq \left[0,+\infty \right) \).
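Under this assumption, a typical OCP can be written in the following standard textbook form, where the function g driving the state equation and the initial datum \( x_0 \) are introduced here only for illustration:

\[ \max_{\{\alpha_t\}} \; J = \int_{t_1}^{t_2} f\left(t, x_t, \alpha_t\right)\, dt \qquad \text{subject to} \qquad \dot{x}_t = g\left(t, x_t, \alpha_t\right), \quad x_{t_1} = x_0 . \]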

From an operational point of view, the strategy for solving dynamic OP in the “direct” case is similar to that presented for the static OP. The role played in the static setting by the stationary points is assigned in the dynamic setting to the so-called extremals. In detail, under the necessary regularity conditions, the first order conditions can be rewritten through a special differential equation (the Euler equation) involving f. The extremals are then classified by studying the convexity of f through the Hessian matrix of its second derivatives. This procedure is theoretically formalized in the Calculus of Variations, and an illuminating overview of it can be found in Kamien and Schwartz (1991).
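The Euler equation can be derived symbolically. The following minimal sketch assumes the sympy library and an illustrative quadratic integrand not taken from the text:

```python
# A minimal sketch of deriving the Euler equation for a functional with
# integrand f = x'(t)**2 + x(t)**2 (an illustrative assumption).
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.Symbol('t')
x = sp.Function('x')(t)

f = sp.diff(x, t)**2 + x**2     # integrand of the functional J
# Euler equation: d/dt (df/dx') - df/dx = 0, which here reduces to x'' = x.
print(euler_equations(f, x, t))
```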

As far as OCP are concerned, their main ingredients are: the state variable {x_t}, which evolves according to a differential equation (the state equation); the control variable {α_t}, managed by the decision maker in order to solve the optimization problem; the admissible region, which is a functional set containing the control variables; the objective functional J; the value function V, which is the optimized objective functional; and, lastly, the optimal strategies, which are given by the pairs of optimal controls and optimal paths.

A further clustering of OCP can be properly identified on the basis of the presence of randomness in their formulation. Accordingly, we have deterministic OCP and stochastic ones. The latter are those where f, {x_t} and {α_t} are random, while all the ingredients of the former are deterministic. In stochastic OCP, the state variable obeys a stochastic differential equation.
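In a typical stochastic OCP, the state equation takes the standard form of a controlled stochastic differential equation, with drift b and diffusion σ introduced here only for illustration:

\[ d x_t = b\left(t, x_t, \alpha_t\right)\, dt + \sigma\left(t, x_t, \alpha_t\right)\, dW_t , \]

where \( \{W_t\} \) is a standard Brownian motion.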

The procedures for solving deterministic and stochastic OCP are basically two: the Pontryagin maximum principle and dynamic programming. The former may be viewed as a generalization of the procedure employed for solving the dynamic OP. Indeed, Pontryagin’s Theorem states that the optimal strategies are pairs of optimal controls and optimal paths which must optimize a special function (the Hamiltonian), similar to the Lagrangian of the static case and including the objective functional and the constraints. Moreover, the introduction of additional variables (the costates) and the fulfillment of further conditions (the costate equations) are also required. The latter method is based on the definition of the value function, which is proved to be formally the unique solution of a specific differential equation (the Hamilton-Jacobi-Bellman equation, HJB) by means of a maximum principle (the Dynamic Programming Principle). If the value function is sufficiently regular to be a classical solution of the HJB, then one can derive the optimal strategies through a Verification Theorem. It is worth noting that the regularity of the value function is an important aspect to be treated in the dynamic programming approach, and the employment of a concept of weak solutions of the HJB (the so-called viscosity solutions) is often required.
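The Dynamic Programming Principle has a transparent discrete-time counterpart, the Bellman equation, which can be solved numerically by iterating the Bellman operator to its fixed point (value iteration). The following Python sketch applies this idea to a “cake-eating” problem with logarithmic utility; the problem, the grid, and all parameter values are illustrative assumptions, not material from the entry.

```python
# A minimal sketch of discrete-time dynamic programming: value iteration for
# max sum_t beta**t * log(c_t) with state equation x_{t+1} = x_t - c_t.
import numpy as np

beta = 0.95                          # discount factor (illustrative)
grid = np.linspace(1e-3, 1.0, 200)   # grid for the state (remaining "cake")
V = np.zeros_like(grid)              # initial guess of the value function

for _ in range(500):                 # iterate the Bellman operator to a fixed point
    V_new = np.empty_like(V)
    for i, x in enumerate(grid):
        feasible = grid <= x         # next states x' <= x; consumption c = x - x'
        c = x - grid[feasible]
        c[c <= 0] = 1e-12            # guard the log against zero consumption
        V_new[i] = np.max(np.log(c) + beta * V[feasible])
    done = np.max(np.abs(V_new - V)) < 1e-8
    V = V_new
    if done:
        break
```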

For a survey on OCP and, in general, on dynamic OP, see Fleming and Rishel (1975), Bardi and Capuzzo-Dolcetta (1997), Yong and Zhou (1999), Fleming and Soner (2006), and Kamien and Schwartz (1991). The concept of viscosity solutions is introduced and discussed in Fleming and Soner (2006) and Crandall et al. (1992).

Examples of OP

Some classical examples of OP can be found in the field of economics.

In Markowitz (1952), the future Nobel Laureate Harry Markowitz developed an optimization model for the selection of the best capital allocation among a set of risky assets. The optimality criterion is based on the evidence that an investor aims at maximizing the expected return of a portfolio and, simultaneously, at minimizing its risk level. The resulting problem is a static constrained one, with a convex objective function.
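A minimal Python sketch of the mean-variance problem follows: portfolio variance is minimized subject to a target expected return and full investment. The expected returns, covariance matrix, and target are illustrative assumptions, not data from the entry.

```python
# A minimal sketch of Markowitz mean-variance portfolio selection.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])           # expected asset returns (assumed)
Sigma = np.array([[0.10, 0.02, 0.04],       # covariance of returns (assumed)
                  [0.02, 0.08, 0.03],
                  [0.04, 0.03, 0.09]])
target = 0.10                                # required expected portfolio return

res = minimize(
    lambda w: w @ Sigma @ w,                 # portfolio variance (convex objective)
    x0=np.ones(3) / 3,
    constraints=[{'type': 'eq', 'fun': lambda w: w @ mu - target},
                 {'type': 'eq', 'fun': lambda w: w.sum() - 1.0}],
    bounds=[(0.0, 1.0)] * 3,                 # no short selling
)
print(res.x)                                 # optimal portfolio weights
```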

In the field of decision theory, a relevant role is played by the concept of utility. A utility function is a tool for ordering preferences over goods: it assigns a number representing the satisfaction derived from owning given quantities of goods. By convention, a greater value of the utility means a higher level of satisfaction. Hence, utility maximization is the ground of a number of static and dynamic OP. For some explicit models, refer to Mas-Colell et al. (1995).
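As an illustration, the following Python sketch maximizes a Cobb-Douglas utility under a budget constraint; the preference parameter, prices, and income are assumptions made here for the example.

```python
# A minimal sketch of static utility maximization under a budget constraint,
# with Cobb-Douglas utility u = a*log(x1) + (1-a)*log(x2) (illustrative).
import numpy as np
from scipy.optimize import minimize

a, p1, p2, m = 0.3, 2.0, 5.0, 100.0          # preferences, prices, income (assumed)

res = minimize(
    lambda x: -(a * np.log(x[0]) + (1 - a) * np.log(x[1])),  # maximize utility
    x0=[1.0, 1.0],
    constraints=[{'type': 'eq', 'fun': lambda x: p1 * x[0] + p2 * x[1] - m}],
    bounds=[(1e-9, None)] * 2,
)
print(res.x)   # matches the known closed form: x1 = a*m/p1, x2 = (1-a)*m/p2
```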

A macroeconomic example of an optimization model is the maximization of the growth rate of a country. In this case, the formalization of the related OP must include the evolving nature of the phenomenon and a wide set of constraints; hence, such OP are dynamic. The objective function is usually consumption or utility, and the constraints model human or physical capital accumulation. For this type of model, see Barro and Sala-i-Martin (2004).
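A stylized instance of such a dynamic OP is the Ramsey-type growth problem, written here in a standard textbook formulation with notation introduced only for illustration:

\[ \max_{\{c_t\}} \int_0^{+\infty} e^{-\rho t}\, u\left(c_t\right)\, dt \qquad \text{subject to} \qquad \dot{k}_t = F\left(k_t\right) - \delta k_t - c_t , \]

where c is consumption (the control), k is physical capital (the state), u is the utility function, F is the production function, ρ is the discount rate, and δ is the depreciation rate.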

From a microeconomic point of view, the problems of cost minimization and profit maximization are of remarkable relevance. The solution of the resulting OP provides insights into the strategies to be implemented by companies for improving their performance. Usually, such models are described through constrained OP, in order to capture the presence of budget and/or technological constraints. A detailed discussion of this family of OP can be found in Varian (1992).
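The following Python sketch illustrates the cost-minimization side; the input prices, the Cobb-Douglas technology, and the required output level are assumptions made here for the example.

```python
# A minimal sketch of a firm's cost minimization: choose inputs minimizing
# w1*x1 + w2*x2 subject to a Cobb-Douglas technology x1**a * x2**b >= q.
from scipy.optimize import minimize

w1, w2, a, b, q = 4.0, 9.0, 0.5, 0.5, 10.0   # prices, technology, output (assumed)

res = minimize(
    lambda x: w1 * x[0] + w2 * x[1],                      # total cost
    x0=[5.0, 5.0],
    constraints=[{'type': 'ineq',                         # technological constraint
                  'fun': lambda x: x[0]**a * x[1]**b - q}],
    bounds=[(1e-9, None)] * 2,
)
print(res.x)   # cost-minimizing input bundle
```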

References

  1. Bardi M, Capuzzo-Dolcetta I (1997) Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations. Systems & control: foundations & applications. Birkhäuser, Boston
  2. Barro R, Sala-i-Martin X (2004) Economic growth, 2nd edn. The MIT Press, Cambridge, MA
  3. Crandall MG, Ishii H, Lions P-L (1992) User's guide to viscosity solutions of second order partial differential equations. Bull Am Math Soc 27(1):1–67
  4. Fleming WH, Soner HM (2006) Controlled Markov processes and viscosity solutions, 2nd edn. Springer, New York
  5. Fleming WH, Rishel RW (1975) Deterministic and stochastic optimal control. Springer, New York
  6. Kamien MI, Schwartz NL (1991) Dynamic optimization: the calculus of variations and optimal control in economics and management, 2nd edn. Advanced textbooks in economics, vol 31. Elsevier, Amsterdam
  7. Markowitz H (1952) Portfolio selection. J Financ 7(1):77–91
  8. Mas-Colell A, Whinston M, Green J (1995) Microeconomic theory. Oxford University Press, Oxford
  9. Simon CP, Blume L (1994) Mathematics for economists. W.W. Norton & Company, New York
  10. Varian H (1992) Microeconomic analysis, 3rd edn. W.W. Norton & Company, New York
  11. Yong J, Zhou XY (1999) Stochastic controls: Hamiltonian systems and HJB equations. Springer, New York

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Department of Economics and Law, University of Macerata, Macerata, Italy