Multidimensional inequalities and generalized quantile functions


In this paper, we extend the generalized Yaari dual theory to multidimensional distributions, in the vein of Galichon and Henry’s paper (Galichon and Henry in J Econ Theory 147:1501–1516, 2012). We show how a class of generalized quantiles—which encompasses Galichon and Henry’s as well as the multivariate quantile transform [see Arjas and Lehtonen (Math Oper Res 3(3):205–223, 1978), O’Brien (Ann Prob 3(1):80–88, 1975) or Rüschendorf (Ann Probab 9(2):276–283, 1981)]—allows us to derive a general representation theorem.



  1.

    Yaari has extended his theory to multidimensional distributions [see Yaari (1986)], but his extension does not completely take into account the multidimensional aspect of the problem.

  2.

    Anonymity is equivalent to neutrality, and the concavity of I corresponds to Pigou–Dalton Majorization. See Proposition 4 and discussion in Section 6 in Galichon and Henry (2012) for more information.

  3.

    Note that this formulation of Yaari’s theorem is in line with the suggestion of Yaari (1987, page 103), i.e., it is built upon the notion of comonotonic independence. Similar formulations are developed extensively by Chateauneuf [see Chateauneuf (1991) or Chateauneuf (1994)].

  4.

    In this theorem, recall that \(f \circ P\) is defined on the set of Borel subsets of S as follows: for every Borel \(E \subset S\), \(f \circ P(E)=f(P(E)).\) The function obtained is no longer a probability (it may not be additive), but it is a capacity. Then, the Choquet integral is a way to extend the standard expectation with respect to a probability to the case where the probability is replaced by a capacity.

  5.

    Hereafter, Q(X) is also denoted by \(Q_X\) and is called a quantile. Sometimes, to emphasize the parameter U, we will call \(Q_X\) a U-quantile and will denote it \(Q^U_X\).

  6.

    This quantile depends only on the law of U.

  7.

    That is, for every Borel subset E of \(\mathbf {R}^n\) whose Lebesgue measure is 0, we have \(\mu (E)=0\).

  8.

    As pointed out by one referee, this representation formula raises some important questions: why does the mean of X need to be corrected? What does it mean to take inequality into account? Is this a reasonable procedure? Should it be recommended to policy makers? To answer these questions, recall that the formula above is derived from an axiomatization of the behavior of the policy maker. Among the axioms, inequality aversion is one of the most important, and it is related to the questions above. Fehr and Schmidt (1999) helped popularize the term “inequality aversion”: in short, it is observed in practice that some groups of people dislike inequalities. This is also related to preference for redistribution: for example, Alesina et al. (2018) argue that preference for redistribution could be correlated with mobility perception. Thus, inequality aversion of the policy maker could be seen as a way to represent inequality aversion among the population. Naturally, inequality aversion is not uniformly distributed among the population, and it may change through time, so this assumption should probably be context dependent. Last, some recent theoretical papers [e.g., Vlaicu (2018)] state that polarization may increase with income inequalities. This could be an argument in favor of a policy maker who takes inequalities into account.

  9.

    The first Axiom 1’ is not assumed in Galichon and Henry’s paper, but it can be added without loss of generality, since it is only a normalization assumption.

  10.

    Obviously, \({\tilde{X}}\) depends on U, not only on the law of U.

  11.

    Throughout this paper, cone will mean a convex cone containing 0.


  1. Alesina, A., Stantcheva, S., Teso, E.: Intergenerational mobility and preferences for redistribution. Am. Econ. Rev. 108(2), 521–54 (2018)

  2. Aliprantis, C.D., Border, K.C.: Infinite Dimensional Analysis. Springer, Berlin (1994)

  3. Arjas, E., Lehtonen, T.: Approximating many server queues by means of single server queues. Math. Oper. Res. 3(3), 205–223 (1978)

  4. Atkinson, A.: On the measurement of inequality. J. Econ. Theory 2, 244–263 (1970)

  5. Carlier, G., Dana, R.A., Galichon, A.: Pareto efficiency for the concave order and multivariate comonotonicity. J. Econ. Theory 147, 207–229 (2012)

  6. Chateauneuf, A.: On the use of capacities in modeling uncertainty aversion and risk aversion. J. Math. Econ. 20(4), 343–369 (1991)

  7. Chateauneuf, A.: Modeling attitudes towards uncertainty and risk through the use of Choquet integral. Ann. Oper. Res. 52(1), 1–20 (1994)

  8. Dhaene, J., Denuit, M., Goovaerts, M.J., Kaas, R., Vyncke, D.: The concept of comonotonicity in actuarial science and finance: theory. Insur. Math. Econ. 31(1), 3–33 (2002)

  9. Ekeland, I., Galichon, A., Henry, M.: Comonotonic measures of multivariate risks. Math. Finance 22, 109–132 (2012)

  10. Fehr, E., Schmidt, K.: A theory of fairness, competition, and cooperation. Q. J. Econ. 114, 817–868 (1999)

  11. Gajdos, T., Weymark, J.: Multidimensional generalized Gini indices. Econ. Theory 26, 471–496 (2005)

  12. Galichon, A., Henry, M.: Dual theory of choice with multivariate risks. J. Econ. Theory 147, 1501–1516 (2012)

  13. Hardy, G., Littlewood, J., Pólya, G.: Inequalities. Cambridge University Press, Cambridge (1952)

  14. Kolm, S.C.: Multidimensional egalitarianisms. Q. J. Econ. 91, 1–13 (1977)

  15. O’Brien, G.L.: The comparison method for stochastic processes. Ann. Probab. 3(1), 80–88 (1975)

  16. Rüschendorf, L.: Ordering of distributions and rearrangement of functions. Ann. Probab. 9(2), 276–283 (1981)

  17. Rüschendorf, L.: Mathematical Risk Analysis: Dependence, Risk Bounds, Optimal Allocations and Portfolios. Springer, Berlin (2004)

  18. Vlaicu, R.: Inequality, participation, and polarization. Soc. Choice Welf. 50(4), 597–624 (2018)

  19. Weymark, J.A.: Generalized Gini inequality indices. Math. Soc. Sci. 1, 409–430 (1981)

  20. Yaari, M.: Univariate and multivariate comparisons of risk aversion: a new approach. In: Heller, W., Starr, R., Starrett, D. (eds.) Essays in Honor of Kenneth Arrow, pp. 173–187. Cambridge University Press, Cambridge (1986)

  21. Yaari, M.: The dual theory of choice under risk. Econometrica 55, 95–115 (1987)


Author information



Corresponding author

Correspondence to Philippe Bich.


This paper forms part of the research project ANR “Mesure des inégalités ordinales et multidimensionnelles” ORDINEQ of the French National Agency for Research, whose financial support is gratefully acknowledged.



Proof of Proposition 2.1

Property (i) is a consequence of the definition of \(F^{-1}_X\) in terms of an optimal coupling solution. For Property (ii), since \(U(p)=p\), the quantile function \(F^{-1}_X\) is characterized by the following two properties:

$$\begin{aligned} (1) \ F^{-1}_{X} \text{ is } \text{ increasing } \end{aligned}$$


$$\begin{aligned} (2) \ F^{-1}_{X}=_d X. \end{aligned}$$

Indeed, the first condition can equivalently be written as: \(G(t):=\int _{0}^t F^{-1}_{X}(p)dp\) is a continuous and convex function, and condition (2) can be written as \(\nabla G =_d X\). These two conditions characterize the (almost surely unique) solution \(F^{-1}_X \) of the optimal coupling problem

$$\begin{aligned} \sup _{\tilde{X} \in V_2, \ \tilde{X} =_d X} \int _{[0,1]} p. \tilde{X}(p) \mathrm{d}p \end{aligned}$$

(see “Appendix 6.1.1”).

Now, let \(X'=F^{-1}_X\) and \(Y'=F^{-1}_Y\). From above, the quantile of \(X'+Y'\) is characterized by (1) \(F^{-1}_{X'+Y'}\) is increasing and (2) \(F^{-1}_{X'+Y'}=_d X'+Y'\). Thus, \(F^{-1}_{X'+Y'}=F^{-1}_{X}+F^{-1}_{Y}\), because the latter satisfies the two conditions (1) and (2). The proofs that \(F^{-1}_{\lambda X}=\lambda F^{-1}_X\) and of Property (iii) are similar.
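The proposition can be illustrated numerically. For an empirical law on a uniform n-point space, the quantile function is simply the increasing rearrangement of the sample, and conditions (1)–(2) then say that comonotonic sums add at the quantile level. A minimal sketch (the sample values below are illustrative, not from the paper):

```python
def empirical_quantile(sample):
    """Quantile function of the empirical law of `sample` on the uniform
    n-point space: sorting gives the increasing rearrangement F^{-1}_X."""
    return sorted(sample)

x = [3.0, 1.0, 4.0, 1.5]
y = [2.0, 8.0, 0.5, 5.0]
qx, qy = empirical_quantile(x), empirical_quantile(y)

# Comonotonic coupling X' = F^{-1}_X(U), Y' = F^{-1}_Y(U) with the same U:
# the sum of quantiles is increasing, hence it is its own quantile, so
# F^{-1}_{X'+Y'} = F^{-1}_X + F^{-1}_Y.
comonotonic_sum = [a + b for a, b in zip(qx, qy)]
assert comonotonic_sum == empirical_quantile(comonotonic_sum)

# For an arbitrary coupling of the same marginals the identity fails in general.
print(empirical_quantile([a + b for a, b in zip(x, y)]), comonotonic_sum)
```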

Reminders of optimal transportation and optimal coupling.

Consider hereafter a random vector \(U \in V_2\). Sometimes, we shall assume that the law of U is absolutely continuous with respect to Lebesgue measure: this means that for every Borel subset \(A \subset \mathbf {R}^n\) of Lebesgue measure 0, \(P^U(A)=0\).

An important object hereafter is \(\rho _{U}\), the maximal correlation functional with respect to U: this is the real function defined on \(V_2\) by

$$\begin{aligned} (\mathcal{P}) \ \forall X \in V_2, \rho _{U}(X)=\sup _{{\tilde{X}}=_dX} E({\tilde{X}}.U). \end{aligned}$$

From optimal coupling theory, given \(X \in V_2\), we have the following proposition:

Proposition 6.1

Assume that the law of U is absolutely continuous with respect to Lebesgue measure. Then:

  (i)

    Existence and uniqueness: there exists a solution \({\tilde{X}} \in V_2\) of \((\mathcal{P})\), which is unique (almost surely). The pair \(({\tilde{X}},U)\) is called an optimal coupling (see footnote 10).

  (ii)

    Form of the solution: a solution \({\tilde{X}} \in V_2\) can be written as \({\tilde{X}}=\nabla f(U)\) for \(f: \mathbf {R}^n \rightarrow \mathbf {R}\) convex and lower semicontinuous. Moreover, \(\nabla f\) in the previous decomposition is unique, which means that if \(\nabla f(U)=\nabla g(U)\) is a solution of \((\mathcal{P})\), where \(f,g: \mathbf {R}^n \rightarrow \mathbf {R}\) are convex and l.s.c., then \(\nabla f=\nabla g\).

  (iii)

    Characterization of the solution: if \(f: \mathbf {R}^n \rightarrow \mathbf {R}\) is a convex and l.s.c. function such that \(\nabla f(U)=_d X\), then \(\nabla f(U)\) is a solution of \((\mathcal{P})\).

  (iv)


    $$\begin{aligned} \sup _{{\tilde{X}} =_d X, {\tilde{U}} =_d U } E ({\tilde{X}}.{\tilde{U}})=\sup _{{\tilde{X}} =_d X} E (\tilde{X}.U)=\sup _{ {\tilde{U}} =_d U } E (X.{\tilde{U}}). \end{aligned}$$
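On a uniform finite space, the program \((\mathcal{P})\) ranges over the rearrangements of X, so it can be solved by brute force; by the Hardy–Littlewood rearrangement inequality, the comonotonic (sorted-against-sorted) pairing attains the supremum, and the symmetry in Point (iv) can be checked directly. A small illustrative sketch (the vectors are made up):

```python
from itertools import permutations

def max_correlation(x, u):
    """rho_U(X) for empirical one-dimensional laws on a uniform n-point
    space: the sup of E[X~ . U] over all rearrangements X~ of x."""
    n = len(x)
    return max(sum(xi * ui for xi, ui in zip(p, u)) / n
               for p in permutations(x))

x = [4.0, 1.0, 3.0]
u = [0.2, 0.9, 0.5]

# The comonotonic pairing (largest against largest) attains the supremum.
sorted_pairing = sum(a * b for a, b in zip(sorted(x), sorted(u))) / len(x)
assert abs(max_correlation(x, u) - sorted_pairing) < 1e-12

# Point (iv): rearranging U instead of X gives the same value.
assert abs(max_correlation(x, u) - max_correlation(u, x)) < 1e-12
```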

Remark 6.1

In particular, when the law of U is absolutely continuous with respect to Lebesgue measure, this proves that for every \(X \in V_2\), there exists a convex and l.s.c. function \(f: \mathbf {R}^n \rightarrow \mathbf {R}\) such that \(\nabla f\) exists almost surely, and \(\nabla f(U)\) has the same law as X.

Remark 6.2

The equality \(\rho _{U}(X)=\sup _{ {\tilde{U}} =_d U } E (X.{\tilde{U}})\) proves that \(\rho _{U}\) depends only on the law of U. Thus, one could define equivalently, for every probability measure \(\mu \) with a finite second moment:

$$\begin{aligned} \rho _{\mu }(X)=\sup _{{\tilde{X}}=_dX} E({\tilde{X}}.U) \end{aligned}$$

where U is any element in \(V_2\) whose law is \(\mu \). With the previous notation, we may write \(\rho _{P^U}=\rho _{U}.\)

Proof that Galichon and Henry’s \(\mu \)-quantile defines a quantile operator in Sect. 3.5

We have to prove that Points 1, 2 and 3 in the definition of the U-quantile operator are satisfied (with \(V_2=V'_2\)), where \(U=(U_1,\ldots ,U_n)\). As a matter of fact, we will show that the three additional properties 4, 5 and 6 are satisfied as well (the last one in the particular case where \(S=[0,1]^n\), \(U(p_1,\ldots ,p_n)=(p_1,\ldots ,p_n)\) and P is the Lebesgue measure).

Point (1) is true by definition of \(\mu \)-quantile.

For Point (2), we use the following lemma:

Lemma 6.3

  (i)

    The \(\mu \)-quantile of \(Q_X(U)+Q_Y(U)\) is \(Q_X+Q_Y\).

  (ii)

    For every \(\lambda \ge 0\), the \(\mu \)-quantile of \(\lambda X\) is \(\lambda . Q_X\).


(i) Use the characterization of the \(\mu \)-quantile: for any \(U \in V_2\) such that \(P^U=\mu \), we have \(Q_X=\nabla f\) and \(Q_Y=\nabla g\) for some convex and l.s.c. functions \(f,g: \mathbf {R}^n \rightarrow \mathbf {R}\), and \((\nabla f(U),U)\) and \((\nabla g(U),U)\) are optimal couplings (see Proposition 6.1, (ii)). Thus, from Proposition 6.1, (iii), \((Q_X+Q_Y)(U)=\nabla (f+g)(U)\) is a solution of

$$\begin{aligned} \sup _{{\tilde{X}}=_d Q_X(U)+Q_Y(U)} E({\tilde{X}}.U) \end{aligned}$$

because \(f+g\) is convex and l.s.c., and \(\nabla (f+g)(U)=Q_X(U)+Q_Y(U)\). Thus, by definition, \(\nabla (f+g)=Q_X+Q_Y\) is the \(\mu \)-quantile of \(Q_X(U)+Q_Y(U)\).

(ii) It is straightforward. \(\square \)

For Point (3), remark that from optimal coupling theory (see reminders above)

$$\begin{aligned}\sup _{{\tilde{X}}=_dX} E({\tilde{X}}.U)=\sup _{\tilde{U}=_dU} E({\tilde{U}}.X).\end{aligned}$$

In particular, the \(\mu \)-quantile of \(X+\lambda (1,\ldots ,1)\) satisfies

$$\begin{aligned} E(Q_{X+\lambda (1,\ldots ,1)}(U).U)= & {} \sup _{{\tilde{U}}=_dU} E({\tilde{U}}.(X+\lambda (1,\ldots ,1)))\\= & {} \sup _{{\tilde{U}}=_dU} E({\tilde{U}}.X)+\lambda \sum _{i=1}^nE(U_i).\end{aligned}$$

This is also equal to

$$\begin{aligned} \sup _{{\tilde{X}}=_dX} E(U. {\tilde{X}})+\lambda \sum _{i=1}^nE(U_i)= & {} E(Q_X(U).U)+\lambda \sum _{i=1}^nE(U_i)\\= & {} E((Q_X(U)+\lambda (1,\ldots ,1)).U). \end{aligned}$$

Since the law of \(Q_X(U)+\lambda (1,\ldots ,1)\) is equal to the law of \(X+\lambda (1,\ldots ,1)\), we get that \(Q_{X+\lambda (1,\ldots ,1)}=Q_X+\lambda (1,\ldots ,1)\).
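The translation property just derived is easy to check in the one-dimensional empirical setting, where the quantile is the increasing rearrangement; a minimal sketch with made-up data:

```python
def empirical_quantile(sample):
    # Increasing rearrangement = quantile function of the empirical law.
    return sorted(sample)

x = [3.0, 1.0, 4.0, 1.5]
lam = 2.5

# Q_{X + lam(1,...,1)} = Q_X + lam(1,...,1): shifting commutes with sorting.
shifted = empirical_quantile([v + lam for v in x])
expected = [q + lam for q in empirical_quantile(x)]
assert shifted == expected
```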

Point (4) comes from the fact that \(f: p \rightarrow \frac{\sum _{k=1}^n \lambda _k p_k^2}{2}\) is continuous and convex; thus, \(\nabla f(p) =(\lambda _1 p_1,\ldots ,\lambda _n p_n)\) is the quantile function of \(\nabla f(U)\) (use Proposition 6.1, (iii)).

Point (5) is similar to Point (4) above: the function \(f: p \rightarrow \frac{1}{2}{}^{t}pAp\) is continuous and convex; thus, \(\nabla f(p) =Ap\) is the quantile function of \(\nabla f(U)\).

Finally, we prove Point (6) when \(S=[0,1]^n\), \(U(p_1,\ldots ,p_n)=(p_1,\ldots ,p_n)\) and P is the Lebesgue measure. We want to show that if \(X=(X_1,\ldots ,X_n)\) where \(X_1,\ldots ,X_n\) are independent, then \(Q_{X}(p)=(F^{-1}_{X_1}(p_1),\ldots ,F^{-1}_{X_i}(p_i),\ldots ,F^{-1}_{X_n}(p_n))\) for every \(p=(p_1,\ldots ,p_i,\ldots ,p_n) \in [0,1]^{n}\). From the characterization of the \(\mu \)-quantile via optimal coupling theory, it is enough to prove that there exists a convex l.s.c. function \(f:p\in [0,1]^{n} \longrightarrow f(p)\in \mathbf {R}\) such that

$$\begin{aligned}(1) \ \nabla f = (F^{-1}_{X_1}(p_1),\ldots ,F^{-1}_{X_i}(p_i),\ldots ,F^{-1}_{X_n}(p_n))\end{aligned}$$


$$\begin{aligned}(2) \ (F^{-1}_{X_1}(p_1),\ldots ,F^{-1}_{X_i}(p_i),\ldots ,F^{-1}_{X_n}(p_n))=_d X.\end{aligned}$$

Define \(f(p_1,\ldots ,p_n)=\sum _{i=1}^n {\displaystyle \int _0 ^{p_i} F^{-1}_{X_i}(t)dt}\). Clearly, f is convex, continuous, and (1) above is satisfied. Let us check that (2) above is also satisfied.

We have \(P(F^{-1}_{X_1}(p_1) \le x_1,\ldots ,F^{-1}_{X_i}(p_i) \le x_i,\ldots ,F^{-1}_{X_n}(p_n) \le x_n )=\prod _{i=1}^{n}\nu (\{p_i \in [0,1]: F^{-1}_{X_i}(p_i) \le x_i\})\) where \(\nu \) is the Lebesgue measure on [0, 1]. Since \(\nu (\{p_i \in [0,1]: F^{-1}_{X_i}(p_i) \le x_i\})=P(X_i \le x_i)\), this product equals \(\prod _{i=1}^{n} P(X_i \le x_i)=P(X_1 \le x_1,\ldots , X_n \le x_n)\) because \(X_1,\ldots ,X_n\) are independent, and finally (2) above is true.
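Point (6) can also be sanity-checked by simulation: drawing U uniformly on \([0,1]^2\) and applying each marginal quantile coordinatewise should reproduce the law of X. A sketch assuming exponential marginals, chosen purely because their inverse CDFs are available in closed form:

```python
import math
import random

random.seed(0)

def exp_inv_cdf(p, rate):
    """Inverse CDF of Exp(rate): F^{-1}(p) = -ln(1-p)/rate."""
    return -math.log(1.0 - p) / rate

rates = (1.0, 2.0)
n = 100_000

# Q_X(U) with U uniform on [0,1]^2: the componentwise quantile transform.
samples = [(exp_inv_cdf(random.random(), rates[0]),
            exp_inv_cdf(random.random(), rates[1])) for _ in range(n)]

# The empirical means should approach E[X_i] = 1/rate_i.
means = [sum(s[i] for s in samples) / n for i in range(2)]
assert abs(means[0] - 1.0) < 0.02
assert abs(means[1] - 0.5) < 0.02
```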

Proof of Proposition 3.5

First, from its definition, \(Q_X\) depends only on the distribution of X. Moreover, the proof of \(Q_X(U)=_d X\) can be found in Rüschendorf (2004), page 14. Thus, Point (1) in Definition 3.1 is satisfied.

Second, we will prove that the U-quantile of \(Q_X(U)+Q_Y(U)\) is \(Q_X+Q_Y\) (where \(X=(X_1,X_2)\) and \(Y=(Y_1,Y_2)\) are both 2-dimensional random vectors), which will prove Point (2) in Definition 3.1 (the n-dimensional case is similar). Define \(U=(U^1,U^2)\). First, by definition of the U-quantile, the first component of the U-quantile of \(Q_X(U)+Q_Y(U)\) is the standard one-dimensional quantile of \(F_{X_1}^{-1}(U^1)+F_{Y_1}^{-1}(U^1)\), which is \(F_{X_1}^{-1}+F_{Y_1}^{-1}\) (see Proposition 2.1), i.e., the first component of \(Q_X+Q_Y\). Now, we check the same result for the second component. This is essentially the same proof as for the first component, but with a conditional argument. Let \(Z=Q_X(U)+Q_Y(U)\) and define \(Q_Z=(Q^1_Z,Q^2_Z)\). By definition,

$$\begin{aligned} Q^2_Z(p_1,p_2)= & {} \inf \{x \in \mathbf {R}: P(Q_{X}^2(U)+Q_{Y}^2(U) \le x \mid F_{Q^1_{X}+Q^1_{Y}}^{-1}(U^1)\\= & {} F_{Q^1_{X}+Q^1_{Y}}^{-1}(p_1)) \ge p_2\}. \end{aligned}$$

But as explained before, the one-dimensional quantile of \(Q^1_{X}+Q^1_{Y}=F_{X_1}^{-1}(U^1)+F_{Y_1}^{-1}(U^1)\) is \(F_{X_1}^{-1}+F_{Y_1}^{-1}\); thus, we get

$$\begin{aligned} Q^2_Z(p_1,p_2)=\inf \{x \in \mathbf {R}: P(Q_{X}^2(U)+Q_{Y}^2(U) \le x \mid E) \ge p_2\}, \end{aligned}$$

where \(E=\{s \in S: F_{X_1}^{-1}(U^1(s))+F_{Y_1}^{-1}(U^1(s))=F_{X_1}^{-1}(p_1)+F_{Y_1}^{-1}(p_1)\}\).

This can be written as

$$\begin{aligned} Q^2_Z(p_1,p_2)=\inf \{x \in \mathbf {R}: P_E(Q_{X}^2(U)+Q_{Y}^2(U) \le x) \ge p_2\}, \end{aligned}$$

where \(P_E\) denotes P conditionally to the event E.

Thus, \(p_1\) being fixed, \(Q^2_Z(p_1,.)\) is the one-dimensional quantile of \(Z=Q_{X}^2(U)+Q_{Y}^2(U)\) in the new probability space \((S,\mathcal{F},P_E)\). The following lemma makes this statement precise:

Lemma 6.4

$$\begin{aligned} Q^2_Z(p_1,p_2)=\inf \{x \in \mathbf {R}: P_E(Q_{X}^2(p_1,U^2)+Q_{Y}^2(p_1,U^2) \le x) \ge p_2\}. \end{aligned}$$


To prove this lemma, we have to prove that conditioning allows us to replace \(U^1(.)\) in the probability above by \(p_1\). First, remark that \(E=E_1 \cap E_2\) where \(E_1=\{s \in S: F_{X_1}^{-1}(U^1(s))=F_{X_1}^{-1}(p_1)\} \) and \(E_2=\{s \in S: F_{Y_1}^{-1}(U^1(s))=F_{Y_1}^{-1}(p_1)\},\) which is a consequence of the comonotonicity of \(F_{X_1}^{-1}(U^1)\) and \(F_{Y_1}^{-1}(U^1)\). Indeed, clearly, \(E_1 \cap E_2 \subset E\). To prove the other inclusion, let \({\bar{s}} \in S\) be such that \(p_1=U^1({\bar{s}})\) (in particular \({\bar{s}} \in E\)), and take \(s' \notin E_1 \cap E_2\) (if no such \(s'\) exists, then \(E_1 \cap E_2=S\) and the inclusion is trivial). In a first case (the other cases being treated similarly), we have \(F_{X_1}^{-1}(U^1)(s')> F_{X_1}^{-1}(U^1({\bar{s}}))\). Then, by comonotonicity, \(F_{Y_1}^{-1}(U^1)(s') \ge F_{Y_1}^{-1}(U^1(\bar{s}))\); thus, by summing these inequalities, we get \(s' \notin E\); the other cases are similar.

Thus, \(P_E(Q_{X}^2(U ^1,U^2)+Q_{Y}^2(U^1,U^2) \le x)\) is equal to the probability of

$$\begin{aligned}&\Bigl \{ s \in S: \inf \bigl \{x' \in \mathbf {R}: P(s' \in S: X^2(s') \le x') \ge U^2(s) \mid F_{X_1}^{-1}(U^1(s'))=F_{X_1}^{-1}(U^1(s))) \bigr \} \\&\quad + \inf \bigl \{x' \in \mathbf {R}: P(s' \in S: Y^2(s') \le x') \ge U^2(s) \mid F_{Y_1}^{-1}(U^1(s'))=F_{Y_1}^{-1}(U^1(s)) \bigr \} \le x \Bigr \} \end{aligned}$$

conditionally to

$$\begin{aligned}E=E_1 \cap E_2=\{s \in S: F_{X_1}^{-1}(U^1(s))=F_{X_1}^{-1}(p_1) \text{ and } F_{Y_1}^{-1}(U^1(s))=F_{Y_1}^{-1}(p_1)\},\end{aligned}$$

which is thus the probability of

$$\begin{aligned}&\Bigl \{ s \in S: \inf \bigl \{x' \in \mathbf {R}: P(s' \in S: X^2(s') \le x') \ge U^2(s) \mid F_{X_1}^{-1}(U^1(s'))=F_{X_1}^{-1}(p_1)) \bigr \} \\&\quad + \inf \bigl \{x' \in \mathbf {R}: P(s' \in S: Y^2(s') \le x') \ge U^2(s) \mid F_{Y_1}^{-1}(U^1(s'))=F_{Y_1}^{-1}(p_1)) \bigr \} \le x \Bigr \} \end{aligned}$$

conditionally to E, which is exactly the conclusion of the lemma. \(\square \)

The previous lemma proves that, \(p_1\) being fixed, \(Q^2_Z(p_1,.)\) is the one-dimensional quantile of \(Z':=Q_{X}^2(p_1,U^2)+Q_{Y}^2(p_1,U^2)\) in the new probability space \((S,\mathcal{F},P_E)\). Remark that \(U^2\) is independent of E; thus, it is still uniformly distributed in [0, 1] as a random variable on \((S,\mathcal{F},P_{E})\). Consequently, \(Q^2_Z(p_1,.)\) is characterized by the two following properties:

(1) \(Q^2_Z(p_1,U^2)=_dZ'\).

(2) It is non-decreasing with respect to \(p_2\).

Thus, to prove \(Q^2_Z(p_1,p_2)=Q_{X}^2(p_1,p_2)+Q_{Y}^2(p_1,p_2),\) we only have to prove that:

(1) \(Q_{X}^2(p_1,U^2(p_2))+Q_{Y}^2(p_1,U^2(p_2))=_dZ'\) (which is clear by definition), and that:

(2) \(p_2 \rightarrow Q^2_X(p_1,p_2)+Q^2_Y(p_1,p_2)\) is non-decreasing.

Let us prove this last property (2). The same proof as the one of Lemma 6.4 gives that \(Q^2_X(p_1,.)\) (resp. \(Q^2_Y(p_1,.)\)) is the one-dimensional quantile of X (resp. of Y) in \((S,\mathcal{F},P_{E_1})\) (resp. in \((S,\mathcal{F},P_{E_2})\)). In particular, \(Q^2_X(p_1,.)\) and \(Q_{Y}^2(p_1,.) \) are non-decreasing with respect to \(p_2\); thus, \(Q^2_X(p_1,.)+Q^2_Y(p_1,.)\) is non-decreasing with respect to \(p_2\), which ends the proof.

The last point (the U-quantile of \(\lambda X\) is \(\lambda Q_X\)) is straightforward.

Proof of Proposition 3.6

Consider the case \(n=2\). Let \(X_1\) be a random variable uniformly distributed in [0, 1]. By contradiction, assume that there exists some probability measure \(\mu \) such that for every \(X \in V_2\), \(Q_X=Q^{\mu }_X\). In particular, since \(Q_{(X_1,-X_1)}(p_1,p_2)=(p_1,-p_1)\), we should have \(Q^{\mu }_{(X_1,-X_1)} =(p_1,-p_1)\). But by definition of \(Q^{\mu }_{(X_1,-X_1)}\), there is a convex function \(f:\mathbf {R}^2 \rightarrow \mathbf {R}\) such that \(Q^{\mu }_{(X_1,-X_1)}=\nabla f\). By monotonicity of the gradient of a convex function, we get, for every \(p=(p_1,p_2) \in \mathbf {R}^2\) and every \(p'=(p'_1,p'_2) \in \mathbf {R}^2\),

$$\begin{aligned}(\nabla f(p)-\nabla f (p')).(p-p') \ge 0, \end{aligned}$$

i.e., by developing:

$$\begin{aligned} (p_1-p_1')(p_1-p_1'-p_2+p_2') \ge 0, \end{aligned}$$

which is false, for example, when \(p=(\frac{1}{2},1)\) and \(p'=0\).
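The contradiction rests on the monotonicity of gradients of convex functions; the quoted counterexample can be verified in a few lines (a trivial numerical check, not part of the paper):

```python
def grad_candidate(p):
    """The putative mu-quantile of (X1, -X1): p -> (p1, -p1)."""
    return (p[0], -p[0])

def monotone_gap(p, q):
    """(grad f(p) - grad f(q)) . (p - q); this is nonnegative for the
    gradient of any convex function."""
    gp, gq = grad_candidate(p), grad_candidate(q)
    return sum((a - b) * (c - d) for a, b, c, d in zip(gp, gq, p, q))

gap = monotone_gap((0.5, 1.0), (0.0, 0.0))
assert gap < 0  # so p -> (p1, -p1) is not the gradient of a convex map
```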

Proof of Proposition 3.7

Proof of 1. \(X+a(1,\ldots ,1)\) and \(Q_X(U)+a(1,\ldots ,1)\) have the same law from Point 3 in Definition 3.1. Thus, \(I(X+a(1,\ldots ,1))=I(Q_X(U)+a(1,\ldots ,1))=I(Q_X(U))+I(a(1,\ldots ,1))\) (from additivity on quantiles, because \(a(1,\ldots ,1)\) is the quantile of itself) which is also equal to \(I(X)+a\).

Proof of 2. Concavity clearly implies inequality aversion. Conversely, assume inequality aversion is true, and let us prove concavity. Let \((X,Y) \in V_2 \times V_2\) and \(\lambda \in [0,1]\). Consider \(X'=X+I(Y)(1,\ldots ,1) \in V_2'\) and \(Y'=Y+I(X)(1,\ldots ,1)\). From Property 1 above, we get \(I(X')=I(Y')=I(X)+I(Y)\). Now, from inequality aversion at \(X'\) and \(Y'\), we get \(I(\lambda X'+(1-\lambda )Y') \ge I(X')=I(X)+I(Y)\), or, equivalently, \(I(\lambda X +(1-\lambda )Y+\lambda I(Y)(1,\ldots ,1)+(1-\lambda )I(X)(1,\ldots ,1)) \ge I(X)+I(Y)\). From Property 1 above, this gives \(I(\lambda X +(1-\lambda )Y)+\lambda I(Y)+(1-\lambda )I(X) \ge I(X)+I(Y)\), that is, \(I(\lambda X +(1-\lambda )Y) \ge \lambda I(X)+(1-\lambda )I(Y)\), which is the concavity inequality.

Proof of Theorem 3.4

Step one. We first prove that if Q is a quantile operator, and if I satisfies Axioms 1–6, then there exists \(\phi : [0,1]^n \rightarrow \mathbf {R}^n\), with nonnegative components, such that

$$\begin{aligned}&(i) \ \displaystyle {E((1,\ldots ,1).\phi (U))=1},\\&(ii) \ \forall X \in V'_2, I(X)=\int Q_X(U). \phi (U) dP =\min _{\tilde{X}=_dX} \int \tilde{X}. \phi (U) dP. \end{aligned}$$

Define W as the set of n-dimensional random vectors from \(([0,1]^n,\mathcal{B}_{[0,1]^n},P^U)\) to \((\mathbf {R}^n,\mathcal{B}_{\mathbf {R}^n})\), where \(P^U\) is the probability measure of U on \([0,1]^n\), and let \(W_2\) be the set of square-integrable random vectors of W. Remark that in the particular case where \(U(p)=p\), \(S=[0,1]^n\) and P is the Lebesgue measure, we have \(W_2=V_2\).


Denote by

$$\begin{aligned}C=\{{Q}_X: X \in V'_2\}\end{aligned}$$

the subset of \(W_2\) whose elements are all possible quantiles of random vectors in \(V_2'\). From the definition of quantiles, C is a cone (see footnote 11). Indeed, if \(\lambda \ge 0\), \(X \in V'_2\) and \(Y \in V'_2\), then \(\lambda {Q}_{X}\) and \({Q}_X+{Q}_Y\) are the quantiles of \(\lambda X \in V_2'\) and of \({Q}_X(U)+{Q}_Y(U)\) (from Point (2) in the definition of a quantile operator). Last, \(0 \in C\) since 0 is the quantile of itself (because if \(X=0\), from Point (1) in the definition of a quantile operator, \(Q_X(U)=_dX=_d0\), thus \(Q_X=0\)).

Let \(F \subset W_2\) be the vector space spanned by C. Since C is a cone, it can be written as

$$\begin{aligned}F=C-C=\{c-c': (c,c') \in C \times C\}.\end{aligned}$$

Now, define \({\tilde{I}}: F \rightarrow \mathbf {R}\) by

$$\begin{aligned} \forall (X,Y) \in V'_2 \times V'_2, {\tilde{I}}({Q}_{X}-Q_Y)= I({Q}_{X}(U))-I({Q}_{Y}(U)). \end{aligned}$$

It is well defined: indeed, if \(Q_X-Q_Y=Q_{X'}-Q_{Y'}\) for some \(X,X',Y\) and \(Y'\) in \(V_2'\), then \(Q_X(U)-Q_Y(U)=Q_{X'}(U)-Q_{Y'}(U)\), i.e., \(Q_X(U)+Q_{Y'}(U)=Q_{Y}(U)+Q_{X'}(U)\). Consequently, from additivity of I on the set of quantiles, \(I(Q_X(U)+Q_{Y'}(U))=I(Q_X(U))+I(Q_{Y'}(U))=I(Q_{Y}(U)+Q_{X'}(U))=I(Q_{Y}(U))+I(Q_{X'}(U))\), that is, \(I(Q_X(U))-I(Q_Y(U))=I(Q_{X'}(U))-I(Q_{Y'}(U))\); thus, \({\tilde{I}}(Q_X-Q_Y)={\tilde{I}}(Q_{X'}-Q_{Y'})\).

To prove linearity of \(\tilde{I}\), first remark that for every \(\lambda \ge 0\), we have, from positive homogeneity of I and Point (2) in the definition of quantile operator:

$$\begin{aligned} {\tilde{I}}(\lambda ({Q}_X-{Q}_Y))= & {} {\tilde{I}}({Q}_{\lambda X}-{Q}_{\lambda Y})=I({Q}_{\lambda X}(U))-I({Q}_{\lambda Y}(U))\\= & {} \lambda I({Q}_{X}(U))-\lambda I({Q}_{Y}(U))=\lambda \tilde{I}({Q}_X-{Q}_Y). \end{aligned}$$

But, by definition, \({\tilde{I}}({Q}_X-{Q}_Y)=-\tilde{I}({Q}_Y-{Q}_X)\); thus, the last equality extends to every \(\lambda \in \mathbf {R}\). Moreover, \({\tilde{I}}({Q}_X+{Q}_Y)=\tilde{I}(Q_{Q_X(U)+Q_Y(U)})=I({Q}_{Q_X(U)+Q_Y(U)}(U))=I(Q_X(U)+Q_Y(U))=I(Q_X(U))+I(Q_Y(U))=\tilde{I}({Q}_{X})+{\tilde{I}}({Q}_Y)\), from additivity of I on quantiles and from Point (2) of the definition of quantile operators. This proves linearity of \({\tilde{I}}\).

Let \(p:W_2 \rightarrow \mathbf {R}\) be defined by

$$\begin{aligned} \forall X \in W_2, \quad p(X)=-I(-X(U)). \end{aligned}$$

The following properties hold:

  • First, \(W_2\) and \(F \subset W_2\) are Riesz spaces (by definition, partially ordered vector spaces which are also lattices) for the natural order defined by \(X \le Y \Leftrightarrow X(p) \le Y(p)\) for \(P^U\)-almost every \(p \in [0,1]^n\).

  • Second, the function p satisfies monotonicity (because I satisfies monotonicity), i.e., for every \((X,Y) \in W_2 \times W_2, \) \(X \le Y \Rightarrow p(X) \le p(Y).\)

  • Third, p is a sublinear function, which means that the following properties (1) and (2) are true:

    (1) For every X and Y in \(W_2\), \(p(X+Y) \le p(X)+p(Y)\).

    (2) p is positively homogeneous.

    Property (1) is true from Proposition 3.7 and positive homogeneity of I. Property (2) is true because I is positively homogeneous.

  • Fourth, \({\tilde{I}}\) is positive on F, that is for every \((X,Y) \in V'_2 \times V'_2\), \({Q}_X-{Q}_Y \ge 0\) implies \(\tilde{I}({Q}_X-{Q}_Y) \ge 0\), from monotonicity of I and by definition of \({\tilde{I}}\).

  • Last, we have \({\tilde{I}}({Q}_X-{Q}_Y) \le p({Q}_X-{Q}_Y)\) for every \((X,Y) \in V'_2 \times V'_2\). Indeed, this is equivalent to \(I(Q_Y(U)) \ge I(Q_X(U))+I(Q_Y(U)-Q_X(U))\) which is true from Proposition 3.7.

By the Hahn–Banach extension theorem (Th. 8.31 in Aliprantis and Border (1994)), and since, from above, \({\tilde{I}}\) is a positive linear functional on F majorized by the monotone sublinear function p on F, \({\tilde{I}}\) extends to a positive linear functional \({\bar{I}}\) on \(W_2\) satisfying

$$\begin{aligned}{\bar{I}}(X) \le p(X) \ \forall X \in W_2.\end{aligned}$$

Recall in the following that a norm \(\Vert . \Vert \) on \(W_2\) is a lattice norm if \(\mid X \mid \le \mid Y \mid \) a.e. implies \(\Vert X \Vert \le \Vert Y \Vert \). In particular, this is the case for \(\Vert . \Vert _2\). A Banach lattice is, by definition, a Riesz space that is complete for a lattice norm. Again, this is true for \((W_2,\Vert . \Vert _2)\) (which is even a Hilbert space).

Since every positive operator on a Banach lattice is continuous [see Theorem 9.6, p. 350 in Aliprantis and Border (1994)], \({\bar{I}}\) is continuous on \(W_2\), which is a Hilbert space for the scalar product (associated with the norm \(\Vert . \Vert _2\)) \(\langle X,Y\rangle =\int X.Y dP\). Thus, from the Riesz representation theorem, there is \(\phi \in {W_2}\) with nonnegative components on the support of \(P^U\), i.e., on \([0,1]^n\) (because \({\bar{I}}\) is positive on \(W_2\)) such that

$$\begin{aligned} \forall X \in W_2,\ {\bar{I}}(X)=\int _{[0,1]^n} X(p) \phi (p) \mathrm{d}P^U. \end{aligned}$$

In particular, for every \(X \in V'_2\), we have

$$\begin{aligned} I(X)= & {} I({Q}_X(U))={\tilde{I}}({Q}_X)={\bar{I}}({Q}_X)=\int _{[0,1]^n} {Q}_X(p).\phi (p) \mathrm{d}P^U\\= & {} \int _{S} {Q}_X(U).\phi (U) \mathrm{d}P, \end{aligned}$$

the first equality being true because I is law invariant, the second by definition of \({\tilde{I}}\), the third because \(Q_X \in F\), and the last one is simply a change of variable formula.

Note that

$$\begin{aligned} \int _S (1,\ldots ,1). \phi (U) \mathrm{d}P= & {} \int _{[0,1]^n} (1,\ldots ,1). \phi dP^U\\= & {} \int _{[0,1]^n} Q_{(1,\ldots ,1)}. \phi \mathrm{d}P^U=I(1,\ldots ,1)=1 \end{aligned}$$

because the quantile of \((1,\ldots ,1)\) is \((1,\ldots ,1)\) and because of the normalization assumption.

Define \(W_2'=\{X \in W_2: X(U) \in V_2'\}\). For every \((X,Y) \in W'_2 \times W_2\) such that \(X(U)=_dY(U)\),

$$\begin{aligned} {\bar{I}}(-Y)= & {} \int _{[0,1]^n} -\, Y.\phi \mathrm{d}P^U\\= & {} \int _{S} -\,Y(U).\phi (U) \mathrm{d}P \le p(-Y) =-I({Y}(U))=-I({X}(U)) \end{aligned}$$

because X(U) and Y(U) have the same law, and I is law invariant. Moreover,

$$\begin{aligned} -I({X}(U))=-I({Q}_{{X}(U)}(U))=-\tilde{I}({Q}_{{X(U)}})=-\bar{I}(Q_{X(U)})=-\int Q_{{X}(U)}(U) .\phi (U) dP. \end{aligned}$$

From the two equations above, we get

$$\begin{aligned} \forall (X,Y) \in W'_2 \times W_2 \,\, \text{ such } \text{ that } \,\, Y(U)=_d X(U), \int Y(U).\phi (U) dP \ge \int Q_{{X}(U)}(U) .\phi (U) dP. \end{aligned}$$

But given \((X',Y') \in V'_2 \times V_2\) with \(Y'=_dX'\), there exist \(X \in W_2'\) and \(Y \in W_2\) such that \(X(U)=_dX'\) and \(Y(U)=_dY'\) (from optimal coupling theory, because the law of U is absolutely continuous with respect to Lebesgue measure). Consequently, we get

$$\begin{aligned} \forall (X',Y') \in V'_2 \times V_2 \text{ such } \text{ that } Y'=_d X', \int Y'.\phi (U) dP \ge \int Q_{X'}(U) .\phi (U) dP \end{aligned}$$

which ends the proof of (ii).

Step two. Converse implication: assume there exists \(\phi : [0,1]^n \rightarrow \mathbf {R}^n\), with nonnegative components on \([0,1]^n\), such that

  (i)

    \(\displaystyle {E((1,\ldots ,1).\phi (U))=1}\),

  (ii)

    for every \(X \in V'_2,\) \( I(X)=\int Q_X(U). \phi (U) dP =\min _{\tilde{X} \in V_2, \tilde{X}=_dX} \int \tilde{X}. \phi (U) dP\),

and let us prove that I satisfies Axioms 1–6.

Axiom 1 is exactly (i) above. Monotonicity, concavity, positive homogeneity and neutrality are consequences of \( I(X)=\min _{\tilde{X} \in V_2, \ \tilde{X}=_dX} \int \tilde{X}. \phi (U) dP\), i.e., of the fact that I is the min-correlation risk measure with respect to \(\phi (U)\). Indeed, recall that for every \(Y \in V_2\) with nonnegative components, the max-correlation risk measure \(\Psi _{Y}\) is defined by \(\Psi _Y(X)=\max _{\tilde{X}=_d X}\int {\tilde{X}}.Y dP \). This is a coherent (i.e., monotone, positively homogeneous and subadditive) and law-invariant risk measure [see Rüschendorf (2004), p. 192]. In particular, it is convex. Thus, in the above theorem, \(I(X)=-\Psi _{\phi (U)}(-X)\) satisfies the desired properties. Moreover, additivity on quantiles is straightforward.
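The min-correlation structure in (ii) can be made concrete in a discrete one-dimensional setting: the minimum pairs large outcomes with small weights (anti-comonotonically), which is precisely how inequality aversion lowers the evaluation below the mean. A brute-force sketch with made-up data:

```python
from itertools import permutations

def min_correlation(x, w):
    """I(X) = min over rearrangements X~ of x of E[X~ . w], the
    min-correlation measure w.r.t. a fixed nonnegative weight vector w."""
    n = len(x)
    return min(sum(xi * wi for xi, wi in zip(p, w)) / n
               for p in permutations(x))

x = [4.0, 1.0, 3.0]
w = [0.9, 1.5, 0.6]            # nonnegative, mean 1: normalization (i)
assert abs(sum(w) / len(w) - 1.0) < 1e-12

# The anti-comonotonic pairing attains the minimum, and I(X) <= E[X].
i_x = min_correlation(x, w)
anti = sum(a * b for a, b in zip(sorted(x), sorted(w, reverse=True))) / len(x)
assert abs(i_x - anti) < 1e-12
assert i_x <= sum(x) / len(x)
```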

Proof of Corollary 3.7

Step one. Let us prove that \(\phi (p_1,\ldots ,p_n)=\nabla g(-p_1,\ldots ,-p_n)\) (almost surely) for some l.s.c., convex function \(g:\mathbf {R}^n \rightarrow \mathbf {R}\).

Indeed, Point (ii) of the main representation Theorem 3.4 implies that for every \(X \in V'_2,\) \( I(X)=\min _{\tilde{X} \in V_2, \ \tilde{X}=_dX} \int \tilde{X}. \phi (U) dP\). Thus, for every \(X \in V'_2,\) \((-Q_X(U),\phi (U))\) is an optimal coupling (see Subsection 6.1.1). In particular, since we can take \(X \in V_2'\) such that \(Q_X(p)=p\) (from Point 4 in Definition 3.1), we get that \((-U,\phi (U))\) is an optimal coupling. Since \(P^{-U}\) is absolutely continuous with respect to Lebesgue measure, there exists some l.s.c. convex function \(g:\mathbf {R}^n \rightarrow \mathbf {R}\) such that \(\phi (U)=\nabla g(-U)\) (see Subsection 6.1.1). This ends the proof, since the support of \(P^U\) is \([0,1]^n\).

Step two. Let us now assume that the quantile operator additionally satisfies Point 4 in Definition 3.1, and let us prove that \(\phi (p)=(\phi _1(p_1),\ldots ,\phi _n(p_n))\) where each \(\phi _i\) is decreasing.

Consider a diagonal \(n \times n\) matrix D with a strictly positive diagonal. From Point 4 in Definition 3.1, \(p \rightarrow Dp\) is the quantile of some \(Y^D \in V_2'\). Moreover, there exists \(X^D \in W_2\) such that \({X}^D(U)=_dY^D\). Since the quantile operator is law invariant, we get \(Dp=Q_{{X}^D(U)}(p)\), thus \(DU=Q_{{X}^D(U)}(U)\), which is also equal (in law) to \(X^D(U)\). Moreover, Equation (ii) in Theorem 3.4 implies that for every \(Y \in W_2\) whose law is equal to the law of \(X^D\)

$$\begin{aligned} \int {Y}(U).\phi (U) dP \ge \int DU .\phi (U) \mathrm{d}P. \end{aligned}$$

From optimal coupling theory, and since the law of DU is absolutely continuous with respect to the Lebesgue measure, there is some l.s.c. convex function \(f^D: \mathbf {R}^n \rightarrow \mathbf {R}\) such that \(-\phi (U) =\nabla f^D(DU)\). Thus, since the support of \(P^U\) is \([0,1]^n\), we get

$$\begin{aligned}-\,\phi (p) =\nabla f^D(Dp)\end{aligned}$$

for every \(p \in [0,1]^n\). From Alexandrov's theorem, since \(f^D\) is convex, \(\nabla f^D\) and \(\phi \) are differentiable almost everywhere on \(]0,1[^n\).

Differentiating the equality above at almost every \(p \in ]0,1[^n\), we get

$$\begin{aligned} \nabla \phi (p)=-\, \nabla ^2 f^D(Dp).D \end{aligned}$$

for every diagonal \(n \times n\) matrix D with a strictly positive diagonal, where \(\nabla \phi (p)\) and \(\nabla ^2 f^D(Dp)\) are identified with \(n \times n\) matrices.

Then, we use the following lemma:

Lemma 6.5

Let A be a \(n \times n\) real matrix.

  1. (i)

    Assume that for every diagonal \(n \times n\) matrix D with a nonnegative diagonal, there exists a symmetric nonnegative matrix B such that \(A=-BD\). Then A is diagonal with a non-positive diagonal.

  2. (ii)

Moreover, if for every symmetric positive definite matrix D there exists a symmetric nonnegative matrix B such that \(A=-BD\), then \(A=\lambda I_n\) for some \(\lambda <0\).


Taking \(D=I_n\), the property assumed in (i) implies that \(-A\) is symmetric nonnegative (thus A is symmetric). Now, for every diagonal \(n \times n\) matrix D with a nonnegative diagonal, if B is an associated symmetric nonnegative matrix as in the assumption of (i), then D and B commute. Indeed, we have \(A=-BD\) but also \(A=A^t\) (the transpose of A), A being symmetric. Thus, \(-BD=A=A^t=(-BD)^t= - D^t B^t=-DB\) (because D and B are symmetric), that is, \(BD=DB\).

In particular, taking a diagonal matrix D with pairwise distinct diagonal entries, this implies that B is diagonal, thus A is diagonal. But \(-A\) is symmetric nonnegative, thus A has a non-positive diagonal. Point (ii) is then straightforward from Point (i). \(\square \)
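The commutation step of Lemma 6.5 can be illustrated in pure Python (our own example matrices): if B is symmetric and commutes with a diagonal D whose diagonal entries are pairwise distinct, then B must be diagonal, since \((BD-DB)_{ij}=B_{ij}(d_j-d_i)\); and then \(A=-BD\) is diagonal as well.

```python
# Tiny matrix product, enough for a 3x3 check.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

D = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]  # distinct diagonal

# A symmetric B with an off-diagonal entry does NOT commute with D ...
B_offdiag = [[2.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 4.0]]
assert matmul(B_offdiag, D) != matmul(D, B_offdiag)

# ... while every diagonal B does, and A = -BD is then diagonal too.
B_diag = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 4.0]]
assert matmul(B_diag, D) == matmul(D, B_diag)
A = [[-B_diag[i][j] * D[j][j] for j in range(3)] for i in range(3)]
off_diagonal = [A[i][j] for i in range(3) for j in range(3) if i != j]
assert all(v == 0.0 for v in off_diagonal)
```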

From the above lemma applied to \(A=\nabla \phi (p)\) and \(B=\nabla ^2 f^D(Dp)\), we get that \(\nabla \phi (p)= (\nabla \phi _1(p),\ldots , \nabla \phi _n(p))\) is diagonal for almost every p in \(]0,1[^n\), with a negative diagonal. Thus, \( \phi (p)=(\phi _1(p_1),\ldots , \phi _n(p_n))\) where each \(\phi _i\) is decreasing.

Step three. Let us now assume that the quantile operator additionally satisfies Points (4) and (5) of Definition 3.1 with \(n \ge 2\), and let us prove that \(\phi (p)=b-ap\).

From Point (5) in Definition 3.1, for every positive definite symmetric \(n \times n\) matrix S, the mapping \(p \rightarrow Sp\) is the quantile of some \(Y^S \in V_2'\). Moreover, there exists \(X^S \in W_2\) such that \(Y^S=_dX^S(U)\). From law invariance of the quantile operator, we get \(Sp=Q_{X^S(U)}(p)\), thus \(SU=Q_{X^S(U)}(U)\). As above, from Equation (ii) in Theorem 3.4, \(SU=Q_{X^S(U)}(U)\) and \(-\phi (U)\) are optimally coupled. From optimal coupling theory, and since the law of \(Q_{X^S(U)}\) (which is also the law of SU, from Point (1) in the definition of the quantile operator) is absolutely continuous with respect to the Lebesgue measure, there is some l.s.c. convex function \(f^S: \mathbf {R}^n \rightarrow \mathbf {R}\) such that \(-\phi (U) =\nabla f^S(SU)\), thus \(-\phi (p) =\nabla f^S(Sp)\) at every \(p \in [0,1]^n\) (because the support of \(P^U\) is \([0,1]^n\)). Differentiating this equality at every interior point p, we get \(-\nabla \phi (p)=\nabla ^2 f^S(Sp).S\) for every positive definite symmetric \(n \times n\) matrix S. From Point (ii) of the above lemma applied to \(A=\nabla \phi (p)\) and \(B=\nabla ^2 f^S(Sp)\), we get

$$\begin{aligned} \nabla \phi (p)=(\nabla \phi _1(p),\ldots , \nabla \phi _n(p))= \left( \begin{array}{cccc} \lambda _1(p_1) &{}\quad 0 &{}\quad \ldots &{}\quad 0 \\ 0 &{}\quad \lambda _2(p_2) &{}\quad \ldots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \ldots &{}\quad \lambda _n(p_n)\end{array}\right) \end{aligned}$$

with \(\lambda _1(p_1)=\cdots =\lambda _n(p_n) \le 0\). This implies that \(\phi _i\) only depends on \(p_i\) for \(i=1,\ldots ,n\), and \(\phi _1'(p_1)=\cdots =\phi _n'(p_n)\). Thus, all the \(\phi _i'\) are equal to a same constant \(-a\) for some \(a \ge 0\), and finally \(\phi (p)=b-ap\) for some \(b=(b_1,\ldots ,b_n)\).

Proof of Corollary 3.9

From the second part of Corollary 3.7, for every \(X \in V_2\), we have:

$$\begin{aligned} I(X)= & {} -\,a \int Q_X(p).p \ dP+\int (b_1,\ldots ,b_n).Q_X(p)dP\nonumber \\= & {} -\,a \int Q_X(p).p \ dP+\sum _{i=1}^n b_iE(X_i) \end{aligned}$$

the second equality being a consequence of \(Q_X=_d X\). In the proof, we will use the following claim:

Claim. For every \(i=1,\ldots ,n\), the events \(A_i=\{p=(p_1,\ldots ,p_i,\ldots ,p_n) \in [0,1]^{n},\ p_i \ge \frac{1}{2}\}\) are (mutually) independent.


Just remark that \(P(\{(p_1,\ldots ,p_i,\ldots ,p_n)\in [0,1]^n, p_1 \ge \frac{1}{2},\ldots ,p_n \ge \frac{1}{2}\})=(\frac{1}{2})^n=\prod _{i=1}^{n}P(A_i)\). The same computation applies to any sub-intersection of the \(A_i\) in place of \(A_1 \cap A_2 \cap \cdots \cap A_n\). \(\square \)
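The claim can also be checked by a seeded Monte Carlo experiment (our illustration): for U uniform on \([0,1]^n\), every sub-intersection of the events \(A_i=\{p_i \ge \frac{1}{2}\}\) has probability \((\frac{1}{2})^{|S|}\), which is the product rule for mutual independence.

```python
import random
from itertools import combinations

# Seeded simulation: uniform draws on [0,1]^n, n = 3.
random.seed(42)
n, N = 3, 200_000
draws = [[random.random() for _ in range(n)] for _ in range(N)]

def prob(indices):
    """Empirical probability that p_i >= 1/2 for every i in `indices`."""
    hits = sum(1 for p in draws if all(p[i] >= 0.5 for i in indices))
    return hits / N

# Every subset of the A_i should have probability (1/2)^{|subset|}.
max_gap = 0.0
for k in range(1, n + 1):
    for subset in combinations(range(n), k):
        gap = abs(prob(subset) - 0.5 ** len(subset))
        max_gap = max(max_gap, gap)
assert max_gap < 0.01  # product rule holds up to Monte Carlo error
```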

Now, in Eq. (12), taking X equal to zero except component i, which is equal to one, we get one part of Corollary 3.9:

$$\begin{aligned} \alpha _i:= I(0,\ldots ,0,1,0,\ldots ,0)=b_i-\frac{a}{2} \end{aligned}$$

because from Point (1) in the definition of quantile operator, \(Q_{(0,\ldots ,0,1,0,\ldots ,0)}\) and \((0,\ldots ,0,1,0,\ldots ,0)\) are equal.

Furthermore, since \(Q_{(1,\ldots ,1)}=(1,\ldots ,1)\), Normalization assumption on I implies \(1=I(1,\ldots ,1)=\sum _{i=1}^n\alpha _i\), that is, from Eq. (13):

$$\begin{aligned} \sum _{i=1}^n\ b_i=n\frac{a}{2}+1 \end{aligned}$$

Now, from the claim above applied to the events \(A_i=\{p=(p_1,\ldots ,p_i,\ldots ,p_n) \in [0,1]^{n},\ p_i\ge \frac{1}{2}\}\), \(i=1,\ldots ,n\), we get (as a consequence of Proposition 5.2 and its proof):

$$\begin{aligned} \beta :=I((\mathbb {1}_{A_1},\ldots ,\mathbb {1}_{A_n}))=\sum _{i=1}^n \int _0^1 F_{\mathbb {1}_{A_i}}^{-1}(t).(b_i-at) \mathrm{d}t. \end{aligned}$$


Moreover,

$$\begin{aligned} F_{\mathbb {1}_{A_i}}^{-1}(t)= {\left\{ \begin{array}{ll} 1 \text{ if } \frac{1}{2} < t \le 1 \\ 0 \text{ if } 0 \le t \le \frac{1}{2} \end{array}\right. } \end{aligned}$$

Thus, Eq. (15) is equivalent to

$$\begin{aligned} \beta =\frac{\sum _{i=1}^n b_i}{2}-\frac{3na}{8} \end{aligned}$$

From Eqs. (13) and (16), we finally get

$$\begin{aligned} a=\frac{4-8\beta }{n}, \end{aligned}$$

which is the second part of Corollary 3.9.
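The computation behind Eq. (16) can be checked numerically (our sketch, with hypothetical values of a and \(b_i\)): since \(F_{\mathbb {1}_{A_i}}^{-1}(t)=1\) exactly on \((\frac{1}{2},1]\), each summand of Eq. (15) is the integral of \(b_i-at\) over \([\frac{1}{2},1]\), which equals \(\frac{b_i}{2}-\frac{3a}{8}\).

```python
# Midpoint rule for the integral of (b - a*t) over [1/2, 1]; the rule is
# exact for affine integrands, so this matches b/2 - 3a/8 to float precision.
def integral_upper_half(b, a, steps=20_000):
    h = 0.5 / steps
    return sum((b - a * (0.5 + (j + 0.5) * h)) * h for j in range(steps))

a, bs = 0.8, [0.9, 1.1, 1.3]   # hypothetical parameter values, b_i >= a
n = len(bs)
beta_numeric = sum(integral_upper_half(b, a) for b in bs)
beta_formula = sum(bs) / 2 - 3 * n * a / 8     # Eq. (16)
assert abs(beta_numeric - beta_formula) < 1e-9
```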

Taking into account that \(b_i=\alpha _i + \frac{a}{2}\) and \(\sum _{i=1}^n\alpha _i=1\), one gets \(\beta =I(\mathbb {1}_{A_1},\ldots ,\mathbb {1}_{A_n})= \frac{1}{2}-\frac{na}{8}\). Hence, from \(\alpha _i= I(0,\ldots ,0,1,0,\ldots ,0)\) and \(\beta \), one recovers \(a=\frac{4-8\beta }{n}\) and \(b_i=\alpha _i+\frac{a}{2}\) for every \(i=1,\ldots ,n\).

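The identities of Corollary 3.9 can be exercised in a short round trip (our illustration, with hypothetical \(\alpha_i\) summing to one and a chosen \(a \ge 0\)): the two observable evaluations \(\alpha_i\) and \(\beta\) determine \((a,b)\).

```python
# Hypothetical parameters: alphas sum to 1 (Normalization), a >= 0.
n = 4
a_true = 0.6
alphas = [0.1, 0.2, 0.3, 0.4]
bs = [alpha + a_true / 2 for alpha in alphas]   # b_i = alpha_i + a/2

# Consistency with Eq. (14): sum_i b_i = n*a/2 + 1.
assert abs(sum(bs) - (n * a_true / 2 + 1)) < 1e-12

# beta from Eq. (16); with sum(bs) = n*a/2 + 1 it collapses to 1/2 - n*a/8.
beta = sum(bs) / 2 - 3 * n * a_true / 8
assert abs(beta - (0.5 - n * a_true / 8)) < 1e-12

# Inverting: a = (4 - 8*beta)/n, then b_i = alpha_i + a/2.
a_recovered = (4 - 8 * beta) / n
assert abs(a_recovered - a_true) < 1e-12
```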

Proof of Proposition 5.1


From Point 2 in Corollary 3.7 and from Point (ii) in Theorem 3.4, for every \({Y}=({Y}_1,\ldots ,{Y}_k,\ldots ,{Y}_n)\) with the same marginals as X, we get

$$\begin{aligned} I({Y})=\sum _{k=1}^n b_k E({Q}^U_{{Y}}(U)_k) -aE({Q}^U_{{Y}}(U).U), \end{aligned}$$

where \({Q}^U_{{Y}}\) denotes the \(U\)-quantile of Y, and \({Q}^U_{{Y}}(U)_k\) denotes component k of \({Q}^U_{{Y}}(U)\). Since \(Q^U_{{Y}}(U)\) and Y have the same law (from Point 1 in the definition of the U-quantile operator), they have the same marginals, i.e., \({Q}^U_{{Y}}(U)_k=_dY_k\) for every \(k=1,\ldots ,n\). Recalling that for every \(k=1,\ldots ,n\), \(X_k=_dY_k,\) we finally get \({Q}^U_{{Y}}(U)_k=_d {Y}_k=_dX_k\). Consequently,

$$\begin{aligned} \min _{X_k=_d {Y}_k, k=1,\ldots ,n} I({Y}_1,\ldots ,{Y}_n)=\sum _{k=1}^n b_k E(X_k)-a \max _{X_k=_d{Y}_k, k=1,\ldots ,n} E({Q}^U_{{Y}}(U).U).\end{aligned}$$


Moreover,

$$\begin{aligned}\max _{X_k=_d{Y}_k, k=1,\ldots ,n} E({Q}^U_{{Y}}(U).U)=\max _{X_k=_d{Y}_k, k=1,\ldots ,n} \sum _{k=1}^n E({Q}^U_{{Y}}(U)_k.U_k).\end{aligned}$$

Thus, using that \(Q^U_{{Y}}(U)\) and Y have the same marginals, we get

$$\begin{aligned} \max _{X_k=_d{Y}_k, k=1,\ldots ,n} \sum _{k=1}^n E({Q}^U_{{Y}}(U)_k.U_k) \le \sum _{k=1}^n \max _{X_k=_d{Y}_k} E({Y}_k.U_k). \end{aligned}$$

But from optimal coupling theory, since each \(U_k\) has a probability law on \(\mathbf {R}\) which is absolutely continuous with respect to the Lebesgue measure, for every \(k=1,\ldots ,n\) we can characterize the solutions \(Y_k\) of the maximum problem

$$\begin{aligned}\max _{X_k=_dY_k} E(Y_k.U_k)=E(g_k' (U_k).U_k),\end{aligned}$$

that is, \(Y_k\) is a solution if and only if it can be written \(g_k' (U_k)\) for some l.s.c. convex function \(g_k: \mathbf {R} \rightarrow \mathbf {R}\) satisfying \(g_k'(U_k)=_dX_k\) (see Point (ii) of Proposition 6.1). But if we define \(g_k(p)=\int _{0}^{p}F_{X_k}^{-1}(t)dt\), where \(F_{X_k}^{-1}\) is the standard one-dimensional quantile function of \(X_k\), then \(g_k\) is convex and continuous (hence convex l.s.c.), satisfies \(g_k^{'}=F_{X_k}^{-1}\), and \(F_{X_k}^{-1}(U_k)=_dX_k\). Finally,

$$\begin{aligned} \max _{X_k=_d{Y}_k, k=1,\ldots ,n} \sum _{k=1}^n E({Q}^U_{{Y}}(U)_k.U_k) \le \sum _{k=1}^n E(F_{X_k}^{-1}(U_k).U_k). \end{aligned}$$

The equality will be proved if we can show that \(\tilde{Y}:=(F_{X_1}^{-1}(U_1),\ldots ,F_{X_n}^{-1}(U_n))\) has the same marginals as X (which is true), and that there exists some \(Y \in V_2\) with \(X_k=_dY_k\) for every \(k=1,\ldots ,n\) such that \({Q}^U_{{Y}}(U)={\tilde{Y}}\).

To prove this existence result, applying optimal coupling theory we get that for any given Y, \({Q}^U_Y(U)\) is characterized as \({Q}^U_Y(U)= \nabla f(U)\) for some convex l.s.c. function \(f: \mathbf {R}^n \rightarrow \mathbf {R}\) with \(\nabla f(U)=_dY\). To use this characterization, simply define \(Y=\tilde{Y}\) and

$$\begin{aligned} f(p_1,\ldots ,p_n)=\int _{0}^{p_1}F_{X_1}^{-1}(t)dt+\cdots +\int _{0}^{p_k}F_{X_k}^{-1}(t)dt+\cdots +\int _{0}^{p_n}F_{X_n}^{-1}(t)dt. \end{aligned}$$

The function f is continuous and convex. Moreover, \(\nabla f(U)=(F_{X_1}^{-1}(U_1),\ldots ,F_{X_n}^{-1}(U_n))=\tilde{Y}\), and from the previous characterization, \({Q}^U_Y(U)=\tilde{Y}\), with \(X_k=_d \tilde{Y}_k=_d {Y}_k, k=1,\ldots ,n\). This ends the proof.
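The key property of the potential f can be checked by finite differences (our sketch): its gradient at p is \((F_{X_1}^{-1}(p_1),\ldots ,F_{X_n}^{-1}(p_n))\). We take each \(X_k\) exponential with rate 1, whose quantile function is \(F^{-1}(t)=-\log (1-t)\), so the inner integral has the closed form \((1-p)\log (1-p)+p\).

```python
import math

def quantile(t):
    """Quantile of the rate-1 exponential law: F^{-1}(t) = -log(1 - t)."""
    return -math.log(1.0 - t)

def f(p):
    """Closed form of sum_k int_0^{p_k} -log(1 - t) dt."""
    return sum((1.0 - pk) * math.log(1.0 - pk) + pk for pk in p)

# Central finite differences of f recover the componentwise quantiles.
p = (0.3, 0.7)
eps = 1e-6
gradient_errors = []
for k in range(len(p)):
    up = list(p); up[k] += eps
    down = list(p); down[k] -= eps
    finite_difference = (f(up) - f(down)) / (2.0 * eps)
    gradient_errors.append(abs(finite_difference - quantile(p[k])))
assert max(gradient_errors) < 1e-6
```

Each summand has increasing derivative \(F^{-1}\), so f is convex and the optimal-coupling characterization \({Q}^U_Y(U)=\nabla f(U)\) applies.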

Now, as stated in Proposition 5.1, the components of Y are independent, since

$$\begin{aligned} P(\forall k, F_{X_k}^{-1}(U_k)\le x_k)=P(\forall k, U_k \le F_{X_k}(x_k))=\prod _{k=1}^{n}F_{X_k}(x_k)=\prod _{k=1}^{n}P(F_{X_k}^{-1}(U_k)\le x_k). \end{aligned}$$

\(\square \)

Proof of Proposition 5.2

From Point 6 in Definition 3.1, we have \(Q_{Y_1,\ldots ,Y_n}(p_1,\ldots ,p_n)=(F_{Y_1}^{-1}(p_1),\ldots ,F_{Y_n}^{-1}(p_n))\) when the \(Y_i\) are independent. Thus, from Corollary 3.7, we get

$$\begin{aligned} I(Y)=\sum _{i=1}^n \int _0^1 F_{Y_i}^{-1}(t).(b_i-at) \mathrm{d}t. \end{aligned}$$

Then, for every \(i=1,\ldots ,n\), we can define \(f_i: [0,1]\longrightarrow [0,1]\) by \(f_i(t)=\frac{1}{2b_i-a}(at^{2}+2(b_i-a)t)\). We have \(f_i(0)=0\), \(f_i(1)=1\), \(f_i\) is increasing (indeed, in Theorem 3.4, the components of \(\phi \) are nonnegative, where \(\phi (p_1,\ldots ,p_n)=(b_1,\ldots ,b_n)-a(p_1,\ldots ,p_n)\), which implies \(b_i\ge a\)), \(f_i\) is convex and \(f_i'(1-t)=\frac{b_i-at}{\alpha _i}\), where we denote \(\alpha _i=b_i-\frac{a}{2}\).
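The claimed properties of the distortion \(f_i\) can be verified directly (our sketch, with hypothetical values \(a=0.5\), \(b_i=0.8\) satisfying \(b_i \ge a\)): \(f_i(0)=0\), \(f_i(1)=1\), \(f_i\) increasing and convex, and \(f_i'(1-t)=\frac{b_i-at}{\alpha _i}\) with \(\alpha _i=b_i-\frac{a}{2}\).

```python
# Hypothetical parameters with b >= a (so f is increasing) and a >= 0.
a, b = 0.5, 0.8
alpha = b - a / 2.0

def f(t):
    """Distortion f_i(t) = (a t^2 + 2(b - a) t) / (2b - a)."""
    return (a * t * t + 2.0 * (b - a) * t) / (2.0 * b - a)

def f_prime(t):
    return (2.0 * a * t + 2.0 * (b - a)) / (2.0 * b - a)

grid = [j / 100.0 for j in range(101)]

assert f(0.0) == 0.0 and abs(f(1.0) - 1.0) < 1e-12
assert all(f_prime(t) > 0.0 for t in grid)                      # increasing
assert all(f((s + t) / 2.0) <= (f(s) + f(t)) / 2.0 + 1e-12      # convex
           for s in grid for t in grid)
# Identity used in the first assertion of Proposition 5.2:
assert all(abs(f_prime(1.0 - t) - (b - a * t) / alpha) < 1e-12 for t in grid)
```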

In Eq. (17), taking as a particular case Y equal to zero except component i, which is equal to one, an immediate computation gives \(\alpha _i=b_i-\frac{a}{2}= I(0,\ldots ,0,1,0,\ldots ,0)\), because, from Point 1 in Definition 3.1, we get \(Q_{(0,\ldots ,0,1,0,\ldots ,0)}=(0,\ldots ,0,1,0,\ldots ,0).\) This finally proves

$$\begin{aligned} I(Y)=\sum _{i=1}^n \alpha _i \int _0^1 F_{Y_i}^{-1}(t).f_i'(1-t) \mathrm{d}t, \end{aligned}$$

i.e., the first assertion in Proposition 5.2.

For the second assertion, remark that since \(b_k=\alpha _k+\frac{a}{2}\), \(\alpha _k\ge \alpha _{k'}\) implies \(b_k\ge b_{k'}\). Thus, if \(Y_k=_dY_{k'}\) and has nonnegative values, we get

$$\begin{aligned} \alpha _k \int _0^1 F_{Y_k}^{-1}(t).f_k'(1-t)\mathrm{d}t= & {} \int _0^1 F_{Y_k}^{-1}(t).(b_k-at)\mathrm{d}t\\= & {} \int _0^1 F_{Y_{k'}}^{-1}(t).(b_k-at)\mathrm{d}t \ge \int _0^1 F_{Y_{k'}}^{-1}(t).(b_{k'}-at)\mathrm{d}t \end{aligned}$$

the last term being also equal to \(\alpha _{k'} \int _0^1 F_{Y_{k'}}^{-1}(t).f'_{k'}(1-t)\mathrm{d}t\). This completes the proof of Proposition 5.2.


Cite this article

Bas, S., Bich, P. & Chateauneuf, A. Multidimensional inequalities and generalized quantile functions. Econ Theory 71, 375–409 (2021).



Keywords

  • Multidimensional distributions
  • Quantile
  • Inequality
  • Optimal coupling

JEL Classification

  • D63
  • D81