Sharp upper and lower bounds for maximum likelihood solutions to random Gaussian bilateral inequality systems


This paper focuses on finding a solution maximizing the joint probability of satisfaction of a given set of (independent) Gaussian bilateral inequalities. A specially structured reformulation of this nonconvex optimization problem is proposed, in which all nonconvexities are embedded in a set of 2-variable functions composing the objective. From this, it is shown how a polynomial-time solvable convex relaxation can be derived. Extensive computational experiments are also reported and compared with previously existing results, showing that the approach typically yields feasible solutions and upper bounds within much sharper confidence intervals.





The three anonymous reviewers are gratefully acknowledged for their remarks and constructive comments, which resulted in an improved revised version of the paper.

Author information



Corresponding author

Correspondence to Riadh Zorgati.




Proposition A1

Let \(\gamma \ge 0\) and \(\rho : {\mathbb {R}}_{+}\rightarrow {\mathbb {R}}\) be defined as:

$$\begin{aligned} \rho (z)=2-z(z+\gamma )-\dfrac{(z+\gamma )e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( \varPhi (\gamma )+\varPhi (z)-1 \right) }. \end{aligned}$$


  1. (i)

\(\rho \) can have at most one zero, and when such a zero exists, it necessarily belongs to the interval \(\left[ \sqrt{1+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}; \sqrt{2+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}\right] \);

  2. (ii)

    if \(\gamma > 0\), \(\rho \) is strictly decreasing on the above interval.


Proof We first show that, for all \(z \ge \gamma > 0\),

$$\begin{aligned} \dfrac{(z+\gamma )e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( \varPhi (\gamma )+\varPhi (z)-1 \right) } \le 1. \end{aligned}$$

To that aim, we consider the function \(\zeta : {\mathbb {R}}_{+}\rightarrow {\mathbb {R}}\) defined as:

$$\begin{aligned} \zeta (z)=(z+\gamma )e^{-\frac{z^{2}}{2}}-\sqrt{2\pi }\left( \varPhi (\gamma )+\varPhi (z)-1 \right) . \end{aligned}$$

Since \(\frac{d\zeta (z)}{dz}=(1-z(z+\gamma ))e^{-\frac{z^{2}}{2}}-e^{-\frac{z^{2}}{2}}=-z(z+\gamma )e^{-\frac{z^{2}}{2}}\), \(\zeta \) is strictly decreasing on \(\left. \right] 0, +\infty \left[ \right. \). Thus, to show that \(\zeta (z) \le 0\) (for \(z \ge \gamma \)), it suffices to show that \(\zeta (\gamma )=2\gamma e^{-\frac{\gamma ^{2}}{2}}-\sqrt{2\pi }(2\varPhi (\gamma )-1) \le 0\).

Let us consider the function \(\xi \) defined, for any \(\gamma \ge 0\), as:

$$\begin{aligned} \xi (\gamma )=2\gamma e^{-\frac{\gamma ^{2}}{2}}-\sqrt{2\pi }(2\varPhi (\gamma )-1). \end{aligned}$$

Observing that: \(\frac{d\xi (\gamma )}{d\gamma }=-2\gamma ^{2}e^{-\frac{\gamma ^{2}}{2}} < 0\) for all \(\gamma >0\), and that \(\xi (0)=0\), we can conclude that \(\xi (\gamma ) \le 0\) for all \(\gamma \ge 0\), and from this it follows that \(\zeta (\gamma ) \le 0\).

As an immediate consequence of the above property, we deduce \(\rho (z) \ge 1-z(z+\gamma )\) and thus, for all \(z \ge \gamma \):

$$\begin{aligned} 1-z(z+\gamma ) \le \rho (z) \le 2-z(z+\gamma ). \end{aligned}$$

Any zero \({\bar{z}}\) of \(\rho (z)\) must thus satisfy the double-sided inequality \(1 \le {\bar{z}}({\bar{z}}+\gamma ) \le 2\) or, equivalently:

$$\begin{aligned} \sqrt{1+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2} \le {\bar{z}} \le \sqrt{2+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2} \end{aligned}$$

which proves (i).

Now, to prove (ii), we determine a majorant of the derivative \(\frac{d\rho (z)}{dz}\) on the above interval and show that this majorant is negative. The only tricky part is to determine a majorant of the derivative of

$$\begin{aligned} q(z)=\dfrac{-(z+\gamma )e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( \varPhi (\gamma )+\varPhi (z)-1 \right) }. \end{aligned}$$

The latter reads:

$$\begin{aligned} \frac{dq(z)}{dz}=\frac{e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( \varPhi (\gamma )+\varPhi (z)-1 \right) }\left[ z(z+\gamma )-1+ \dfrac{(z+\gamma )e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( \varPhi (\gamma )+\varPhi (z)-1 \right) }\right] . \end{aligned}$$

On the interval \(\left[ \sqrt{1+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}; \sqrt{2+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}\right] \), we know that \(z(z+\gamma )-1\) can be bounded from above by 1, and from (29) that \(\dfrac{(z+\gamma )e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( \varPhi (\gamma )+\varPhi (z)-1 \right) } \) can be bounded from above by 1. Thus,

$$\begin{aligned} \dfrac{dq(z)}{dz} \le \dfrac{2e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( \varPhi (\gamma )+\varPhi (z)-1 \right) } \end{aligned}$$

and, using once more (29), we get:

$$\begin{aligned} \dfrac{dq(z)}{dz} \le \frac{2}{z+\gamma }. \end{aligned}$$

We thus deduce the following upper bound for \(\dfrac{d\rho (z)}{dz}\) on the interval \(\left[ \sqrt{1+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}; \sqrt{2+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}\right] \):

$$\begin{aligned} \dfrac{d\rho (z)}{dz} \le -2z-\gamma +\frac{2}{z+\gamma } =-\gamma +\dfrac{2-2z(z+\gamma )}{z+\gamma }. \end{aligned}$$

Since \(z(z+\gamma ) \ge 1\) holds on the interval under consideration, we deduce that, if \(\gamma >0\), then \(\dfrac{d\rho (z)}{dz} < 0\) on this interval, which proves (ii).\(\square \)
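The sandwich bound \(1-z(z+\gamma ) \le \rho (z) \le 2-z(z+\gamma )\) (valid for \(z \ge \gamma \)) and the strict decrease of \(\rho \) on the bracketing interval are easy to check numerically; the sketch below implements \(\varPhi \) via the error function and samples a few illustrative values of \(\gamma \):

```python
import math

def Phi(x):
    # Standard normal CDF via the error function: Phi(x) = (1 + erf(x/sqrt(2)))/2.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rho(z, gamma):
    num = (z + gamma) * math.exp(-z * z / 2.0)
    den = math.sqrt(2.0 * math.pi) * (Phi(gamma) + Phi(z) - 1.0)
    return 2.0 - z * (z + gamma) - num / den

for gamma in (0.1, 0.5, 1.0, 2.0):
    # Bracketing interval of Proposition A1(i).
    zL = math.sqrt(1.0 + gamma**2 / 4.0) - gamma / 2.0
    zR = math.sqrt(2.0 + gamma**2 / 4.0) - gamma / 2.0
    # Sandwich bound, checked on a grid of points z >= gamma.
    for k in range(50):
        z = gamma + 0.05 * k
        assert 1.0 - z * (z + gamma) - 1e-9 <= rho(z, gamma)
        assert rho(z, gamma) <= 2.0 - z * (z + gamma) + 1e-9
    # Strict decrease on [zL, zR] (Proposition A1(ii)).
    zs = [zL + (zR - zL) * k / 200.0 for k in range(201)]
    vals = [rho(z, gamma) for z in zs]
    assert all(a > b for a, b in zip(vals, vals[1:]))
```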

Proposition A2

The function \({\tilde{\rho }}: {\mathbb {R}}_{+}\rightarrow {\mathbb {R}}\) defined as:

$$\begin{aligned} {\tilde{\rho }}(z)=2-z^{2}-\dfrac{2ze^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( 2\varPhi (z)-1 \right) } \end{aligned}$$

has a unique zero \({\bar{z}}\) on \({\mathbb {R}}_{+}\setminus \{0\}\), which necessarily lies within the interval \(\left[ 1; \sqrt{2}\right] \). Moreover, \({\tilde{\rho }}\) is strictly decreasing on this interval.


Proof We first show that, for all \(z > 0\),

$$\begin{aligned} \dfrac{2ze^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( 2\varPhi (z)-1 \right) } \le 1. \end{aligned}$$

The derivative of the function \(\zeta (z)=ze^{-\frac{z^{2}}{2}}-\sqrt{2\pi }\left( \varPhi (z)-\frac{1}{2} \right) \) is equal to \((1-z^{2})e^{-\frac{z^{2}}{2}}-e^{-\frac{z^{2}}{2}}=-z^{2}e^{-\frac{z^{2}}{2}}\), hence, \(\zeta \) is strictly decreasing on \(\left. \right] 0, +\infty \left[ \right. \) and since \(\zeta (0)=0\), it follows that \(\zeta (z) \le 0\) for all \(z \ge 0\), and this proves (31).

Now, if \({\bar{z}}\) is a zero of \({\tilde{\rho }}(z)\), it holds:

$$\begin{aligned} -{\bar{z}}^{2}+2 = \dfrac{2{\bar{z}} e^{-\frac{{\bar{z}}^{2}}{2}}}{\sqrt{2\pi }\left( 2\varPhi ({\bar{z}})-1 \right) }. \end{aligned}$$

Since the right-hand side above is necessarily included in the interval \(\left[ 0;1 \right] \), it is seen that \({\bar{z}}\) has to belong to the interval \(\left[ 1;\sqrt{2} \right] \). To show strict monotonicity of \({\tilde{\rho }}\) on this interval, we compute the derivative:

$$\begin{aligned} \dfrac{d{\tilde{\rho }}}{dz}=-2z-\dfrac{2e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( 2\varPhi (z)-1 \right) } \left[ 1-z^{2}-\dfrac{2ze^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( 2\varPhi (z)-1 \right) }\right] . \end{aligned}$$

Using (31) and the fact that \(z \ge 1\), we can derive the following upper bound for \(\dfrac{d{\tilde{\rho }}}{dz}\):

$$\begin{aligned} \dfrac{d{\tilde{\rho }}}{dz} \le -2z-\frac{1}{z}\left[ 1-z^{2}-1 \right] = -z \le -1. \end{aligned}$$

Thus \({\tilde{\rho }}\) is strictly decreasing on \(\left[ 1;\sqrt{2} \right] \) and has a unique zero on this interval, which can be determined to any prescribed accuracy by dichotomic search (a 6-digit approximation is 1.175461). \(\square \)
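The dichotomic search mentioned above takes only a few lines; a minimal sketch locating the zero \({\bar{z}}\) of \({\tilde{\rho }}\) on \(\left[ 1; \sqrt{2}\right] \), with \(\varPhi \) again implemented via the error function:

```python
import math

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rho_tilde(z):
    num = 2.0 * z * math.exp(-z * z / 2.0)
    den = math.sqrt(2.0 * math.pi) * (2.0 * Phi(z) - 1.0)
    return 2.0 - z * z - num / den

# rho_tilde is strictly decreasing on [1, sqrt(2)], with a sign change
# (rho_tilde(1) > 0 > rho_tilde(sqrt(2))), so bisection applies.
lo, hi = 1.0, math.sqrt(2.0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rho_tilde(mid) > 0.0:
        lo = mid
    else:
        hi = mid
z_bar = 0.5 * (lo + hi)
print(round(z_bar, 4))  # close to the 6-digit value 1.175461 reported above
```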

Proposition A3

For all \((\varepsilon ,y) \in D\) defined by Eq. (16), the following bounds are valid:

$$\begin{aligned}&\frac{-(b-a)e^{\frac{-\gamma ^{2}}{2}}}{\sqrt{2\pi }\alpha ^{2}(2\varPhi (\gamma )-1)} \le \frac{\partial \varphi (\varepsilon ,y)}{\partial \varepsilon } \le 0 \end{aligned}$$
$$\begin{aligned}&0 \le \frac{\partial \varphi (\varepsilon ,y)}{\partial y} \le \frac{e^{\frac{-\gamma ^{2}}{2}}}{\sqrt{2\pi }\alpha ^{2}(2\varPhi (\gamma )-1)}. \end{aligned}$$


Proof Denoting \(s=\gamma +\frac{y}{\alpha +\varepsilon }\), \(t=\frac{b-a-y}{\alpha +\varepsilon }-\gamma \) for conciseness, a simple calculation leads to:

$$\begin{aligned} \frac{\partial \varphi (\varepsilon ,y)}{\partial \varepsilon }=\frac{-ye^{\frac{-s^{2}}{2}}-(b-a-y)e^{\frac{-t^{2}}{2}}}{(\varPhi (s)+\varPhi (t)-1)\sqrt{2\pi }(\alpha +\varepsilon )^{2}}. \end{aligned}$$

Since, on D, \(y \ge 0\) and \((b-a-y) \ge 0\), both terms in the numerator of the above expression are nonpositive, and from this \(\frac{\partial \varphi (\varepsilon ,y)}{\partial \varepsilon } \le 0\) follows.

To get the lower bound, we use the fact that, for any \((\varepsilon ,y) \in D\), the double inequality \(t \ge s \ge \gamma \) holds; therefore:

  • both terms \(-y e^{\frac{-s^{2}}{2}}\) and \(-(b-a-y)e^{\frac{-t^{2}}{2}}\) can be bounded from below by \(-y e^{\frac{-\gamma ^{2}}{2}}\) and \(-(b-a-y)e^{\frac{-\gamma ^{2}}{2}}\) respectively;

  • \(\varPhi (s)+\varPhi (t)-1\) can be bounded from below by \(2\varPhi (\gamma )-1\);

  • \((\alpha +\varepsilon )^{2}\) can be bounded from below by \(\alpha ^{2}\).

From this, the lower bound given in (32) follows.

Now, \(\frac{\partial \varphi (\varepsilon ,y)}{\partial y}\) can be written:

$$\begin{aligned} \frac{\partial \varphi (\varepsilon ,y)}{\partial y}=\frac{1}{(\varPhi (s)+\varPhi (t)-1)\sqrt{2\pi }(\alpha +\varepsilon )}\left[ e^{\frac{-s^{2}}{2}}-e^{\frac{-t^{2}}{2}}\right] . \end{aligned}$$

The zero lower bound in (33) follows from \(s \le t\), and the upper bound in (33) follows from the fact that \(e^{\frac{-s^{2}}{2}}-e^{\frac{-t^{2}}{2}}\) can be bounded from above by \(e^{\frac{-\gamma ^{2}}{2}}\), together with \(\varPhi (s)+\varPhi (t)-1 \ge 2\varPhi (\gamma )-1\) and \(\alpha +\varepsilon \ge \alpha \). \(\square \)
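These bounds are easy to verify numerically against finite differences. In the sketch below, \(\varphi (\varepsilon ,y)\) is taken to be \(\ln \left( \varPhi (s)+\varPhi (t)-1\right) \) with s and t as above, a form consistent with the partial derivatives displayed in this proof (the actual definition is Eq. (16) in the body of the paper); the parameter values \(\alpha \), \(\gamma \), \(b-a\), \(\beta \) are illustrative assumptions:

```python
import math

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative parameters (assumptions for this check only).
alpha, gamma, bma, beta = 1.0, 0.5, 4.0, 1.0   # bma plays the role of b - a
y_max = bma / 2.0 - gamma * alpha              # = 1.5

def st(eps, y):
    return gamma + y / (alpha + eps), (bma - y) / (alpha + eps) - gamma

def phi(eps, y):
    # Log-probability form consistent with the displayed partial derivatives.
    s, t = st(eps, y)
    return math.log(Phi(s) + Phi(t) - 1.0)

def dphi_deps(eps, y):
    s, t = st(eps, y)
    num = -y * math.exp(-s * s / 2.0) - (bma - y) * math.exp(-t * t / 2.0)
    return num / ((Phi(s) + Phi(t) - 1.0) * math.sqrt(2.0 * math.pi) * (alpha + eps) ** 2)

def dphi_dy(eps, y):
    s, t = st(eps, y)
    diff = math.exp(-s * s / 2.0) - math.exp(-t * t / 2.0)
    return diff / ((Phi(s) + Phi(t) - 1.0) * math.sqrt(2.0 * math.pi) * (alpha + eps))

# Common constant appearing in both bounds (32) and (33).
B = math.exp(-gamma**2 / 2.0) / (math.sqrt(2.0 * math.pi) * alpha**2 * (2.0 * Phi(gamma) - 1.0))
h = 1e-6
for eps, y in [(0.1, 0.2), (0.5, 0.8), (0.9, 0.3)]:
    assert 0.0 <= eps <= beta and 0.0 <= y <= y_max - gamma * eps   # (eps, y) in D
    fd_e = (phi(eps + h, y) - phi(eps - h, y)) / (2.0 * h)
    fd_y = (phi(eps, y + h) - phi(eps, y - h)) / (2.0 * h)
    assert abs(fd_e - dphi_deps(eps, y)) < 1e-5    # closed form matches finite diff
    assert abs(fd_y - dphi_dy(eps, y)) < 1e-5
    assert -bma * B <= dphi_deps(eps, y) <= 0.0    # bound (32)
    assert 0.0 <= dphi_dy(eps, y) <= B             # bound (33)
```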

Proposition A4

Let \(\varOmega =\left\{ \left( \begin{array}{c} u \\ v \end{array} \right) : -B_{\varepsilon }^{+} \le u \le -B_{\varepsilon }^{-}, \; -B_{y}^{+} \le v \le -B_{y}^{-} \right\} \) and let \(\left( \begin{array}{cc} {\hat{\varepsilon }}, {\hat{y}} \end{array} \right) ^{T}\) be any given point in D. Then, for any \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T} \notin \varOmega \), there exists \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T} \in \varOmega \) such that:

$$\begin{aligned} \varphi ^{\sharp }({\bar{u}}, {\bar{v}})-{\bar{u}} {\hat{\varepsilon }}-{\bar{v}}{\hat{y}} \le \varphi ^{\sharp }(u,v)- u{\hat{\varepsilon }}-v{\hat{y}}. \end{aligned}$$

As a consequence, the minimum value of the function \(\varphi ^{\sharp }(u,v)- u{\hat{\varepsilon }}-v{\hat{y}}\) for all \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T} \in {\mathbb {R}}^2\) is necessarily attained at some point of \(\varOmega \).


Proof Eight distinct cases have to be considered, depending on the values taken by u and v with respect to the values \(B_{\varepsilon }^{+}, B_{\varepsilon }^{-}, B_{y}^{+}, B_{y}^{-}\). The first five cases are listed below; in each case, it will be seen that the corresponding value of \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T}\) is the orthogonal projection of \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T}\) onto \(\varOmega \).

  • Case 1: \(-B_{\varepsilon }^{-} \le u\) and \(v \le -B_{y}^{+}\); then \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) =\left( \begin{array}{c} -B_{\varepsilon }^{-} \\ -B_{y}^{+} \end{array} \right) \);

  • Case 2: \(u \in \left[ -B_{\varepsilon }^{+},-B_{\varepsilon }^{-} \right] \) and \(v \le -B_{y}^{+}\); then \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) =\left( \begin{array}{c} u \\ -B_{y}^{+} \end{array} \right) \);

  • Case 3: \(u \le -B_{\varepsilon }^{+} \) and \(v \le -B_{y}^{+}\); then \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) =\left( \begin{array}{c} -B_{\varepsilon }^{+} \\ -B_{y}^{+} \end{array} \right) \);

  • Case 4: \(u \le -B_{\varepsilon }^{+} \) and \(v \in \left[ -B_{y}^{+}, -B_{y}^{-} \right] \); then \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) =\left( \begin{array}{c} -B_{\varepsilon }^{+} \\ v \end{array} \right) \);

  • Case 5: \(u \le -B_{\varepsilon }^{+}\) and \(-B_{y}^{-} \le v\); then \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) =\left( \begin{array}{c} -B_{\varepsilon }^{+} \\ -B_{y}^{-} \end{array} \right) \).

In each of the above cases 1–5, let \(\left( \begin{array}{cc} {\bar{\varepsilon }}, {\bar{y}} \end{array} \right) ^{T}\) denote a point in D at which the function \(\varphi (\varepsilon ,y)+{\bar{u}}\varepsilon +{\bar{v}}y\) attains its maximum value, so that:

$$\begin{aligned} \varphi ^{\sharp }({\bar{u}}, {\bar{v}})=\varphi ({\bar{\varepsilon }},{\bar{y}})+{\bar{u}} {\bar{\varepsilon }}+{\bar{v}}{\bar{y}}. \end{aligned}$$

It is easily checked that in all cases 1–5, \(\left( \begin{array}{cc} {\bar{\varepsilon }}, {\bar{y}} \end{array} \right) ^{T}\) lies on the boundary of D. To be more precise, let us denote by O, P, Q, N the four extreme points of D, with respective coordinates:

$$\begin{aligned} \left( \begin{array}{c} 0 \\ 0 \end{array} \right) , \; \left( \begin{array}{c} 0 \\ y_{\max } \end{array} \right) , \; \left( \begin{array}{c} \beta \\ y_{\max }-\gamma \beta \end{array} \right) , \; \left( \begin{array}{c} \beta \\ 0 \end{array} \right) \end{aligned}$$

(where \(y_{\max }=\frac{b-a}{2}-\gamma \alpha \)). Then, it is observed that:

  • in case 1, \(\left( \begin{array}{cc} {\bar{\varepsilon }}, {\bar{y}} \end{array} \right) ^{T}\) coincides with N. (This follows from the fact that the gradient of the function \(\varphi (\varepsilon ,y)+{\bar{u}}\varepsilon +{\bar{v}}y\), equal to \( \left[ \begin{array}{cc} \frac{\partial \varphi (\varepsilon ,y)}{\partial \varepsilon } +{\bar{u}} ,&\, \frac{\partial \varphi (\varepsilon ,y)}{\partial y} +{\bar{v}} \end{array} \right] ^{T}\), has, in case 1, its first component everywhere positive on D and its second component everywhere negative on D; thus, no \(\left( \begin{array}{cc} \varepsilon , y \end{array} \right) ^{T} \in D\) can maximize the above function except \(\left( \begin{array}{c} {\bar{\varepsilon }} \\ {\bar{y}} \end{array} \right) = \left( \begin{array}{c} \beta \\ 0 \end{array} \right) \), which corresponds to N.);

  • Similarly, in case 2, \(\left( \begin{array}{cc} {\bar{\varepsilon }}, {\bar{y}} \end{array} \right) ^{T}\) is located on the segment \(\left[ O, N \right] \); in case 3, it coincides with O; in case 4, it is located on the segment \(\left[ O, P \right] \); in case 5, it coincides with P.

Now, since \(\varphi ^{\sharp }(u,v)\) is defined as the maximum value over \(\left( \begin{array}{cc} \varepsilon , y \end{array} \right) ^{T} \in D\) of \(\varphi (\varepsilon , y)+u\varepsilon +vy\), it holds:

$$\begin{aligned} \varphi ^{\sharp }(u,v) \ge \varphi ({\bar{\varepsilon }}, {\bar{y}})+u{\bar{\varepsilon }}+v{\bar{y}}. \end{aligned}$$

and, using (36), the following inequality can be readily deduced from the above:

$$\begin{aligned} \varphi ^{\sharp }(u,v)-u{\hat{\varepsilon }}-v{\hat{y}} \ge \varphi ^{\sharp }({\bar{u}}, {\bar{v}})- {\bar{u}}{\hat{\varepsilon }}-{\bar{v}}{\hat{y}}+(u-{\bar{u}})({\bar{\varepsilon }}-{\hat{\varepsilon }})+(v-{\bar{v}})({\bar{y}}-{\hat{y}}). \end{aligned}$$

As a consequence, (35) holds as soon as it can be proved that:

$$\begin{aligned} (u-{\bar{u}})({\bar{\varepsilon }}-{\hat{\varepsilon }})+(v-{\bar{v}})({\bar{y}}-{\hat{y}}) \ge 0. \end{aligned}$$

It is easily checked that (37) indeed holds in each of the cases 1–5. For instance, in case 1 it holds: \(u \ge {\bar{u}}, \, {\bar{\varepsilon }}=\beta \ge {\hat{\varepsilon }}, \, v \le {\bar{v}}, \, {\hat{y}} \ge {\bar{y}}=0\), from which (37) is deduced.

Similarly, in case 2, it holds: \({\bar{u}}=u, \; v \le {\bar{v}}\), and \({\hat{y}} \ge {\bar{y}}=0\), from which (37) is also deduced.

For cases 3–5, similar reasoning would lead to the same conclusion.
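In cases 1–5 above, \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T}\) is simply the componentwise clamp of \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T}\) to the box \(\varOmega \); a short sketch of this projection (the numeric bound values are illustrative placeholders, not the \(B\)-values derived in the paper):

```python
def project_on_box(u, v, lo_u, hi_u, lo_v, hi_v):
    """Orthogonal projection of (u, v) onto the box [lo_u, hi_u] x [lo_v, hi_v]."""
    return (min(max(u, lo_u), hi_u), min(max(v, lo_v), hi_v))

# Illustrative bounds: lo_u = -B_eps^+, hi_u = -B_eps^-, lo_v = -B_y^+, hi_v = -B_y^-.
lo_u, hi_u, lo_v, hi_v = -3.0, -1.0, -2.0, -0.5

# Case 1: u >= hi_u and v <= lo_v  ->  corner (hi_u, lo_v)
assert project_on_box(0.0, -5.0, lo_u, hi_u, lo_v, hi_v) == (hi_u, lo_v)
# Case 2: u inside, v <= lo_v      ->  (u, lo_v)
assert project_on_box(-2.0, -5.0, lo_u, hi_u, lo_v, hi_v) == (-2.0, lo_v)
# Case 3: u <= lo_u and v <= lo_v  ->  corner (lo_u, lo_v)
assert project_on_box(-4.0, -5.0, lo_u, hi_u, lo_v, hi_v) == (lo_u, lo_v)
# Case 4: u <= lo_u, v inside      ->  (lo_u, v)
assert project_on_box(-4.0, -1.0, lo_u, hi_u, lo_v, hi_v) == (lo_u, -1.0)
# Case 5: u <= lo_u and v >= hi_v  ->  corner (lo_u, hi_v)
assert project_on_box(-4.0, 0.0, lo_u, hi_u, lo_v, hi_v) == (lo_u, hi_v)
```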

Let us now turn to the last three cases (6–8). In order to define these cases precisely, we have to consider the function \(\sigma : \left[ 0, \beta \right] \rightarrow {\mathbb {R}}\) of \(\varepsilon \), whose values are those of \(\varphi (\varepsilon ,y)+u\varepsilon +vy\) for all \(\left( \begin{array}{cc} \varepsilon , y \end{array} \right) ^{T}\) belonging to the \(\left[ P,Q \right] \) segment. Its analytic expression is given by Eq. (21) in the proof of Proposition 2, where it is shown that either \(\sigma \) is concave on \(\left[ 0, \beta \right] \), or there exists a value \(\varepsilon ^{0} \in \left[ 0, \right. \beta \left[ \right. \) such that \(\sigma \) is concave on \(\left[ 0, \varepsilon ^{0} \right] \) and convex on \(\left[ \varepsilon ^{0}, \beta \right] \). From this, it follows that there exists \({\tilde{\varepsilon }} \in \left[ 0, \beta \right] \) such that:

  • if \(u-\gamma v \in \left[ \delta _{{\mathrm{min}}}, \; \delta _{\max }\right] \triangleq \left[ -\frac{d \sigma }{d \varepsilon }(0); -\frac{d \sigma }{d \varepsilon }({\tilde{\varepsilon }}) \right] \), the maximum value of \(\sigma \) over \(\left[ 0, \beta \right] \) is attained for \({\bar{\varepsilon }} \in \left[ 0, {\tilde{\varepsilon }} \right] \);

  • if \(u-\gamma v > -\frac{d \sigma }{d \varepsilon }({\tilde{\varepsilon }})=\delta _{\max }\), the maximum value of \(\sigma \) over \(\left[ 0, \beta \right] \) is attained for \({\bar{\varepsilon }}=\beta \);

  • if \(u-\gamma v < -\frac{d \sigma }{d \varepsilon }(0)=\delta _{{\mathrm{min}}}\), it is attained for \({\bar{\varepsilon }}=0\).

(We recall that \(\sigma \) is a decreasing function of \(\varepsilon \), so that \(0 \le \delta _{{\mathrm{min}}} \le \delta _{\max }\).)

In view of the above:

  • Case 6 arises when \((-B_{\varepsilon }^{-} \le u) \; \vee \; (-B_{y}^{-} \le v)\) and \(u-\gamma v < \delta _{{\mathrm{min}}}\) (\(\vee \) denotes the logical connector “OR”).

  • Case 7 arises when \((-B_{\varepsilon }^{-} \le u) \; \vee \; (-B_{y}^{-} \le v)\) and \(u-\gamma v \in \left[ \delta _{{\mathrm{min}}}, \; \delta _{\max }\right] \).

  • Case 8 arises when \((-B_{\varepsilon }^{-} \le u) \; \vee \; (-B_{y}^{-} \le v)\) and \(u-\gamma v > \delta _{\max }\).

In all three cases 6–8, \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T}\) is defined as follows: if L(uv) denotes the line in \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T}\) space defined as: \(L(u,v)=\left\{ \left( \begin{array}{c} u' \\ v' \end{array} \right) : u'-\gamma v'=u-\gamma v\right\} \), then \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T}\) is the point closest to \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T}\) in \(L(u,v) \bigcap \varOmega \).

For instance, if \(u-\gamma v-\gamma B_{y}^{-} \le -B_{\varepsilon }^{-}\), then \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) = \left( \begin{array}{c} u-\gamma v -\gamma B_{y}^{-}\\ -B_{y}^{-} \end{array} \right) \).

If \(u-\gamma v-\gamma B_{y}^{-} > -B_{\varepsilon }^{-}\), then \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) = \left( \begin{array}{c} -B_{\varepsilon }^{-}\\ -(B_{\varepsilon }^{-}+u-\gamma v)/\gamma \end{array} \right) \).
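The two formulas above amount to sliding along \(L(u,v)\) (direction \((\gamma , 1)\)) until the box \(\varOmega \) is reached, i.e. clamping v to the v-range of \(L(u,v) \bigcap \varOmega \); a sketch reproducing both formulas (the numeric bound values are illustrative placeholders, and \(\gamma > 0\) is assumed):

```python
def closest_on_line_in_box(u, v, gamma, lo_u, hi_u, lo_v, hi_v):
    """Closest point to (u, v) on the line u' - gamma*v' = u - gamma*v
    intersected with the box [lo_u, hi_u] x [lo_v, hi_v] (gamma > 0)."""
    c = u - gamma * v
    # v-range for which u' = c + gamma*v' stays inside [lo_u, hi_u]:
    v_lo = max(lo_v, (lo_u - c) / gamma)
    v_hi = min(hi_v, (hi_u - c) / gamma)
    assert v_lo <= v_hi, "L(u, v) does not meet the box"
    # (u, v) itself lies on the line, so the closest feasible point is
    # obtained by clamping v along the line.
    v_bar = min(max(v, v_lo), v_hi)
    return (c + gamma * v_bar, v_bar)

gamma = 0.5
# Illustrative bounds: -B_eps^+, -B_eps^-, -B_y^+, -B_y^-.
lo_u, hi_u, lo_v, hi_v = -3.0, -1.0, -2.0, -0.5
B_eps_minus, B_y_minus = -hi_u, -hi_v

# First formula: u - gamma*v - gamma*B_y^- <= -B_eps^-  ->  (u - gamma*v - gamma*B_y^-, -B_y^-)
u, v = -2.0, 0.0
u_bar, v_bar = closest_on_line_in_box(u, v, gamma, lo_u, hi_u, lo_v, hi_v)
assert abs(u_bar - (u - gamma * v - gamma * B_y_minus)) < 1e-12
assert abs(v_bar - (-B_y_minus)) < 1e-12

# Second formula: otherwise  ->  (-B_eps^-, -(B_eps^- + u - gamma*v)/gamma)
u, v = -0.6, 0.0
u_bar, v_bar = closest_on_line_in_box(u, v, gamma, lo_u, hi_u, lo_v, hi_v)
assert abs(u_bar - (-B_eps_minus)) < 1e-12
assert abs(v_bar - (-(B_eps_minus + u - gamma * v) / gamma)) < 1e-12
```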

To illustrate the type of reasoning used to show that (37) [hence (35)] holds, let us consider Case 6, assuming that:

$$\begin{aligned} \left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) = \left( \begin{array}{c} u-\gamma v -\gamma B_{y}^{-}\\ -B_{y}^{-} \end{array} \right) . \end{aligned}$$

In that case, \({\bar{\varepsilon }}=0\) and \({\bar{y}}=y_{\max }\). Thus:

$$\begin{aligned} (u-{\bar{u}})({\bar{\varepsilon }}-{\hat{\varepsilon }})= & {} -(\gamma v + \gamma B_{y}^{-} ) {\hat{\varepsilon }}, \\ (v-{\bar{v}})({\bar{y}}-{\hat{y}})= & {} (v+B_{y}^{-})(y_{\max }-{\hat{y}}), \end{aligned}$$

so that the sum of these two quantities reads:

$$\begin{aligned} (v+B_{y}^{-})(y_{\max }-\gamma {\hat{\varepsilon }}-{\hat{y}}). \end{aligned}$$

Since in case 6, with \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T}\) as defined above, \(v \ge -B_{y}^{-}\) and \(\left( \begin{array}{cc} {\hat{\varepsilon }}, {\hat{y}} \end{array} \right) ^{T} \in D\) implies \(y_{\max }-\gamma {\hat{\varepsilon }}-{\hat{y}} \ge 0\), (37) holds, thus proving (35).

The analysis of cases 7 and 8 could be carried out in a similar way, leading to the same conclusion.

This completes the proof of Proposition A4.\(\square \)


Cite this article

Minoux, M., Zorgati, R. Sharp upper and lower bounds for maximum likelihood solutions to random Gaussian bilateral inequality systems. J Glob Optim 75, 735–766 (2019).



  • Random Gaussian inequalities
  • Joint probability maximization
  • Global optimization
  • Fenchel transform
  • Concave envelopes