1 Introduction

Semi-infinite programs (SIPs) are mathematical programs that have finitely many variables and infinitely many constraints. These infinitely many constraints are commonly expressed as a parameterized inequality that must be satisfied for all parameter values from a set of infinite cardinality. In the present article, we consider a generalization of SIPs, which we term existence-constrained semi-infinite programs (ESIPs). Rather than being subject to infinitely many inequalities, ESIPs are subject to infinitely many existence constraints, which in turn assert that an inequality can be satisfied by at least one solution from a set of infinite cardinality. This kind of constraint gives ESIPs a hierarchical structure with three levels rather than the two levels present in regular SIPs.

Many important applications can be cast as ESIPs. Grossmann and coworkers [1,2,3] consider the problem of designing flexible chemical plants based on an ESIP formulation. Grossmann et al. [2] furthermore propose a flexibility index, a scalar measure of the flexibility of a design based on the same problem structure. The semi-infinite existence constraints in the flexibility problems originate from a consideration of uncertainty that is analogous to the adjustable robust approach proposed two decades later by Ben-Tal et al. [4] as an extension of robust optimization. Similar to two-stage stochastic programs, adjustable robust programs comprise first-stage decisions that are fixed before realization of the uncertainty and second-stage (recourse) decisions that are fixed afterward. In contrast to two-stage stochastic programs, adjustable robust programs consider the worst case with respect to uncertainty rather than a statistical measure based on probability distributions. The consideration of a worst case together with the presence of recourse variables gives rise to semi-infinite existence constraints. More recent applications of adjustable robust optimization include investment problems [5], network expansion problems [6], and operational planning problems for transmission grids [7].

Published solution approaches to ESIPs stemming from the robust optimization literature usually rely on relatively strong assumptions. For example, in [1,2,3], a convexity assumption is made on the constraint function. Furthermore, it is assumed that the uncertainty set is polyhedral and that the recourse variables are unconstrained. Under these conditions, it can be shown that the resulting ESIP can be written equivalently as a finite program [3].

Another special case of ESIPs that has received attention in the literature is min-max-min programs. Polak and Royset [8] consider min-max-min programs absent convexity assumptions, where the inner min operator is applied over a finite set. They distinguish between finite min-max-min programs, where the max operator is also applied over a finite set, and semi-infinite min-max-min programs, where the max operator is applied over a set of infinite cardinality. They employ a smoothing technique to replace the inner min operator by a smoothed approximation, essentially reducing the min-max-min program to a min-max approximation. This approximation is used and improved successively to solve finite min-max-min programs to local optimality. Similarly, semi-infinite min-max-min programs are solved through an additional discretization of the max operator. Tsoukalas et al. [9] extend this approach by also applying the smoothing technique to the max operator in finite min-max-min programs. While applying such a smoothing approach to ESIPs may seem promising, it is not directly applicable since all host sets in ESIPs may have infinite cardinality. Furthermore, it is not obvious whether the smoothing technique can be extended to sets of infinite cardinality.

To the best of our knowledge, there exist no published methods for solving ESIPs deterministically absent any convexity assumptions. We propose such a method, building on a family of discretization-based methods that have proved particularly useful for the solution of (generalized) SIPs ((G)SIPs) absent convexity assumptions. Accordingly, we provide the following review of methods for the solution of SIPs and GSIPs absent convexity assumptions. For a broader review of (G)SIP theory, applications, and methods, the reader is referred to [10,11,12,13,14,15].

The particular difficulty in solving (G)SIPs absent convexity assumptions is the fact that their lower-level programs may be nonconvex. As a consequence, the problem cannot be reduced to a finite problem by utilizing reformulations based on optimality conditions of the lower-level program. Indeed, methods capable of solving such (G)SIPs have in common that they solve, either explicitly or implicitly, the lower-level program to global optimality.

One approach of implicitly solving the lower-level program is to employ convergent overestimators of the semi-infinite constraint. Lo Bianco and Piazzi [16] propose to use interval bounds of the constraint function to construct a genetic algorithm for the solution of SIPs. Similarly, Bhattacharjee et al. [17] propose to use interval bounds on the constraint function in a deterministic setting to provide a sequence of feasible points converging to a solution of the SIP. As an alternative to interval extensions, approaches employing convex and concave relaxations to overestimate the semi-infinite constraint in (G)SIPs are proposed by Floudas and Stein [18], Mitsos et al. [19], and Stein and Steuermann [20].

In the context of this article, the most relevant methods for solving (G)SIPs with a nonconvex lower-level program are discretization methods, i.e., methods that construct a finite approximation of the SIP by a finite discretization of the lower-level variables. In particular, algorithms building on an adaptive discretization strategy first proposed by Blankenship and Falk [21] have shown promise in solving practical problems. For example, the algorithm proposed by Bhattacharjee et al. [22] combines the previously mentioned method of generating feasible points in [17] with the algorithm in [21] as a lower bounding procedure to solve SIPs globally. The same approach is extended to the GSIP case by Lemonidis [23]. Other contributions building on [21] propose to generate feasible points by employing additional discretization-based subproblems. Mitsos [24] proposes a discretization-based upper bounding procedure used in concert with the algorithm in [21] as a lower bounding procedure. The algorithm is guaranteed to finitely solve SIPs globally with guaranteed feasibility under the assumption that there exists an \(\varepsilon \)-optimal SIP-Slater point. A GSIP algorithm following the same basic approach is proposed by Mitsos and Tsoukalas [25]. Also following a discretization approach, Tsoukalas and Rustem [26] propose a discretized oracle problem that evaluates whether or not a given target objective value can be attained by an SIP. They construct an algorithm based on a binary search of the objective space that is guaranteed to finitely solve SIPs globally with guaranteed feasibility under slightly stronger assumptions than the algorithm in [24]. Employing and adapting subproblems from the algorithms in [24, 26], we [27] propose a hybrid algorithm that inherits the stronger convergence guarantees of the algorithm in [24] while outperforming its predecessors on a standard SIP test set.

Here, we extend the concept of discretization methods in general and the algorithm from [27] in particular in order to solve ESIPs to global \(\varepsilon \)-optimality with guaranteed feasibility. The proposed discretization scheme prescribes a discretization of what we term the medial-level variables while associating with each discretization point a vector of lower-level variables. Following this scheme, we derive the subproblems of the algorithm in [27] for the ESIP case in order to restate the algorithm for the solution of ESIPs. Convergence and finite termination of the algorithm are guaranteed under similar assumptions as in the SIP case, with substantial differences arising exclusively from the fact that the revised assumptions need to take the presence of a third level into account.

The remainder of this article is organized as follows: In Sect. 2, definitions and assumptions are collected. In Sect. 3, the proposed algorithm is presented and a proof of finite termination is provided. In Sect. 4, numerical results are presented for the adjustable robust design of a reactor-cooler system proposed by Halemane and Grossmann [3]. Finally, Sect. 5 briefly concludes and gives an outlook for future work.

2 Preliminaries

This section comprises definitions and assumptions used throughout this article, as well as results that follow immediately from the assumptions made.

2.1 Definitions

Definition 2.1

(Notation) Throughout this article, scalars and scalar-valued functions are set in light face (e.g., \(\varphi \)), vectors and vector-valued functions are set in bold face (e.g., \(\varvec{u}\)), and sets and set-valued functions are set in calligraphic font (e.g., \(\mathcal {U}\)). Furthermore, the following definitions hold.

  (i)

    \(\varvec{0}\) and \(\varvec{1}\) denote vectors of contextually appropriate length, containing only zeros and ones, respectively.

  (ii)

    Given a point \(\bar{\varvec{u}}\), \(\mathcal {B}_\delta (\bar{\varvec{u}})\) denotes an open neighborhood of \(\bar{\varvec{u}}\) with radius \(\delta \), i.e.,

    $$\begin{aligned} \mathcal {B}_\delta (\bar{\varvec{u}}) = \left\{ \varvec{u} : ||\varvec{u} - \bar{\varvec{u}}|| < \delta \right\} . \end{aligned}$$

Definition 2.2

(ESIP formulation) We consider an ESIP of the form

$$\begin{aligned} f^* = \underset{\varvec{x} \in \mathcal {X}}{\text {min}}\, f(\varvec{x}) \quad \text {s.t.}\,\quad \forall \varvec{y} \in \mathcal {Y} \left[ \exists \varvec{z} \in \mathcal {Z}(\varvec{y}) : g(\varvec{x},\varvec{y},\varvec{z}) \le 0 \right] \end{aligned}$$
(ESIP)

without any convexity or concavity assumptions and with

$$\begin{aligned} \mathcal {X}= & {} \{ \varvec{x} \in \mathcal {X}^0 \subsetneq \mathbb {R}^{n_x} : \varvec{h}^u(\varvec{x}) \le \varvec{0} \} \\ \mathcal {Y}= & {} \{ \varvec{y} \in \mathcal {Y}^0 \subsetneq \mathbb {R}^{n_y} : \varvec{h}^m(\varvec{y}) \le \varvec{0} \} \\ \mathcal {Z}(\varvec{y})= & {} \{ \varvec{z} \in \mathcal {Z}^0 \subsetneq \mathbb {R}^{n_z} : \varvec{h}^l(\varvec{y},\varvec{z}) \le \varvec{0} \}. \end{aligned}$$

In line with Definition 2.2, we define the notion of ESIP feasibility as follows.

Definition 2.3

(ESIP feasibility) A point \(\bar{\varvec{x}} \in \mathcal {X}\) is called ESIP feasible if and only if it is feasible in (ESIP), i.e.,

$$\begin{aligned} \forall \varvec{y} \in \mathcal {Y} \left[ \exists \varvec{z} \in \mathcal {Z}(\varvec{y}) : g(\bar{\varvec{x}},\varvec{y},\varvec{z}) \le 0 \right] . \end{aligned}$$

Multiple generalizations can be made to (ESIP) such that the concepts discussed in the present article remain applicable. First, while (ESIP) possesses only a single scalar-valued semi-infinite existence constraint, handling multiple vector-valued constraints is straightforward with the proposed method. Similarly, the restriction to continuous variables is only made for notational convenience. Finally, while the sets \(\mathcal {Y}\) and \(\mathcal {Z}(\varvec{y})\) are independent of \(\varvec{x}\), similar problems with dependencies on \(\varvec{x}\) can be handled by the proposed method under mild assumptions. All these generalizations of (ESIP) and the conditions for their solution with the proposed method are discussed in [28, Section 5.2.4].
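
To make the nested "for all \(\varvec{y}\), there exists \(\varvec{z}\)" structure of Definition 2.3 concrete, the following toy sketch checks ESIP feasibility of a candidate point by brute-force enumeration over grids on \(\mathcal {Y}\) and \(\mathcal {Z}^0\). The constraint function and sets are hypothetical placeholders chosen purely for illustration and are not taken from the article.

```python
import numpy as np

# Hypothetical toy data: g(x, y, z) = sin(x*y) + y**2 - z, Y = [0, 1],
# and Z(y) = {z in [0, 2] : z >= y}.

def g(x, y, z):
    return np.sin(x * y) + y**2 - z

def in_Z(y, z):
    return 0.0 <= z <= 2.0 and z >= y

def esip_feasible_on_grids(x, y_grid, z_grid):
    # x is (approximately) ESIP feasible if for every y there exists a
    # feasible z with g(x, y, z) <= 0 (Definition 2.3)
    return all(
        any(in_Z(y, z) and g(x, y, z) <= 0.0 for z in z_grid)
        for y in y_grid
    )

y_grid = np.linspace(0.0, 1.0, 51)   # grid on the medial-level set Y
z_grid = np.linspace(0.0, 2.0, 51)   # grid on the lower-level host set Z^0
print(esip_feasible_on_grids(0.5, y_grid, z_grid))   # expected: True
```

The proposed algorithm replaces such a brute-force check by the subproblems derived in Sect. 3.1.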

The proposed algorithm relies on the solution of several nonlinear programs (NLPs) to global optimality. To this end, we define the approximate solution of an NLP as follows.

Definition 2.4

(Approximate solution of NLPs) The approximate global solution of a feasible NLP

$$\begin{aligned} \underset{\varvec{u} \in \mathcal {U}}{\text {min}}\, \varphi (\varvec{u}) \end{aligned}$$

with an optimality tolerance \(\varepsilon ^{NLP} > 0\) yields a feasible point \(\bar{\varvec{u}}\), a lower bound \(\varphi ^-\), and an upper bound \(\varphi ^+\) on the (unknown) globally optimal objective value \(\varphi ^*\) with

$$\begin{aligned} \varphi ^- \le \varphi ^* \le \varphi (\bar{\varvec{u}}) = \varphi ^+ \le \varphi ^- + \varepsilon ^{NLP}. \end{aligned}$$

In compact notation, we write

$$\begin{aligned} \left[ \varphi ^-,\varphi (\bar{\varvec{u}}) \right] \ni \varphi ^* = \underset{\varvec{u} \in \mathcal {U}}{\text {min}}\, \varphi (\varvec{u}). \end{aligned}$$

Maximization programs are denoted analogously with the difference that it holds that

$$\begin{aligned} \varphi ^- = \varphi (\bar{\varvec{u}}) \le \varphi ^* \le \varphi ^+ \le \varphi ^- + \varepsilon ^{NLP}. \end{aligned}$$

Similarly, the proposed algorithm relies on solving a min-max program (MMP), the approximate solution of which we define as follows.

Definition 2.5

(Approximate solution of MMPs) The approximate global solution of a feasible MMP

$$\begin{aligned} \underset{\varvec{u} \in \mathcal {U}}{\text {inf}}\, \underset{\varvec{v} \in \mathcal {V}}{\text {max}}\, \varphi (\varvec{u},\varvec{v}) \end{aligned}$$

with an optimality tolerance \(\varepsilon ^{MMP} > 0\) yields a feasible point \(\bar{\varvec{u}}\), a lower bound \(\varphi ^-\), and an upper bound \(\varphi ^+\) on the globally optimal objective value \(\varphi ^*\) with

$$\begin{aligned} \varphi ^- \le \varphi ^* \le \max _{\varvec{v} \in \mathcal {V}} \varphi (\bar{\varvec{u}},\varvec{v}) \le \varphi ^+ \le \varphi ^- + \varepsilon ^{MMP}. \end{aligned}$$

In compact notation, we write

$$\begin{aligned} \left[ \varphi ^-,\varphi ^+ \right] \ni \varphi ^* = \underset{\varvec{u} \in \mathcal {U}}{\text {inf}}\, \underset{\varvec{v} \in \mathcal {V}}{\text {max}}\, \varphi (\varvec{u},\varvec{v}) \end{aligned}$$

Max-min programs \(\underset{\varvec{u} \in \mathcal {U}}{\text {sup}}\, \underset{\varvec{v} \in \mathcal {V}}{\text {min}}\, \varphi (\varvec{u},\varvec{v})\) are denoted analogously with the difference that it holds that

$$\begin{aligned} \varphi ^- \le \min _{\varvec{v} \in \mathcal {V}} \varphi (\bar{\varvec{u}},\varvec{v}) \le \varphi ^* \le \varphi ^+ \le \varphi ^- + \varepsilon ^{MMP}. \end{aligned}$$

2.2 Assumptions

We make the assumptions of compact host sets and continuous objective and constraint functions as is standard in deterministic global optimization.

Assumption 1

(Host sets) The host sets \(\mathcal {X}^0 \subsetneq \mathbb {R}^{n_x}\), \(\mathcal {Y}^0 \subsetneq \mathbb {R}^{n_y}\), \(\mathcal {Z}^0 \subsetneq \mathbb {R}^{n_z}\) are compact.

Assumption 2

(Defining functions) The functions \(f : \mathcal {X}^0 \rightarrow \mathbb {R}\), \(\varvec{h}^u : \mathcal {X}^0 \rightarrow \mathbb {R}^{n_h^u}\), \(g : \mathcal {X}^0 \times \mathcal {Y}^0 \times \mathcal {Z}^0 \rightarrow \mathbb {R}\), \(\varvec{h}^m : \mathcal {Y}^0 \rightarrow \mathbb {R}^{n_h^m}\), and \(\varvec{h}^l : \mathcal {Y}^0 \times \mathcal {Z}^0 \rightarrow \mathbb {R}^{n_h^l}\) are continuous on their respective domains.

Proposition 2.1

Given Assumptions 1 and 2, the sets \(\mathcal {X}\) and \(\mathcal {Y}\) are compact. Furthermore, \(\mathcal {Z}(\varvec{y})\) is compact for any \(\varvec{y} \in \mathcal {Y}^0\).

Proof

By Assumption 1, \(\mathcal {X}^0\) and \(\mathcal {Y}^0\) are compact. Furthermore, by Assumption 2, \(\varvec{h}^u\) and \(\varvec{h}^m\) are continuous on \(\mathcal {X}^0\) and \(\mathcal {Y}^0\), respectively. Accordingly, the 0-sublevel sets of \(\varvec{h}^u\) and \(\varvec{h}^m\) are closed subsets of \(\mathcal {X}^0\) and \(\mathcal {Y}^0\), respectively, and therefore compact. Then, \(\mathcal {X}\) and \(\mathcal {Y}\) are compact by virtue of being intersections of compact sets.

Now, consider any fixed \(\varvec{y} \in \mathcal {Y}^0\). \(\mathcal {Z}(\varvec{y})\) is either empty and therefore compact or compactness of \(\mathcal {Z}(\varvec{y})\) follows as above from compactness of \(\mathcal {Z}^0\) together with continuity of \(\varvec{h}^l\). \(\square \)

Furthermore, for the guaranteed provision of feasible points, the proposed algorithm relies on the existence of a point that is strictly feasible with respect to the semi-infinite existence constraints. Accordingly, we assume the existence of such an ESIP-Slater point for feasible instances of (ESIP).

Assumption 3

(\(\hat{\varepsilon }^f\)-optimal ESIP-Slater point) Either (ESIP) is infeasible, or there exists an \(\hat{\varepsilon }^f\)-optimal ESIP-Slater point in (ESIP), i.e., for some \(\hat{\varepsilon }^f,\varepsilon ^S > 0\), there exists a point \(\varvec{x}^S \in \mathcal {X}\) that satisfies \(f(\varvec{x}^S) \le f^* + \hat{\varepsilon }^f\) and

$$\begin{aligned} \forall \varvec{y} \in \mathcal {Y} \left[ \exists \varvec{z} \in \mathcal {Z}(\varvec{y}) : g(\varvec{x}^S,\varvec{y},\varvec{z}) \le - \varepsilon ^S \right] . \end{aligned}$$

Finally, we assume that NLPs and MMPs can be solved as outlined in Definitions 2.4 and 2.5, which is usually the case given Assumptions 1 and 2.

Assumption 4

(Approximate solution) For any given \(\varepsilon ^{NLP}, \varepsilon ^{MMP} > 0\), NLPs and MMPs can be found to be infeasible or solved according to Definitions 2.4 and 2.5, respectively.

3 Solution Algorithm

In this section, we propose an algorithm for the global solution of (ESIP), which is based on our discretization-based SIP algorithm [27]. The algorithm in [27] in turn is a hybridization of the SIP algorithms proposed in [24, 26] and employs, either directly or in an adapted form, subproblems from both preceding algorithms to generate convergent lower and upper bounds on the globally optimal objective value of the SIP in question. In [27], the hybrid algorithm is shown to inherit the slightly superior convergence guarantees of [24] while providing better performance than both preceding algorithms on a standard test set.

Global solution of (ESIP) can be achieved by leaving the algorithmic structure of the hybrid SIP algorithm largely unchanged and adapting the subproblems to take the presence of a third level in (ESIP) into account. In the following, all adapted subproblems are stated and their relevant properties in relation to (ESIP) are elucidated. Finally, the algorithm is stated and a proof of finite termination is provided.

3.1 Subproblems

Before stating the subproblems that are specific to the proposed algorithm, we derive the natural subproblems of (ESIP). In SIPs, a distinction is made between the upper-level variables (variables of the SIP) and the lower-level variables (parameters of the semi-infinite constraints). Accordingly, given fixed upper-level variable values, the maximization of the semi-infinite constraint over the domain of lower-level variables is termed the lower-level program. Here, we distinguish between three levels according to their variable domains. Accordingly, we term \(\varvec{x}\) the upper-level variables, \(\varvec{y}\) the medial-level variables, and \(\varvec{z}\) the lower-level variables.

Given this naming convention, we denote for given \((\bar{\varvec{x}},\bar{\varvec{y}}) \in \mathcal {X} \times \mathcal {Y}\) the lower-level program of (ESIP)

$$\begin{aligned} g^\diamond (\bar{\varvec{x}},\bar{\varvec{y}}) = \underset{\varvec{z} \in \mathcal {Z}(\bar{\varvec{y}})}{\text {min}}\, g(\bar{\varvec{x}},\bar{\varvec{y}},\varvec{z}). \end{aligned}$$
(LLP)

If (LLP) is feasible, the existence of the minimum asserted therein follows from Assumptions 1 and 2. Otherwise, it holds that \(g^\diamond (\bar{\varvec{x}},\bar{\varvec{y}}) = \infty \) by convention.

Lemma 3.1

Under Assumptions 1 and 2, for any \((\bar{\varvec{x}},\bar{\varvec{y}}) \in \mathcal {X} \times \mathcal {Y}\), (LLP) is either infeasible or attains its minimum.

Proof

For any \((\bar{\varvec{x}},\bar{\varvec{y}}) \in \mathcal {X} \times \mathcal {Y}\), \(g(\bar{\varvec{x}},\bar{\varvec{y}},\cdot )\) is continuous on \(\mathcal {Z}(\bar{\varvec{y}})\) and \(\mathcal {Z}(\bar{\varvec{y}})\) is compact (Proposition 2.1). As a consequence and by Weierstrass’ theorem, for any \((\bar{\varvec{x}},\bar{\varvec{y}}) \in \mathcal {X} \times \mathcal {Y}\), (LLP) is either infeasible or attains its minimum. \(\square \)

Given Lemma 3.1 and the convention on infeasible problems, we can further write the medial-level program of (ESIP) for given \(\bar{\varvec{x}} \in \mathcal {X}\) as

$$\begin{aligned} \begin{aligned} \left[ g^-(\bar{\varvec{x}}), g^+(\bar{\varvec{x}})\right] \ni g^*(\bar{\varvec{x}}) = \underset{\varvec{y} \in \mathcal {Y}}{\text {sup}}\, \underset{\varvec{z} \in \mathcal {Z}(\varvec{y})}{\text {min}}\, g(\bar{\varvec{x}}, \varvec{y}, \varvec{z}). \end{aligned} \end{aligned}$$
(MLP)

(MLP) is used in the proposed algorithm in order to assess feasibility in (ESIP) of iterates produced by subproblems operating on the upper-level variables. In this sense, the relation between (ESIP) and (MLP) is analogous to the relation between an SIP and its lower-level program. Indeed, a point \(\bar{\varvec{x}} \in \mathcal {X}\) is ESIP feasible if and only if (MLP) is infeasible or its globally optimal objective is non-positive.

Lemma 3.2

Under Assumptions 1 and 2, a point \(\bar{\varvec{x}} \in \mathcal {X}\) is ESIP feasible if and only if (MLP) is infeasible or it holds that \(g^*(\bar{\varvec{x}}) \le 0\).

The proof of Lemma 3.2 is omitted as it follows immediately by construction of (MLP) and Lemma 3.1.
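
To illustrate how (MLP) is used to test ESIP feasibility of an iterate via Lemma 3.2, the following sketch approximates \(g^*(\bar{\varvec{x}})\) by nested grid enumeration on a hypothetical toy instance. It is not the max-min algorithm of [29]; the constraint function, the sets, and the grids are placeholders for illustration only.

```python
import numpy as np

# Hypothetical toy data: g(x, y, z) = sin(x*y) + y**2 - z, Y = [0, 1],
# and Z(y) = {z in [0, 2] : z >= y}.

def g(x, y, z):
    return np.sin(x * y) + y**2 - z

def mlp_value_and_maximizer(x, y_grid, z_grid):
    """Crude approximation of g*(x) = sup_y min_{z in Z(y)} g(x, y, z) together
    with an (approximately) worst-case y, which the algorithm would add to the
    medial-level discretization whenever g*(x) > 0."""
    g_star, y_worst = -np.inf, None
    for y in y_grid:
        feas_z = [z for z in z_grid if 0.0 <= z <= 2.0 and z >= y]
        if not feas_z:
            continue                                 # empty Z(y): no restriction from this y
        inner = min(g(x, y, z) for z in feas_z)      # lower-level program (LLP)
        if inner > g_star:
            g_star, y_worst = inner, y               # medial-level supremum
    return g_star, y_worst

g_star, y_worst = mlp_value_and_maximizer(0.5, np.linspace(0.0, 1.0, 51),
                                          np.linspace(0.0, 2.0, 51))
print(g_star <= 0.0, y_worst)   # True indicates ESIP feasibility of x = 0.5 (Lemma 3.2)
```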

All remaining subproblems are based on a discretization of the medial-level feasible set. Indeed, given a discretization index set \(\mathcal {K} \subsetneq \mathbb {N}\) and discretized medial-level variable values \(\varvec{y}^{k} \in \mathcal {Y}, k \in \mathcal {K}\), a discretized version and relaxation of (ESIP) is given by

$$\begin{aligned} \underset{\varvec{x} \in \mathcal {X}}{\text {min}}\, f(\varvec{x}) \; \quad \text {s.t.}\,\quad \forall k \in \mathcal {K} \left[ \exists \varvec{z} \in \mathcal {Z}(\varvec{y}^{k}) : g(\varvec{x},\varvec{y}^{k},\varvec{z}) \le 0 \right] . \end{aligned}$$

Introducing instances of the lower-level variables \(\varvec{z}^{k} \in \mathcal {Z}(\varvec{y}^k), k \in \mathcal {K}\) to the discretized program yields the lower bounding program

$$\begin{aligned} \left[ f^{LBP,-},f(\bar{\varvec{x}}^{LBP})\right] \ni f^{LBP,*}&= \underset{\varvec{x} \in \mathcal {X}, \varvec{z}^k \in \mathcal {Z}^0}{\text {min}}\, f(\varvec{x}) \\&\qquad \text {s.t.}\,\;\forall k \in \mathcal {K} \left[ \begin{array}{ll} g(\varvec{x},\varvec{y}^{k},\varvec{z}^k) &{}\le 0 \\ \varvec{h}^l(\varvec{y}^{k},\varvec{z}^k) &{}\le \varvec{0} \end{array} \right] . \end{aligned}$$
(LBP)

For any valid discretization, the projection of (LBP) onto \(\mathcal {X}\) is a relaxation of (ESIP) and thereby, the solution of (LBP) provides a lower bound on the globally optimal objective value of (ESIP).

Lemma 3.3

Let \(\mathcal {K} \subsetneq \mathbb {N}\) and for all \(k \in \mathcal {K}\), \(\varvec{y}^{k} \in \mathcal {Y}\). Then, it holds that

$$\begin{aligned} f^{LBP,*} \le f^*. \end{aligned}$$

The proof of Lemma 3.3 is omitted as it is straightforward.
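
The following sketch illustrates the construction of (LBP) on a hypothetical toy instance with two discretization points. The instance data, the choice of discretization, and the use of a local SLSQP solver in place of the global solver of Definition 2.4 are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy instance: f(x) = x on X = [-1, 1],
# g(x, y, z) = sin(x*y) + y**2 - z, and Z(y) = {z in [0, 2] : z >= y}.
# Each medial-level discretization point y^k carries its own lower-level
# variable z^k, so the decision vector is v = (x, z^1, ..., z^K).

y_disc = [0.25, 1.0]                       # current medial-level discretization
K = len(y_disc)

def objective(v):
    return v[0]                            # f(x) = x

def lbp_constraints(v):                    # all entries must be >= 0
    x, z = v[0], v[1:]
    cons = []
    for k, yk in enumerate(y_disc):
        cons.append(-(np.sin(x * yk) + yk**2 - z[k]))  # g(x, y^k, z^k) <= 0
        cons.append(z[k] - yk)                         # h^l: z^k >= y^k
    return np.array(cons)

bounds = [(-1.0, 1.0)] + [(0.0, 2.0)] * K  # box host sets X^0 and Z^0
v0 = np.concatenate(([0.0], np.ones(K)))   # feasible starting point
res = minimize(objective, v0, bounds=bounds, method="SLSQP",
               constraints=[{"type": "ineq", "fun": lbp_constraints}])
print(res.x[0], res.fun)   # candidate x and its objective; a global solver would
                           # certify the lower bound f^{LBP,-} of Lemma 3.3
```

A sketch of the upper bounding program introduced below would differ only in that the first constraint reads \(g(\varvec{x},\varvec{y}^{k},\varvec{z}^k) \le -\varepsilon ^g\), i.e., in subtracting the restriction parameter from the constraint expression above.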

An upper bounding program is obtained by again discretizing the medial-level variables as in (LBP) and then restricting the discretized constraints by a restriction parameter. Given a discretization index set \(\mathcal {K} \subsetneq \mathbb {N}\), discretized medial-level variable values \(\varvec{y}^{k} \in \mathcal {Y}, k \in \mathcal {K}\), instances of the lower-level variables \(\varvec{z}^{k} \in \mathcal {Z}(\varvec{y}^{k}), k \in \mathcal {K}\), and a restriction parameter \(\varepsilon ^g > 0\), the upper bounding program is given by

$$\begin{aligned} \left[ f^{UBP,-},f(\bar{\varvec{x}}^{UBP})\right] \ni f^{UBP,*}&= \underset{\varvec{x} \in \mathcal {X}, \varvec{z}^k \in \mathcal {Z}^0}{\text {min}}\, f(\varvec{x}) \\&\qquad \text {s.t.}\,\;\forall k \in \mathcal {K} \left[ \begin{array}{ll} g(\varvec{x},\varvec{y}^{k},\varvec{z}^k) &{} \le -\varepsilon ^g \\ \varvec{h}^l(\varvec{y}^{k},\varvec{z}^k) &{}\le \varvec{0} \end{array} \right] . \end{aligned}$$
(UBP)

Analogously to the SIP case discussed in [24, 27], the projection of (UBP) onto \(\mathcal {X}\) is generally neither a relaxation nor a restriction of (ESIP). However, under Assumption 3 and through proper manipulation of the restriction parameter \(\varepsilon ^g\) and the discretization, (UBP) can be shown to provide an ESIP feasible point and thereby an upper bound on the globally optimal objective value of (ESIP) in finitely many iterations.

Finally, the restriction program aims to maximize the restriction of the discretized constraints while attaining a target objective value. Given a discretization index set \(\mathcal {K} \subsetneq \mathbb {N}\), discretized medial-level variable values \(\varvec{y}^{k} \in \mathcal {Y}, k \in \mathcal {K}\), instances of the lower-level variables \(\varvec{z}^{k} \in \mathcal {Z}(\varvec{y}^{k}), k \in \mathcal {K}\), and a target objective value \(f^{RES}\), the restriction program is given by

$$\begin{aligned} \left[ \eta ^{RES,-},\eta ^{RES,+}\right] \ni \eta ^*&= \underset{\eta \in \mathbb {R}, \varvec{x} \in \mathcal {X}, \varvec{z}^k \in \mathcal {Z}^0}{\text {max}}\, \eta \\&\qquad \text {s.t.}\,\; f(\varvec{x}) \le f^{RES} \\&\qquad \forall k \in \mathcal {K} \left[ \begin{array}{ll} g(\varvec{x},\varvec{y}^{k},\varvec{z}^k) &{} \le -\eta \\ \varvec{h}^l(\varvec{y}^{k},\varvec{z}^k) &{} \le \varvec{0} \end{array} \right] . \end{aligned}$$
(RES)

The observations made in [27] about the relation of the restriction program to the lower and upper bounding programs also hold here. Indeed, for \(\eta = 0\), the points feasible in (RES) are exactly the points feasible in (LBP) with an objective value of at most \(f^{RES}\). As a consequence, if for some \(f^{RES}\) it holds that \(\eta ^* < 0\), then this level set is empty and \(f^{RES}\) is a lower bound on \(f^{LBP,*}\) and by extension on \(f^*\). If, on the other hand, it holds that \(\eta ^* \ge 0\), the associated solution \(\bar{\varvec{x}}^{RES}\) may be ESIP feasible. If this is verified via a solution of (MLP), \(f(\bar{\varvec{x}}^{RES})\) is an upper bound on \(f^*\); furthermore, \(\eta ^*\) is the largest possible restriction parameter value for which the current best solution remains feasible in (UBP) for the given discretization. As argued in [27], this property motivates the use of such solutions of (RES) as updates for the restriction parameter \(\varepsilon ^g\) in subsequent solutions of (UBP).
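
Analogously, a hedged sketch of (RES) on the same hypothetical toy instance as in the (LBP) sketch above illustrates the role of the auxiliary variable \(\eta \) and the target objective value \(f^{RES}\); again, the data, the target value, and the local solver are placeholders rather than quantities prescribed by the algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy instance: f(x) = x on X = [-1, 1],
# g(x, y, z) = sin(x*y) + y**2 - z, Z(y) = {z in [0, 2] : z >= y}.
# The decision vector is v = (eta, x, z^1, ..., z^K).

y_disc = [0.25, 1.0]                   # current medial-level discretization
K = len(y_disc)
f_res = 0.0                            # hypothetical target objective value

def res_objective(v):
    return -v[0]                       # maximize eta

def res_constraints(v):                # all entries must be >= 0
    eta, x, z = v[0], v[1], v[2:]
    cons = [f_res - x]                 # f(x) <= f^RES
    for k, yk in enumerate(y_disc):
        cons.append(-(np.sin(x * yk) + yk**2 - z[k]) - eta)  # g(x, y^k, z^k) <= -eta
        cons.append(z[k] - yk)                               # h^l: z^k >= y^k
    return np.array(cons)

bounds = [(None, None), (-1.0, 1.0)] + [(0.0, 2.0)] * K
v0 = np.concatenate(([0.0, 0.0], np.ones(K)))
sol = minimize(res_objective, v0, bounds=bounds, method="SLSQP",
               constraints=[{"type": "ineq", "fun": res_constraints}])
print(sol.x[0])   # a negative value would indicate that f_res underestimates f^{LBP,*}
```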

The increased difficulty of (ESIP) when compared to an SIP is reflected in all subproblems of the proposed algorithm. Indeed, (MLP) is a max-min program, whereas its analogue in an SIP is a single-level program. As a consequence, although under Assumptions 1 and 2, (MLP) can in turn be solved according to Definition 2.5 using discretization methods such as the one proposed by Falk and Hoffman [29], the solution of a max-min program is generally more costly than the solution of a single-level program of similar size and structure. Similarly, the remaining discretization-based subproblems exhibit not only the typical increase in the number of constraints with the number of discretization points, but also an increase in the number of variables. However, the variables that are associated with discretization points appear in a block structure, which could potentially be exploited by employing a decomposition method for an efficient solution of the discretization-based subproblems. Indeed, the block structure observed here is similar to the structure observed in the subproblems of a two-stage stochastic programming algorithm, which are commonly solved using decomposition methods.

3.2 Algorithm Statement

Given the subproblems (LBP), (UBP), (RES), and (MLP), the proposed algorithm can be stated as in Algorithm 1.

[Algorithm 1: pseudocode figure not reproduced]

Note that, in contrast to the original algorithm in [27], in the case of an infeasible instance of (UBP), (LBP) is solved instead of (UBP) with a reduced restriction parameter. This change is required since Assumption 3, in contrast to its equivalent in [27], allows for (ESIP) to be infeasible. It guarantees that an infeasible instance of (ESIP) is detected by (LBP) becoming infeasible after finitely many iterations.
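
Since the pseudocode figure of Algorithm 1 is not reproduced above, the following Python skeleton sketches one possible arrangement of the three procedures described in Sect. 3.3 (lower bounding, upper bounding, restriction). The solver callables and their signatures, the single shared discretization set, the collapsed inner loops, and the omitted handling of \(\varepsilon ^{MMP}\) are simplifying assumptions; the skeleton is reconstructed from the textual description and is not the authors' exact algorithm statement.

```python
import math

def esip_hybrid_sketch(solve_LBP, solve_UBP, solve_RES, solve_MLP, f,
                       eps_f=1e-3, eps_g0=1e-2, r_g=2.0, max_iter=100):
    """Hedged control-flow skeleton of a hybrid discretization algorithm for
    (ESIP). Assumed (hypothetical) solver signatures:
      solve_LBP(disc)        -> (feasible, x, f_lower)
      solve_UBP(disc, eps_g) -> (feasible, x)
      solve_RES(disc, f_res) -> (eta_lo, eta_hi, x)
      solve_MLP(x)           -> (g_plus, y_worst)
    where disc is the list of medial-level discretization points."""
    LBD, UBD, x_best = -math.inf, math.inf, None
    disc, eps_g = [], eps_g0

    for _ in range(max_iter):
        # --- lower bounding procedure ---
        feasible, x, f_lower = solve_LBP(disc)
        if not feasible:
            return "infeasible", None, LBD, UBD
        LBD = max(LBD, f_lower)
        g_plus, y_worst = solve_MLP(x)
        if g_plus > 0.0:
            disc.append(y_worst)               # tighten (LBP) and (UBP)
        if UBD - LBD <= eps_f:
            return "optimal", x_best, LBD, UBD

        # --- upper bounding procedure ---
        feasible, x = solve_UBP(disc, eps_g)
        if feasible:
            g_plus, y_worst = solve_MLP(x)
            if g_plus <= 0.0:                  # ESIP feasible point found
                if f(x) < UBD:
                    UBD, x_best = f(x), x
                eps_g /= r_g
            else:
                disc.append(y_worst)
        else:
            eps_g /= r_g                       # infeasible (UBP): reduce restriction
                                               # and fall back to (LBP) above
        if UBD - LBD <= eps_f:
            return "optimal", x_best, LBD, UBD

        # --- restriction procedure (only once both bounds are finite) ---
        if math.isfinite(LBD) and math.isfinite(UBD):
            f_res = 0.5 * (LBD + UBD)
            eta_lo, eta_hi, x = solve_RES(disc, f_res)
            if eta_hi < 0.0:
                LBD = f_res                    # target unattainable: gap halved
            elif eta_lo > 0.0:
                g_plus, y_worst = solve_MLP(x)
                if g_plus <= 0.0:
                    UBD, x_best = f(x), x      # gap at least halved
                    eps_g = eta_lo             # restriction parameter update (Sect. 3.1)
                else:
                    disc.append(y_worst)
            # eta_lo <= 0 <= eta_hi: defer to the lower bounding procedure
            if UBD - LBD <= eps_f:
                return "optimal", x_best, LBD, UBD

    return "iteration limit", x_best, LBD, UBD
```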

3.3 Proof of Convergence

For the following discussion, note that Algorithm 1 is separated into three main blocks of instructions associated with the three subproblems (LBP), (UBP), and (RES). We term these blocks the lower bounding procedure (Lines 3–12), the upper bounding procedure (Lines 13–23), and the restriction procedure (Lines 26–40). In order to prove finite termination of Algorithm 1, we first establish properties of the three main procedures.

Lower bounding procedure The lower bounding procedure is required to produce a convergent sequence of lower bounds for Algorithm 1 to terminate finitely. This is achieved by successively tightening (LBP) with solution points of (MLP). As observed in [27] for the SIP case, particular attention must be paid to the optimality tolerance \(\varepsilon ^{MMP}\). In [27], a successive refinement of the optimality tolerance is performed to ensure that a sufficiently small tolerance is reached finitely. In the following lemma, we opt instead to show that a sufficiently small tolerance exists that ensures convergence of the lower bounding procedure. Practical implementations of Algorithm 1 may still employ the successive refinement strategy proposed in [27]. Indeed, in the proof of the following lemma, we let \(\varepsilon ^{MMP}\) approach zero in the limit of an iteration sequence. Harwood et al. [30] show for the SIP algorithm in [24] that such a tolerance refinement is generally required to ensure convergence of the discretization-based lower bounding procedure.

Lemma 3.4

Consider a sequence of successive iterations of the lower bounding procedure and let \(\varepsilon ^{MMP}\) for each solution of (MLP) be positive but sufficiently small. Then, under Assumptions 1, 2 and 4, if (ESIP) is infeasible, (LBP) becomes infeasible after finitely many iterations of the lower bounding procedure. Otherwise, the sequence of lower bounds produced converges to an \(\varepsilon ^{NLP}\)-underestimate of \(f^*\) finitely or in the limit.

Proof

If (ESIP) is infeasible, each point \(\bar{\varvec{x}} \in \mathcal {X}\) furnished by (LBP) is ESIP infeasible. By compactness of \(\mathcal {X}\) and since \(g^*\) is lower semicontinuous (as a supremum of lower semicontinuous functions of \(\bar{\varvec{x}}\)), there exists \(\varepsilon > 0\) independent of \(\bar{\varvec{x}}\) such that \(g^*(\bar{\varvec{x}}) > 2 \varepsilon \). Therefore, by Assumption 4, \(\varepsilon ^{MMP}\) can be chosen such that \(0< \varepsilon ^{MMP} < \varepsilon \). It follows that the point \(\bar{\varvec{y}} \in \mathcal {Y}\) yielded by the solution of (MLP) satisfies

$$\begin{aligned} \min _{\varvec{z} \in \mathcal {Z}(\mathcal {\bar{\varvec{y}}})} g(\bar{\varvec{x}},\bar{\varvec{y}},\varvec{z}) \ge g^*(\bar{\varvec{x}}) - \varepsilon ^{MMP} > \varepsilon . \end{aligned}$$

By continuity of g on \(\mathcal {X}^0 \times \mathcal {Y}^0 \times \mathcal {Z}^0\) and compactness of \(\mathcal {X}^0 \times \mathcal {Y}^0 \times \mathcal {Z}^0\), g is uniformly continuous. As a consequence, there exists \(\delta > 0\) independent of \(\bar{\varvec{x}}\) and \(\bar{\varvec{y}}\) such that

$$\begin{aligned} \forall \varvec{x} \in \mathcal {X} \cap \mathcal {B}_\delta (\bar{\varvec{x}}) \forall \varvec{z} \in \mathcal {Z}(\mathcal {\bar{\varvec{y}}}) \left[ g(\varvec{x},\bar{\varvec{y}},\varvec{z}) > 0\right] . \end{aligned}$$

That is, the point added to the discretization renders an open neighborhood of \(\bar{\varvec{x}}\) with radius \(\delta \) infeasible in (LBP). By compactness of \(\mathcal {X}\), it follows that (LBP) is rendered infeasible in finitely many iterations of the lower bounding procedure.

If (ESIP) is feasible and (LBP) furnishes an ESIP feasible solution \(\bar{\varvec{x}} \in \mathcal {X}\) finitely, it follows that \(f(\bar{\varvec{x}}) \ge f^*\). Furthermore, with (LBP) being a relaxation of (ESIP) (Lemma 3.3) and by Assumption 4, it holds that

$$\begin{aligned} f(\bar{\varvec{x}}) - \varepsilon ^{NLP} \le f^{LBP,-} \le f^* \le f(\bar{\varvec{x}}), \end{aligned}$$

proving that \(f^{LBP,-}\) is an \(\varepsilon ^{NLP}\)-underestimate of \(f^*\).

If (ESIP) is feasible and (LBP) does not furnish an ESIP feasible solution, let \(\{\bar{\varvec{x}}^k\}_{k \ge 1} \subsetneq \mathcal {X}\) denote the sequence of solutions furnished by (LBP) and \(\{\bar{\varvec{y}}^k\}_{k \ge 1}\) the sequence of associated solutions furnished by (MLP). By construction of (LBP), it holds that

$$\begin{aligned} \forall k,m : m > k \ge 1 \; \exists \varvec{z} \in \mathcal {Z}(\bar{\varvec{y}}^k) : \left[ g(\bar{\varvec{x}}^m,\bar{\varvec{y}}^k,\varvec{z}) \le 0 \right] . \end{aligned}$$

By uniform continuity of \(g\) on the compact set \(\mathcal {X}^0 \times \mathcal {Y}^0 \times \mathcal {Z}^0\), for any \(\varepsilon > 0\), there exists \(\delta > 0\), independent of \(k\) and \(m\), such that

$$\begin{aligned} \forall k,m : m > k \ge 1 \; \forall \varvec{x} \in \mathcal {X} \cap \mathcal {B}_\delta (\bar{\varvec{x}}^m) \; \exists \varvec{z} \in \mathcal {Z}(\bar{\varvec{y}}^k) : \left[ g(\varvec{x},\bar{\varvec{y}}^k,\varvec{z}) < \varepsilon \right] . \end{aligned}$$
(1)

By compactness of \(\mathcal {X}\), the sequence \(\{\bar{\varvec{x}}^k\}_{k \ge 1}\) has an accumulation point \(\hat{\varvec{x}} \in \mathcal {X}\); passing to a convergent subsequence if necessary, we may assume \(\bar{\varvec{x}}^k \xrightarrow {k \rightarrow \infty } \hat{\varvec{x}}\). This yields that for any \(\delta > 0\), there exists K such that

$$\begin{aligned} \forall k,m : m > k \ge K \left[ \bar{\varvec{x}}^k \in \mathcal {B}_\delta (\bar{\varvec{x}}^m) \right] . \end{aligned}$$
(2)

Combining (1) and (2) yields that for any \(\varepsilon > 0\), there exists K such that

$$\begin{aligned} \forall k \ge K \; \exists \varvec{z} \in \mathcal {Z}(\bar{\varvec{y}}^k) : \left[ g(\bar{\varvec{x}}^k,\bar{\varvec{y}}^k,\varvec{z}) < \varepsilon \right] . \end{aligned}$$

Furthermore, it follows from Assumption 4 that for all \(k \ge 1\)

$$\begin{aligned} g^*(\bar{\varvec{x}}^k) \le \min _{\varvec{z} \in \mathcal {Z}(\bar{\varvec{y}}^k)} g(\bar{\varvec{x}}^k,\bar{\varvec{y}}^k,\varvec{z}) + \varepsilon ^{MMP}. \end{aligned}$$

Then, letting \(\varepsilon ^{MMP} \rightarrow 0\) as \(k \rightarrow \infty \) yields that the accumulation point \(\hat{\varvec{x}}\) is ESIP feasible and it holds that \(f(\hat{\varvec{x}}) \ge f^*\). Furthermore, by continuity of f, it follows that \(f(\bar{\varvec{x}}^k) \xrightarrow {k \rightarrow \infty } f(\hat{\varvec{x}})\). Together with Assumption 4 and (LBP) being a relaxation of (ESIP) (Lemma 3.3), it follows that the sequence of lower bounds produced by the lower bounding procedure converges to a value in the interval

$$\begin{aligned} \left[ f(\hat{\varvec{x}})-\varepsilon ^{NLP},f^* \right] \subseteq \left[ f^*-\varepsilon ^{NLP},f^* \right] , \end{aligned}$$

proving that the sequence of lower bounds converges to an \(\varepsilon ^{NLP}\)-underestimate of \(f^*\). \(\square \)

Upper bounding procedure As mentioned previously, (UBP) is not generally a restriction of (ESIP) and therefore does not necessarily yield upper bounds for (ESIP). However, as will be shown here, the upper bounding procedure as a whole does provide upper bounds reliably. To this end, it is first established that the upper bounding procedure cannot enter an infinite loop. This is of particular importance if (ESIP) is infeasible since only the lower bounding procedure can prove infeasibility of (ESIP).

Lemma 3.5

Consider a sequence of iterations of the upper bounding procedure. Let that sequence be contiguous, i.e., let the sequence be produced by Algorithm 1 looping on Lines 13–22. Let furthermore \(0< \varepsilon ^{MMP} < \varepsilon ^g\) for each solution of (MLP). Then, under Assumptions 1, 2 and 4, the considered sequence of iterations of the upper bounding procedure is finite.

Proof

According to Algorithm 1, a contiguous sequence of iterations of the upper bounding procedure is terminated if in one execution, (UBP) either becomes infeasible or yields a point \(\bar{\varvec{x}} \in \mathcal {X}\) that is found to be ESIP feasible. Furthermore, within a contiguous sequence of iterations of the upper bounding procedure, the restriction parameter \(\varepsilon ^g\) remains unchanged.

Let \(\bar{\varvec{x}} \in \mathcal {X}\) be furnished by a feasible instance of (UBP) and let \(\bar{\varvec{x}}\) not be found ESIP feasible. It follows that the solution of (MLP) yields \(g^+(\bar{\varvec{x}}) > 0\). Furthermore, by Assumption 4, the point \(\bar{\varvec{y}} \in \mathcal {Y}\) yielded by the solution of (MLP) satisfies

$$\begin{aligned} \min _{\varvec{z} \in \mathcal {Z}(\bar{\varvec{y}})} g(\bar{\varvec{x}},\bar{\varvec{y}},\varvec{z}) \ge g^+(\bar{\varvec{x}}) - \varepsilon ^{MMP} > - \varepsilon ^{MMP}. \end{aligned}$$

By continuity of g on \(\mathcal {X}^0 \times \mathcal {Y}^0 \times \mathcal {Z}^0\) and compactness of \(\mathcal {X}^0 \times \mathcal {Y}^0 \times \mathcal {Z}^0\), g is uniformly continuous. As a consequence and due to \(\varepsilon ^g - \varepsilon ^{MMP} > 0\), there exists \(\delta > 0\) independent of \(\bar{\varvec{x}}\) and \(\bar{\varvec{y}}\) such that

$$\begin{aligned} \forall \varvec{x} \in \mathcal {X} \cap \mathcal {B}_\delta (\bar{\varvec{x}}) \forall \varvec{z} \in \mathcal {Z}(\bar{\varvec{y}}) \left[ g(\varvec{x},\bar{\varvec{y}},\varvec{z}) > - \varepsilon ^g \right] . \end{aligned}$$

That is, the point added to the discretization renders an open neighborhood of \(\bar{\varvec{x}}\) with radius \(\delta \) infeasible in (UBP). By compactness of \(\mathcal {X}\), it follows that (UBP) is either rendered infeasible or yields an ESIP feasible point \(\bar{\varvec{x}} \in \mathcal {X}\) that is confirmed as such in finitely many iterations of the upper bounding procedure. \(\square \)

Given Lemma 3.5 and Assumption 3, the upper bounding property of the upper bounding procedure can be proven as follows.

Lemma 3.6

Consider a sequence of successive iterations of the upper bounding procedure and let \(\varepsilon ^{MMP} < \varepsilon ^g\) for each solution of (MLP). Then, under Assumptions 1–4, if (ESIP) is feasible, the upper bounding procedure furnishes an \((\hat{\varepsilon }^f + \varepsilon ^{NLP})\)-optimal point after finitely many iterations.

Proof

By Lemma 3.5 and \(\varepsilon ^{MMP} < \varepsilon ^g\), it follows that for any given \(\varepsilon ^g\), the upper bounding procedure is executed only a finite number of times before either (UBP) becomes infeasible or some point is found to be ESIP feasible. In both cases, the restriction parameter \(\varepsilon ^g\) is reduced according to \(\varepsilon ^g \leftarrow \varepsilon ^g / r^g\) and, similarly, the restriction procedure only ever reduces \(\varepsilon ^g\). It follows that for any \(\varepsilon ^S > 0\) and after finitely many iterations of the upper bounding procedure, it holds that \(\varepsilon ^g < \varepsilon ^S\). Furthermore, by Assumption 3 and feasibility of (ESIP), there exist \(\hat{\varepsilon }^f, \varepsilon ^S > 0\) and a point \(\varvec{x}^S \in \mathcal {X}\) that satisfies \(f(\varvec{x}^S) \le f^* + \hat{\varepsilon }^f\) and

$$\begin{aligned} \forall \varvec{y} \in \mathcal {Y} \left[ \exists \varvec{z} \in \mathcal {Z}(\varvec{y}) : g(\varvec{x}^S,\varvec{y},\varvec{z}) \le - \varepsilon ^S \right] . \end{aligned}$$

As a consequence, after finitely many iterations of the upper bounding procedure, \(\varvec{x}^S\) is feasible in (UBP) irrespective of the discretization employed. At this point, (UBP) can no longer become infeasible and, by Lemma 3.5, a point \(\bar{\varvec{x}} \in \mathcal {X}\) is found to be ESIP feasible finitely. Due to Assumption 4, this point satisfies

$$\begin{aligned} f(\bar{\varvec{x}}) \le f(\varvec{x}^S) + \varepsilon ^{NLP} \le f^* + \hat{\varepsilon }^f + \varepsilon ^{NLP} \end{aligned}$$

and \(\bar{\varvec{x}}\) is \((\hat{\varepsilon }^f + \varepsilon ^{NLP})\)-optimal in (ESIP). \(\square \)

Restriction procedure With the properties of the lower and upper bounding procedure established, the essential properties required for solving (ESIP) are already in place. Indeed, as discussed in [27] for the SIP case, the restriction procedure is not required for finite termination of Algorithm 1 but is only added to improve practical performance. Accordingly, the following lemma establishes that the restriction procedure does not impede the guarantee for finite termination by entering an infinite loop.

Lemma 3.7

Let the optimality gap \(UBD - LBD\) be finite and consider a sequence of iterations of the restriction procedure. After finitely many iterations, the restriction procedure either provides a bound update that at least halves the optimality gap or defers to the lower bounding procedure.

Proof

If the solution of (RES) yields \(\eta ^{RES,+} < 0\), the lower bound is updated with \(f^{RES}\) and the optimality gap is halved due to

$$\begin{aligned} LBD \leftarrow f^{RES} = \tfrac{1}{2}(LBD+UBD). \end{aligned}$$

If, on the other hand, the solution of (RES) yields \(\eta ^{RES,-} \le 0 \le \eta ^{RES,+}\), the restriction procedure immediately defers to the lower bounding procedure in order to obtain an update to the lower bound. Otherwise, letting \(\bar{\varvec{x}}\) denote the solution of (RES), if \(\bar{\varvec{x}}\) is found to be ESIP feasible, it provides an update to the upper bound. By construction of (RES) and \(f^{RES} = \tfrac{1}{2}(LBD+UBD)\), the bound update satisfies

$$\begin{aligned} UBD \leftarrow f(\bar{\varvec{x}}) \le \tfrac{1}{2}(LBD+UBD), \end{aligned}$$

meaning the optimality gap is at least halved. Finally, if none of the above conditions are met, the restriction procedure is only executed finitely many times before deferring to the lower bounding procedure. \(\square \)

Finite termination Given Lemmas 3.4, 3.6 and 3.7 and a proper choice of tolerances, finite termination of Algorithm 1 can be proven as follows.

Theorem 3.1

Let the tolerances in Algorithm 1 be chosen such that \(\varepsilon ^f > \hat{\varepsilon }^f + 2 \varepsilon ^{NLP}\). Let furthermore, in each iteration of the lower bounding procedure, \(\varepsilon ^{MMP} > 0\) be sufficiently small in the sense of Lemma 3.4 and, in each iteration of the upper bounding procedure, \(0< \varepsilon ^{MMP} < \varepsilon ^g\). Then, under Assumptions 1–4, Algorithm 1 terminates finitely, either proving infeasibility of (ESIP) or yielding an \(\varepsilon ^f\)-optimal point \(\varvec{x}^*\).

Proof

If (ESIP) is infeasible, no solutions furnished by (UBP) can be ESIP feasible. Together with Lemma 3.5 and \(\varepsilon ^{MMP} < \varepsilon ^g\) for the solution of (MLP) in the upper bounding procedure, it follows that the upper bounding procedure defers back to the lower bounding procedure finitely. With Lemma 3.4 and \(\varepsilon ^{MMP}\) being chosen sufficiently small in the sense of Lemma 3.4 for the solution of (MLP) in the lower bounding procedure, it further follows that (LBP) is rendered infeasible finitely. As a consequence, Algorithm 1 terminates finitely having proven infeasibility of (ESIP).

If (ESIP) is feasible, it follows from Lemma 3.4 that the lower bounds produced by the lower bounding procedure converge to an \(\varepsilon ^{NLP}\)-underestimate of \(f^*\). Accordingly, letting \(\{f^{LBP,-,k}\}_{k \ge 1}\) denote this sequence of lower bounds, there exists \(K^{LBP}\) for any \(\varepsilon > 0\) such that

$$\begin{aligned} \forall k \ge K^{LBP}\left[ f^{LBP,-,k} > f^* - \varepsilon ^{NLP} - \varepsilon \right] . \end{aligned}$$
(3)

Furthermore, it follows from Lemma 3.6 that after a finite number \(K^{UBP}\) of iterations, the upper bounding procedure furnishes an \((\hat{\varepsilon }^f + \varepsilon ^{NLP})\)-optimal point. Letting \(\{\bar{\varvec{x}}^{UBP,k}\}_{k \ge 1}\) denote the sequence of points furnished by the upper bounding procedure, it holds that

$$\begin{aligned} \forall k \ge K^{UBP}\left[ f(\bar{\varvec{x}}^{UBP,k}) \le f^* + \hat{\varepsilon }^f + \varepsilon ^{NLP} \right] . \end{aligned}$$
(4)

Combining (3) and (4) yields that for any \(\varepsilon >0\), there exists \(K^{LBP}\) such that

$$\begin{aligned} \forall k \ge K^{LBP} \; \forall l \ge K^{UBP}\left[ f(\bar{\varvec{x}}^{UBP,l}) - f^{LBP,-,k} < \hat{\varepsilon }^f + 2 \varepsilon ^{NLP} + \varepsilon \right] . \end{aligned}$$

Due to \(\varepsilon ^f > \hat{\varepsilon }^f + 2 \varepsilon ^{NLP}\), it follows that after finitely many iterations of the lower bounding procedure and finitely many iterations of the upper bounding procedure, Algorithm 1 terminates with an \(\varepsilon ^f\)-optimal point.

By construction of Algorithm 1, the restriction procedure is only executed once the optimality gap \(\Delta f = UBD - LBD\) is finite. It follows by Lemma 3.7 that the restriction procedure is only executed finitely many times before providing a bound update at least halving the optimality gap or deferring to the lower bounding procedure. After at most

$$\begin{aligned} K^{RES} = \left\lceil \log _2\left( \Delta f / \varepsilon ^f \right) \right\rceil \end{aligned}$$

bound updates by the restriction procedure, the optimality tolerance \(\varepsilon ^f\) is reached. As a consequence, the restriction procedure either provides \(K^{RES}\) bound updates or defers sufficiently many times to the lower bounding procedure. Either way, the optimality tolerance \(\varepsilon ^f\) is reached and Algorithm 1 terminates finitely. \(\square \)

4 Numerical Experiments

In this section, we provide numerical results for the solution of an ESIP example problem. To this end, we employ a C++ implementation of Algorithm 1, allowing the presence of coupling equality constraints on the medial and lower level in the sense of [31]. A detailed description of this and other generalizations is provided in [28, Section 5.2.4]. Min-max subproblems are solved using a C++ implementation of the algorithm in [29] that is similarly extended in the sense of [31] to allow coupling equality constraints on the lower level. NLP subproblems of Algorithm 1 and the min-max algorithm are solved to global optimality using MAiNGO v0.1.24 [32].

The numerical experiments are conducted on a single thread of a laptop computer with a 64-bit Intel Core i7-8550U @ 1.80 GHz (4.00 GHz boost clock speed) and 16 GB of memory, running Linux 5.1.15-arch1-1-ARCH. The CPU times presented in the following are derived from MAiNGO solution reports and represent the CPU times required for solving all NLP subproblems of Algorithm 1 and the min-max algorithm. The ESIP is solved to an absolute and relative optimality tolerance of \(10^{-3}\), subproblems (LBP), (UBP), and (RES) are solved to an absolute and relative optimality tolerance of \(10^{-4}\), and (MLP) is solved to an absolute optimality tolerance of \(10^{-2}\) and a relative optimality tolerance of \(10^{-1}\).

We consider the adjustable robust design of a reactor-cooler system proposed by Halemane and Grossmann [3]. The objective of the design problem is to choose the reactor volume \(\hat{V}\) and the heat exchanger area A such that the total annualized cost (TAC) of operation is minimized subject to a semi-infinite existence constraint derived from parametric uncertainties in the model parameters and operational constraints. That is, a design is considered feasible if, for all possible uncertainty realizations, there exist operational decisions that ensure satisfaction of the operational constraints. Halemane and Grossmann [3] consider both the nominal objective function (corresponding to the nominal uncertainty realization) and the expected value of the objective function approximated by a weighted sum formulation. Here, we consider the nominal case and note that the weighted sum formulation does not pose an additional challenge apart from adding variables and constraints to the upper level of the resulting ESIP. The complete description of the problem instance and the ESIP formulation employed in the following are collected in [28, Section 5.3.2].

Table 1 Numerical results for Algorithm 1 solving [3, Example 2] with nominal objective function, compared to results reported in [3]

For the solution of the adjustable robust design problem, Halemane and Grossmann [3] rely on the ESIP formulation being reducible to a finite problem by considering only the vertices of the (polyhedral) medial-level feasible set. They propose an algorithm that solves a sequence of subproblems that are tightened successively through the addition of constraints associated with vertices of the medial-level feasible set. This approach is proven to be valid under the assumptions of a jointly convex constraint function g and an unconstrained lower-level feasible set [3, Theorem 2]. Both these assumptions are violated for the problem in question. Halemane and Grossmann [3] argue that although not guaranteed, the approach may still succeed in the presence of nonconvex constraint functions.

For the solution of the problem using Algorithm 1, its algorithmic parameters are set to \(\varepsilon ^{g,0} = {10^{-2}}\) and \(r^g = {2}\). As is apparent from Table 1, Algorithm 1 yields results that are consistent with those reported in [3]. We find a slightly better solution than [3], which is likely due to a difference in the termination criteria employed. Note also that one parameter of the problem is changed from the value reported in [3] since the reported parameter value does not yield consistent results [28, Section 5.3.2]. Our results confirm that the results reported in [3] are correct despite the fact that the problem violates the assumptions made therein. Indeed, the only discretization point added by Algorithm 1 is a vertex of the medial-level feasible set, which indicates that the problem can be reduced to a finite problem by considering that vertex.

Termination of Algorithm 1 is achieved after 1 iteration of the lower bounding procedure, 1 iteration of the upper bounding procedure, 7 iterations of the restriction procedure, and overall 6.15 CPU seconds.

5 Conclusions

We propose a discretization-based algorithm for the solution of existence-constrained semi-infinite programs (ESIPs) based on our previous work on the solution of semi-infinite programs (SIPs) [27]. While SIPs possess two levels, the lower of which is discretized in discretization algorithms, the presence of semi-infinite existence constraints adds a third level. The proposed algorithm performs a discretization of the medial-level variables while introducing for each discretization point a vector of lower-level variables. By this approach, the subproblems constructed by the algorithm obtain the same bounding properties as their counterparts in the SIP algorithm [27]. Finite termination with an ESIP feasible and \(\varepsilon \)-optimal solution is proven under assumptions that are similar to the ones made for the SIP case.

The proposed algorithm is implemented, and an adjustable robust design problem from the chemical engineering literature is solved as an example problem. While the problem has been solved previously using a method that requires a convexity assumption to reduce the problem to a finite counterpart, the proposed algorithm does not require such a property. The obtained results are consistent with the ones reported in the literature.

The proposed approach of generating subproblems yields a vector of lower-level variables for each discretization point in the discretization-based subproblems. As a consequence, these subproblems grow in terms of the number of constraints and variables as the number of discretization points increases. It is therefore expected that for problem instances that require many iterations of the ESIP algorithm, standard general-purpose solvers will encounter tractability issues in solving these subproblems. However, the affected subproblems also exhibit a block structure that may, depending on the particular problem structure, enable the use of decomposition methods with a favorable scaling behavior. Indeed, the block structure mirrors a similar structure in two-stage stochastic programs, the solution of which is often achieved using appropriate decomposition methods. Nevertheless, care should be taken to minimize the number of required iterations, e.g., through tuning algorithmic parameters and improving the algorithm.