1 Introduction

In multi-objective design optimization, the objective function evaluations are generally computationally costly, mainly due to the long convergence times of simulation models. A simple and common remedy to this problem is to use a statistical model learned from previous evaluations as the fitness function, instead of the ‘true’ objective function. This method is also known as Bayesian Global Optimization (BGO) [30]. In BGO, a Gaussian Process (GP) model is used as the statistical model. In each iteration, the algorithm evaluates a new solution and updates the Gaussian Process model. A new solution is chosen according to the score of an infill criterion, given the statistical model. For multi-objective problems, this family of algorithms is called Multi-objective Bayesian global optimization (MOBGO). Compared to evolutionary multi-objective optimization algorithms (EMOAs), MOBGO requires only a small budget of function evaluations to achieve a similar result with respect to the hypervolume indicator, and it has already been used in real-world applications to solve expensive evaluation problems [40]. To the best of the authors’ knowledge, BGO was used for the first time in the context of airfoil optimization in [27], and has since been applied in the fields of biogas plant controllers [16], detection in water quality management [41], machine learning algorithm configuration [23], and structural design optimization [33].

In the context of Bayesian global optimization, an infill or pre-selection criterion is used to evaluate how promising a new point is. In single-objective optimization, the Expected Improvement (EI) is widely used as the infill criterion; it was first introduced by Mockus et al. [30] in 1978. The EI exploits both the Kriging prediction and the variance in order to give a quantitative measure of a solution’s improvement. Later, the EI became more popular due to the work of Jones et al. [21]. In MOBGO, a commonly used criterion is the Expected Hypervolume Improvement (EHVI), which is a straightforward generalization of the EI and was proposed by Emmerich [10] in 2005. Compared with other criteria, EHVI leads to an excellent convergence to, and coverage of, the true Pareto front [5, 37]. Nevertheless, the calculation of EHVI itself has so far been time-consuming [23, 32, 36, 41], even in the 2-D case (Footnote 1). Moreover, EHVI has to be computed many times by an optimizer to search for a promising solution in every iteration. For these reasons, a fast algorithm for computing EHVI is needed.

The first method suggested for EHVI calculation was Monte Carlo integration, proposed by Emmerich [10, 13]. This method is simple and straightforward. However, the accuracy of the estimated EHVI depends strongly on the number of Monte Carlo iterations. The first exact EHVI calculation algorithm in the 2-D case was derived in [12], with a time complexity of \(O(n^3\log n)\). Here, n is the number of non-dominated points in the archive. The EHVI calculation algorithm in [12] partitions an objective space into boxes and then calculates the EHVI by summing the EHVI values of all boxes. Couckuyt et al. [5] introduced an exact EHVI calculation algorithm (CDD13) for \(d>2\) by representing a non-dominated space with three types of boxes, where d represents the number of objective functions. The method in [5] was also practically much faster than those discussed in [12], though a detailed complexity analysis was missing. Hupkens et al. [19] reduced the time complexity to \(O(n^2)\) and \(O(n^3)\) in the 2-D and 3-D cases, respectively. The algorithms in [19] improve on the algorithms in [12] in two ways: (1) only summing the EHVI values of the boxes in the non-dominated space; (2) reusing some intermediate integrations during the EHVI calculation. The algorithms in [19] further improve the practical efficiency of EHVI on test data in comparison to [5]. Recently, Emmerich et al. [14] proposed an asymptotically optimal algorithm with time complexity \(\varTheta (n\log n)\) in the bi-objective case. More recently, Yang et al. proposed an asymptotically optimal algorithm with time complexity \(\varTheta (n\log n)\) in the 3-D case [38]. The algorithm in [38], KMAC (Footnote 2), partitions the non-dominated space into a linear number of slices and re-derives the EHVI calculation formulas. However, a generalization of this technique to more than three dimensions/objectives and the empirical testing of MOBGO algorithms on benchmark optimization problems are still missing so far.

The main contribution of this paper is to extend state-of-the-art EHVI calculation methods to higher dimensional cases. The paper is structured as follows: Sect. 2 introduces the nomenclature, Kriging, and the framework of MOBGO; Sect. 3 provides some fundamental definitions used in this paper; Sect. 4 describes how to partition an integration space into (hyper)boxes efficiently, and how to calculate EHVI based on this partitioning method; Sect. 5 reports experimental results on the speed comparison and on the performance of MOBGO-based algorithms on 10 well-known scientific benchmarks in 6- and 18-dimensional search spaces; Sect. 6 draws the main conclusions and discusses some potential topics for further research.

2 Multi-objective Bayesian global optimization

A multi-objective optimization (MOO) problem is an optimization problem that involves multiple objective functions. A MOO problem can be formulated as:

$$\begin{aligned}&``\max \hbox {''} \big ({y}_1(\mathbf {x}),{y}_2(\mathbf {x}),\ldots ,{y}_d(\mathbf {x}) \big ) \qquad \mathbf {x} \in \mathbb {R}^m \end{aligned}$$
(1)

where d is the number of objective functions, \({y}_i(i=1,\ldots ,d)\) are the objective functions, and a decision vector \(\mathbf {x}\) is in an m-dimensional space (Table 1).

2.1 Notations

The following table summarizes the notations used in this paper.

Table 1 Notations

2.2 Kriging

As a statistical interpolation method, Kriging is a Gaussian process based multivariate regression method. Compared with simulator-based evaluations in design optimization, one prediction/evaluation of a Kriging model is typically cheap [28]. Therefore, Kriging is widely used as a surrogate model to approximate noise-free data in computer experiments. Kriging models are fitted from previously evaluated points. Given a set of n decision vectors \(\mathbf {X}=\{\mathbf {x}^{(1)}, \mathbf {x}^{(2)}, \ldots , \mathbf {x}^{(n)}\}\), \(\mathbf {x}^{(i)} \in \mathbb {R}^m, i=1,\ldots ,n\), in an m-dimensional search space, and associated function values \(\mathbf {Y}(\mathbf {X}) =\big ({y}(\mathbf {x}^{(1)}), {y}(\mathbf {x}^{(2)}), \ldots , {y}(\mathbf {x}^{(n)})\big )^{\top }\), Kriging assumes \(\mathbf {Y}(\mathbf {X})\) to be a realization of a random process Y of the following form [3, 21]:

$$\begin{aligned} Y(\mathbf {x}) = \mu (\mathbf {x}) + \epsilon (\mathbf {x}) \end{aligned}$$
(2)

where \(\mu (\mathbf {x})\) is the estimated mean value over all given sampled points, and \(\epsilon (\mathbf {x})\) is a realization of a Gaussian process with zero mean and variance \(\sigma ^2\). The regression part \(\mu (\mathbf {x})\) approximates the function Y globally, and the Gaussian process \(\epsilon (\mathbf {x})\) takes local variations into account. As opposed to other regression methods (such as support vector machines), Kriging/GP also provides an uncertainty quantification for a prediction. The correlation between the deviations at two decision vectors (\(\mathbf {x}\) and \(\mathbf {x'}\)) is defined as:

$$\begin{aligned} Corr[\epsilon (\mathbf {x}),\epsilon (\mathbf {x'})] = R(\mathbf {x},\mathbf {x'}) = \prod _{i=1}^m R_i(x_i,x_i') \end{aligned}$$
(3)

Here R(., .) is the correlation function, which decreases with the distance between two points. It is common practice to use a Gaussian correlation function (also known as a squared exponential kernel):

$$\begin{aligned} R(\mathbf {x},\mathbf {x'}) = \prod ^m_{i=1} \text {exp}(-\theta _i(x_i - x'_i)^2) \qquad (\theta _i \ge 0) \end{aligned}$$
(4)

where \(\theta _i\) are parameters of the correlation model. They can be interpreted as a measure of the variables’ importance. The optimal \(\varvec{\theta } = (\theta _1^{opt}, \ldots , \theta _m^{opt})\) of the Kriging models is usually determined by a continuous optimization algorithm. In this paper, the optimal \(\varvec{\theta }\) is found by the simplex search method of Lagarias et al. (fminsearch) [26], with the maximum number of function evaluations set to 1000. The covariance matrix can then be expressed through the correlation function:

$$\begin{aligned} Cov({\epsilon (\mathbf {x})}) = \sigma ^2 {{\varvec{\Sigma }}}, \qquad \text {where} \qquad {{\varvec{\Sigma }}}_{i,j}=R({\mathbf {x_i}},{\mathbf {x_j}}) \end{aligned}$$
(5)

When \(\mu (\mathbf {x})\) is assumed to be an unknown constant, the unbiased prediction is called ordinary Kriging (OK). In OK, the Kriging model determines the hyperparameters \({{\varvec{\theta }}} = [\theta _1, \theta _2, \ldots , \theta _m]\) by maximizing the likelihood over the observed dataset. The expression of the log-likelihood function is:

$$\begin{aligned} L = -\frac{n}{2}\ln ({\sigma }^2) - \frac{1}{2}\ln (|{{\varvec{\Sigma }}}|) \end{aligned}$$
(6)

The maximum likelihood estimates of the mean \(\hat{\mu }\) and the variance \(\hat{\sigma }^2\) are given by:

$$\begin{aligned} \hat{\mu }&= \frac{\mathbf {1}^{\top }_n {{\varvec{\Sigma }}}^{-1} \mathbf {y}}{\mathbf {1}^{\top }_n {{\varvec{\Sigma }}}^{-1} \mathbf {1}_n} \end{aligned}$$
(7)
$$\begin{aligned} \hat{\sigma }^2&= \frac{1}{n}(\mathbf {y} - \mathbf {1}_n\hat{\mu })^{T}{{\varvec{\Sigma }}}^{-1}(\mathbf {y}-\mathbf {1}_n\hat{\mu }) \end{aligned}$$
(8)

Then the predictor of the mean and the variance at a target point \(\mathbf {x}^t\) can be derived. They are shown in [21]:

$$\begin{aligned} \mu (\mathbf {x}^t)&= \hat{\mu } + \mathbf {c}^{\top } {{\varvec{\Sigma }}}^{-1} (\mathbf {y} - \hat{\mu }\mathbf {1}_n) \end{aligned}$$
(9)
$$\begin{aligned} \sigma ^2(\mathbf {x}^t)&= \hat{\sigma }^2 \left[ 1 - \mathbf {c}^{\top } {{\varvec{\Sigma }}}^{-1}\mathbf {c} + \frac{\big (1-\mathbf {1}_n^{\top }{\varvec{\Sigma }}^{-1}\mathbf {c}\big )^2}{\mathbf {1}_n^{\top }{\varvec{\Sigma }}^{-1}\mathbf {1}_n}\right] \end{aligned}$$
(10)

where \(\mathbf {c} = (Corr[y(x^t),y(x_1)], \ldots , Corr[y(x^t),y(x_n)])^{\top }\).
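To make the ordinary Kriging predictor concrete, the following minimal NumPy sketch implements Eqs. (4) and (7)–(10) for a fixed \(\varvec{\theta }\). The function names (gauss_corr, ok_fit, ok_predict), the small nugget term and the toy data are illustrative assumptions, not the implementation used in the paper.

import numpy as np

def gauss_corr(X1, X2, theta):
    # Gaussian correlation (Eq. 4): R(x, x') = prod_i exp(-theta_i (x_i - x'_i)^2)
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2        # pairwise squared differences
    return np.exp(-(d2 * theta).sum(axis=-1))

def ok_fit(X, y, theta):
    # Maximum-likelihood estimates of mu and sigma^2 (Eqs. 7-8) for a fixed theta
    n = len(y)
    Sigma = gauss_corr(X, X, theta) + 1e-10 * np.eye(n)  # small nugget for numerical stability
    Sinv = np.linalg.inv(Sigma)
    ones = np.ones(n)
    mu_hat = ones @ Sinv @ y / (ones @ Sinv @ ones)
    s2_hat = (y - mu_hat) @ Sinv @ (y - mu_hat) / n
    return Sigma, Sinv, mu_hat, s2_hat

def ok_predict(xt, X, y, theta, Sinv, mu_hat, s2_hat):
    # Predictive mean and variance at a target point x^t (Eqs. 9-10)
    c = gauss_corr(xt[None, :], X, theta).ravel()
    ones = np.ones(len(y))
    mean = mu_hat + c @ Sinv @ (y - mu_hat)
    var = s2_hat * (1.0 - c @ Sinv @ c
                    + (1.0 - ones @ Sinv @ c) ** 2 / (ones @ Sinv @ ones))
    return mean, max(var, 0.0)

# toy usage on a 1-D function
X = np.linspace(0, 1, 6)[:, None]
y = np.sin(3 * X).ravel()
theta = np.array([10.0])
_, Sinv, mu_hat, s2_hat = ok_fit(X, y, theta)
print(ok_predict(np.array([0.35]), X, y, theta, Sinv, mu_hat, s2_hat))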

2.3 Structure of MOBGO

In MOBGO, it is assumed that the d objective functions are mutually independent in the objective space. Each objective function is approximated by a Kriging model individually, based on the \(\eta \) previously evaluated data points \(D = \big ( (\mathbf {x}^{(1)}, \mathbf {y}^{(1)}=y(\mathbf {x}^{(1)})), \dots , (\mathbf {x}^{(\eta )}, \mathbf {y}^{(\eta )}=y(\mathbf {x}^{(\eta )})) \big )\). Each Kriging model yields a one-dimensional normal predictive distribution, with a mean \(\mu \) and a standard deviation \(\sigma \). Given a target solution \(\mathbf {x}^{t}\), the Kriging models can predict the multivariate outputs by means of an independent joint normal distribution with means \(\mu _1\), \(\dots \), \(\mu _d\) and standard deviations \(\sigma _1\), \(\dots \), \(\sigma _d\). These predictive means and standard deviations are used to calculate the score of an infill criterion, which quantitatively measures how promising the target point \(\mathbf {x}^{t}\) is when compared with the current Pareto-front approximation set. A promising solution \(\mathbf {x}^*\) can be found by maximizing/minimizing (Footnote 3) the score of the infill criterion. Then, this promising solution \(\mathbf {x}^*\) is evaluated by the ‘true’ objective functions, and both the dataset D and the Pareto-front approximation set \(\mathcal {P}\) are updated.

The basic structure of the MOBGO algorithm is shown in Algorithm 1. It mainly contains three parts: initialization of a sampling dataset, searching for an optimal solution and updating the Kriging models, and returning the Pareto-front approximation set \(\mathcal {P}\).

[Algorithm 1: The MOBGO algorithm (pseudocode)]

First, a dataset D is initialized and a Pareto-front approximation set \(\mathcal {P}\) is computed, as shown in Algorithm 1 from Step 1 to Step 5. The initialization of D consists of generating the decision vectors with the Latin Hypercube Sampling (LHS) method [29] (Step 1), calculating the corresponding objective values (Step 2), and storing this information in the dataset D (Step 3). This dataset D will be utilized to build the Kriging models in the second part.

The second part of MOBGO is the main loop, as shown in Algorithm 1 from Step 6 to Step 12. This main loop starts with training the Kriging models \(M_i\) based on the dataset D (Step 7). Note that M contains d independent models, one for each objective function, and these models are used as temporary objective functions instead of the ‘true’ objective functions in Step 8. Then, an optimizer finds a promising solution \(\mathbf {x}^*\) by maximizing or minimizing an infill criterion C (Step 8). Here, the infill criterion is computed by its corresponding calculation formula, whose inputs include the Kriging models M, the current Pareto-front approximation set \(\mathcal {P}\), a target decision vector \(\mathbf {x}^t\), etc. Theoretically, any single-objective optimization algorithm can be utilized as the optimizer to search for a promising solution \(\mathbf {x}^*\). In this paper, the BI-population CMA-ES is chosen for its favorable performance on the BBOB function testbed [18]. Steps 9 and 10 update the dataset D by adding \((\mathbf {x}^*,y(\mathbf {x}^*))\) to it and update the Pareto-front approximation set \(\mathcal {P}\). The main loop from Step 6 to Step 12 continues until g meets the termination criterion \(T_c\). The last part of MOBGO returns the Pareto-front approximation set \(\mathcal {P}\).
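The structure of this loop can be sketched schematically as below (maximization). The helper arguments fit_model, predict, infill and optimizer are placeholders that the user must supply (e.g. a Kriging fit, an EHVI routine and CMA-ES); their names and signatures are our own assumptions and not part of the paper.

import numpy as np

def pareto_front(Y):
    # non-dominated subset of the rows of Y (maximization)
    keep = [i for i, y in enumerate(Y)
            if not any(np.all(z >= y) and np.any(z > y) for z in Y)]
    return Y[keep]

def mobgo(objectives, bounds, fit_model, predict, infill, optimizer,
          n_init=11, n_iter=20, rng=np.random.default_rng(0)):
    bounds = np.asarray(bounds, dtype=float)          # shape (m, 2): lower/upper bounds
    d, m = len(objectives), len(bounds)
    # Steps 1-3: initial design (plain uniform sampling here; the paper uses LHS) and evaluation
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, m))
    Y = np.array([[f(x) for f in objectives] for x in X])
    P = pareto_front(Y)                               # Steps 4-5
    for _ in range(n_iter):                           # Steps 6-12: main loop
        models = [fit_model(X, Y[:, i]) for i in range(d)]              # Step 7
        x_star = optimizer(                                              # Step 8: maximize infill
            lambda x: infill([predict(mdl, x) for mdl in models], P), bounds)
        y_star = np.array([f(x_star) for f in objectives])               # Step 9: true evaluation
        X, Y = np.vstack([X, x_star]), np.vstack([Y, y_star])            # Step 10: update D
        P = pareto_front(Y)
    return P                                          # last part: return P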

The choice of the infill criterion C at Step 8 distinguishes the different types of MOBGO-based algorithms. In this paper, EHVI-MOBGO and PoI-MOBGO, which use EHVI and PoI [22, 24, 35] as the infill criterion C, respectively, are compared in Sect. 5.2.

3 Definitions

Pareto dominance, or briefly dominance, is a fundamental concept in MOO and provides an ordering relation on the set of potential solutions (Footnote 4). Dominance is defined as follows:

Definition 1

(Dominance [4]) Given two decision vectors \(\mathbf {x}^{(1)},\mathbf {x}^{(2)} \in \mathbb {R}^m \) and their corresponding objective values \(\mathbf {y}^{(1)}=y(\mathbf {x}^{(1)})\), \(\mathbf {y}^{(2)}=y(\mathbf {x}^{(2)})\) in a maximization problem, it is said that \(\mathbf {y}^{(1)}\) dominates \(\mathbf {y}^{(2)}\), being represented by \(\mathbf {y}^{(1)} \prec \mathbf {y}^{(2)}\), iff \(\forall i \in \{ 1, 2, \ldots , d \}: {y}_i(\mathbf {x}^{(1)}) \ge {y}_i(\mathbf {x}^{(2)})\) and \(\exists j \in \{ 1, 2, \ldots , d \}: {y}_j(\mathbf {x}^{(1)}) > {y}_j(\mathbf {x}^{(2)})\).

From the perspectives of searching and optimization, non-dominated points are of greater interest. The concept of non-dominance is defined as:

Definition 2

(Non-dominance [14]) Given a decision vector set \(\mathbf {X} \subseteq \mathbb {R}^m\), and the image of the vector set \(\mathbf {Y} = \{y(\mathbf {x}) \,|\, \mathbf {x} \in \mathbf {X} \}\), the non-dominated subset of \(\mathbf {Y}\) is defined as:

$$\begin{aligned} nd(\mathbf {Y}) := \{ \mathbf {y} \in \mathbf {Y} | \not \exists \mathbf {z} \in \mathbf {Y}: \mathbf {z} \prec \mathbf {y} \} \end{aligned}$$
(11)

A vector \(\mathbf {y} \in nd(\mathbf {Y})\) is called a non-dominated point of \(\mathbf {Y}\).

Definition 3

(Dominated subspace of a set) Let \(\mathcal {P}\) be a subset of \(\mathbb {R}^d\). The dominated subspace of \(\mathcal {P}\) in \(\mathbb {R}^d\), notation \( \text{ dom } (\mathcal {P})\), is then defined as:

$$\begin{aligned} \text{ dom }(\mathcal {P}):= \{\, \mathbf {y} \in \mathbb {R}^d\, |\, \exists \mathbf {p} \in \mathcal {P} \text{ with } \mathbf {p} \prec \mathbf {y}\, \} \end{aligned}$$
(12)

Definition 4

(Non-dominated space of a set) Let \(\mathcal {P}\) be a subset of \(\mathbb {R}^d\) and let \(\mathbf {r} \in \mathbb {R}^d\) be such that \(\forall \mathbf {p} \in \mathcal {P}: \mathbf {p} \prec \mathbf {r}\). The non-dominated space of \(\mathcal {P}\) with respect to \(\mathbf {r}\), denoted as \(\text{ ndom }(\mathcal {P})\), is then defined as:

$$\begin{aligned} \text{ ndom } (\mathcal {P}):= \{ \mathbf {y} \in \mathbb {R}^d\, |\, \mathbf {y} \prec \mathbf {r} \text{ and } \not \exists \mathbf {p} \in \mathcal {P} \text{ such } \text{ that } \mathbf {p} \prec \mathbf {y} \, \} \end{aligned}$$
(13)

Note that the notion of dominated space as well as the notion of non-dominated space of a set can also be defined for (countably and non-countably) infinite sets \(\mathcal {P}\).

The Hypervolume Indicator (HV), introduced in [42], is one of the essential unary indicators for evaluating the quality of a Pareto-front approximation set. Its theoretical properties are discussed in [43]. Notably, HV does not require knowledge of the Pareto front in advance. The maximization of HV leads to a high-quality and diverse Pareto-front approximation set. The Hypervolume Indicator is defined as follows:

Definition 5

(Hypervolume indicator) Given a finite approximation to a Pareto front, say \(\mathcal {P}= \{ \mathbf {y}^{(1)}, \dots , \mathbf {y}^{(n)}\} \subset \mathbb {R}^d\), the Hypervolume Indicator of \(\mathcal {P}\) is defined as the d-dimensional Lebesgue measure of the subspace dominated by \(\mathcal {P}\) and bounded below by a reference point \(\mathbf {r}\):

$$\begin{aligned} \text{ HV }(\mathcal {P}) = \lambda _d(\cup _{\mathbf {y} \in \mathcal {P}} [\mathbf {r}, \mathbf {y}]) \end{aligned}$$
(14)

with \(\lambda _d\) being the Lebesgue measure on \(\mathbb {R}^d\).

The hypervolume indicator measures the size of the dominated subspace bounded below by a reference point \(\mathbf {r}\), which needs to be provided by the user. Theoretically, in order to retain the extreme non-dominated points, this reference point should be chosen such that it is dominated by all elements of the Pareto-front approximation set \(\mathcal {P}\) throughout the optimization process. In practice, however, no special care in setting the reference point is required if the user is not interested in the extreme non-dominated points.

Another important infill criterion is Hypervolume Improvement, which is also called the Improvement of Hypervolume in [11]. The definition of Hypervolume Improvement is:

Definition 6

(Hypervolume improvement) Given a finite collection of vectors \(\mathcal {P}\subset \mathbb {R}^d\), the Hypervolume Improvement of a vector \(\mathbf {y} \in \mathbb {R}^d\) is defined as:

$$\begin{aligned} \text{ HVI }(\mathbf {y}, \mathcal {P}) = \text{ HV }(\mathcal {P}\cup \{\mathbf {y}\}) - \text{ HV }(\mathcal {P}) \end{aligned}$$
(15)

When we want to emphasize the reference point \(\mathbf {r}\), the notation \(\text{ HVI }(\mathbf {y}, \mathcal {P},\mathbf {r})\) will be used to denote Hypervolume Improvement.

Example 1

Figure 1 illustrates the concept of Hypervolume Improvement using two examples. The first example, on the left, is a 2-D example: Suppose a Pareto-front approximation set is \(\mathcal {P}\), which is composed of \(\mathbf {y}^{(1)}= (1,2.5)^\top \), \(\mathbf {y}^{(2)}= (2,1.5)^\top \) and \(\mathbf {y}^{(3)} = (3,1)^\top \). When a new point \(\mathbf {y}^{(+)} = (2.8,2.3)^\top \) is added, the Hypervolume Improvement \(\text{ HVI }(\mathbf {y}^{(+)},\mathcal {P}, \mathbf {r})\) is the area of the yellow polygon. The second example (on the right in Fig. 1) illustrates the Hypervolume Improvement by means of a 3-D example. Assume a Pareto-front approximation set is \(\mathcal {P}=\{\mathbf {y}^{(1)}=(4,4,1)^\top \), \(\mathbf {y}^{(2)}=(1,2,4)^\top \), \(\mathbf {y}^{(3)}=(2,1,3)^\top \}\). The Hypervolume Improvement of \(\mathbf {y}^{(+)}=(3,3,2)^\top \) relative to \(\mathcal {P}\) is given by the joint volume covered by the yellow slices.
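The 2-D case of Definitions 5 and 6 is small enough to check directly. The following sketch is our own illustration (the function names and printed values are not taken from the paper): it computes the hypervolume by a sweep over decreasing \(y_1\) and evaluates the 2-D part of Example 1.

def hv_2d(P, r):
    # Hypervolume of the region dominated by P and bounded below by r (maximization)
    hv, y2_max = 0.0, r[1]
    for y1, y2 in sorted(P, key=lambda p: -p[0]):   # sweep from the largest y1 down
        if y2 > y2_max:
            hv += (y1 - r[0]) * (y2 - y2_max)
            y2_max = y2
    return hv

def hvi_2d(y, P, r):
    # Hypervolume Improvement of y over P (Definition 6)
    return hv_2d(list(P) + [y], r) - hv_2d(P, r)

P = [(1, 2.5), (2, 1.5), (3, 1)]
print(hv_2d(P, (0, 0)))                 # should print 5.0
print(hvi_2d((2.8, 2.3), P, (0, 0)))    # should print 1.84, the area of the yellow polygon in Fig. 1 (left)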

Fig. 1 The left and right figures illustrate Hypervolume Improvement in a 2-D and a 3-D example, respectively

Probability of Improvement (PoI) is an important criterion in MOBGO. It was first introduced by Stuckman in [34]. Later, Emmerich et al. [13] generalized it to multi-objective optimization. PoI is defined as:

Definition 7

(Probability of improvement) Given parameters of the multivariate predictive distribution \(\varvec{\mu }\), \(\varvec{\sigma }\) and the Pareto-front approximation set \(\mathcal {P}\), the Probability of Improvement is defined as:

$$\begin{aligned}&\text{ PoI }(\varvec{\mu }, \varvec{\sigma }, \mathcal {P}) : = \int \limits _{\mathbb {R}^d} \mathrm {I}(\mathbf {y} \text{ impr } \mathcal {P}) \varvec{\xi }_{\varvec{\sigma }, \varvec{\mu }}(\mathbf {y}) d\mathbf {y} \qquad \qquad \mathrm {I}(v) = \left\{ \begin{array}{ll} 1 &{} \text{ if } v= {\mathrm {true}}\\ 0 &{} \text{ if } v= {\mathrm {false}} \end{array} \right. \end{aligned}$$
(16)

where \(\varvec{\xi }_{\varvec{\mu }, \varvec{\sigma }}\) is a multivariate independent normal distribution with the mean values \(\varvec{\mu } \in \mathbb {R}^d\) and the standard deviations \(\varvec{\sigma } \in \mathbb {R}^d_+\). Here, (\(\mathbf {y} \text{ impr } \mathcal {P}\)) denotes that \(\mathbf {y} \in \mathbb {R}^d\) is an improvement with respect to \(\mathcal {P}\), which holds if and only if: \(\mathbf {y} \prec \mathbf {r}\) and \(\forall \mathbf {p} \in \mathcal {P}: \lnot (\mathbf {p} \prec \mathbf {y})\).

In Eq. (16), \(\text{ I }(\mathbf {y} \text{ impr } \mathcal {P})=1\) means that \(\mathbf {y}\) is an element of the non-dominated space of \(\mathcal {P}\). In other words, \(\mathbf {y} \in [\mathbf {r}, \infty ^d] {\setminus } \text{ dom } (\mathcal {P})\) if \(\text{ I }(\mathbf {y} \text{ impr } \mathcal {P})=1\). A reference point \(\mathbf {r}\) is not indicated in Eq. (16) because \(\mathbf {r}\) must be chosen as \([-\infty ]^d\) in PoI. Therefore, PoI is a reference-free infill criterion.
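Since PoI is reference-free, a simple Monte Carlo estimate of Eq. (16) only has to count samples of the predictive distribution that are not dominated by any element of \(\mathcal {P}\). The sketch below is purely illustrative (the function name, sample size and printed value are our own assumptions).

import numpy as np

def poi_mc(mu, sigma, P, n_samples=200_000, rng=np.random.default_rng(1)):
    Y = rng.normal(mu, sigma, size=(n_samples, len(mu)))   # y ~ xi_{mu, sigma}
    P = np.asarray(P)
    dominated = np.zeros(n_samples, dtype=bool)
    for p in P:                                            # I(y impr P): no p in P dominates y
        dominated |= np.all(p >= Y, axis=1) & np.any(p > Y, axis=1)
    return np.mean(~dominated)

P = [(3, 1), (2, 1.5), (1, 2.5)]
print(poi_mc(np.array([2.5, 2.0]), np.array([0.7, 0.8]), P))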

Definition 8

(Expected hypervolume improvement) Given parameters of the multivariate predictive distribution \(\varvec{\mu }\), \(\varvec{\sigma }\) and the Pareto-front approximation set \(\mathcal {P}\), the expected hypervolume improvement is defined as:

$$\begin{aligned} \textit{EHVI}(\varvec{\mu }, \varvec{\sigma }, \mathcal {P}, \mathbf {r}) := \int \limits _{\mathbb {R}^d} \text{ HVI }(\mathcal {P}, \mathbf {y}, \mathbf {r}) \cdot \varvec{\xi }_{\varvec{\sigma }, \varvec{\mu }}(\mathbf {y}) d\mathbf {y} \end{aligned}$$
(17)

Example 2

An illustration of the 2-D EHVI is shown in Fig. 2. The light gray area is the dominated subspace of \(\mathcal {P}= \{\mathbf {y}^{(1)} = (3,1)^\top , \mathbf {y}^{(2)}=(2, 1.5)^\top , \mathbf {y}^{(3)} = (1, 2.5)^\top \}\) bounded by the reference point \(\mathbf {r} = (0,0)\). The bivariate Gaussian distribution has the parameters \(\mu _1=2.5\), \(\mu _2=2\), \(\sigma _1 = 0.7\), \(\sigma _2 = 0.8\). The probability density function (\(\varvec{\xi }\)) of the bivariate Gaussian distribution is indicated as a 3-D plot. Here \(\mathbf {y}^{(+)}\) is a sample from this distribution and the area of improvement relative to \(\mathcal {P}\) is indicated by the dark shaded area. Variables \(y_1\) and \(y_2\) stand for the first and the second objective values, respectively.

Fig. 2 Expected hypervolume improvement in 2-D (cf. Example 2)

For computing integrals of EHVI in Sect. 4, it is useful to define \(\varDelta \) and \(\varPsi _{\infty }\) functions.

Definition 9

(\(\varDelta \)function (see also [14, 39])) For a given vector of objective function values \(\mathbf {y} \in \mathbb {R}^d\) and \(\mathbf {y} \not \in \mathcal {P}\), \(\varDelta (\mathbf {y}, \mathcal {P}, \mathbf {r})\) is the subset of the vectors in \(\mathbb {R}^d\) which are exclusively dominated by the vector \(\mathbf {y}\) but not by elements in \(\mathcal {P}\), and which dominate the reference point \(\mathbf {r}\), that is:

$$\begin{aligned} \varDelta (\mathbf {y}, \mathcal {P}, \mathbf {r}) = \{\mathbf {z} \in \mathbb {R}^d \ | \ \mathbf {y} \prec \mathbf {z} \text{ and } \mathbf {z} \prec \mathbf {r} \text{ and } \not \exists \mathbf {q} \in \mathcal {P}: \mathbf {q} \prec \mathbf {z}\} \end{aligned}$$
(18)

Definition 10

(\(\varPsi _{\infty }\)function (see also [19])) Let \(\phi (s)= 1/\sqrt{2\pi }e^{-\frac{1}{2}s^2} (s\in \mathbb {R})\) denote the PDF (\(\xi \)) of the standard normal distribution. Moreover, let \(\varPhi (s)= \frac{1}{2}\left( 1 + {{\,\mathrm{erf}\,}}\left( \frac{s}{\sqrt{2}}\right) \right) \) denote its cumulative probability distribution function (CDF), and \({{\,\mathrm{erf}\,}}\) denote the Gaussian error function. The general normal distribution with mean \(\mu \) and standard deviation \(\sigma \) has PDF \(\xi _{\mu ,\sigma }(s)=\phi _{\mu , \sigma }(s) = \frac{1}{\sigma }\phi (\frac{s-\mu }{\sigma })\) and its CDF is \(\varPhi _{\mu , \sigma }(s) = \varPhi (\frac{s-\mu }{\sigma })\). Then the function \(\varPsi _{\infty }(a,b,\mu ,\sigma )\) is defined as:

$$\begin{aligned}&\varPsi _{\infty }(a,b,\mu ,\sigma ) : = \int \limits _{b}^{\infty }(z-a)\dfrac{1}{\sigma }\phi \left( \dfrac{z-\mu }{\sigma }\right) dz \end{aligned}$$
(19)

One can easily show that \(\varPsi _{\infty }(a,b,\mu ,\sigma ) =\sigma \phi \left( \dfrac{b-\mu }{\sigma }\right) + (\mu -a)\left[ 1-\varPhi \left( \dfrac{b-\mu }{\sigma }\right) \right] \).
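The closed form is straightforward to transcribe and to verify against the defining integral of Eq. (19) by numerical quadrature; the sketch below assumes SciPy is available and its names are our own.

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def psi_inf(a, b, mu, sigma):
    # Psi_inf(a, b, mu, sigma) = sigma*phi((b-mu)/sigma) + (mu-a)*(1 - Phi((b-mu)/sigma))
    t = (b - mu) / sigma
    return sigma * norm.pdf(t) + (mu - a) * (1.0 - norm.cdf(t))

# cross-check against the defining integral for one parameter setting
a, b, mu, sigma = 1.0, 1.5, 2.0, 0.7
integral, _ = quad(lambda z: (z - a) * norm.pdf(z, mu, sigma), b, np.inf)
print(psi_inf(a, b, mu, sigma), integral)   # the two values should agree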

4 Efficient EHVI calculation

This section mainly discusses an efficient partitioning method for a non-dominated space and how to employ this partitioning method to calculate EHVI and PoI.

4.1 Partitioning a non-dominated space

The efficiency of an infill criterion calculation is determined by the algorithm used to partition the non-dominated space and by the number of integration slices. The main idea of the partitioning method is to separate the integration volume (the non-dominated space) into as few integration slices as possible. Then, the integral of the criterion is calculated within each integration slice. The value of the criterion is the sum of its contributions over all integration slices.

4.1.1 The 2-D case

In the 2-D case, the partitioning method is simple and has already been published by Emmerich et al. [14]. Given a Pareto-front approximation set \(\mathcal {P}\) with n elements, the algorithm in [14] adopts a new way to derive the EHVI calculation formulas and only partitions a non-dominated space into \(n+1\) integration slices, instead of \((n+1)^2\) grids in [5, 19]. For the sake of completeness, we will introduce this integration technique here briefly.

Suppose \(\mathbf {Y}=\{\mathbf {y}^{(1)}, \dots , \mathbf {y}^{(n)}\}\) and \(d=2\); the integration space (the non-dominated space) of \(\mathbf {Y}\) can be divided into \(n+1\) disjoint integration slices (\(S_2^{(i)}, i=1,\dots ,n+1\)) by drawing lines parallel to the \(y_2\)-axis through each element of \(\mathbf {Y}\), as indicated in Fig. 3. Then, each integration slice can be expressed by its lower bound (\(\mathbf {l}_2^{(i)}\)) and upper bound (\(\mathbf {u}_2^{(i)}\)). In order to define the slices formally, we augment the Pareto-front approximation set \(\mathcal {P}\) with two sentinels: \(\mathbf {y}^{(0)} = (r_1, \infty )^\top \) and \(\mathbf {y}^{(n+1)} = (\infty , r_2)^\top \). Then, the integration slices for the 2-D case are defined by:

$$\begin{aligned} S_2^{(i)} = (\mathbf {l}_2^{(i)},\mathbf {u}_2^{(i)}) = \left( \left( \begin{array}{c}l_1^{(i)} \\ l_2^{(i)} \end{array} \right) , \left( \begin{array}{c}u_1^{(i)} \\ u_2^{(i)} \end{array}\right) \right) = \left( \left( \begin{array}{c}y_1^{(i-1)} \\ y_2^{(i)} \end{array} \right) , \left( \begin{array}{c}y_1^{(i)} \\ \infty \end{array}\right) \right) , i=1, \dots , N_2 \end{aligned}$$
(20)
Fig. 3 Partitioning of the 2-D integration region into slices

In the 2-D case, the number of integration slices is straightforward, namely, \(N_2=n+1\).
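Eq. (20) translates directly into a small routine; the sketch below is our own illustration and reuses the 2-D front of Example 1 as test data.

import numpy as np

def slices_2d(P, r):
    # Return (lower, upper) pairs for the n+1 slices S_2^(i) of Eq. (20) (maximization)
    P = sorted(P, key=lambda p: p[0])                    # increasing y1, hence decreasing y2
    aug = [(r[0], np.inf)] + P + [(np.inf, r[1])]        # add the sentinels y^(0) and y^(n+1)
    return [((aug[i - 1][0], aug[i][1]), (aug[i][0], np.inf))
            for i in range(1, len(aug))]

P = [(1, 2.5), (2, 1.5), (3, 1)]
for l, u in slices_2d(P, (0, 0)):
    print(l, u)
# expected lower bounds: (0, 2.5), (1, 1.5), (2, 1), (3, 0), each with upper bound (y1^(i), inf)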

Fig. 4 Upper left: 3-D Pareto-front approximation. Upper right: integration slices in 3-D. Lower center: the projection of the 3-D integration slices onto the \(y_1y_2\)-plane; each slice can be described by its lower and upper bound

4.1.2 The 3-D case

Similar to the 2-D partitioning method, in the 3-D case, each integration slice can be defined by its lower bound (\(\mathbf {l}_3\)) and upper bound (\(\mathbf {u}_3\)). Since the upper bound of each integration slice is always \(\infty \) in the \(y_3\) axis, we can describe each integration slice as follows:

$$\begin{aligned} S_3^{(i)} = (\mathbf {l}_3^{(i)}, \mathbf {u}_3^{(i)})=\left( \left( \begin{array}{c}l_1^{(i)}\\ l_2^{(i)}\\ l_3^{(i)}\end{array}\right) , \left( \begin{array}{c}u_1^{(i)}\\ u_2^{(i)}\\ \infty \end{array}\right) \right) , \quad \quad i=1, \dots , N_3 \end{aligned}$$
(21)

Example 3

An illustration of the integration slices is shown in Fig. 4. A Pareto-front approximation set \(\mathcal {P}\) is composed of 4 points (\(\mathbf {y}^{(1)}=(1,3,4)^\top , \mathbf {y}^{(2)}=(4,2,3)^\top , \mathbf {y}^{(3)}=(2,4,2)^\top \) and \(\mathbf {y}^{(4)}=(3,5,1)^\top \)), shown in the upper left figure. The upper right figure represents the partitioned integration slices of \(\mathcal {P}\). The lower center figure illustrates the projection of the upper right figure onto the \(y_1y_2\)-plane with rectangular slices and \(\mathbf {l}, \mathbf {u}\). The rectangular slices, which share a similar color but differ in opacity, represent integration slices with the same value of \(y_3\) in their lower bound. The lower bound of the 3-D integration slice \(B_4\) is \(\mathbf {l}_3^{(4)} = (1,2,2)^\top \), and the upper bound of the slice is \(\mathbf {u}_3^{(4)} = (2,4,\infty )^\top \).

[Algorithm 2: Partitioning the 3-D non-dominated space into integration slices]

Algorithm 2 describes how to obtain the slices \(S_3^{(1)}\), \(\dots \), \(S_3^{(i)}\), \(\dots \), \(S_3^{(N_3)}\) with the corresponding lower and upper bounds (\(\mathbf {l}_3^{(i)}\) and \(\mathbf {u}_3^{(i)}\)). The partitioning algorithm is similar to the sweep line algorithm described in [15]. The basic idea of this algorithm is to use an AVL tree to process points in descending order of the \(y_3\) coordinate. For each such point, say \(\mathbf {y}^{(i)}\), the algorithm finds all the points \((\mathbf {y}^{(d[1])},\dots , \mathbf {y}^{(d[s])})\) which are dominated by \(\mathbf {y}^{(i)}\) in the \(y_1y_2\)-plane and inserts \(\mathbf {y}^{(i)}\) into the tree. The algorithm then discards the points \(\mathbf {y}^{(d[1])}\), \(\dots \), \(\mathbf {y}^{(d[s])}\) from the AVL tree. See Fig. 5 for an illustration of one such iteration. In each iteration, \(s+1\) slices are created from the coordinates of the points \(\mathbf {y}^{(t)}\), \(\mathbf {y}^{(d[1])}\), \(\dots \), \(\mathbf {y}^{(d[s])}\), \(\mathbf {y}^{(r)}\), and \(\mathbf {y}^{(i)}\), as illustrated in Fig. 5.

The number of integration slices in the 3-D case is \(N_3 = 2n+1\), provided all points are in general position (for each \(i=1,\dots , d\), the i-th coordinate differs between every pair of points in \(\mathbf {Y}\)). Otherwise, \(2n+1\) is an upper bound on the number of slices obtained.

Proof

In the algorithm, each point \(\mathbf {y}^{(i)}|_{i=1,\dots ,n}\) creates two slices. The first one, say slice \(A^{(i)}\), is created when the point \(\mathbf {y}^{(i)}\) is added to the AVL tree. Another slice, say slice \(S_3^{(i)}\), is created when the point \(\mathbf {y}^{(i)}\) is discarded from the AVL tree due to domination by another point, say \(\mathbf {y}^{(s)}\), in the \(y_1y_2\)-plane. These two slices are defined as follows: \(A^{(i)} = ((y_1^{(t)},y_2^{(l2)},y_3^{(i)})^\top ,(y_1^{(u1)},y_2^{(i)},\infty )^\top )\), where \(y_2^{(l2)}\) is either \(y_2^{(r)}\), if no point is dominated by \(\mathbf {y}^{(i)}\) in the \(y_1y_2\)-plane, or \(y_2^{(d[1])}\) otherwise. Moreover, \(S_3^{(i)}=((y^{(i)}_1,y^{(r)}_2,y^{(s)}_3)^\top ,(y^{(u)}_1,y^{(s)}_2,\infty )^\top )\), where \(\mathbf {y}^{(u)}\) denotes either the right neighbor of \(\mathbf {y}^{(i)}\) among the newly dominated points in the \(y_1y_2\)-plane, or \(\mathbf {y}^{(s)}\) if \(\mathbf {y}^{(i)}\) is the rightmost point among all newly dominated points. In this way, each slice can be attributed to exactly one point in \(\mathcal {P}\), except the slice that is created in the final iteration. In the final iteration, one additional point \(\mathbf {y}^{(n+1)}=(\infty , \infty ,\infty )^\top \) is added to the AVL tree. This point creates a new slice when it is added, but because it is never discarded, it adds only a single slice. Therefore, \(2n + 1\) slices are created in total. \(\square \)

Fig. 5 Boundary search for slices in the 3-D case

4.1.3 Higher dimensional cases

In higher dimensional cases, the non-dominated space can be partitioned into axis-aligned hyperboxes, similar to the 3-D case. In the d-dimensional case (\(d\ge 4\)), the hyperboxes are denoted by \(S_d^{(1)}, \dots , S_d^{(i)}, \dots , S_d^{(N_d)}\) with their lower bounds (\(\mathbf {l}^{(1)}, \dots , \mathbf {l}^{(N_d)}\)) and upper bounds (\(\mathbf {u}^{(1)}, \dots , \mathbf {u}^{(N_d)}\)). Here, \(N_d\) is the number of hyperboxes and is defined analogously to \(N_2\) and \(N_3\). The integration hyperbox \(S_d^{(i)}\) is defined as:

$$\begin{aligned} S_d^{(i)} = (\mathbf {l}_d^{(i)}, \mathbf {u}_d^{(i)}) = \big ( (l_1^{(i)}, \ldots , l_d^{(i)})^{\top }, (u_1^{(i)}, \ldots , \infty )^{\top } \big ) \quad \quad i=1, \dots , N_d \end{aligned}$$
(22)

An efficient algorithm for partitioning a higher dimensional, non-dominated space is proposed in this section; it is based on two state-of-the-art algorithms: DKLV17 [6] by Dächert et al. and LKF17 [25] by Lacour et al. Here, DKLV17 is an efficient algorithm to locate the local lower bound points (Footnote 5) (\(\mathbf {L}_d\)) in a dominated space for maximization problems, based on a specific neighborhood structure among local lower bounds. Moreover, LKF17 is an efficient algorithm to calculate the HVI by partitioning the dominated space. In other words, LKF17 is also efficient in partitioning the dominated space and provides the boundary information for each hyperbox in the dominated space.

The idea behind the proposed algorithm is to transform the problem of partitioning a non-dominated space into the problem of partitioning a dominated space, by means of introducing an intermediate Pareto-front approximation set \(\mathcal {P}^{'}\). This transformation is done by the following steps. Suppose that we have a current Pareto-front approximation set \(\mathcal {P}\) for a maximization problem and we want to partition the non-dominated space of \(\mathcal {P}\). Firstly, DKLV17 is applied to locate the local lower bound points (\(\mathbf {L}_d\)) of \(\mathcal {P}\) in the dominated space. Secondly, \(\mathbf {L}_d\) is regarded as a new Pareto-front approximation set \(\mathcal {P}^{'}\) for a minimization problem with the reference point \(\{\infty \}^d\). The dominated space of \(\mathcal {P}^{'}\) is exactly the non-dominated space of \(\mathcal {P}\). Then, LKF17 can be applied to partition the dominated space of \(\mathcal {P}^{'}\) by locating the lower bound points \(\mathbf {l}_d\) and the upper bound points \(\mathbf {u}_d\). These bound points (\(\mathbf {l}_d\), \(\mathbf {u}_d\)) of \(\mathcal {P}^{'}\) in the dominated space of the minimization problem are exactly the lower/upper bound points of the partitioned, non-dominated hyperboxes of \(\mathcal {P}\) for the maximization problem. The pseudocode for partitioning a non-dominated space in higher dimensional cases is shown in Algorithm 3.

[Algorithm 3: Partitioning a non-dominated space in higher dimensional cases]

Example 4

Figure 6 illustrates Algorithm 3. In the 2-D case, suppose a Pareto-front approximation set is \(\mathcal {P}\), which consists of \(\mathbf {y}^{(1)} = (1,2.5)^\top \), \(\mathbf {y}^{(2)} = (2,1.5)^\top \) and \(\mathbf {y}^{(3)} = (3,1)^\top \). The reference point is \(\mathbf {r}=(0,0)\), see Fig. 6 (above left). Use DKLV17 to locate the local lower bound points \(\mathbf {L}_2\), which consist of \(\mathbf {L}_2^{(1)} = (0,2.5)^\top \), \(\mathbf {L}_2^{(2)} = (1,1.5)^\top \), \(\mathbf {L}_2^{(3)} = (2,1)^\top \) and \(\mathbf {L}_2^{(4)} = (3,0)^\top \), see Fig. 6 (above right). Regard all of the local lower bound points \(\mathbf {L}_2\) as the elements of a new Pareto-front approximation set \(\mathcal {P}^{'}=(\mathbf {L}_2^{(1)}, \ldots , \mathbf {L}_2^{(4)})\). Set a new reference point \(\mathbf {r}^{'}=(\infty ,\infty )\) and utilize LKF17 to partition the dominated space of \(\mathcal {P}^{'}\), by considering minimization, see Fig. 6 (below left). The partitioned non-dominated space of \(\mathcal {P}\) is then the partitioned dominated space of \(\mathcal {P}^{'}\), see Fig. 6 (below right).
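In the 2-D case the local lower bounds follow directly from the sorted front, so the first step of this example can be reproduced with a few lines of code; the sketch below is our own illustration (DKLV17 itself is only needed for \(d \ge 3\)).

def local_lower_bounds_2d(P, r):
    # L_2 points of a 2-D maximization front P with reference point r
    P = sorted(P, key=lambda p: p[0])            # increasing y1, hence decreasing y2
    ys1 = [r[0]] + [p[0] for p in P]             # shifted first coordinates
    ys2 = [p[1] for p in P] + [r[1]]             # shifted second coordinates
    return list(zip(ys1, ys2))

P = [(1, 2.5), (2, 1.5), (3, 1)]
print(local_lower_bounds_2d(P, (0, 0)))
# should print [(0, 2.5), (1, 1.5), (2, 1), (3, 0)], the points L_2^(1), ..., L_2^(4) of Example 4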

Fig. 6 Illustration of partitioning a non-dominated space in higher dimensional cases. Above left: Pareto-front approximation set \(\mathcal {P}\). Above right: locating the \(\mathbf {L}_2\) points using DKLV17. Below left: partitioning the dominated space of \(\mathcal {P}^{'}\) using LKF17. Below right: the partitioned non-dominated space of \(\mathcal {P}\)

4.2 EHVI calculation

This section discusses the problem of exact EHVI calculation and derives a new, efficient algorithm (Footnote 6). Sections 4.2.1 and 4.2.2 introduce the proposed method in the 2-D and 3-D cases, respectively. Section 4.2.3 presents the general calculation formulas in higher dimensional cases, based on the proposed method.

In order to simplify the notation, \(\varDelta (\mathbf {y})\) is used whenever \(\mathcal {P}, \mathbf {r}\) are given by the context. Based on \(\varDelta (\mathbf {y})\), the expected hypervolume improvement function can be re-defined as:

$$\begin{aligned} \text{ EHVI }(\varvec{\mu }, \varvec{\sigma }, \mathcal {P}, \mathbf {r})&= \int \limits _{\mathbb {R}^d} \text{ HVI }(\mathcal {P}, \mathbf {y}, \mathbf {r}) \cdot \varvec{\xi }_{\varvec{\sigma }, \varvec{\mu }}(\mathbf {y}) d\mathbf {y} \nonumber \\&= \int \limits _{y_1=-\infty }^\infty \cdots \int \limits _{y_d=-\infty }^\infty \lambda _d[S_d \cap \varDelta (\mathbf {y})] \varvec{\xi }_{\varvec{\mu }, \varvec{\sigma }}(\mathbf {y}) d\mathbf {y} \end{aligned}$$
(23)

For the convenience of expressing the EHVI formula in the remaining parts of this paper, two functions (\(\ell \) and \(\vartheta \)) are defined as follows:

Definition 11

(\(\ell \)function) Given the parameters of an integration slice \(S_d^{(i)}\) in a d-dimensional space, the Hypervolume Improvement of slice \(S_d^{(i)}\) in dimension \(k \le d\) is defined as:

$$\begin{aligned} \ell (u_k^{(i)},y_k,l_k^{(i)}) := \lambda _1[S_d^{(i)}\cap \varDelta (y_k)]=|[l_k^{(i)},u_k^{(i)}]\cap [l_k^{(i)},y_k]|=\min \{u_k^{(i)},y_k\}-l_k^{(i)} \end{aligned}$$
(24)

Definition 12

(\(\vartheta \)function) Given the parameters of an integration slice \(S_d^{(i)}\) in a d-dimensional space and multivariate predictive distribution \(\varvec{\mu }, \varvec{\sigma }\), the function \(\vartheta (l_k^{(i)},u_k^{(i)},\sigma _k,\mu _k)\) is then defined as:

$$\begin{aligned} \vartheta (l_k^{(i)},u_k^{(i)},\sigma _k,\mu _k) :=&\int \limits _{y_k=u_k^{(i)}}^\infty \lambda _1[S_d^{(i)}\cap \varDelta (y_k)]\cdot {\varvec{\xi }}_{\mu _k,\sigma _k}(y_k) dy_k \nonumber \\ =&\int \limits _{y_k=u_k^{(i)}}^\infty (u_k^{(i)} - l_k^{(i)}) \cdot {\varvec{\xi }}_{\mu _k,\sigma _k}(y_k)dy_k \nonumber \\ =&(u_k^{(i)} - l_k^{(i)}) \cdot \left( 1 - \varPhi \Big ( \frac{u_k^{(i)}-\mu _k}{\sigma _k} \Big ) \right) \qquad k=1,\ldots , d-1 \end{aligned}$$
(25)
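Definition 12 also translates directly into code; the sketch below assumes SciPy's norm for \(\varPhi \) and adds the numerical guard needed when \(u_k^{(i)} = \infty \) (the limit of Eq. (25) is then 0).

import numpy as np
from scipy.stats import norm

def vartheta(l, u, sigma, mu):
    # Eq. (25): constant improvement (u - l) times the tail probability 1 - Phi((u - mu)/sigma)
    if np.isinf(u):
        return 0.0          # limit for u -> infinity; avoids inf * 0 in floating point
    return (u - l) * (1.0 - norm.cdf((u - mu) / sigma))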

4.2.1 2-D EHVI calculation

According to the definition of the 2-D integration slices in Eq. (20), the Hypervolume Improvement of a point \(\mathbf {y} \in \mathbb {R}^2\) in the 2-D case is:

$$\begin{aligned} \text{ HVI }_2(\mathbf {y}, \mathcal {P}, \mathbf {r}) = \sum _{i=1}^{N_2} \lambda _2 [S_2^{(i)}\cap \varDelta (\mathbf {y})] \end{aligned}$$
(26)

\(\text{ HVI }_2\) gives rise to the compact integral for the original EHVI:

$$\begin{aligned} \text{ EHVI }(\varvec{\mu },\varvec{\sigma },\mathcal {P},\mathbf {r}) =&\int \limits _{y_1=-\infty }^\infty \int \limits _{y_2=-\infty }^{\infty }\sum _{i=1}^{N_2} \lambda _2 [S_2^{(i)}\cap \varDelta (\mathbf {y})]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})d\mathbf {y} \end{aligned}$$
(27)

Here \(\mathbf {y}=(y_1,y_2)^\top \); the intersection of \(S_2^{(i)}\) with \(\varDelta (y_1, y_2)\) is non-empty if and only if \(\mathbf {y}\) dominates the lower left corner of \(S_2^{(i)}\). Therefore:

$$\begin{aligned} \text{ EHVI }(\varvec{\mu },\varvec{\sigma },\mathcal {P},\mathbf {r}) =&\sum _{i=1}^{N_2}\int \limits _{y_1=l_1^{(i)}}^{\infty }\int \limits _{y_2=l_2^{(i)}}^{\infty }\lambda _2[S_2^{(i)}\cap \varDelta (\mathbf {y})]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})d\mathbf {y} \end{aligned}$$
(28)

In Eq. (28), the order of summation and integration has been exchanged, which is allowed because integration is a linear mapping. Moreover, the integration interval \(\int _{y_1=l_1^{(i)}}^{\infty }\) can be divided into \((\int _{y_1=l_1^{(i)}}^{u_1^{(i)}} + \int _{y_1=u_1^{(i)}}^{\infty })\), because the HVI in one dimension, \(\lambda _1[S_2^{(i)}\cap \varDelta (y_1)]\), differs between these two integration intervals. Here \(\lambda _1[B_i\cap \varDelta (y_k)]\) is the HVI in dimension k, i.e., a 1-D HVI. Equation (28) can then be expressed as:

$$\begin{aligned} \text{ EHVI }(\varvec{\mu },\varvec{\sigma },\mathcal {P},\mathbf {r}) =&\sum _{i=1}^{N_2}\int \limits _{y_1=l_1^{(i)}}^{u_1^{(i)}}\int \limits _{y_2=l_2^{(i)}}^{\infty }\lambda _2[S_2^{(i)}\cap \varDelta (\mathbf {y})]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})d\mathbf {y} \end{aligned}$$
(29)
$$\begin{aligned}&+ \sum _{i=1}^{N_2}\int \limits _{y_1=u_1^{(i)}}^{\infty }\int \limits _{y_2=l_2^{(i)}}^{\infty }\lambda _2[S_2^{(i)}\cap \varDelta (\mathbf {y})]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})d\mathbf {y} \end{aligned}$$
(30)

According to the definition of the HVI, \(\ell (u_1^{(i)},y_1,l_1^{(i)})\) is constant and equals \(u_1^{(i)}-l_1^{(i)}\) on the interval \([u_1^{(i)},\infty )\). Therefore, the expected improvement in the \(y_1\) dimension over this interval is also a constant, namely \(\vartheta (l_1^{(i)},u_1^{(i)},\sigma _1,\mu _1)\). Recalling the \(\varPsi _{\infty }\) function, the terms (29) and (30) can be expressed as follows:

$$\begin{aligned}&\text {Term } (29) = \sum _{i=1}^{N_2} \left( \varPsi _{\infty }(l_1^{(i)},l_1^{(i)},\mu _1,\sigma _1) - \varPsi _{\infty }(l_1^{(i)},u_1^{(i)},\mu _1,\sigma _1) \right) \cdot \varPsi _{\infty }(l_2^{(i)},l_2^{(i)},\mu _2,\sigma _2) \end{aligned}$$
(31)
$$\begin{aligned}&\text {Term } (30) = \sum _{i=1}^{N_2} \vartheta (l_1^{(i)},u_1^{(i)},\sigma _1,\mu _1) \cdot \varPsi _{\infty }(l_2^{(i)},l_2^{(i)},\mu _2,\sigma _2) \end{aligned}$$
(32)

According to Eq. (28), the exact EHVI calculation needs to compute the terms (29) and (30) \(n+1\) times, and each computation requires O(1) time. Keeping \(\mathcal {P}\) sorted in the first coordinate requires amortized \(O(\log n)\) time per iteration. Hence, the time complexity of the expected hypervolume improvement calculation in the 2-D case is \(O(n \log n)\). When \(\mathcal {P}\) is already sorted, the time complexity is \(\varTheta (n)\).
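Putting the pieces together, the following self-contained sketch evaluates the 2-D EHVI as the sum of the terms (31) and (32) over the \(n+1\) slices of Eq. (20). The helper functions repeat the earlier sketches, and the printed value for the configuration of Example 2 is our own illustration (it is not reported in the paper); a Monte Carlo estimate of Eq. (17) can be used as a cross-check.

import numpy as np
from scipy.stats import norm

def psi_inf(a, b, mu, sigma):
    t = (b - mu) / sigma
    return sigma * norm.pdf(t) + (mu - a) * (1.0 - norm.cdf(t))

def vartheta(l, u, sigma, mu):
    if np.isinf(u):
        return 0.0                               # limit of (u - l)*(1 - Phi) as u -> infinity
    return (u - l) * (1.0 - norm.cdf((u - mu) / sigma))

def slices_2d(P, r):
    aug = [(r[0], np.inf)] + sorted(P, key=lambda p: p[0]) + [(np.inf, r[1])]
    return [((aug[i - 1][0], aug[i][1]), (aug[i][0], np.inf)) for i in range(1, len(aug))]

def ehvi_2d(mu, sigma, P, r):
    total = 0.0
    for (l1, l2), (u1, u2) in slices_2d(P, r):
        term29 = (psi_inf(l1, l1, mu[0], sigma[0]) - psi_inf(l1, u1, mu[0], sigma[0])) \
                 * psi_inf(l2, l2, mu[1], sigma[1])
        term30 = vartheta(l1, u1, sigma[0], mu[0]) * psi_inf(l2, l2, mu[1], sigma[1])
        total += term29 + term30
    return total

# the configuration of Example 2
P, r = [(3, 1), (2, 1.5), (1, 2.5)], (0, 0)
print(ehvi_2d([2.5, 2.0], [0.7, 0.8], P, r))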

4.2.2 3-D EHVI calculation

Given a partitioning of the non-dominated space into integration slices \(S_3^{(1)}\), \(\dots \), \(S_3^{(i)}\), \(\dots \), \(S_3^{(2n+1)}\), the EHVI integrations over each slice can be computed separately. To see how this calculation can be done, the Hypervolume Improvement of a point \(\mathbf {y} \in \mathbb {R}^3\) is rewritten as:

$$\begin{aligned} \text{ HVI }_3(\mathcal {P}, \mathbf {y}, \mathbf {r}) = \sum _{i=1}^{N_3} \lambda _3 [S_3^{(i)}\cap \varDelta (\mathbf {y})] \end{aligned}$$
(33)

where \(\varDelta \mathbf {(y)}\) is the part of the objective space that is dominated by \(\mathbf {y}\). The \(\text{ HVI }\) expression in the definition of EHVI in Eq. (17) can be replaced by \(\text{ HVI }_3\) in Eq. (33):

$$\begin{aligned} \text{ EHVI }(\varvec{\mu },\varvec{\sigma },\mathcal {P},\mathbf {r}) = \sum _{i=1}^{N_3}\int \limits _{y_1=l_1^{(i)}}^{\infty }\int \limits _{y_2=l_2^{(i)}}^{\infty }\int \limits _{y_3=l_3^{(i)}}^{\infty }\lambda _3[S_3^{(i)}\cap \varDelta (\mathbf {y})]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})d\mathbf {y} \end{aligned}$$
(34)

Similar to the 2-D case, we can divide the integration interval \(\int _{y_1=l_1^{(i)}}^{\infty }\) and \(\int _{y_2=l_2^{(i)}}^{\infty }\) into \((\int _{y_1=l_1^{(i)}}^{u_1^{(i)}} + \int _{y_1=u_1^{(i)}}^{\infty })\) and \((\int _{y_2=l_2^{(i)}}^{u_2^{(i)}} + \int _{y_2=u_2^{(i)}}^{\infty })\), respectively. Also, again we can swap integration and summation based on the fact that integration is a linear mapping. Based on this subdivision, Eq. (34) can be expressed as:

$$\begin{aligned} \text {Eq.} \,\, (34) =&\sum _{i=1}^{N_3}\int \limits _{y_1=l_1^{(i)}}^{u_1^{(i)}}\int \limits _{y_2=l_2^{(i)}}^{u_2^{(i)}}\int \limits _{y_3=l_3^{(i)}}^{\infty }\lambda _3[S_3^{(i)}\cap \varDelta (\mathbf {y})]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})d\mathbf {y} \end{aligned}$$
(35)
$$\begin{aligned}&+ \sum _{i=1}^{N_3}\int \limits _{y_1=l_1^{(i)}}^{u_1^{(i)}}\int \limits _{y_2=u_2^{(i)}}^{\infty }\int \limits _{y_3=l_3^{(i)}}^{\infty }\lambda _3[S_3^{(i)}\cap \varDelta (\mathbf {y})]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})d\mathbf {y} \end{aligned}$$
(36)
$$\begin{aligned}&+ \sum _{i=1}^{N_3}\int \limits _{y_1=u_1^{(i)}}^{\infty }\int \limits _{y_2=l_2^{(i)}}^{u_2^{(i)}}\int \limits _{y_3=l_3^{(i)}}^{\infty }\lambda _3[S_3^{(i)}\cap \varDelta (\mathbf {y})]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})d\mathbf {y} \end{aligned}$$
(37)
$$\begin{aligned}&+ \sum _{i=1}^{N_3}\int \limits _{y_1=u_1^{(i)}}^{\infty }\int \limits _{y_2=u_2^{(i)}}^{\infty }\int \limits _{y_3=l_3^{(i)}}^{\infty }\lambda _3[S_3^{(i)}\cap \varDelta (\mathbf {y})]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})d\mathbf {y} \end{aligned}$$
(38)

Recalling the definition of the \(\vartheta \) function and calculation of \(\lambda _1[B_i\cap \varDelta (y_k)]\), the term (35) can be rewritten as follows:

$$\begin{aligned} \text {Term} \,\, (35) =&\sum _{i=1}^{N_3} (\varPsi _{\infty }(l_1^{(i)},l_1^{(i)},\mu _1,\sigma _1) - \varPsi _{\infty }(l_1^{(i)},u_1^{(i)},\mu _1,\sigma _1)) \nonumber \\&\cdot (\varPsi _{\infty }(l_2^{(i)},l_2^{(i)},\mu _2,\sigma _2) - \varPsi _{\infty }(l_2^{(i)},u_2^{(i)},\mu _2,\sigma _2)) \cdot \varPsi _{\infty }(l_3^{(i)},l_3^{(i)},\mu _3,\sigma _3) \end{aligned}$$
(39)

Similar to the derivation of the term (35), the terms (36), (37) and (38) can be written as follows:

$$\begin{aligned} \text {Term} \,\, (36) =&\sum _{i=1}^{N_3} ( \varPsi _{\infty }(l_1^{(i)},l_1^{(i)},\mu _1,\sigma _1) - \varPsi _{\infty }(l_1^{(i)},u_1^{(i)},\mu _1,\sigma _1)) \cdot \vartheta (l_2^{(i)},u_2^{(i)},\sigma _2,\mu _2) \nonumber \\&\cdot \varPsi _{\infty }(l_3^{(i)},l_3^{(i)},\mu _3,\sigma _3) \end{aligned}$$
(40)
$$\begin{aligned} \text {Term} \,\, (37) =&\sum _{i=1}^{N_3} \vartheta (l_1^{(i)},u_1^{(i)},\sigma _1,\mu _1) \cdot ( \varPsi _{\infty }(l_2^{(i)},l_2^{(i)},\mu _2,\sigma _2) - \varPsi _{\infty }(l_2^{(i)},u_2^{(i)},\mu _2,\sigma _2)) \nonumber \\&\cdot \varPsi _{\infty }(l_3^{(i)},l_3^{(i)},\mu _3,\sigma _3) \end{aligned}$$
(41)
$$\begin{aligned} \text {Term} \,\, (38) =&\sum _{i=1}^{N_3} \vartheta (l_1^{(i)},u_1^{(i)},\sigma _1,\mu _1) \cdot \vartheta (l_2^{(i)},u_2^{(i)},\sigma _2,\mu _2) \cdot \varPsi _{\infty }(l_3^{(i)},l_3^{(i)},\mu _3,\sigma _3) \end{aligned}$$
(42)

The final EHVI formula is the sum of the terms (39), (40), (41) and (42).
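For one 3-D slice \(S_3^{(i)} = (\mathbf {l}, \mathbf {u})\), this sum can be written compactly as below. The sketch assumes the psi_inf and vartheta helpers from the earlier sketches, and the slices themselves (produced by Algorithm 2) are taken as given.

def ehvi_slice_3d(l, u, mu, sigma):
    # bracket factor for a coordinate split into [l_k, u_k] ...
    def diff(k):
        return psi_inf(l[k], l[k], mu[k], sigma[k]) - psi_inf(l[k], u[k], mu[k], sigma[k])
    # ... and the constant-improvement factor for [u_k, inf)
    def const(k):
        return vartheta(l[k], u[k], sigma[k], mu[k])
    last = psi_inf(l[2], l[2], mu[2], sigma[2])
    # sum of the terms (39), (40), (41) and (42) for this slice
    return (diff(0) * diff(1) + diff(0) * const(1)
            + const(0) * diff(1) + const(0) * const(1)) * last

# the total 3-D EHVI is the sum of ehvi_slice_3d over all N_3 slices of the partition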

During the EHVI calculation, the \(y_1y_2\)-projections of the points stored in the AVL tree are mutually non-dominated. Moreover, the points are sorted by the \(y_2\) coordinate in the AVL tree. Therefore, identifying a neighboring point or a point to discard takes \(O(\log n)\) time. The EHVI for the created integration slices is then calculated by summing the terms (39), (40), (41) and (42), with the parameters \(\varvec{\mu }\), \(\varvec{\sigma }\) and the slice bounds. The EHVI computation for each slice takes O(1) time. Moreover, the dominated points (\(\mathbf {y}^{(d[s])}\)) are removed from the AVL tree, and the new point (\(\mathbf {y}^{(j)}\)) is inserted into the AVL tree. Since the points dominated by the new point \(\mathbf {y}^{(j)}\) are deleted at the end of the current loop, they will not occur again in later computations. Hence, the total number of open slices does not exceed \(N_3\), as mentioned before, and the total computational cost is \(O(n \log n)\).

4.2.3 Higher dimensional EHVI

The interval of integration in each coordinate (except the last) can be divided into two parts: \([l, u]\) and \([u,\infty )\). Therefore, the EHVI equation for each hyperbox can be decomposed into \(2^{d-1}\) parts. On the interval \([u,\infty )\), the improvement \(\lambda _k[S_d^{(i)}\cap \varDelta (y_k)]\) is a constant, and the \(\varPsi _{\infty }\) function can be simplified to this constant improvement multiplied by \(1-\varPhi \), i.e., the \(\vartheta \) function. For the last coordinate, there is no need to split the interval, because the improvement in this coordinate (\(\lambda _d[S_d^{(i)}\cap \varDelta (y_d)]\)) varies over the whole interval \([l,\infty )\).

According to the definition of higher dimensional integral boxes in Sect. 4.1.3, the EHVI (\(d\geqslant 4\)) can be calculated by the following equation:

$$\begin{aligned} \text{ EHVI }(\varvec{\mu },\varvec{\sigma },\mathcal {P},\mathbf {r})&= \sum _{i=1}^{N_d}\int \limits _{y_1=l_1^{(i)}}^{\infty } \cdots \int \limits _{y_d=l_d^{(i)}}^{\infty } \lambda _d[S_d^{(i)}\cap \varDelta (y_1,\ldots ,y_d)]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})\,d\mathbf {y} \nonumber \\&= \sum _{i=1}^{N_d}\left( \prod _{k=1}^{d-1}\left( \int \limits _{y_k=l_k^{(i)}}^{u_k^{(i)}} + \int \limits _{y_k=u_k^{(i)}}^{\infty }\right) \right) \int \limits _{y_d=l_d^{(i)}}^{\infty } \lambda _d[S_d^{(i)}\cap \varDelta (\mathbf {y})]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})\,d\mathbf {y} \nonumber \\&= \sum _{i=1}^{N_d} \sum _{j=0}^{2^{d-1}-1} \left( \prod _{k=1}^{d-1} \omega \big (i,k,C_k^{(j)_2}\big ) \right) \cdot \varPsi _{\infty }(l_d^{(i)},l_d^{(i)},\mu _d,\sigma _d) \end{aligned}$$
(43)

where \((j)_2\) denotes the binary representation of the index j, \(C_k^{(j)_2}\) is its k-th bit, and

$$\begin{aligned} \omega (i,k,0)&= \varPsi _{\infty }(l_k^{(i)},l_k^{(i)},\mu _k,\sigma _k) - \varPsi _{\infty }(l_k^{(i)},u_k^{(i)},\mu _k,\sigma _k), \\ \omega (i,k,1)&= \vartheta (l_k^{(i)},u_k^{(i)},\sigma _k,\mu _k). \end{aligned}$$

Each summand over j in Eq. (43) is one integration term of the hyperbox \(S_d^{(i)}\), obtained by choosing, for every coordinate \(k \le d-1\), either the interval \([l_k^{(i)},u_k^{(i)}]\) (bit 0) or \([u_k^{(i)},\infty )\) (bit 1).
for a divided interval.}} \right) \right) }^{\text {The EHVI value for a hyperbox}~ S_d^{(i)},~ \text {denoted as}~ EHVI_{S_d^{(i)}}.} \end{aligned}$$
(44)

In Eq. (43), the one-dimensional integral \(\int _{y_k=l_k^{(i)}}^{u_k^{(i)}}\lambda _k[S_k^{(i)}\cap \varDelta (y_1,\ldots ,y_k)]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})\,d\mathbf {y}\) takes exactly one of two forms (\(\varPsi _\infty \) or \(\vartheta \)) in each dimension k (\(1 \le k \le d-1\)), whereas in the last dimension (\(k=d\)) the integral \(\int _{y_d=l_d^{(i)}}^{u_d^{(i)}}\lambda _d[S_d^{(i)}\cap \varDelta (y_1,\ldots ,y_d)]\cdot \varvec{\xi }_{\varvec{\mu },\varvec{\sigma }}(\mathbf {y})\,d\mathbf {y}\) is always of the form \(\varPsi _\infty \). The final expression of the EHVI is the sum of the \(\textit{EHVI}_{S_d^{(i)}}\) in Eq. (44) over all partitioned, non-dominated hyperboxes \(S_d^{(i)}|_{i=1,\ldots ,N_d}\). In Eq. (44), \(\textit{EHVI}_{S_d^{(i)}}\) consists of \(2^{d-1}\) terms because the integral has two possible forms in each dimension k (\({1 \le k \le d-1}\)) and only one form in the d-th dimension (\(k=d\)).

In Eqs. (43) and (44), \((j)_2\) stands for the binary string representation of the integer j; its length is \(d-1\). \(C^{(j)_2}_k\) denotes the k-th bit of \((j)_2\), counting from the least significant bit. For example, if \(d=5\) and \(j=8\), then \((j)_2=(1 ~ 0 ~ 0 ~ 0)\), so \(C^{(j)_2}_{k=4} = 1\) and \(C^{(j)_2}_{k=1,2,3} = 0\). In Eq. (44), \(\omega (i,k,C_k^{(j)_2})\) is defined as:

$$\begin{aligned}&\omega (i,k,C_k^{(j)_2}) := {\left\{ \begin{array}{ll} \varPsi _\infty (l_k^{(i)},l_k^{(i)},\mu _k,\sigma _k) - \varPsi _\infty (l_k^{(i)},u_k^{(i)},\sigma _k,\mu _k) &{} \text{ if }\,\, C_k^{(j)_2}=0 \\ \vartheta (l_k^{(i)},u_k^{(i)},\sigma _k,\mu _k) &{} \text{ if } \,\, C_k^{(j)_2}=1 \\ \end{array}\right. } \end{aligned}$$
(45)

Equation (44) shows how to calculate the EHVI in the case of d objectives, and it also determines the runtime complexity of the proposed algorithm. The contribution of a single hyperbox is \(\sum _{j=0}^{2^{d-1}-1} \left( \prod _{k=1}^{d-1} \omega (i,k,C_k^{(j)_2}) \cdot \varPsi _{\infty }(l_d^{(i)},l_d^{(i)},\mu _d,\sigma _d) \right) \), which requires a constant number of operations per hyperbox for any fixed dimension d. The exact number of hyperboxes \(N_d\) of a non-dominated space is still unknown for \(d\ge 4\); the authors hypothesize that \(N_d\) equals the number of local lower bound points, which can be computed by the DKLV17 algorithm. The LKF17 algorithm partitions the non-dominated space into \(\varTheta (n^{\lfloor d/2\rfloor })\) hyperboxes, and the time for computing this partition grows linearly with the number of boxes (see, e.g., Lacour et al. [25]). For a fixed dimension d, the time complexity of our EHVI computation algorithm is therefore also \(\varTheta (n^{\lfloor d/2\rfloor })\).
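To make the summation in Eqs. (44) and (45) concrete, the following C++ sketch evaluates the double sum over hyperboxes and binary strings. It is a minimal illustration, not the KMAC implementation: the Box structure, the function names psi_inf and vartheta (standing for \(\varPsi _\infty \) and \(\vartheta \) as defined earlier in the paper), and the way the box decomposition is supplied are assumptions made for this example; the decomposition itself would come from DKLV17/LKF17.

#include <cstddef>
#include <functional>
#include <vector>

struct Box {                       // one non-dominated hyperbox S_d^(i)
  std::vector<double> l, u;        // lower and upper corner, each of length d
};

// psi_inf(a, b, mu, sigma) and vartheta(l, u, sigma, mu): the one-dimensional
// integrals used in Eq. (45); assumed to be implemented per their earlier definitions.
using OneDimIntegral = std::function<double(double, double, double, double)>;

double ehvi_sum(const std::vector<Box>& boxes,          // the N_d hyperboxes
                const std::vector<double>& mu,          // predictive means
                const std::vector<double>& sigma,       // predictive std. devs.
                const OneDimIntegral& psi_inf,
                const OneDimIntegral& vartheta) {
  const std::size_t d = mu.size();
  double ehvi = 0.0;
  for (const Box& b : boxes) {                          // sum over i = 1, ..., N_d
    double box_value = 0.0;
    for (std::size_t j = 0; j < (std::size_t{1} << (d - 1)); ++j) {  // all (j)_2
      double prod = 1.0;
      for (std::size_t k = 0; k + 1 < d; ++k) {
        const bool bit = (j >> k) & 1u;                 // C_k^{(j)_2}
        // omega(i, k, C_k^{(j)_2}) from Eq. (45):
        prod *= bit ? vartheta(b.l[k], b.u[k], sigma[k], mu[k])
                    : psi_inf(b.l[k], b.l[k], mu[k], sigma[k]) -
                      psi_inf(b.l[k], b.u[k], sigma[k], mu[k]);
      }
      // The last dimension always contributes Psi_inf(l_d, l_d, mu_d, sigma_d).
      box_value += prod * psi_inf(b.l[d - 1], b.l[d - 1], mu[d - 1], sigma[d - 1]);
    }
    ehvi += box_value;                                  // EHVI_{S_d^(i)}
  }
  return ehvi;
}

The outer loop visits each hyperbox once, and the inner loops perform \(O(2^{d-1})\) work per hyperbox, which is constant for fixed d and in line with the complexity discussion below.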

Note that every hyperbox requires \(O(2^{d-1})\) computation time. The overall complexity in terms of d and n is therefore \(O(2^{d-1} \cdot n^{\lfloor d/2\rfloor })\); for example, for \(d=5\) and \(n=100\) this amounts to on the order of \(2^4 \cdot 100^2 = 1.6 \times 10^5\) elementary terms. Due to the exponential dependence on d, the EHVI computation algorithm is only practical for a moderate number of objectives. Note, however, that a much faster computation cannot be expected, as the time complexity of the hypervolume indicator itself scales superpolynomially with the number of objectives d, under the assumption P \(\ne \) NP. It is easy to show that the EHVI computation has at least the same time complexity as the hypervolume indicator computation [14]; it is therefore also NP-hard in d, but polynomial in n for any fixed value of d.

4.3 Probability of improvement (PoI)

According to the partitioning method in Sect. 4.1, \(\text{ PoI }\) can be calculated as follows:

$$\begin{aligned} \text{ PoI }(\varvec{\mu },\varvec{\sigma },\mathcal {P})&= \int \limits _{y_1=-\infty }^\infty \cdots \int \limits _{y_d=-\infty }^\infty \mathrm {I}(\mathbf {y} \text{ impr } \mathcal {P})\, \varvec{\xi }_{\varvec{\mu }, \varvec{\sigma }}(\mathbf {y})\, \mathrm {d}y_1 \dots \mathrm {d}y_d \nonumber \\&= \sum _{i=1}^{N_d} \prod _{j=1}^{d} \left( \varPhi \left( \frac{u_j^{(i)}-\mu _j}{\sigma _j}\right) - \varPhi \left( \frac{l_j^{(i)}-\mu _j}{\sigma _j}\right) \right) \end{aligned}$$
(46)

Here, \(N_d\) is the number of hyperboxes (integration slices), with \(N_2=n+1\) and \(N_3=2n+1\) in the 2-D and 3-D cases, respectively. Because the sum runs only over hyperboxes of the non-dominated space, the indicator \(\mathrm {I}(\mathbf {y} \text{ impr } \mathcal {P})\) equals one on every such box and is therefore omitted in the second line of Eq. (46). Since \(\text{ PoI }\) is a reference-free indicator, the reference point \(\mathbf {r}=\{ - \infty \}^d\) is used only to obtain the correct boundary information (\(\mathbf {l}_d,\mathbf {u}_d\)).
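As a companion to Eq. (46), the sketch below sums, over all hyperboxes, the product of normal CDF differences per objective. Again, this is a minimal illustration under assumptions: the Box structure and function names are ours, and the hyperboxes are assumed to come from the partitioning of Sect. 4.1 with \(\mathbf {r}=\{-\infty \}^d\); infinite box bounds are handled because \(\varPhi (\pm \infty )\) evaluates to 1 and 0.

#include <cmath>
#include <cstddef>
#include <vector>

struct Box {
  std::vector<double> l, u;   // lower and upper corner of one hyperbox, length d
};

// Standard normal cumulative distribution function Phi(z).
static double normal_cdf(double z) { return 0.5 * std::erfc(-z / std::sqrt(2.0)); }

double poi(const std::vector<Box>& boxes,        // the N_d hyperboxes
           const std::vector<double>& mu,        // predictive means
           const std::vector<double>& sigma) {   // predictive standard deviations
  const std::size_t d = mu.size();
  double value = 0.0;
  for (const Box& b : boxes) {                   // sum over i = 1, ..., N_d
    double prod = 1.0;
    for (std::size_t j = 0; j < d; ++j)          // product over the d objectives
      prod *= normal_cdf((b.u[j] - mu[j]) / sigma[j]) -
              normal_cdf((b.l[j] - mu[j]) / sigma[j]);
    value += prod;                               // probability mass of one box
  }
  return value;
}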

5 Experiments

5.1 Speed comparison

The test benchmarks from Emmerich and Fonseca [15] were used to generate Pareto-front sets. The Pareto-front sets and the evaluated points were randomly generated based on the convexSpherical and concaveSpherical functions. Two EHVI calculation algorithms, CDD13 [5] and KMAC, were compared on the same benchmarks in this experiment.Footnote 7 Note that the KMAC algorithm in this paper comprises KMAC_2D for 2-D EHVI calculation from [14], KMAC_3D for 3-D EHVI calculation from [38], and the extended KMAC algorithm for higher-dimensional cases (\(d\ge 4\)) introduced in this paper.

The parameters \(\sigma _d = 2.5\) and \(\mu _d = 10\) were used in the experiments, with the number of objectives \(d = 2, \ldots , 5\). The Pareto-front sizes are \(|\mathcal {P}|\in {\lbrace 10, 20, \ldots , 200 \rbrace }\), and the batch size, i.e., the number of points evaluated against the same Pareto-front approximation set, is 1. Ten \(\mathcal {P}\) sets were randomly generated with the same parameters, and average runtimes over 100 runs (10 repetitions for each of the 10 \(\mathcal {P}\) sets) were computed. All experiments were performed on the same computer with an Intel(R) Xeon(R) CPU I7 3770 at 3.40 GHz and 16 GB RAM. The operating system is Ubuntu 16.04 LTS (64 bit); KMAC was compiled with g++ 4.9.2 and the compiler flag -Ofast, and CDD13 runs on MATLAB 8.4.0.150421 (R2014b), 64 bit. An experiment was halted when an algorithm could not finish the EHVI computation within 30 min.

Fig. 7 Speed comparison of EHVI calculation. Above: concave random Pareto-front set; below: convex random Pareto-front set

The experimental results in Fig. 7 show that KMAC is much faster than CDD13, especially as \(|\mathcal {P}|\) increases. Moreover, CDD13 cannot compute the exact EHVI value within 30 min when \(|\mathcal {P}|\) exceeds 30 in the 5-D case.

5.2 Benchmark performance

Five state-of-the-art algorithms are compared in this section, namely EHVI-MOBGO, PoI-MOBGOFootnote 8, NSGA-II [8], NSGA-III [7, 17] and SMS-EMOA [1]. The benchmarks are DTLZ1, DTLZ2, DTLZ3, DTLZ4, DTLZ5, DTLZ7 [9], MaF1, MaF5, MaF12 and MaF13 [2]. The search-space dimension m of every benchmark is 6 or 18. The parameter settings for all of these algorithms are shown in Table 2. The number of function evaluations (\(T_c\) in Algorithm 1) for the MOBGO-based algorithms is 300. The reference points for each benchmark are shown in Table 3. For DTLZ1 and DTLZ2, the reference points are taken from [5]; for the other test problems, the reference points are adapted from [2, 5]. All experiments were repeated 10 times.

Table 2 Algorithm parameter settings
Table 3 Reference points

The final Pareto fronts are evaluated by means of the hypervolume indicator. Tables 4 and 5 show the experimental results (means and standard deviations) in the 6- and 18-dimensional search spaces, respectively. Compared with the EMOAs in both search spaces, either EHVI-MOBGO or PoI-MOBGO yields the best result on all 10 benchmark functions within 300 function evaluations. On DTLZ7 and MaF13, EHVI-MOBGO still outperforms the EMOAs in both the 6- and 18-dimensional search spaces even when the function evaluation budget of the EMOAs is increased to 2000. Between EHVI-MOBGO and PoI-MOBGO, EHVI-MOBGO performs better in most cases, but PoI-MOBGO yields better results on two and three (out of ten) test problems for \(m=6\) and \(m=18\), respectively. The reason is that the improvement region of the PoI is the whole non-dominated space \(\mathbf {y} \in (-\infty , \infty )^d {\setminus } \text{ dom } (\mathcal {P})\)Footnote 9, which is larger than that of the EHVI, \(\mathbf {y} \in (\mathbf {r}, \infty ^d) {\setminus } \text{ dom } (\mathcal {P})\). Therefore, PoI performs better when searching for extreme non-dominated points. In other words, the EHVI is a reference-based infill criterion and cannot indicate any improvement of an evaluated point that lies in the discarded part of the non-dominated space, namely \(\mathbf {y} \in (-\infty ^d, \mathbf {r}) {\setminus } \text{ dom } (\mathcal {P})\).

Table 4 Empirical comparisons w.r.t. HV in the case of \(m=6\)
Table 5 Empirical comparisons w.r.t. HV in the case of \(m=18\)

6 Conclusions and outlook

This paper describes an efficient algorithm for EHVI calculation. It reviews and benchmarks the recently proposed asymptotically optimal algorithms with \(\varTheta (n\log n)\) time complexity and generalizes them to higher-dimensional cases with \(d\ge 4\). By using the fast box decomposition techniques recently developed by Dächert et al. [6] and Lacour et al. [25], a non-dominated space can be partitioned into only \(\varTheta (n^{\lfloor d/2 \rfloor })\) hyperboxes. The time complexity of the new EHVI computation algorithm scales linearly with the number of hyperboxes of an arbitrary box decomposition. Unlike previous EHVI computation algorithms, the new algorithm does not require a full grid partitioning with \(\varTheta (n^d)\) boxes and is therefore a significant improvement in terms of asymptotic time complexity. In addition, our benchmarks on random non-dominated sets show that the new algorithm is also many orders of magnitude faster in practice for typically sized problems with \(n \le 1000\).

This paper also compares the performance of MOBGO-based algorithms with three other state-of-the-art EMOAs on 10 benchmark test problems. For budgets of up to 300 function evaluations, the MOBGO-based algorithms yield Pareto-front approximation sets with higher HV values than those of the EMOAs in both the 6- and 18-dimensional search spaces. In most cases, EHVI-MOBGO performs better than PoI-MOBGO. However, PoI-MOBGO performs better than EHVI-MOBGO on up to three test problems in the 6- and 18-dimensional search spaces. The reason is that the PoI can indicate an improvement of an evaluated point anywhere in the non-dominated space, whereas the EHVI can only indicate an improvement in the part of the non-dominated space bounded by the reference point. This disadvantage of the EHVI can be mitigated by choosing a larger reference point or by using a dynamic reference-point strategy. However, the reference point must not be chosen too large; otherwise, the EHVI values of different evaluated points become nearly or even exactly identical because of the limited numerical precision of the computations involved.

For future research, it is recommended to further investigate reference-free computation of the EHVI. Moreover, how to compute the EHVI fast for a larger number of objective functions is still an open question. Although it is conjectured that the worst-case time complexity will increase superpolynomially with the number of objectives, a better average-case time complexity could be obtained by adopting concepts from recently proposed divide-and-conquer algorithms for computing the hypervolume indicator [20, 31].