1 Introduction

Pareto-based multi-objective genetic algorithms show excellent performance when optimizing for multiple and often conflicting goals. In our work, we are interested in multi-criteria optimization of digital hardware [12, 16] using the Cartesian Genetic Programming model [20] to represent circuits. Experience shows that for this specific application domain global multi-objective genetic optimizers can be rather slow, especially when compared with local Evolutionary Strategy (ES) techniques [17]. However, in the presence of multiple objectives local search techniques typically work with fitness functions that are linear combinations of the single objectives, rather than with the Pareto-based principle. Linear weighting schemes need to be balanced by a human designer each time there is a new set of goal functions, in order to obtain the highest possible performance. The demand for an unsupervised and preference-free multi-objective method for CGP is a challenge we address in the presented work.

We describe a periodization technique that alternates the execution of global and local evolutionary optimizers [13]. The technique relies on a periodized execution model that blends algorithm properties, functionalities and convergence behaviors in a simple and straightforward way. A typical example is the combination of global search for the early phase of an optimization run with local search for the final phase [7, 25].

Additionally, we present a local search algorithm termed hybrid Evolutionary Strategy (hES), a synthesis of a standard ES and a Pareto set preserving technique, and investigate its performance when periodized with multi-objective genetic optimizers NSGAII [4] and SPEA2 [29]. The characteristic of hES is that it applies a \(\mu +\lambda \) ES on the Pareto-dominant individuals obtained by a multi-objective genetic algorithm while keeping diversity in and avoiding deterioration of the Pareto set.

The remainder of this Chapter is structured as follows: Sect. 2 presents related work on hybrid evolutionary search techniques. Our periodization model is defined in Sect. 3, followed by a discussion of the hybrid Evolutionary Strategy (hES) in Sect. 4. Section 5 defines the fitness metrics used in our experiments and Sect. 6 shows the benchmarks and presents the results. Finally, Sect. 7 concludes the work.

2 Related Work

Early work on multi-objective optimization of CGP was presented by Kaufmann and Platzner in [14]. The authors used NSGAII and SPEA2 for the optimization of Boolean circuits for functional quality, area, and delay. In the following years, the authors refined their method in [11, 24]. In the meantime, Walker et al. have also proposed a similar approach for the optimization of digital circuits regarding multiple objectives in [27].

Hernández-Díaz et al. [7] presented a two-stage multi-objective evolutionary algorithm based on Differential Evolution (DE) and Rough Sets (RS) theory. In the first stage, the authors employed a fast converging multi-objective DE scheme to compute an initial Pareto frontier approximation. In the second stage, they improved the Pareto set diversity using RS theory for detecting loosely-covered regions. The algorithm’s performance was verified on the standard ZDT{1, ..., 6} and DTLZ{1, ..., 4} benchmarks [6, 19]. To compare the computed Pareto sets, the authors used three metrics: the unary additive epsilon indicator [32], the standard deviation of crowding distances (SDC) [5], and the space covered by a Pareto set [31]. The proposed algorithm generally outperformed NSGAII, except on the DTLZ2 and DTLZ4 benchmarks using the SDC metric.

Talbi et al. [25] proposed a similar two-stage approach and used a multi-objective genetic algorithm (GA) to calculate a first rough Pareto frontier approximation, followed by a local search technique for refining the approximation. The authors observed improved behavior compared to a GA-only approach as the complexity of the test problems increased.

Zapotecas et al. [30] presented a hybrid approach combining the global optimizer NSGAII with the local optimizers of Nelder and Mead [21] and the golden section method. The authors enhanced the exploratory NSGAII by local search methods in order to reduce the number of fitness evaluations. The hybrid algorithm was compared to standard NSGAII on continuous benchmarks ZDT{1, ..., 4}, ZDT6 and DTLZ{1, 2} using the metrics inverted generational distance [26], spacing [22] and coverage indicator [28]. With the exception of the ZDT6 and DTLZ{1, 2} benchmarks in combination with the spacing metric, the hybrid algorithm outperformed NSGAII.

Harada et al. [8] analyzed GA-and-LS and GA-then-LS schemes, in which local search is applied either after each generation or after a completed run of a genetic algorithm, respectively. Using generational and Pareto-optimal frontier distances [5] for comparison, the authors concluded that GA-then-LS is superior to GA-and-LS on multiple benchmarks.

The work of Ishibuchi et al. [9, 10] is close to our approach. The authors discuss various implementations of standard multi-objective optimizers such as SPEA2 and NSGAII combined with local search. The key idea of their approach is to periodically swap between different optimizers during a run. The authors conclude that the performance of such a hybrid optimizer is sensitive to the balance between global and local search. However, by carefully weighting global and local search strategies the periodized hybrid optimizer outperformed the standard multi-objective optimizer.

Our work builds on that of Kaufmann et al. [13]: we investigate the hybrid Evolutionary Strategy (hES) and its periodization with the multi-objective optimizers NSGAII and SPEA2 in a GA-and-LS manner.

3 The Periodization Model

Let \(A=(a_1,a_2,\dots ,a_n)\) be the collection of algorithms used in the periodization. As an illustrative example, consider \(A = (\text {GA1}, \text {GA2}, \text {LS})\). For a hypothetical periodized algorithm that executes a single step/generation of GA1, followed by two steps of LS, then a single step of GA2 and two steps of LS, the index sequence I for the algorithm selection is given by \((i_1,i_2,i_3,i_4)=(1,3,2,3)\), and the repetition sequence F is \((f_1, f_2, f_3, f_4)=(1, 2, 1, 2)\). While in this specific example F is a vector of constants, the number of repetitions can also be adjusted adaptively, based on the history \(\mathscr {H}\) of the optimization run. In particular, global search GAs with fast convergence in the beginning of an optimization run could be repeated more often in the early search phases, while local search algorithms that excel at improving nearly optimal nondominated sets could be used more intensively in the final optimization phase.

With t as the current generation number, \(\mathscr {H}\) as the history of the current optimization run, \(A=(a_1,a_2,\dots ,a_n)\), \(n \in \mathbb {N}\), as the collection of algorithms used in the periodization, \(I=(i_1,i_2,\dots ,i_m)\), \(m \in \mathbb {N}\), \(i_k \in \{1,2,\dots ,n\}\), as the sequence of indices of the selected algorithms in the execution order, and \(F=(f_1, f_2, \ldots , f_m)\), \(f_k(t, \mathscr {H}) \in \mathbb {N}\), as the numbers of repetitions for the algorithms selected by I, the complete periodized execution model P is defined as:

$$\begin{aligned} P:= & {} A_I^F = (a_{i_1}^{f_1(t,\mathscr {H})}, a_{i_2}^{f_2(t,\mathscr {H})}, \dots , a_{i_m}^{f_m(t,\mathscr {H})}). \end{aligned}$$

The history \(\mathscr {H}\) can be large if it comprises the complete information of an optimization run, or more compact if it considers, for example, only the dominated space of the current nondominated set. In our experiments, we choose \(f_k(t, \mathscr {H}) := f_k(t) \equiv \textit{const}\). In the general case, however, \(\mathscr {H}\) changes with each generation. Accordingly, the number of algorithm repetitions \(f_k(t, \mathscr {H}_t)\) computed in generation t for algorithm \(a_{i_k}\) may differ from the \(f_k(t+1, \mathscr {H}_{t+1})\) computed in the next generation. Therefore, the repetition vector F needs to be updated after each generation. An example where this effect becomes relevant is iterating an algorithm until local convergence occurs: if for l algorithm repetitions the best individual or the nondominated area does not change, the periodization scheme proceeds with the next algorithm; if the population can still be improved, the algorithm is executed again for at least l generations.
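To make the execution model concrete, the following Python sketch (all names are ours; it is an illustration, not the implementation used in the experiments) drives a set of step functions according to the sequences I and F. Each \(f_k\) is passed as a callable on \((t, \mathscr {H})\), so constant schedules and adaptive, history-dependent schedules are both covered by the same loop.

```python
def run_periodized(algorithms, I, F, population, generations):
    """Execute the periodized model P = A_I^F: cycle through the index
    sequence I, running algorithm a_{i_k} for f_k(t, H) generations.

    `algorithms` maps an index to a step function population -> population;
    `F` holds callables f_k(t, history) returning the repetition count,
    which is re-evaluated each time the schedule reaches f_k."""
    t = 0
    history = []                        # H: here simply a log of past populations
    while t < generations:
        for i_k, f_k in zip(I, F):
            for _ in range(f_k(t, history)):
                if t >= generations:    # global generation budget exhausted
                    return population
                population = algorithms[i_k](population)
                history.append(list(population))
                t += 1
    return population
```

For the example from the text, I = (1, 3, 2, 3) and F = (1, 2, 1, 2) yield the execution order GA1, LS, LS, GA2, LS, LS, repeated until the generation budget is spent.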

4 Hybrid Evolutionary Strategies

Evolutionary Strategies in their original form rely solely on a mutation operator. The \(\{\mu {, \atop +}\lambda \} \text { ES}\) uses \(\mu \) parents to create \(\lambda \) offspring individuals and selects the \(\mu \) new parents from parents and offspring together in the case of the ‘+’ variant, or from the offspring individuals alone in the case of the ‘,’ variant.
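As a minimal reference point, a single-objective \(1+\lambda \) ES, the ‘+’-selection special case that hES builds on, can be sketched as follows (illustrative names; the mutation and fitness functions are supplied by the caller):

```python
def one_plus_lambda_es(parent, mutate, fitness, lam, generations):
    """Minimal (1+lambda) ES with '+' selection: each generation creates
    lam mutants of the single parent and keeps the best of parent and
    offspring together (minimization). The '+' scheme is elitist, so the
    best fitness value never deteriorates."""
    best = parent
    for _ in range(generations):
        offspring = [mutate(best) for _ in range(lam)]
        best = min([best] + offspring, key=fitness)
    return best
```

The ‘,’ variant would instead select from `offspring` alone, trading elitism for the ability to escape local optima.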

hES is a \(1+\lambda \) ES designed for periodization with multi-objective evolutionary algorithms. In particular, we include two concepts from the Elitist Nondominated Sorting GA II in hES: fast nondominated sorting and the crowding distance as a diversity metric. Fast nondominated sorting partitions the objective space points into successive nondominated sets. The crowding distance of a point measures the size of the cuboid bounded by the adjoining points in the same nondominated set (in NSGAII, the sum of its normalized side lengths). Consequently, the crowding distance creates an order, denoted by \(\prec _n\), on the points of a nondominated set. hES uses fast nondominated sorting to decompose parents and offspring individuals into nondominated sets, and uses crowding distances to decide which of the individuals may be skipped in order to keep the nondominated set diverse. In summary, the key ideas are:
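These two NSGAII ingredients can be sketched in Python as follows (minimization assumed; the crowding distance follows the common NSGAII formulation as the sum of normalized cuboid side lengths, and all function names are ours):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse than b in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(points):
    """Partition objective vectors into successive nondominated sets F_1, F_2, ..."""
    remaining, fronts = list(points), []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

def crowding_distance(front):
    """Per point: sum over objectives of the normalized distance between its
    two neighbors in objective-sorted order; boundary points get infinity
    so they are always preserved."""
    n, m = len(front), len(front[0])
    dist = {i: 0.0 for i in range(n)}
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = (hi - lo) or 1.0
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][obj] - front[order[k - 1]][obj]) / span
    return [dist[i] for i in range(n)]
```

Note that this quadratic-per-front sorting sketch favors clarity; NSGAII’s fast-nondominated-sort achieves the same partition with bookkeeping of domination counts.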

  1. A local-search-style algorithm is executed for every element of a given set of solutions. Exactly one nondominated individual from a parent and its offspring proceeds to the next population.

  2. Offspring individuals that are mutually nondominated with their parent but have a different Pareto vector are skipped. This prevents unnecessary fluctuations in the nondominated set.

  3. Neutral genetic drift, as presented by Miller in [20], is achieved by skipping a parent if at least one of its offspring individuals holds an equal Pareto vector.

  4. Parents and offspring individuals are partitioned into nondominated sets, and new parents are selected using NSGAII’s crowding distance metric.

Algorithm 1 shows the pseudocode of an hES implementation, hES-step. The algorithm starts with the creation of offspring individuals in lines 1–4. To this end, for every individual in the parent population \(P_t\), hES-step executes a \(1+\lambda \) ES, appending the newly created offspring individuals to \(Q_t\). The \(1+\lambda \) ES loop is implemented by the ES-generate procedure. After the offspring individuals are created, hES-step concatenates parents and offspring individuals by calling the add-replace procedure, listed in Algorithm 3. add-replace clones the parent population and successively adds offspring individuals that have a unique Pareto vector to this population; an offspring individual with a Pareto vector identical to its parent’s replaces the parent. Then, hES-step partitions the concatenated set \(R_t\) in line 6 into nondominated sets \(\mathscr {F}_i\) using NSGAII’s fast-nondominated-sort. After that, starting with the dominant set \(\mathscr {F}_1\), the algorithm partitions \(\mathscr {F}_1\) by parent into groups \(G=\{G_1, G_2,\ldots \}\): all individuals of \(G_i\) have the same parent p, and if \(p \in \mathscr {F}_1\), then \(p \in G_i\). Should a non-empty set \(G_i\) not contain the parent p, one of the least crowded individuals of \(G_i\) is selected to proceed to the next generation; otherwise, the parent proceeds. Once p or one of its offspring individuals is transferred to the next generation, p and all of its offspring individuals are excluded from further processing by hES-step in the current generation.

[Algorithms 1–3: pseudocode listings for hES-step, ES-generate, and add-replace]
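Assuming individuals are represented as (genome, objective-vector) pairs, the add-replace step described above can be sketched as follows (a simplification of Algorithm 3 with names of our own choosing):

```python
def add_replace(parents, offspring_of):
    """Concatenate parents and offspring: start from a copy of the parent
    population; an offspring whose Pareto (objective) vector equals its
    parent's replaces the parent (enabling neutral genetic drift), and an
    offspring with a Pareto vector not yet present in the population is
    appended. All remaining offspring are discarded."""
    R = list(parents)
    for pos, parent in enumerate(parents):
        for child in offspring_of.get(parent, []):
            if child[1] == parent[1]:
                R[pos] = child                      # neutral replacement of the parent
            elif all(child[1] != r[1] for r in R):
                R.append(child)                     # unique Pareto vector: keep
    return R
```

For example, a child with the same objective vector as its parent silently takes the parent’s slot, so the genome changes while the nondominated set stays put.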

5 Performance Assessment

To analyze the performance of multi-objective optimizers, we need to compare the calculated Pareto sets. In this work we employ two methods: the ranking of Pareto sets by a quality indicator and the analysis of the mean Pareto set, attained during multiple runs. Both methods are described by Knowles et al. [18] and are also implemented in the PISA toolbox by Bleuler et al. [1].

5.1 Quality Indicators

To compare Pareto sets, Zitzler et al. [32] introduced the concept of a Quality Indicator (QI): a function mapping a set of Pareto sets to a set of real numbers. The indicator values thus induce a quality ordering on the compared Pareto sets. In our work, we use the unary additive epsilon indicator \(I^1_{ \varepsilon +}\). It is based on the binary additive epsilon indicator \(I_{\varepsilon +}\), which is defined for two Pareto sets A and B as:

$$ I_{\varepsilon +}(A, B) = \inf \{\varepsilon \in \mathbb {R} \mid \forall b \in B\ \exists a \in A : a \preceq _{\varepsilon +} b \}. $$

Here, the relation \(\preceq _{\varepsilon +}\) is defined as \(a \preceq _{\varepsilon +} b \Leftrightarrow \forall i: a_i \le \varepsilon + b_i\) (assuming minimization). For a reference Pareto set R, the unary additive epsilon indicator \(I^1_{\varepsilon +}\) can now be derived as

$$I^1_{\varepsilon +}(A) = I_{\varepsilon +}(A, R).$$
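For finite Pareto sets and minimization, the binary indicator reduces to a closed-form max–min expression over per-objective differences, which the following sketch computes directly (function names are ours):

```python
def eps_additive(A, B):
    """Binary additive epsilon indicator I_{eps+}(A, B) for finite Pareto
    sets (minimization): the smallest eps such that every b in B is weakly
    eps-dominated by some a in A, i.e. a_i <= eps + b_i for every objective i.
    For each b, the cheapest a needs eps = max_i(a_i - b_i); the overall eps
    must cover the worst b."""
    return max(
        min(max(a_i - b_i for a_i, b_i in zip(a, b)) for a in A)
        for b in B
    )

def eps_unary(A, R):
    """Unary indicator I^1_{eps+}(A) = I_{eps+}(A, R) against a reference set R."""
    return eps_additive(A, R)
```

A value of 0 (or less) means A weakly dominates the reference set everywhere; larger values quantify how far A must be shifted to cover R.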

Following Knowles et al. [18], we use the non-parametric Kruskal-Wallis (KW) test [2] to statistically evaluate sequences of quality numbers. The Kruskal-Wallis test differentiates between the null hypothesis \(H_0=\,\)“The distribution functions of the sequences are identical” and the alternative hypothesis \(H_A=\,\)“At least one sequence tends to yield better observations than another sequence”. In case the test rejects \(H_0\), we provide the one-tailed p-value for all sequence pairs. Table 1 presents an example: for an algorithm tuple \((A_{\text {row}}, A_{\text {col}})\), a p-value equal to or below \(\alpha \) indicates a lower mean for \(A_{\text {row}}\). Thus, one can conclude for Table 1 that \(A_1\) outperforms \(A_2\) and \(A_3\), and \(A_3\) outperforms \(A_2\). In our experiments, we configure the significance level \(\alpha \) to 0.01.

Table 1 Interpretation of the Kruskal-Wallis test: given the Kruskal-Wallis test rejects \(H_0\), a dot denotes a p-value higher than \(\alpha \)

5.2 Empirical Attainment Functions

An additional way of interpreting the results of multi-objective optimizers is to look at the Pareto points that are covered, i.e., weakly dominated, with a certain probability during the multiple repetitions of an optimization algorithm. All Pareto points that have been reached in x% of the runs are referred to as the x%-attainment. The attainment allows for a direct graphical interpretation as shown in the examples of Figs. 2 and 3.
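Under the stated definition, the x%-attainment of a finite set of candidate points can be computed directly (a sketch with illustrative names; attainment-function tools such as those in the PISA toolbox operate on attainment surfaces rather than on point grids):

```python
def attained_fraction(point, runs):
    """Fraction of runs whose final nondominated set weakly dominates
    `point` (minimization): some set member is <= point in every objective."""
    hits = sum(
        any(all(a_i <= p_i for a_i, p_i in zip(a, point)) for a in front)
        for front in runs
    )
    return hits / len(runs)

def x_attainment(candidates, runs, x):
    """All candidate points attained (weakly dominated) in at least x% of the runs."""
    return [p for p in candidates if attained_fraction(p, runs) >= x / 100.0]
```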

In order to statistically compare the attainments we use the two-tailed Kolmogorov-Smirnov test [23]. It distinguishes between \(H_0\)= “Sequences A and B follow the same distribution” and \(H_A\)= “Sequences A and B follow different distributions”. Table 2 contains exemplary results for the Kolmogorov-Smirnov (KS) test. It can be interpreted as: \(A_1\) differs significantly from \(A_2\) and \(A_3\). In our experiments, we configure the significance level \(\alpha \) to 0.05.

Table 2 Interpretation of the Kolmogorov-Smirnov test: A dot denotes an accepted \(H_0\) hypothesis at the given \(\alpha \)

6 Evaluation

We experimented with several benchmarks to compare hES and the periodized variants of hES, NSGAII and SPEA2. At first, we used the standard benchmarks for multi-objective algorithms DTLZ{2, 6} and ZDT6. These benchmarks are available with the PISA toolbox [1] and are described in [19]. Second, we compared our algorithms on the evolution of digital circuits, i.e., even 5- and 7-parity and (2, 2) and (3, 3) adders and multipliers, using Cartesian Genetic Programming (CGP) [20] as the hardware representation model. Figure 1 illustrates the CGP phenotype. Besides the functional quality of the digital circuit, which in this case is set as a constraint, we select the circuit’s area and speed to define a multi-objective benchmark [15].

Fig. 1
figure 1

Cartesian Genetic Programming (CGP) encodes a two dimensional grid of functional units connected by feed forward wires, thus forming a directed acyclic graph. The CGP model is parametrized with the number of primary inputs \(n_i\) and outputs \(n_o\), number of rows \(n_r\) and columns \(n_c\), number of functional block inputs \(n_n\), the maximal length of a wire l and the functional set F that can be computed by the nodes
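To illustrate the model, a minimal CGP phenotype evaluation might look as follows (a sketch under our own genotype encoding of (function, input, input) node triples plus output indices; the real CGP parameters \(n_r\), \(n_c\), and l constrain which connections are legal and are omitted here):

```python
def eval_cgp(genotype, n_i, funcs, inputs):
    """Evaluate a feed-forward CGP phenotype: node k may read from the
    primary inputs (indices 0..n_i-1) or from any earlier node, so the
    genotype encodes a directed acyclic graph."""
    nodes, outputs = genotype
    assert len(inputs) == n_i
    values = list(inputs)                 # primary inputs occupy slots 0..n_i-1
    for f_idx, a, b in nodes:
        values.append(funcs[f_idx](values[a], values[b]))
    return [values[o] for o in outputs]
```

For instance, with a functional set of AND, OR, XOR, the genotype `([(2, 0, 1), (0, 0, 1)], [2, 3])` encodes a half adder: node 2 computes the XOR (sum) and node 3 the AND (carry) of the two primary inputs.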

In our experiments, we executed 20 repetitions for every combination of goal function and algorithm. For hES, ES-generate produced 32 offspring individuals for each parent; for the other benchmarks, Algorithm 1 was configured to create one offspring individual per parent.

Table 3 presents the configuration of the benchmarks DTLZ{2, 6} and ZDT6. For these benchmarks, NSGAII and SPEA2 employ the SBX crossover operator [3]. The optimization runs were stopped after 10,000 fitness evaluations. Table 4 shows for the digital circuit benchmarks the CGP configurations, termination criteria, and population sizes. We limit the functional set to the Boolean functions presented in Table 5. To simplify the nomenclature, we use the following abbreviations:

$$\begin{aligned} \begin{array}{ll} \text {hES} &{} \rightarrow \texttt {h} \\ \text {SPEA2} &{} \rightarrow \texttt {s} \\ \text {NSGAII} &{} \rightarrow \texttt {n} \\ \end{array} \end{aligned}$$
Table 3 DTLZ2, DTLZ6, and ZDT6 benchmark configurations
Table 4 CGP benchmark configurations. S is the termination number, measured in fitness evaluations. |P| and |A| denote the capacity of the parent population and the archive
Table 5 CGP configuration: functional set F

6.1 Periodization of hES for DTLZ2, DTLZ6 and ZDT6

To examine the effect of local search, we first execute the standard NSGAII and SPEA2 for a given benchmark in order to determine the reference performance. Then, we increase step by step the influence of local search by periodizing NSGAII with hES until only hES is executed. In terms of our periodization model (cf. Sect. 3), we investigate the six periodization schemes: n, s, nh, nh \(^4\), nh \(^{10}\), and h.

Table 6 shows the results of the KW test applied to the benchmarks DTLZ{2, 6} and ZDT6 with respect to the unary additive epsilon indicator \(I^1_{\varepsilon +}\) at the significance level \(\alpha =1\%\). The results of the KS test are omitted as they indicate differences significant at \(\alpha =5\%\) between the nondominated sets for almost all combinations of algorithm and benchmark.

Table 6 Comparison of the nondominated sets of DTLZ2, DTLZ6, and ZDT6. A dot denotes an accepted \(H_0\). When the KW test rejects \(H_0\) for the algorithm pair \((a_{row},a_{col})\), a one-tailed p-value lower than \(\alpha =0.01\) indicates that \(a_{\text {row}}\) evolves significantly better nondominated sets than \(a_{\text {col}}\) regarding \(I^1_{\varepsilon +}\)

The central observation for the DTLZ2 and DTLZ6 experiments is that the quality of the nondominated sets degrades with an increasing influence of local search. Starting with the periodization of NSGAII and hES, the KW test shows falling performance when increasing the number of hES iterations. The hES-only experiment results in the worst performance of all the algorithms. The KW test results are confirmed by the graphical interpretation of the 75% attained nondominated sets in Fig. 2a, b.

For the ZDT6 benchmark, the influence of hES is not as one-sided as for the DTLZ{2,6} benchmarks. The nh periodization outperforms SPEA2 and NSGAII. Further increase in the influence of hES in the nh \(^{4}\) periodization lets it dominate all other algorithms. The nh \(^{10}\) periodization is on a par with SPEA2 and NSGAII, while the execution of hES alone, as with DTLZ{2, 6}, falls behind. The 75% attainments pictured in Fig. 3 confirm this. Interestingly, despite the large gap between nh \(^{10}\) and the group of SPEA2 and NSGAII, the KW test finds no significant differences at \(\alpha =1\%\) between the corresponding indicator sequences. The KS test, similar to the results for the DTLZ{2, 6} benchmarks, reveals significant differences at \(\alpha = 5\)% for all algorithm combinations except the NSGAII and SPEA2 pair.

Fig. 2
figure 2

75%-attainments for the 2-dimensional DTLZ2 a and DTLZ6 b benchmarks. A hypothetical optimum is located at \(\mathbf {0}\)

Fig. 3
figure 3

75%-attainments for the 2-dimensional ZDT6 benchmarks. A hypothetical optimum is located at \(\mathbf {0}\)

The DTLZ{2,6} and ZDT6 benchmarks demonstrate the various kinds of impact that local search may have when periodized with global search algorithms. To gain insight into whether the order of the algorithms in the periodization sequence influences the results, and into how hES-less periodizations of SPEA2 and NSGAII compare to the regular SPEA2 and NSGAII, we focus on the ZDT6 benchmark and apply 2- and 3-tuple permutations of the NSGAII, SPEA2, and hES algorithms. All experiments were repeated 100 times, and the execution was stopped after 200 generations.

Table 7 shows the results for 2-tuple combinations of NSGAII, SPEA2, and hES. The general observation taken from the KW test is that hES periodized with either NSGAII or SPEA2 outperforms standard NSGAII, SPEA2, and their combinations. Interestingly, the hES-after-SPEA2 outperforms the hES-after-NSGAII, while SPEA2-after-hES does not. This shows that the performance of this particular periodization scheme may be sensitive to the initial order of the executed algorithms.

Table 7 ZDT6 nondominated sets comparison for 2- and 3-tuple combinations of NSGAII, SPEA2 and hES to NSGAII and SPEA2. A dot denotes an accepted \(H_0\). When the KW test rejects \(H_0\) for the algorithm pair \((a_{row},a_{col})\), a one-tailed p-value lower than \(\alpha =0.01\) indicates that \(a_{\text {row}}\) evolves significantly better nondominated sets than \(a_{\text {col}}\) regarding \(I^1_{\varepsilon +}\). A star indicates significantly different nondominated set distributions at \(\alpha =0.05\) according to the KS test

The KS test confirms the results observed before. There are basically two classes of algorithms, showing significantly different results: the class of algorithms periodized with hES, and the class of NSGAII, SPEA2 and their combinations. In contrast to the previous test, the differences between (hs) and (nh) are now identified as significant.

Table 7 also shows the results of 3-tuple combinations of hES with NSGAII and SPEA2. Similar to the results achieved for the 2-tuple tests, all the periodized algorithms outperform (KW) and differ from (KS) NSGAII and SPEA2.

In summary, we can conclude that for the DTLZ{2, 6} benchmarks, an increasing impact of hES reduces the quality of the nondominated sets evolved, while for the ZDT6 benchmark, the schemes periodized with hES have the dominating results.

6.2 Periodization of hES for Digital Circuit Design

In contrast to the previous benchmarks, we optimize for three objectives in this section: the functional quality, the area, and the propagation delay. In total, each algorithm is executed 20 times per benchmark. Table 8 summarizes the number of runs that resulted in functionally correct solutions. The first observation is that for the parity, the (3, 3) adder, and the (3, 3) multiplier benchmarks, almost none of the algorithms managed to evolve functionally correct circuits. We therefore focus on the experiments involving the (2, 2) adder and the (2, 2) multiplier when discussing the influence of local search on the evolution of correct circuits.

Table 8 The number of circuits with perfect functional quality evolved during 20 runs
Table 9 Non-dominated sets comparison: A dot denotes an accepted \(H_0\). When the KW test rejects \(H_0\) for the algorithm pair \((a_{row},a_{col})\), a one-tailed p-value lower than \(\alpha =0.01\) indicates that \(a_{\text {row}}\) evolves significantly better nondominated sets than \(a_{\text {col}}\) regarding \(I^1_{\varepsilon +}\). A star indicates significantly different nondominated set distributions at \(\alpha =0.05\) according to the KS test
Table 10 Non-dominated sets comparison: A dot denotes an accepted \(H_0\). When the KW test rejects \(H_0\) for the algorithm pair \((a_{row},a_{col})\), a one-tailed p-value lower than \(\alpha =0.01\) indicates that \(a_{\text {row}}\) evolves significantly better nondominated sets than \(a_{\text {col}}\) regarding \(I^1_{\varepsilon +}\). A star indicates significantly different nondominated set distributions at \(\alpha =0.05\) according to the KS test

Despite treating all objectives equally, hES is most effective in finding functionally correct solutions. While SPEA2 outperforms NSGAII for this particular CGP configuration on the (2, 2) adder and multiplier benchmarks, increasing the influence of hES in the periodization with NSGAII produces even greater success rates; the nh \(^4\) and especially the nh \(^{10}\) periodization schemes reveal only a small gap to the hES-only performance. This insight is also partly confirmed by the results of the KW and KS tests presented in Table 9. For the (2, 2) adder, the KW test finds no significant differences in nondominated sets at \(\alpha =1\%\), while the KS test partitions the algorithms into the groups {n}, {s, nh, nh \(^4\), nh \(^{10}\)}, and {h} with significant differences in the evolved nondominated sets at \(\alpha =5\%\). For the (2, 2) multiplier benchmark, NSGAII is dominated by all algorithms, and hES by SPEA2 and nh, according to the KW test. The KS test splits the algorithms, similarly to the (2, 2) adder benchmark, into the groups {n}, {s, nh, nh \(^4\), nh \(^{10}\)}, and {h}. Additionally, within the group {s, nh, nh \(^4\), nh \(^{10}\)}, SPEA2 evolves nondominated sets that differ significantly from those of nh \(^4\) and nh \(^{10}\).

The (3, 3) adder and multiplier benchmarks split the algorithms into {n} and {s, nh, nh \(^4\), nh \(^{10}\), h} groups with different nondominated sets according to the KW test. The KS test reveals, similar to what happened with the (2, 2) benchmarks, the same general tendency of differences between the nondominated sets evolved by the NSGAII, the hES, and the group of the remaining algorithms.

The general partitioning according to the quality of the evolved nondominated sets between NSGAII, hES, and the rest of the algorithms, is even more pronounced for the parity benchmarks, as now the KW test also confirms significant differences (Table  10). That is, SPEA2, nh, nh \(^4\), and nh \(^{10}\) are better than NSGAII and hES for 5- and 7-parity functions and NSGAII is better than hES for the 7-parity function. The KS test finds significant differences between the three groups, also finding a significant difference between nh and nh \(^{10}\) for 5-parity.

In summary, we can state that periodizations of hES with NSGAII, as well as the non-periodized SPEA2, create, for almost all benchmarks, nondominated sets which are better than those of the non-periodized hES and NSGAII. Additionally, with the increasing influence of hES in a periodization scheme, the probability for the evolution of correct CGP circuits increases.

7 Conclusion

In this work, we investigated the periodization of multi-objective local and global search algorithms. For this, we relied on a periodized execution model and on the hybrid Evolutionary Strategies as a local search technique tailored to periodization with Pareto-based genetic multi-objective optimizers such as NSGAII and SPEA2.

The results show that for the DTLZ{2, 6} benchmarks, hES and its periodization with NSGAII underperform. For ZDT6 and, most importantly, for the evolution of digital circuits on the CGP model, hES and its periodizations are significantly better than the reference algorithms NSGAII and SPEA2. Furthermore, the periodized execution model proved to be a simple, fast, and flexible approach to combining multiple optimization algorithms and merging their functional and convergence properties. Thus, blending multi- and single-objective optimizers, local and global search algorithms, and differently converging methods creates a new family of optimization algorithms.