
1 Introduction

Developed by Kennedy and Eberhart [1, 2], particle swarm optimization (PSO) is a stochastic optimization method modeled on the social behavior and intelligence of animals such as bird flocks and fish schools. Like other evolutionary methods, it is population-based. The mechanism of the PSO method relies on particles following their personal best positions and the globally best particle in the swarm towards the most promising areas of the search space. Because of its easy implementation and high convergence rate, it is widely used in solving various optimization problems, including energetics [3], mechanics [4], scheduling [5], antenna design [6, 7], control systems [8], image classification [9] and many others. However, like other evolutionary algorithms, PSO encounters some difficulties, including stagnation in local optima, excessive loss of diversity and premature convergence [10]. A variety of PSO variants have been introduced to counteract these disadvantages and enhance the efficiency of PSO. Among them, the following improvements can be distinguished:

  • Adjustment of basic coefficients. According to Shi and Eberhart [11], a key to improving PSO performance is the inertia weight, which should be linearly decreased from 0.9 to 0.4. Clerc [12] recommended using fixed factors, indicating that an inertia weight of 0.729 with fixed acceleration coefficients of 1.494 can enhance convergence speed. Five years later, Trelea [13] showed that PSO with an inertia weight of 0.6 and constant acceleration coefficients of 1.7 converges faster than the variants of Eberhart [11] and Clerc [12]. PSO methods with nonlinear factors were proposed by Borowska [14, 15]. Furthermore, the efficiency of changing factors was examined by Ratnaweera et al. [16]. The cited authors concluded that time-varying acceleration coefficients (TVAC) helped to control the local and global search process more efficiently.

  • Modification of the update equations. To improve the search process, researchers propose using a new update equation [17, 18] or adding a new component to the existing velocity equation [19]. Another approach is to introduce, for ineffective particles, a repair procedure [10] with alternative velocity update equations, which helps determine swarm motion more precisely and stimulates particles when their efficiency decreases.

  • Topology structure. According to Kennedy [20], the topology structure affects the way information is exchanged and the swarm diversity. Many different topological structures have been proposed, including square, four clusters, ring, pyramid and the von Neumann topology [20,21,22,23]. Another approach is a multi-swarm structure, recommended by Liang and Suganthan [24] and Chen et al. [25]. In contrast, Gong et al. [22] introduced a two-cascading-layer structure. In turn, Wang et al. [26] developed a PSO based on multiple layers.

  • Learning strategy. It is used to improve the performance of the algorithm by breeding high-quality exemplars from which other swarm particles can acquire knowledge and learn to search the space. A multi-swarm PSO based on a dynamic learning strategy has been presented by Ye et al. [27]. Likewise, Liang et al. [28] proposed a comprehensive learning strategy (CLPSO), according to which the particle velocity is updated based on the historical best information of all other particles. To further improve the performance and adaptability of CLPSO, Lin et al. [29] recommend an adaptive comprehensive learning strategy in which the learning probability is dynamically adjusted according to the performance of the particles during the optimization process. Another approach is based on social learning PSO, as described by Cheng et al. [30].

  • Hybrid methods combine beneficial features of two or more approaches. They are used to strengthen PSO efficiency and achieve faster convergence as well as better accuracy of the resulting solution. Holden et al. [31] proposed joining PSO with an ant colony optimization method. Li et al. [32] combined PSO with the jumping mechanism of simulated annealing (SA). A modified version based on PSO and SA was developed by Shieh et al. [33]. In turn, PSO with chaos has been presented by Tian and Shi [34], whereas Chen et al. [35] proposed a learning PSO based on biogeography. Furthermore, a hybrid approach based on improved PSO, cuckoo search and a clustering method was developed by Bouyer and Hatamlou [36].

In order to enhance PSO performance, Gong et al. [22] merged the latter two categories and proposed genetic learning particle swarm optimization (GL-PSO). In GL-PSO, besides PSO and genetic operators, a two-layer structure is applied, in which the former layer is used to generate exemplars, whereas the latter updates the particles through the PSO algorithm.

The GL-PSO method improves the performance of PSO by constructing superior exemplars from which individuals of the population learn to move in the search space. Unfortunately, this approach is not free from disadvantages. The algorithm can achieve a high convergence rate, but in the case of complex problems, due to the global topology, the particle diversity quickly decreases and, as a result, impairs the exploration capability.

In order to enhance the diversity and adaptability of GL-PSO, as well as to improve its performance in solving complex optimization problems, a new modified genetic learning method, referred to as GL-PSOIF, is presented in this paper. The proposed GL-PSOIF method is based on GL-PSO, into which two modifications have been introduced. Specifically, instead of the global topology, an interlaced ring topology has been introduced. The second modification relies on introducing a flexible local search operator. The task of the interlaced ring topology is to increase the population diversity and improve the effectiveness of the method by generating better-quality exemplars. In turn, the flexible local search operator has been introduced to enrich the search and improve the exploration and exploitation ability. To evaluate the impact of the proposed modifications on the performance of the method, the interlaced ring topology was first integrated with GL-PSO alone (referred to as GL-PSOI) and then together with the flexible local search operator (referred to as GL-PSOIF). Both methods were tested on a set of benchmark problems and the CEC2014 test suite [38]. The results were compared with five different variants of PSO: genetic learning particle swarm optimization (GL-PSO) [22], the comprehensive learning particle swarm optimizer (CLPSO) [28], the standard particle swarm optimization (PSO), global genetic learning particle swarm optimization (GGL-PSOD) [23], and heterogeneous comprehensive learning particle swarm optimization (HCLPSO) [39].

2 The PSO Method

The PSO method was inspired by the social behavior of flocks of organisms (bird flocking, fish schooling, bee swarms) living in their natural environment [2, 3]. Like other evolutionary methods, PSO is population-based. Individuals of the population are called particles, and the population itself is called a swarm. In PSO, the optimization process is achieved by migrating the particles towards the most promising area of the search space. Assuming that migration occurs in a D-dimensional search space, we can imagine the particle swarm as a set of points, each of which possesses knowledge about: its actual position, described by the position vector xj = (xj1, xj2, …, xjD); its current speed of movement, described by the velocity vector vj = (vj1, vj2, …, vjD); the best position encountered by itself, described by pbestj = (pbestj1, pbestj2, …, pbestjD); and the best position encountered by the whole swarm, described by gbest = (gbest1, gbest2, …, gbestD). In the first iteration, the position vector and the velocity vector are generated randomly. In subsequent iterations, the values of the vectors are updated based on the knowledge and acquired experience of the particles. The particle velocity is changed according to Eq. (1).

$$ v_{j} (l + 1) = w \cdot v_{j} (l) + c_{1} \cdot r_{1} (pbest_{j} - x_{j} (l)) + c_{2} \cdot r_{2} (gbest - x_{j} (l)) $$
(1)

Changing the particle position is realized by adding its current velocity to its previous position (2):

$$ x_{j} (l + 1) = x_{j} (l) + v_{j} (l + 1) $$
(2)

where: w is the inertia weight, pbestj is the best position of particle j, gbest is the best position found by the swarm, r1 and r2 are random numbers generated from (0, 1), and c1, c2 are acceleration coefficients.
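The update rules (1) and (2) can be sketched in a few lines of NumPy. The function name `pso_step`, the vectorized array layout and the default coefficient values (taken from the Trelea settings cited later in this paper) are illustrative assumptions, not part of the original formulation:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.6, c1=1.7, c2=1.7,
             rng=np.random.default_rng()):
    """One PSO iteration following Eqs. (1) and (2).

    x, v, pbest : arrays of shape (N, D); gbest : array of shape (D,).
    Fresh random numbers r1, r2 from (0, 1) are drawn per particle
    and per dimension, as is common in PSO implementations.
    """
    N, D = x.shape
    r1 = rng.random((N, D))
    r2 = rng.random((N, D))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    x_new = x + v_new                                              # Eq. (2)
    return x_new, v_new
```

A useful sanity check of the equations: when a particle sits exactly at its personal best and the global best with zero velocity, both update terms vanish and the particle stays put.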

3 Genetic Learning Particle Swarm Optimization

In contrast to PSO, the GL-PSO algorithm possesses a two-cascading-layer structure. One layer is used to generate exemplars, the other to update particle positions and velocities through the PSO algorithm. To generate exemplars, three operators of the genetic algorithm (GA) [37] are applied: crossover, mutation and selection.

Exemplars ej are selected from the offspring. To generate offspring oj for each dimension of particle j, a crossover operator is applied according to the formula:

$$ o_{j} = \begin{cases} r \cdot pbest_{j} + (1 - r) \cdot gbest, & \text{if } f(pbest_{j}) < f(pbest_{k}) \\ pbest_{k}, & \text{otherwise} \end{cases} $$
(3)

where k is a randomly selected particle and r is a random number from (0, 1).

Next, for each dimension, a random number \( r \in [0, 1] \) is generated, and if r < pm (where pm is the mutation probability), the offspring is mutated. Then the offspring undergoes the selection operation according to the formula:

$$ e_{j} \leftarrow \begin{cases} o_{j}, & \text{if } f(o_{j}) < f(e_{j}) \\ e_{j}, & \text{otherwise} \end{cases} $$
(4)

The particle velocity is updated based on the following equation:

$$ v_{j} (l + 1) = w \cdot v_{j} (l) + c \cdot r \cdot (e_{j} - x_{j} (l)) $$
(5)

where ej is the exemplar of particle j.
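The exemplar construction of Eqs. (3) and (4) can be sketched as follows. This is only an illustration: the function name, the mutation range of [-100, 100] and the default mutation probability are assumptions, and gbest is taken here as the best of the personal bests:

```python
import numpy as np

def build_exemplar(pbest, fitness, e_j, f, j, pm=0.01,
                   rng=np.random.default_rng()):
    """Sketch of GL-PSO exemplar generation for particle j.

    pbest : (N, D) personal best positions, fitness : (N,) their
    objective values, e_j : current exemplar of particle j,
    f : objective function (minimization).
    """
    N, D = pbest.shape
    gbest = pbest[np.argmin(fitness)]
    o = np.empty(D)
    for d in range(D):                      # crossover, Eq. (3)
        k = rng.integers(N)                 # random tournament partner
        if fitness[j] < fitness[k]:
            r = rng.random()
            o[d] = r * pbest[j, d] + (1 - r) * gbest[d]
        else:
            o[d] = pbest[k, d]
    for d in range(D):                      # mutation with probability pm
        if rng.random() < pm:
            o[d] = rng.uniform(-100, 100)   # assumed search range
    return o if f(o) < f(e_j) else e_j      # selection, Eq. (4)
```

Because of the selection step (4), an exemplar can only be replaced by a strictly better offspring, so the exemplar quality is monotonically non-decreasing over iterations.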

4 The Proposed Method

In order to improve the performance of genetic learning particle swarm optimization (GL-PSO), two modifications are proposed in this article: an interlaced ring topology and a flexible local search operator.

4.1 Interlaced Ring Topology

One of the main reasons for the unsatisfactory performance of GL-PSO is its weak ability to maintain the diversity of the population (swarm). This leads to a loss of balance between exploration and exploitation and, consequently, to premature convergence and unsatisfactory results. To avoid this, it is necessary to develop tools that help increase the adaptability of the algorithm and, in turn, yield satisfactory results.

Lin et al. [23] introduced a ring topology and a global learning component with linearly adjusted control parameters to enhance GL-PSO diversity. This improves the adaptability of the method but is not sufficient; hence, the problem remains open and other solutions should be sought. To improve the adaptability of GL-PSO, in this paper the interlaced ring topology is proposed instead of global learning. This approach uses two neighbouring particles, as in the ring topology, but in every iteration after the first, the order of the particles is changed as follows. The particle collection is divided into two parts (sets), and the particles of the second part alternately take up places between the particles of the first part (one particle from the first set, the next from the second set, the next from the first set, and so on), according to Eqs. 6 and 7.

$$ n_{j} = \frac{j + 1}{2} \quad \text{for odd } j $$
(6)
$$ n_{j} = \frac{j + N}{2} \quad \text{for even } j $$
(7)

where nj is the index of the particle to be moved to the j-th place in the ring, j = 1, …, N, and N is the swarm size (for example, for N = 8, n2 = 5 means that the second position in the ring is occupied by the particle from the 5th place in the swarm).
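The reordering of Eqs. (6) and (7) amounts to a simple permutation of the particle indices; the helper below is an illustrative sketch (the function name is an assumption, and an even swarm size N is assumed so the two halves interleave cleanly):

```python
def interlace(N):
    """Interlaced ring order of Eqs. (6)-(7): position j in the ring
    is taken by particle n_j (1-based indices, N assumed even)."""
    return [(j + 1) // 2 if j % 2 == 1 else (j + N) // 2
            for j in range(1, N + 1)]
```

For N = 8 this yields [1, 5, 2, 6, 3, 7, 4, 8], so each particle's ring neighbours come from the other half of the swarm, which is what injects fresh information into every neighbourhood.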

Then, the positions of the exemplars are generated according to Eqs. 8, 9 and 10.

$$ o_{j} = r \cdot pbest_{n_{j1}} + (1 - r) \cdot pbest_{n_{j2}} $$
(8)
$$ n_{j1} = \begin{cases} N, & j = 1 \\ j - 1, & j > 1 \end{cases} $$
(9)
$$ n_{j2} = \begin{cases} 1, & j = N \\ j + 1, & j < N \end{cases} $$
(10)

where, according to the ring topology, nj1 and nj2 are the indexes of the particles adjacent to particle j on the left and right side, respectively.
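The neighbour indexing of Eqs. (9) and (10) and the blending of Eq. (8) can be sketched together. The function name and the per-dimension random weights are illustrative assumptions; `pbest_ring` is assumed to be the array of personal bests already reordered by Eqs. (6)-(7):

```python
import numpy as np

def ring_exemplar_offspring(pbest_ring, rng=np.random.default_rng()):
    """Offspring generation on the interlaced ring (Eqs. 8-10):
    each o_j blends the personal bests of the left and right
    neighbours of ring position j. pbest_ring has shape (N, D)."""
    N, D = pbest_ring.shape
    o = np.empty((N, D))
    for j in range(1, N + 1):              # 1-based ring positions
        left = N if j == 1 else j - 1      # Eq. (9): wrap at the ends
        right = 1 if j == N else j + 1     # Eq. (10)
        r = rng.random(D)
        o[j - 1] = (r * pbest_ring[left - 1]
                    + (1 - r) * pbest_ring[right - 1])  # Eq. (8)
    return o
```

Since Eq. (8) is a convex combination of the two neighbours, each offspring component always lies between the corresponding components of the neighbouring personal bests.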

4.2 Flexible Local Search Operator

To improve the searching behavior of PSO and the exploitation capacity of the swarm, a flexible local search operator is introduced. The particle positions are updated according to the formula:

$$ x_{j}^{k + 1} = \begin{cases} x_{j}^{k} + v_{j}^{k + 1}, & \text{if } p < s \\ pbest_{j} \cdot (1 + N(0,1)), & \text{otherwise} \end{cases} $$
(11)

where p is a randomly selected number in the range [0, 1] and s is a real number linearly increasing from 0.6 to 0.8 over the run. This means that each particle has a probability, decreasing from 40% to 20%, of performing a search in the vicinity of its personal best position. Thus, in line with [16], exploration is enhanced in the early stage of the optimization process and local exploitation is facilitated in the later stage.
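Equation (11) can be sketched as a vectorized position update. This is a minimal sketch under stated assumptions: the function name is illustrative, one random draw of p is made per particle, and the multiplicative Gaussian perturbation follows the N(0, 1) term of Eq. (11):

```python
import numpy as np

def flexible_update(x, v_new, pbest, s, rng=np.random.default_rng()):
    """Position update with the flexible local search operator (Eq. 11).

    With probability s (linearly increased from 0.6 to 0.8 during the
    run) a particle performs the usual PSO move x + v; otherwise it
    samples around its personal best via pbest * (1 + N(0, 1)).
    x, v_new, pbest : arrays of shape (N, D); s : scalar in [0, 1].
    """
    N, D = x.shape
    p = rng.random((N, 1))                  # one draw per particle
    pso_move = x + v_new                    # first branch of Eq. (11)
    local = pbest * (1.0 + rng.standard_normal((N, D)))  # second branch
    return np.where(p < s, pso_move, local)
```

Setting s = 1 recovers plain PSO movement, which makes the operator easy to switch off for ablation experiments such as the GL-PSOI variant.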

5 Test Results

In order to investigate the efficiency of the proposed modifications, GL-PSOI (in which only the interlaced ring topology was adopted) and GL-PSOIF (with both the interlaced ring topology and the flexible local search operator) were evaluated separately. Both strategies were tested on a set of classical benchmark problems and on the CEC2014 test suite. Twelve of them (6 selected benchmark functions and 6 CEC2014 functions) are described in this article and listed in Tables 1 and 2.

Table 1. Optimization test functions.
Table 2. Selected CEC2014 test suite.

The results of the tests were compared with the performance of CLPSO, HCLPSO, PSO, GL-PSO and GGL-PSOD. The parameter settings of these algorithms are listed in Table 3.

Table 3. Parameters settings.

In both GL-PSOI and GL-PSOIF, the inertia weight was w = 0.6 [13]. The acceleration coefficients used in the computations were c1 = c2 = 1.7. For the set of benchmark functions, the population consisted of 20 particles, the dimension of the search space was 30, and the maximum number of function evaluations was 300,000. The search range depends on the function used, as shown in Table 1. For each problem, the simulations were run 30 times. For the CEC2014 functions, the population consisted of 50 particles, the dimension of the search space was D = 30, and the maximum number of function evaluations was D × 10^4. The search range was [−100, 100]^D. For the CEC2014 functions, the algorithms were run 31 times independently.

The exemplary results of the tests are summarized in Tables 4 and 5.

Table 4. The comparison test results of the PSO algorithms on the benchmark functions.
Table 5. The comparison test results of the PSO algorithms on the CEC2014 test suite.

The exemplary charts showing the mean fitness of selected functions over the iterations for the GL-PSO, GGL-PSOD, CLPSO, HCLPSO, PSO, GL-PSOI and GL-PSOIF algorithms are depicted in Figs. 1, 2 and 3.

Fig. 1. Convergence performance for the f2 function.

Fig. 2. Convergence performance for the f4 function.

Fig. 3. Convergence performance for the f6 function.

The results of the tests confirmed that both GL-PSOI and GL-PSOIF are effective and can achieve superior performance over the remaining tested methods. In the case of unimodal functions, GL-PSOI with the interlaced ring topology obtained better results than GL-PSOIF. For multimodal functions, superior results were achieved by GL-PSOIF.

In the case of the f2 function, GL-PSO achieved worse results than GL-PSOI and GL-PSOIF but better than those obtained by CLPSO, HCLPSO and PSO. For the f3 function, GL-PSOI achieved the best result; the performance of GL-PSO was worse than that of GL-PSOI but superior to that of GL-PSOIF. For the unimodal f7 function, the best results were obtained by CLPSO. The outcomes achieved by GL-PSOI and GL-PSOIF were worse than those of CLPSO but better than the results achieved by the remaining tested methods. For multimodal functions, the results show that, in almost all cases, GL-PSOIF exhibits the best performance.

The convergence curves presented in Figs. 1, 2 and 3 indicate that both GL-PSOI and GL-PSOIF converge more slowly in the early stage of the optimization process than most of the compared methods; at this stage, every algorithm except PSO is faster. Then both algorithms accelerate and converge faster than the others.

In the case of the unimodal f2 function, both algorithms initially revealed slower convergence, followed by rapid acceleration after about 5 × 10^4 function evaluations, showing superiority over the rest of the evaluated methods. For this function, GL-PSOIF performed slightly more slowly than GL-PSOI, which could be due to the flexible local search operator, which did not improve the run here. In the case of multimodal functions (Figs. 2 and 3), GL-PSOIF converges slowly at first (other methods are faster) but accelerates after about 1.3 × 10^5 evaluations and becomes the fastest after 2 × 10^5 evaluations.

6 Statistical Test

In order to evaluate the differences between the algorithms, a statistical t-test was used. A significance level of 0.05 was selected for all statistical comparisons. Tables 4 and 5 show the results of the t-test performed on the test functions. The symbol '+' indicates that GL-PSOIF is significantly better than the compared algorithm, '−' that it is significantly worse, and '=' that there is no significant difference. The rows in Table 6 named '+', '−' and '=' give the number of times that GL-PSOIF is better than, worse than or equal to the other algorithms. The results of the t-test indicate that the proposed algorithm is significantly better than the other methods at the 95% confidence level.
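The '+'/'−'/'=' tallies can be reproduced with a standard two-sample t-test. The sketch below is illustrative (the function name, the use of Welch's unequal-variance variant, and the assumption that smaller final errors are better are all choices not specified in the text):

```python
import numpy as np
from scipy import stats

def compare_runs(results_a, results_b, alpha=0.05):
    """Tally symbol for one function, as in Tables 4-6.

    results_a, results_b : arrays of final errors over independent
    runs of GL-PSOIF and a competitor (minimization assumed).
    Returns '+' if GL-PSOIF is significantly better, '-' if
    significantly worse, '=' if the difference is not significant.
    """
    t, p = stats.ttest_ind(results_a, results_b, equal_var=False)
    if p >= alpha:
        return '='
    return '+' if np.mean(results_a) < np.mean(results_b) else '-'
```

Summing the returned symbols over all test functions gives the count rows of Table 6.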

Table 6. The comparison test results of the PSO algorithms.

7 Conclusion

In this study, a new genetic learning particle swarm optimization with interlaced ring topology and flexible local search operator (GL-PSOIF) has been proposed. To assess the impact of the introduced modifications on the performance of the method, the interlaced ring topology was first integrated with GL-PSO alone (referred to as GL-PSOI) and then together with the flexible local search operator (GL-PSOIF). The efficiency of the new strategy was tested on a set of benchmark problems and the CEC2014 test suite. The results were compared with five different variants of PSO: GL-PSO, GGL-PSOD, PSO, CLPSO and HCLPSO. The results of the experimental trials indicated that genetic learning particle swarm optimization with the interlaced ring topology is effective for unimodal functions. In the case of multimodal functions, GL-PSOIF showed superior performance over the remaining tested methods.