
Water Resources Management, Volume 32, Issue 12, pp 4013–4030

Multi Objective Optimization with a New Evolutionary Algorithm

  • Samaneh Seifollahi-Aghmiuni
  • Omid Bozorg Haddad
Open Access Article

Abstract

Decision making in the real world typically involves several objectives, and achieving a desirable condition for all of them simultaneously is a necessity when those objectives conflict. This concept, multi objective optimization, is now widely used. In this study, a new algorithm, the comprehensive evolutionary algorithm (CEA), is developed based on the general concepts of evolutionary algorithms; with a fixed structure, it can be applied to both single and multi objective problems. CEA is validated by solving several mathematical multi objective problems, and the obtained results are compared with those of the non-dominated sorting genetic algorithm II (NSGA-II). CEA is also applied to a reservoir operation management problem. The comparisons show that CEA performs well on multi objective problems: it assesses the decision space of each considered problem accurately, its solution set covers a wide extent of the objective space, and it places more solutions on the Pareto front than NSGA-II for every considered problem. Although the total run time of CEA is longer than that of NSGA-II, the solution sets obtained by CEA are about 32%, 4.4%, and 1.6% closer to the optimum results than those of NSGA-II in the first, second, and third mathematical problems, respectively, which shows the high reliability of CEA's results in solving multi objective problems.

Keywords

Evolutionary algorithm · Multi colony · Multi objective · Optimization · Pareto

1 Introduction

Nowadays, many optimization problems in various sciences include several conflicting objective functions (OFs), while a desirable condition should be provided for all of them. Since increasing the desirability of one OF does not necessarily increase the desirability of the others, selecting the best solution in multi objective problems (MOPs) has always been a challenge. Therefore, a set of solutions that provides a desirable condition for all OFs, called a Pareto set, is sought, and the Pareto front is the best such set in each problem. Different solutions on a Pareto set are not superior to each other based on their OF values. By presenting the Pareto front of an MOP, a set of non-dominated solutions (NDSs) becomes available to decision makers, who can then select one solution among several possible ones considering the problem's conditions.
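The dominance relation that defines such an NDS set can be illustrated with a short sketch (our own minimal code for two minimized objectives; the function names are illustrative and not from the paper):

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization): a is no worse
    in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    """Keep only the points that no other point dominates (the NDS set)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(non_dominated(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

The three surviving points are mutually non-dominated: each is better than the others in at least one objective.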

Specific methods are used for solving MOPs. In some methods, the complexities of the optimization problem are ignored through simplification or linearization, and several OFs are aggregated into a single function using weight factors or priority structures (Huang et al. 2005; Stanimirović et al. 2011). Because information about conflicting objectives, such as their relative priority, is often lacking, it is impossible to find a unique best solution by applying such simplifications to MOPs, which is why several dedicated multi objective optimization methods have been developed (Yang et al. 2007; Barros et al. 2008; Niknam et al. 2011; Kourakos and Mantoglou 2013). MOPs can be solved in two general categories: (1) preference-structure based (PSB) methods, in which the problem is solved as a single objective problem (SOP) considering the relative importance of the different OFs; and (2) best-Pareto based (BPB) methods, in which a set of optimum NDSs is determined for the problem.

Difficulties of the PSB methods have led to the wide application of BPB methods. Evolutionary optimization algorithms (EOAs), such as the genetic algorithm (GA) (Deb 2002), the ant colony optimization (ACO) algorithm (Rada-Vilela et al. 2013), and the particle swarm optimization (PSO) algorithm (Reyes-Sierra and Coello Coello 2006), find a set of possible solutions instead of a single solution in each run. They are therefore more efficient in solving MOPs, and their required computational time can be controlled. Since EOAs usually consider a separate set of possible solutions for optimizing each OF, a set of solutions located on the Pareto nearest to the Pareto front can be determined from the combined sets in each run; thus, each function is optimized within its own set of solutions. To account for the impact of each function's optimization on the other functions, some solutions are exchanged between the different sets in each iteration so that all OFs are optimized simultaneously. Finally, a set of optimum solutions is obtained in which all OFs are as close to their optimum condition as possible.

The comprehensive evolutionary algorithm (CEA) is a new EOA that can solve both SOPs and MOPs with a unique structure. In the general CEA process, a set of initial solutions is considered for each OF and is optimized through three processes: selection, generation, and replacement. These processes are repeated consecutively until a desirable condition is achieved for all OFs, i.e., until the best Pareto for the problem is obtained. CEA can present a Pareto with high density (including more possible solutions) and good dispersion in the objective space of the problem. The algorithm can also reach the global optimum of some functions, and the presence of such solutions on the final Pareto demonstrates CEA's capability in solving MOPs. In addition, as one of its main novelties, CEA implicitly analyzes the sensitivity of the results to some of its parameters based on the problem conditions. In this study, the performance of CEA is assessed on several mathematical problems and a reservoir operation problem.

2 Multi Objective Optimization Process in CEA

Two main factors in optimization are: (1) the OF, as a scale for evaluating the quality of a solution to the problem; and (2) the constraints, as conditions for evaluating the desirability of the OF in different situations. The OF evaluated according to the problem constraints is called the fitness function, and the desirable condition in each problem is defined as satisfying all constraints.

CEA can present a near-optimum solution for SOPs and the best set of NDSs (the best Pareto) for MOPs. The flowchart of CEA for MOPs is shown in Fig. 1. According to this figure, some information is needed about the algorithm parameters and the simulation model of the problem. Appropriate values for some parameters are determined either by the user or by CEA itself, considering the problem's conditions. Owing to CEA's capability for implicit sensitivity analysis compared to other EOAs, only the algorithmic parameters defined by the user require sensitivity analysis. In multi objective optimization with CEA, several sets of possible solutions are considered; each OF is evaluated in one of these sets while the other functions are simultaneously assessed in the other sets, in an iterative process that builds the final NDS set. In each iteration, three processes based on different operators are followed: selection, generation, and replacement. CEA provides a wide range of selection and generation operators, which the user can selectively activate for the optimization process. In other words, different selection and generation operators are applied in CEA based on the user's choices.
Fig. 1

Flowchart of multi objective CEA

In each set, a defined number of solutions is selected or generated by the operators activated for the selection or generation processes. Each operator can therefore have a share in optimizing each OF in each problem; an operator's share is the percentage of the total number of solutions in each set produced by that operator. At the beginning of optimization, a set of initial solutions is generated for each OF simultaneously, considering the allowable range of the decision variables. The fitness function of each solution is calculated after simulating the problem for each set. Solutions that violate at least one constraint of the problem are called infeasible solutions. They could be removed from the set, modified into feasible solutions, or assigned a penalty value corresponding to the amount of constraint violation; the latter approach is used in CEA. The algorithm then starts the iterative optimization process (in Fig. 1, the parameter It counts the number of iterations). In the first iteration, the solution sets of all OFs are combined to obtain an initial set of NDSs. In subsequent iterations, the solution sets of the different OFs are first combined to define the initial NDS set of the current iteration; this set is then merged with the final NDS set of the previous iteration to define the final NDS set of the current iteration, which is taken as the final Pareto for the MOP at that iteration. The distance of the Pareto obtained at the end of each iteration from a base point (usually the origin of the coordinate system) is used as a scale to identify the best Pareto for each problem.
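The Pareto-quality measure described above, the average Euclidean distance from a base point, can be sketched as follows (an illustrative reading of the text; the paper gives no implementation):

```python
import math

def avg_distance(pareto, base=(0.0, 0.0)):
    """Average Euclidean distance of a set of objective vectors from a base
    point (the origin by default), used to compare Paretos across iterations."""
    return sum(math.dist(p, base) for p in pareto) / len(pareto)

pareto = [(0.6, 0.8), (1.0, 0.0), (0.0, 1.0)]
print(round(avg_distance(pareto), 6))  # 1.0 -- all three points are 1.0 away
```

For minimization problems, a smaller average distance from the origin indicates a better Pareto, which is how the runs in Table 1 are compared.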

If the defined stopping criteria for the problem are satisfied, the final best Pareto is presented as the solution of the problem. The stopping criteria can be defined in different forms. In CEA, two criteria are considered simultaneously: the number of algorithm iterations, and the smallest difference (according to the computational precision defined by the user) between the distances of the Paretos obtained in several consecutive iterations. The number of consecutive iterations used for assessing the stopping criteria is determined by the algorithm through implicit sensitivity analysis. If the stopping criteria are not satisfied, the optimization process continues by selecting the best solutions for each set from the final Pareto obtained at the end of the current iteration. The selection process is based on fitness functions: in each iteration, a number of superior solutions in each set are selected. Two types of generation processes, crossover and mutation, are used in CEA to produce new solutions for the next iteration (based on the solutions chosen in the selection process), considering the share of each operator. At the end, there is a new set of solutions for each OF whose fitness functions must be calculated. Then, the performance of the various selection and generation operators is evaluated, and the share of each operator is modified according to its performance, enabling the algorithm to directly evaluate the impact of the different operators during the optimization process and to identify the efficient operators for each problem. In other words, CEA performs an implicit sensitivity analysis for the activated selection and generation operators. The share of operators with undesirable performance is reduced at the end of each iteration, helping the algorithm move forward with higher quality and speed.
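One plausible form of the operator-share update, which the paper describes only qualitatively, is the following sketch (the bookkeeping and the floor parameter are our assumptions, not the paper's exact rule):

```python
def update_shares(shares, scores, floor=0.01):
    """Re-weight the operators' shares in proportion to their performance
    scores in the last iteration, keeping a small floor so that no
    operator is eliminated entirely."""
    raw = {op: max(scores.get(op, 0.0), floor) for op in shares}
    total = sum(raw.values())
    return {op: v / total for op, v in raw.items()}

shares = {"op_a": 0.5, "op_b": 0.5}
scores = {"op_a": 3.0, "op_b": 1.0}   # op_a produced better offspring
print(update_shares(shares, scores))  # op_a grows to 0.75, op_b shrinks to 0.25
```

The floor keeps a weak operator available in later iterations, mirroring the paper's remark that shares are reduced, not zeroed, for operators with undesirable performance.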

In multi objective algorithms that consider a separate set of solutions for each OF, a technique is needed through which the current optimum situation of each function influences the others, so that all OFs are optimized at the same time. Therefore, at the end of each iteration, the final best Pareto is sorted for each OF, and a number of the best solutions (in the first ranks) are moved to all solution sets except the one related to the considered OF. Finally, there is a new set of solutions for each OF, and the next iteration of the algorithm can start.
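This inter-colony exchange can be sketched as follows (a hypothetical representation of ours: each colony is a list of solutions, and each solution stores its objective vector under the key "f"; minimization is assumed):

```python
def migrate(colonies, pareto, k=2):
    """After each iteration, sort the shared Pareto once per objective and
    copy the k best solutions into every OTHER objective's colony, so each
    OF is influenced by the others' current optimum situation."""
    incoming = {j: [] for j in colonies}
    for i in colonies:
        best = sorted(pareto, key=lambda s: s["f"][i])[:k]  # best for OF i
        for j in colonies:
            if j != i:
                incoming[j].extend(best)
    return {j: colonies[j] + incoming[j] for j in colonies}

pareto = [{"f": (1, 3)}, {"f": (2, 1)}]
print(migrate({0: [], 1: []}, pareto, k=1))
```

With two colonies, each colony receives the solution that is best for the other objective, which is the cross-influence the paragraph above describes.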

3 Different Selection and Generation Operators in CEA

Several operators for the selection and generation processes are considered in CEA to make it more comprehensive than other evolutionary algorithms. Four selection operators are included: roulette wheel (Lipowski and Lipowska 2012), tournament (Miller and Goldberg 1995), random, and Boltzmann (Lee 2003). In the roulette wheel operator, the solution with the best (worst) fitness function has the highest (lowest, but nonzero) selection probability. In the tournament operator in CEA, solutions that are not selected in one iteration can be evaluated again in the next. In the random operator, selecting solutions with less desirability is possible, which may reduce the convergence speed of the algorithm; however, the undesired impact of this situation is moderated by the implicit sensitivity analysis of the operators in CEA. The Boltzmann operator generates a new solution within the allowable range of the decision variables; if the new solution is better than the best solution of the current iteration based on the fitness function, it is selected as a superior solution for the next iteration.
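Two of these selection operators have well-known textbook forms that can be sketched as follows (generic implementations, not the paper's exact code; the rank-based roulette weighting is our assumption):

```python
import random

def roulette(pop, fitness, minimize=True):
    """Roulette-wheel selection: better fitness gets a higher (never zero)
    selection probability; for minimization the ranking is inverted."""
    ranked = sorted(pop, key=fitness, reverse=minimize)
    weights = [i + 1 for i in range(len(ranked))]  # worst gets 1, best gets n
    return random.choices(ranked, weights=weights, k=1)[0]

def tournament(pop, fitness, size=2, minimize=True):
    """Tournament selection: pick `size` candidates at random, keep the best;
    the losers stay in the population and can compete again next iteration."""
    group = random.sample(pop, size)
    return min(group, key=fitness) if minimize else max(group, key=fitness)
```

A random operator is simply `random.choice(pop)`; the Boltzmann operator, as described above, instead proposes a fresh solution in the variable range and keeps it only if it beats the current best.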

Possible solutions in each iteration are generated by two kinds of operators in CEA: twenty-four types of crossover operators (classified into three categories: one-point cut, two-point cut, and full crossover) and five types of mutation operators. A row of decision variables is considered a possible solution in CEA. Thus, in one- and two-point cut crossovers, the structure of a solution is broken at one and two points, respectively, while there is no break point in the full crossover operators or in the mutation operators. All crossover operators generate one or two new solutions from two selected solutions, whereas all mutation operators generate a single new solution from one selected solution. Producing different new solutions through all generation operators broadens the evaluation of the decision space in each problem. If the allowable range of a decision variable is violated in some types of crossover and mutation operators, a random value within the allowable range is generated for that variable and replaces the invalid value in the new solution.
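A one-point cut crossover and a simple mutation can be sketched as follows (two illustrative generic operators of ours, not the paper's 29 specific variants; the mutation stays inside the allowable range by construction, so no repair step is needed):

```python
import random

def one_point_crossover(a, b):
    """Break two parent solutions at a single point and swap the tails,
    producing two children (the one-point cut category)."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def uniform_mutation(sol, low, high, rate=0.1):
    """Mutate one parent into one child: each decision variable is replaced,
    with probability `rate`, by a random value inside [low, high]."""
    return [random.uniform(low, high) if random.random() < rate else x
            for x in sol]
```

Each child of the crossover keeps, at every position, a value taken from one of the two parents, which is the structural property shared by all cut-type crossovers.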

4 Solving Multi Objective Mathematical Problems Using CEA

CEA is validated by solving several mathematical optimization problems and comparing the results with the non-dominated sorting genetic algorithm II (NSGA-II), in MATLAB R2012b on a Core i5 computer with a 2.67 GHz CPU and 4.00 GB of RAM.

4.1 Unconstrained Minimization (DEB)

Two OFs, f1(x1) and f2(x1, x2) in Eqs. (1) and (2) respectively, should be minimized in this problem. Equation (3) shows the allowable range of the decision variables.
$$ \operatorname{Minimize}\kern0.5em {f}_1\left({x}_1\right)={x}_1 $$
(1)
$$ \operatorname{Minimize}\kern0.5em {f}_2\left({x}_1,{x}_2\right)=\frac{1}{x_1}\left\{2-\exp \left[-{\left(\frac{x_2-0.2}{0.004}\right)}^2\right]-0.8\exp \left[-{\left(\frac{x_2-0.6}{0.4}\right)}^2\right]\right\} $$
(2)
$$ 0.1\le {x}_i\le 1\kern0.5em i=1,2 $$
(3)
This problem has no constraints. It is solved by NSGA-II and CEA with 400 iterations and 200 initial solutions, and the final Pareto is presented in Fig. 2. Fig. 3 shows the cumulative share of the different selection and generation operators in the best run (run 2) of CEA. Statistics of the different runs are given in Table 1.
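For reference, Eqs. (1)–(3) translate directly into code (a plain evaluation of the DEB benchmark, nothing algorithm-specific):

```python
import math

def f1(x1):
    """Eq. (1)."""
    return x1

def f2(x1, x2):
    """Eq. (2)."""
    return (2 - math.exp(-((x2 - 0.2) / 0.004) ** 2)
              - 0.8 * math.exp(-((x2 - 0.6) / 0.4) ** 2)) / x1

# Eq. (3): 0.1 <= x1, x2 <= 1
print(f1(0.1), round(f2(0.1, 0.2), 3))  # 0.1 7.057
```

The very narrow Gaussian around x2 = 0.2 makes f2 sharply multimodal, which is what makes this small problem a useful test of decision-space coverage.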
Fig. 2

a 3-D objective space; b 2-D objective space; and c Final Pareto for unconstrained minimization problem (DEB); d 3-D objective space; e 2-D objective space; and f Final Pareto for constrained maximization problem (Kita); g Final Pareto; and h Comparison of final Paretos obtained from NSGA-II and CEA in 3-D objective space for minimization with three OFs (DTLZ2)

Fig. 3

Share of selection operators for: a f1; and b f2; Share of crossover operators for: c f1; and d f2; Share of mutation operators for: e f1; and f f2 in CEA for unconstrained minimization (DEB)

Table 1

Statistics of different runs for mathematical multi objective problems

| Problem | Algorithm | Parameter | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 |
|---|---|---|---|---|---|---|---|
| Unconstrained minimization problem (DEB) | NSGA-II | Average Euclidean distance of the final Pareto from the origin of coordinate | 3.269 | 3.273 | 3.417 | 3.230 | 3.292 |
| | | Number of points on the final Pareto | 200 | 200 | 200 | 200 | 200 |
| | | Number of iterations to achieve the final Pareto | 400 | 400 | 400 | 400 | 400 |
| | | Time (sec) | 37.2 | 37.4 | 37.2 | 36.7 | 36.8 |
| | CEA | Average Euclidean distance of the final Pareto from the origin of coordinate | 2.297 | 2.196 | 2.265 | 2.405 | 2.391 |
| | | Number of points on the final Pareto | 347 | 736 | 613 | 661 | 489 |
| | | Number of iterations to achieve the final Pareto | 90 | 67 | 82 | 162 | 57 |
| | | Time (sec) | 56.6 | 47.1 | 50.0 | 132.1 | 36.8 |
| Constrained maximization problem (Kita) | NSGA-II | Average Euclidean distance of the final Pareto from the origin of coordinate | 8.790 | 8.806 | 8.787 | 8.789 | 8.803 |
| | | Number of points on the final Pareto | 200 | 200 | 200 | 200 | 200 |
| | | Number of iterations to achieve the final Pareto | 400 | 400 | 400 | 400 | 400 |
| | | Time (sec) | 39.3 | 40.9 | 39.5 | 39.7 | 39.2 |
| | CEA | Average Euclidean distance of the final Pareto from the origin of coordinate | 9.058 | 9.065 | 9.05 | 9.090 | 9.208 |
| | | Number of points on the final Pareto | 665 | 445 | 702 | 753 | 429 |
| | | Number of iterations to achieve the final Pareto | 82 | 122 | 118 | 122 | 82 |
| | | Time (sec) | 54.1 | 87.1 | 85.3 | 83.7 | 54.6 |
| Minimization with three OFs (DTLZ2) | NSGA-II | Average Euclidean distance of the final Pareto from the origin of coordinate | 1.049 | 1.021 | 1.016 | 1.038 | 1.049 |
| | | Number of points on the final Pareto | 380 | 380 | 380 | 380 | 380 |
| | | Number of iterations to achieve the final Pareto | 400 | 400 | 400 | 400 | 400 |
| | | Time (sec) | 66.0 | 67.6 | 65.4 | 62.8 | 64.5 |
| | CEA | Average Euclidean distance of the final Pareto from the origin of coordinate | 1.000002 | 1.000000 | 1.000000 | 1.000000 | 1.000001 |
| | | Number of points on the final Pareto | 531 | 443 | 423 | 559 | 598 |
| | | Number of iterations to achieve the final Pareto | 400 | 400 | 282 | 206 | 282 |
| | | Time (sec) | 1207.5 | 1257.9 | 842.6 | 602.8 | 892.8 |

Figures 2a and b present the surfaces of the two OFs in the objective space of the problem. As shown in Fig. 2c, the final Pareto of CEA has a suitable extension in the objective space and includes many solutions that are not superior to each other. It also coincides completely with the Pareto of NSGA-II, but the density of points (solutions) on the Pareto of CEA is significantly higher. CEA is therefore able to assess the decision space of MOPs more accurately than other algorithms in finding feasible NDSs. The Paretos obtained from all runs of CEA are closer to the best Pareto of the problem than those of NSGA-II (Table 1). The Pareto of the best run (run 2) of CEA is about 32% closer to the origin of the coordinate system than the Pareto of the best run (run 4) of NSGA-II. CEA also achieved more NDSs with fewer iterations than NSGA-II in all runs; however, its computational time is longer.

Since CEA reached the global optimum of f1 in the first iterations, there is no performance improvement among the selection and generation operators for this OF; thus, the curves of all operators coincide for f1 (Fig. 3a, c, and e). According to Fig. 3b, the tournament and Boltzmann operators have the largest and smallest shares in optimizing f2, respectively. In Fig. 3c, d, and f, only the curve of the best crossover and mutation operator is shown separately; for the other operator types, the envelope curves are shown. Based on Figs. 3d and f, one of the one-point cut crossovers (the 8th) and the second mutation operator perform best in this problem, while the performance of the other generation operators varies within the range of the upper and lower envelope curves.

4.2 Constrained Maximization (Kita)

The OFs of this problem are given in Eqs. (4) and (5). Equations (6)–(8) define the problem's linear and nonlinear constraints, and Eq. (9) gives the allowable range of the continuous decision variables.
$$ \operatorname{Maximize}\kern0.5em {f}_1\left({x}_1,{x}_2\right)=-{x}_1^2+{x}_2 $$
(4)
$$ \operatorname{Maximize}\kern0.5em {f}_2\left({x}_1,{x}_2\right)=\frac{x_1}{2}+{x}_2+1 $$
(5)
$$ {x}_1+6{x}_2\le 39 $$
(6)
$$ {x}_1+2{x}_2\le 15 $$
(7)
$$ {x}_1\left(30-{x}_2\right)\ge 5 $$
(8)
$$ 0\le {x}_i\le 7\kern0.5em i=1,2 $$
(9)
This problem is solved by NSGA-II and CEA with 400 iterations and 200 initial solutions, and the final Pareto is presented in Fig. 2. Fig. 4 shows the cumulative share of the different selection and generation operators in the best run (run 5) of CEA. Statistics of the different runs are given in Table 1.
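Eqs. (4)–(9) can likewise be expressed as a plain evaluation and feasibility check (illustrative code; the maximization itself is left to the optimizer):

```python
def kita_objectives(x1, x2):
    """Eqs. (4) and (5); both are to be maximized."""
    return (-x1 ** 2 + x2, x1 / 2 + x2 + 1)

def kita_feasible(x1, x2):
    """Constraints of Eqs. (6)-(8) and the variable bounds of Eq. (9)."""
    return (x1 + 6 * x2 <= 39
            and x1 + 2 * x2 <= 15
            and x1 * (30 - x2) >= 5
            and 0 <= x1 <= 7 and 0 <= x2 <= 7)

print(kita_objectives(1.0, 6.0), kita_feasible(1.0, 6.0))  # (5.0, 7.5) True
```

In CEA, a candidate failing `kita_feasible` would not be discarded but would receive a penalty proportional to its constraint violation, as described in Section 2.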
Fig. 4

Share of selection operators for: a f1; and b f2; Share of crossover operators for: c f1; and d f2; Share of mutation operators for: e f1; and f f2 in CEA for constrained maximization (Kita)

Figures 2d and e show the two OFs in the objective space of the problem. As shown in Fig. 2f, and similar to the DEB problem, the Pareto of CEA has a suitable extension in the objective space and includes many more solutions, showing CEA's capability for extensively and accurately assessing the decision space, while coinciding completely with the Pareto of NSGA-II. The Pareto obtained by CEA is closer to the best Pareto than that of NSGA-II for all runs (Table 1). The Pareto of the best run (run 5) of CEA is about 4.4% closer to the origin of the coordinate system than the Pareto of the best run (run 2) of NSGA-II. CEA also achieved more feasible NDSs in fewer iterations than NSGA-II in all runs.

Since CEA reached the global optimum of f2 in the first iterations, there is no performance improvement for the selection and generation operators of this OF. According to Fig. 4a, the random and Boltzmann operators have the largest and smallest shares in optimizing f1, respectively. Figures 4c and e show that one of the full crossovers (the 23rd) and the fourth mutation operator perform best for this problem, while the performance of the other generation operators varies within the range of the upper and lower envelope curves.

4.3 Minimization with Unlimited Objective Functions (DTLZ2)

The number of OFs and decision variables in this problem family is unlimited. These problems are usually assessed with two or three OFs and 12 decision variables. The general form with three OFs is defined by Eqs. (10)–(13):
$$ \operatorname{Minimize}\kern0.5em {f}_1(X)=\left[1+\sum \limits_{i=3}^{12}{\left({x}_i-0.5\right)}^2\right]\cos \left(\frac{\pi }{2}{x}_1\right)\cos \left(\frac{\pi }{2}{x}_2\right) $$
(10)
$$ \operatorname{Minimize}\kern0.5em {f}_2(X)=\left[1+\sum \limits_{i=3}^{12}{\left({x}_i-0.5\right)}^2\right]\cos \left(\frac{\pi }{2}{x}_1\right)\sin \left(\frac{\pi }{2}{x}_2\right) $$
(11)
$$ \operatorname{Minimize}\kern0.5em {f}_3(X)=\left[1+\sum \limits_{i=3}^{12}{\left({x}_i-0.5\right)}^2\right]\sin \left(\frac{\pi }{2}{x}_1\right) $$
(12)
$$ 0\le {x}_i\le 1\kern0.5em i=1,2,\dots, 12 $$
(13)
This problem is solved by NSGA-II and CEA with 400 iterations and 380 initial solutions, and the final Pareto is shown in Fig. 2. The cumulative shares of the different selection and generation operators in the best run (run 2) of CEA are presented in Figs. 5 and 6, respectively. Statistics of the different runs are given in Table 1.
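Eqs. (10)–(13) can be evaluated directly; the small check below also shows why the CEA distances in Table 1 approach exactly 1: on the true DTLZ2 front, the objective vector has unit Euclidean norm (illustrative code of ours):

```python
import math

def dtlz2(x):
    """Eqs. (10)-(13) with three OFs; x has 12 components in [0, 1]."""
    g = 1 + sum((xi - 0.5) ** 2 for xi in x[2:])  # sum over i = 3..12
    return (g * math.cos(math.pi / 2 * x[0]) * math.cos(math.pi / 2 * x[1]),
            g * math.cos(math.pi / 2 * x[0]) * math.sin(math.pi / 2 * x[1]),
            g * math.sin(math.pi / 2 * x[0]))

# On the true front (x_i = 0.5 for i >= 3), g = 1 and f1^2 + f2^2 + f3^2 = 1,
# i.e. every front point lies at Euclidean distance 1 from the origin.
f = dtlz2([0.3, 0.7] + [0.5] * 10)
print(round(sum(v * v for v in f), 6))  # 1.0
```

An average Pareto distance of 1.000000, as reported for CEA, therefore indicates that the obtained solutions lie essentially on the true front.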
Fig. 5

Share of selection operators for: a f1; b f2; and c f3 in CEA for minimization with three OFs (DTLZ2)

Fig. 6

Share of crossover operators for: a f1; b f2; and c f3; Share of mutation operators for: d f1; e f2; and f f3 in CEA for minimization with three OFs (DTLZ2)

The final Pareto obtained from several runs of CEA (Fig. 2g) forms a surface concave towards the origin of the coordinate system and has an acceptable extension in the objective space. Despite the relative conformity between the Paretos obtained by CEA and NSGA-II (Fig. 2h), the density of CEA's Pareto is higher, and the Pareto of NSGA-II lies almost entirely above CEA's. The Pareto obtained by CEA is closer to the best Pareto than that of NSGA-II (Table 1). The Pareto of the best run (run 2) of CEA is about 1.6% closer to the origin than the Pareto of the best run (run 3) of NSGA-II. The Paretos from the different runs of CEA also contain more NDSs and are closer to the origin than those of NSGA-II.

The solutions obtained by CEA for f2 and f3 are global optima, and the shares of the selection and generation operators for these OFs do not change during the optimization process (Figs. 5b, c and 6b, c, e, and f). Based on Fig. 5a, the tournament and Boltzmann operators have the largest and smallest shares in optimizing f1, respectively. Figures 6a and d show that one of the one-point cut crossovers (the 7th) and the fifth mutation operator perform best in this problem.

5 Solving Multi Objective Reservoir Operation Problem Using CEA

The performance of CEA in water resources management problems is evaluated on a multi objective monthly operation of a reservoir over one year, using the data of reservoir 1 in Seifollahi-Aghmiuni et al. (2015) (Fig. 7a). Two operation objectives are considered for this reservoir, whose minimum (Smin) and maximum (Smax) storage capacities are 400 × 10^6 and 3000 × 10^6 m^3, respectively; 2% of the stored water seeps from the lake in each time interval. The objectives are generating hydropower energy in a powerhouse with 650 × 10^6 W installed capacity (power plant capacity, PPC), 96% efficiency, and 35% plant factor (solid arrow in Fig. 7a), and supplying one urban site (round-dot arrow in Fig. 7a); each objective is served through releases from two separate outlets at different elevations. About 9% of the water delivered to the urban site and all of the water delivered to the powerhouse are considered return flows (long-dash arrows in Fig. 7a). General information and the characteristics of the water outlets for this problem are presented in Table 2. The OFs for the optimization are defined as follows:
$$ \operatorname{Maximize}\kern0.5em {F}_{Power}=\frac{\sum \limits_{t=1}^{12}{PT}_t}{12\times PPC} $$
(14)
$$ \operatorname{Maximize}\kern0.5em {F}_{Demand}=\sum \limits_{t=1}^{12}\frac{\sum \limits_{j=1}^2{Rw}_{j,t}}{De_t} $$
(15)
where FPower and FDemand are the OFs of hydropower generation and urban site supply, respectively; PTt is the power generated in the powerhouse in time interval t (10^6 W); Rwj,t is the volume of water released from outlet j in time interval t for supplying the urban site (10^6 m^3); Det is the volume of urban demand in time interval t (10^6 m^3); and t and j index the time intervals and water outlets, respectively. This problem is solved by CEA with 400 iterations and 252 initial solutions, and the final Pareto is shown in Fig. 7b. The obtained Pareto has a logical shape in the objective space, showing that CEA achieved a correct form of the final Pareto when optimizing two OFs in a water management problem. The average Euclidean distance of the Pareto in Fig. 7b from the origin is 0.819, considering the number of points on this Pareto (548 NDSs).
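The two OFs of Eqs. (14) and (15) can be sketched as follows (illustrative code of ours; the monthly values in the example are placeholders, not the study's data):

```python
def f_power(pt, ppc=650.0):
    """Eq. (14): total generated power over the 12 months, normalized by
    12 x PPC (the power plant capacity); to be maximized."""
    return sum(pt) / (12 * ppc)

def f_demand(rw, de):
    """Eq. (15): for each month, the release from both urban outlets divided
    by that month's demand, summed over the year; to be maximized."""
    return sum((r1 + r2) / d for (r1, r2), d in zip(rw, de))

# placeholder data: half-capacity generation and fully supplied demand
print(f_power([325.0] * 12))                          # 0.5
print(f_demand([(300.0, 300.0)] * 12, [600.0] * 12))  # 12.0
```

With these definitions, FPower is 1 at continuous full-capacity generation and FDemand is 12 when every month's demand is fully supplied, which matches the normalized scale of the Pareto in Fig. 7b.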
Fig. 7

a Schematic of reservoir operation problem; b Final Pareto of CEA; c Variation range of generated power; d Variation range of water release for powerhouse; e Variation range of water release for urban site supplement; and f Variation range of water storage for reservoir operation problem

Table 2

Information of the reservoir operation problem

| Month | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Natural inflow to the reservoir (10^6 m^3) | 850 | 1030 | 630 | 450 | 280 | 110 | 170 | 230 | 290 | 460 | 570 | 630 |
| Evaporation from the reservoir (mm) | 60 | 60 | 70 | 90 | 80 | 70 | 50 | 40 | 5 | 5 | 20 | 50 |
| Precipitation on the reservoir (mm) | 60 | 30 | 20 | 0 | 0 | 10 | 50 | 70 | 90 | 100 | 90 | 80 |
| Urban demand (10^6 m^3) | 600 | 660 | 720 | 780 | 780 | 600 | 360 | 300 | 180 | 180 | 300 | 540 |

Outlets' information of the reservoir

| Outlet | Urban site supplement (1) | Urban site supplement (2) | Hydropower generation (1) | Hydropower generation (2) | Sediment release (1) | Sediment release (2) |
|---|---|---|---|---|---|---|
| Water outlet elevation (m) | 828 | 855 | 880 | 905 | 825 | 830 |
| Water outlet capacity (10^6 m^3/month) | 490 | 490 | 300 | 300 | 510 | 510 |

Each NDS on the final Pareto includes a series of optimum values for the decision variables; the range of these values for the reservoir operation problem is therefore also shown in Fig. 7. In this problem, the volumes of release from the reservoir for generating hydropower and for supplying the urban site, as the first and second operation priorities respectively, are the decision variables. Figure 7c shows that the power generated in the powerhouse varies over a wide range and that its average is usually near the lower bound. This may be because the level of the stored water falls below the elevation of at least one of the powerhouse outlets (Fig. 7d) when inflow to the reservoir is low, making release from that outlet impossible. The real capacity of water discharge for hydropower generation then decreases, and the powerhouse cannot generate electricity at its installed capacity. Based on Fig. 7e, the average release for supplying the urban site is close to the urban demand except in the hot and dry seasons. Thus, the reservoir performs better in supplying the urban site than in generating energy. Water storage in the reservoir also varies within its allowable range (Fig. 7f).

The shares of the different selection and generation operators in CEA for optimizing FPower and FDemand in the best run (run 1) are presented in Fig. 8, and Table 3 gives the statistics of the different runs of CEA. It should be mentioned that only some types of crossover operator (5 one-point cut, 5 two-point cut, and 3 full crossover) were randomly activated for solving this problem. According to Fig. 8, the Boltzmann and roulette wheel operators have the largest shares in optimizing FPower and FDemand, respectively. For optimizing FPower, one of the full crossovers (the 22nd) and mutation operator 2 perform best, while for FDemand one of the two-point cut crossovers (the 18th) and mutation operator 5 perform best.
Fig. 8

Share of selection operators for: a FPower; and b FDemand; Share of crossover operators for: c FPower; and d FDemand; Share of mutation operators for: e FPower; and f FDemand in CEA for reservoir operation problem

Table 3

Statistics of different runs of CEA for reservoir operation problem

| Parameter | Run 1 | Run 2 | Run 3 |
|---|---|---|---|
| Average Euclidean distance of the final Pareto from the origin of coordinate | 0.819 | 0.814 | 0.814 |
| Number of points on the final Pareto | 555 | 332 | 796 |
| Number of iterations to achieve the final Pareto | 400 | 400 | 400 |
| Time (min) | 46.0 | 50.0 | 45.7 |

6 Conclusion

In this research, a comprehensive evolutionary algorithm (CEA) was presented for the first time; it can optimize both SOPs and MOPs with a unique structure. Many parameters of this algorithm need no sensitivity analysis, as their suitable values are implicitly determined by CEA during optimization based on the problem characteristics; this is one of the major capabilities of CEA compared with other EOAs. The general structure of CEA is a multi colony form in which different OFs are optimized in separate colonies simultaneously, while SOPs are solved in a single colony form. CEA can thus be used for solving any type of optimization problem without recoding, whereas other EOAs require changes in their code to solve different problems.

In this research, the performance of CEA was compared with NSGA-II in solving several mathematical problems. The extension and density of CEA's Pareto was greater than that of NSGA-II in all problems, and CEA's Paretos were closer to the origin of the coordinate system; both factors are important in solving MOPs. Although CEA converged to the final solutions more slowly than NSGA-II, its results were more desirable, demonstrating its capabilities as a new EOA. The smaller standard deviation of CEA's results over different runs indicates the high reliability and accuracy of this algorithm in solving MOPs (the standard deviation of the Euclidean distance of the final Pareto from the origin was about 0.088, 0.064, and 0.000001 for the DEB, Kita, and DTLZ2 problems, respectively). The selection and generation operators contributed differently to the results of each problem during the optimization process, which shows the value of applying various types of these operators in EOAs; CEA includes a wide variety of selection and generation operators and uses them efficiently based on their performance in each problem. Finally, the performance of CEA was assessed in solving a reservoir operation problem (maximizing the reliability of hydropower generation and urban supply), where it showed logical and acceptable performance for this kind of management problem.

Notes

Compliance with Ethical Standards

Conflict of Interest

None.

References

  1. Barros FVF, Nascimento LSV, Martins ESPR, Junior DSR (2008) The use of multi objective optimization for reservoir's system operation with an evolutionary algorithm: the case of the metropolitan region of Fortaleza, Brazil. 13th IWRA World Water Congress, 1–4 September, Montpellier, France
  2. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
  3. Huang HZ, Tian Z, Zuo MJ (2005) Intelligent interactive multiobjective optimization method and its application to reliability optimization. IIE Trans 37(11):983–993
  4. Kourakos G, Mantoglou A (2013) Development of a multi objective optimization algorithm using surrogate models for coastal aquifer management. J Hydrol 479:13–23
  5. Lee C-Y (2003) Entropy-Boltzmann selection in the genetic algorithms. IEEE Trans Syst Man Cybern 33(1):138–149
  6. Lipowski A, Lipowska D (2012) Roulette-wheel selection via stochastic acceptance. Physica A: Statistical Mechanics and its Applications 391(6):2193–2196
  7. Miller BL, Goldberg DE (1995) Genetic algorithms, tournament selection, and the effects of noise. Complex Syst 9(3):193–212
  8. Niknam T, Zeinoddini Meymand H, Doagou Mojarrad H (2011) An efficient algorithm for multi objective optimal operation management of distribution network considering fuel cell power plants. Energy 36(1):119–132
  9. Rada-Vilela J, Chica M, Cordón O, Damas S (2013) A comparative study of multi objective ant colony optimization algorithms for the time and space assembly line balancing problem. Appl Soft Comput 13(11):4370–4382
  10. Reyes-Sierra M, Coello Coello CA (2006) Multi objective particle swarm optimizers: a survey of the state-of-the-art. Int J Comput Intell Res 2(3):287–308
  11. Seifollahi-Aghmiuni S, Bozorg Haddad O, Loáiciga HA (2015) Development of a sample multiattribute and multireservoir system for testing operational models. J Irrig Drain Eng 142(1). https://doi.org/10.1061/(ASCE)IR.1943-4774.0000908
  12. Stanimirović IP, Zlatanović ML, Petković MD (2011) On the linear weighted sum method for multi objective optimization. Facta Universitatis, Series: Mathematics and Informatics 26:49–63
  13. Yang CC, Chang LC, Yeh CH, Chen CS (2007) Multiobjective planning of surface water resources by multiobjective genetic algorithm with constrained differential dynamic programming. J Water Resour Plan Manag 133(6):499–508

Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Physical Geography and Bolin Center for Climate Research, Stockholm University, Stockholm, Sweden
  2. Faculty of Agricultural Engineering and Technology, College of Agriculture and Natural Resources, University of Tehran, Karaj, Iran
