Abstract
In this paper, a multiobjective particle swarm optimization with preference information (MOPSO-PI) is proposed. In the proposed algorithm, information entropy is employed to measure the probability distribution of the particles, and the user's preference information is represented as a ranking of the particles through a possibility matrix. The optimization procedure is guided by this preference information, since the global best performance of the swarm is chosen at random from the nondominated solutions with the highest ranking values at each iteration. Finally, MOPSO-PI is applied to optimize the steelmaking process; the power supply curve obtained reduces electric energy consumption, shortens the smelting time and prolongs the lifespan of the furnace lining. The application results show the effectiveness of the proposed algorithm.
References
Alvarez-Benitez JE, Everson RM, Fieldsend JE (2005) A MOPSO algorithm based exclusively on Pareto dominance concepts[C]. Evolutionary multi-criterion optimization. Springer, Berlin, pp 459–473
Amirjanov A (2006) The development of a changing range genetic algorithm[J]. Comput Methods Appl Mech Eng 195(19):2495–2508
Coello CAC, Pulido GT, Lechuga MS (2004) Handling multiple objectives with particle swarm optimization[J]. IEEE Trans Evol Comput 8(3):256–279
Deb K, Pratap A, Agarwal S et al (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II[J]. IEEE Trans Evol Comput 6(2):182–197
Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory[C]. Proc Sixth Int Symp Micro Mach Hum Sci 1:39–43
Feng L, Mao ZZ, Yuan P (2012) Improved multi-objective particle swarm algorithm and its application to electric arc furnace in steelmaking process[J]. Control Theory Appl 27(9):1313–1319 (in Chinese)
Ishibuchi H, Tsukamoto N, Nojima Y (2008) Evolutionary many-objective optimization: a short review[C]. IEEE Congr Evol Comput, pp 2419–2426
Jiao L, Luo J, Shang R et al (2014) A modified objective function method with feasible-guiding strategy to solve constrained multi-objective optimization problems[J]. Appl Soft Comput 14:363–380
Kennedy J, Eberhart R (1995) Particle swarm optimization[C]. Proc IEEE Int Conf Neural Netw 4:1942–1948
Kennedy J, Eberhart RC, Shi Y (2001) Swarm intelligence[M]. Morgan Kaufmann, San Francisco
Lee KB, Kim JH (2013) Multi-objective particle swarm optimization with preference-based sort and its application to path following footstep optimization for humanoid robots[J]. IEEE Trans Evol Comput 17(6):755–766
Liou TS, Wang MJJ (1992) Ranking fuzzy numbers with integral value[J]. Fuzzy Set Syst 50(3):247–255
Mostaghim S, Teich J (2003) Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO)[C]. Proc 2003 IEEE Swarm Intelligence Symposium (SIS'03), pp 26–33
Purshouse RC, Fleming PJ (2003) Evolutionary many-objective optimisation: an exploratory analysis[C]. Proc 2003 Congress on Evolutionary Computation (CEC'03) 3:2066–2073
Reyes-Sierra M, Coello CAC (2006) Multi-objective particle swarm optimizers: a survey of the state-of-the-art[J]. Int J Comput Intell Res 2(3):287–308
Shannon CE (2001) A mathematical theory of communication[J]. ACM SIGMOBILE Mob Comput Commun Rev 5(1):3–55
Tanaka M, Watanabe H, Furukawa Y et al (1995) GA-based decision support system for multicriteria optimization[C]. Proc 1995 IEEE Int Conf Systems, Man and Cybernetics 2:1556–1561
Van den Bergh F, Engelbrecht AP (2006) A study of particle swarm optimization particle trajectories[J]. Inform Sci 176(8):937–971
Xu ZS (2001) Algorithm for priority of fuzzy complementary judgement matrix[J]. J Syst Eng 16(4):311–314
Yuan P, Wang FL, Mao ZZ (2005) Optimized power supply model in melting period of SREAF[J]. J Northeast Univ (Nat Sci) 26(10):930–933 (in Chinese)
Zapotecas Martínez S, Coello Coello CA (2011) A multi-objective particle swarm optimizer based on decomposition[C]. Proc 13th Annual Conference on Genetic and Evolutionary Computation. ACM, pp 69–76
Zheng X, Liu H (2010) A scalable coevolutionary multi-objective particle swarm optimizer[J]. Int J Comput Intell Syst 3(5):590–600
Zhou A, Qu BY, Li H et al (2011) Multiobjective evolutionary algorithms: a survey of the state of the art[J]. Swarm Evol Comput 1(1):32–49
Acknowledgments
The authors would like to acknowledge the anonymous reviewers for their helpful comments. This work was supported by the State Key Program of National Natural Science Foundation of China (No. 61333006).
Appendix A
Description of Algorithm 1
Each step of the proposed MOPSO-PI, referred to as Algorithm 1, is described as follows.

1)
Initialize
The initial population size is Pop; each particle has its own position x^t and velocity v^t, where x_i^t and v_i^t denote the values of the ith particle at the tth iteration of the update process. The positions and velocities of all particles can be specified by N_p × Pop matrices, initialized randomly within the lower and upper bounds, where N_p is the number of decision variables of the problem. The personal best performance (Pb_i) of the ith particle is set to its own initial position. At each iteration, nondominated solutions are stored in the external archive, which is initialized as an empty set.
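The initialization step above can be sketched in Python; this is a minimal illustration, not the authors' code. The name initialize_swarm is hypothetical, and for convenience particles are stored row-wise (Pop × N_p) rather than as the N_p × Pop matrices described in the text:

```python
import numpy as np

def initialize_swarm(pop, n_p, lower, upper, rng=None):
    """Randomly initialize positions and velocities within [lower, upper].

    pop is the population size Pop; n_p is the number of decision
    variables N_p. Each particle's personal best starts at its own
    position, and the external archive starts empty.
    """
    rng = rng or np.random.default_rng()
    x = rng.uniform(lower, upper, size=(pop, n_p))  # positions
    v = rng.uniform(lower, upper, size=(pop, n_p))  # velocities
    pb = x.copy()   # personal best Pb_i = initial position
    archive = []    # external archive, initially empty
    return x, v, pb, archive
```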

2)
Convert constrained functions
To use a multiobjective evolutionary algorithm to solve a constrained optimization problem, the constrained problem should first be converted into a multiobjective problem (Amirjanov 2006). The objective function of the constrained problem is then split into two parts: the original objective function F(x) and the satisfactory summation function φ(x) over the constraint conditions. The new objective function E(x) is therefore formulated as follows:
$$ E\left(\boldsymbol{x}\right)=\left(F\left(\boldsymbol{x}\right),\varphi \left(\boldsymbol{x}\right)\right). $$(A1)The degree to which an individual x satisfies constraint j is given by the following functions:
$$ \begin{array}{l}{\varphi}_{g_j}\left(\boldsymbol{x}\right)=\left\{\begin{array}{ll}1, & {g}_j\left(\boldsymbol{x}\right)<0 \\ 1-{g}_j\left(\boldsymbol{x}\right)/{\delta}_j, & 0\le {g}_j\left(\boldsymbol{x}\right)\le {\delta}_j \\ 0, & \mathrm{otherwise}\end{array}\right.\\ {\varphi}_{h_j}\left(\boldsymbol{x}\right)=\left\{\begin{array}{ll}1-\left|{h}_j\left(\boldsymbol{x}\right)\right|/{\gamma}_j, & \left|{h}_j\left(\boldsymbol{x}\right)\right|\le {\gamma}_j \\ 0, & \mathrm{otherwise}\end{array}\right.\end{array} $$(A2)To find feasible solutions, the usual methods check whether every constraint violation value is less than or equal to zero. Here, instead, Eq. (A2) measures the satisfaction degree of a solution x. In Eq. (A2), the parameters δ_j and γ_j are tolerance values, adapted to relax the strength of the constraints, especially the equality constraints. These two parameters help maintain the diversity of the particle population by admitting some infeasible individuals.
Finally, the satisfactory summation function φ(x) is defined as follows:
$$ \varphi \left(\boldsymbol{x}\right)={\displaystyle \sum_{j=1}^l{\varphi}_{g_j}\left(\boldsymbol{x}\right)+{\displaystyle \sum_{j=l+1}^p{\varphi}_{h_j}\left(\boldsymbol{x}\right)}} $$(A3) 
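The satisfaction functions of Eqs. (A2)–(A3) can be coded as the sketch below, assuming the satisfaction degree decays linearly from 1 at the feasible boundary to 0 at the tolerance bound; the function name and the callable-based interface are illustrative, not from the paper:

```python
def satisfaction(x, ineq, eq, delta, gamma):
    """Satisfactory summation phi(x) over all constraints.

    ineq:  list of callables g_j(x), feasible when g_j(x) < 0
    eq:    list of callables h_j(x), feasible when h_j(x) == 0
    delta: tolerance delta_j for each inequality constraint
    gamma: tolerance gamma_j for each equality constraint
    """
    total = 0.0
    for g, d in zip(ineq, delta):
        gv = g(x)
        if gv < 0:
            total += 1.0                  # fully satisfied
        elif gv <= d:
            total += 1.0 - gv / d         # linear decay inside tolerance
        # violations beyond the tolerance contribute 0
    for h, gma in zip(eq, gamma):
        hv = abs(h(x))
        if hv <= gma:
            total += 1.0 - hv / gma       # linear decay for equalities
    return total
```

Admitting individuals with 0 < φ(x) < the number of constraints keeps some near-feasible particles in the population, which is the diversity mechanism described above.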
3)
Evaluate solutions
First, all solutions undergo a fast dominance sort: the nondominated particles are saved into the external archive E _{ t }, and the dominated particles are discarded from P _{ t }. After that, if the current population size falls below the preset value, new randomly generated particles are added to the current population. Finally, by choosing Pb _{ i } and the global best performance (Gb) from E _{ t }, the particles are guided by the user's preference information, as described in Section II.B.
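The dominance sort relies on the standard Pareto-dominance test; a minimal Python version for minimization (with hypothetical helper names) is:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def nondominated(objs):
    """Indices of the nondominated solutions, i.e. those kept in the
    external archive; objs is a list of objective vectors."""
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a)
                       for j, b in enumerate(objs) if j != i)]
```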

4)
Update particles
The velocity matrix and the position matrix are updated according to the following equations:
$$ {v}_i^{t+1}=\omega {v}_i^t+{c}_1{r}_1^t\left(P{b}_i^t-{x}_i^t\right)+{c}_2{r}_2^t\left(G{b}^t-{x}_i^t\right) $$(A4)$$ {x}_i^{t+1}={x}_i^t+{v}_i^{t+1}. $$(A5)where the superscripts t and t + 1 refer to the current and next iterations, and ω is the inertia weight, which decreases linearly from 0.9 to 0.1 over the iterations. The acceleration coefficients c _{1} and c _{2} are the learning factors of the swarm, which control how far a particle moves in a single iteration; usually c _{1} = c _{2}, with values in the range [0, 2]. r _{1} and r _{2} are random real values uniformly distributed in the interval [0, 1]. The particles update their velocities and positions from their current values according to Eqs. (A4) and (A5).
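A direct NumPy transcription of the PSO update for one particle follows, assuming c_1 = c_2 = 2.0 as a typical choice within the stated range (the function name is illustrative):

```python
import numpy as np

def update_particle(x, v, pb, gb, w, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update step (Eqs. (A4)-(A5)) for a single
    particle; x, v, pb, gb are NumPy vectors of length N_p."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)   # r_1 ~ U[0, 1], one draw per dimension
    r2 = rng.random(x.shape)   # r_2 ~ U[0, 1]
    v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
    x_new = x + v_new
    return x_new, v_new
```

With pb = gb = x the cognitive and social terms vanish, so the particle simply coasts with velocity ω v, which makes the role of the inertia weight easy to see.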

5)
Check termination conditions
The algorithm returns to step 3) and repeats until one of the termination conditions is met. If a termination condition is satisfied, the algorithm terminates and exports the solutions; otherwise, execution continues.
Algorithm 1 Multiobjective Particle Swarm Optimization with preference information
1) Initialize
t = 0
for i = 1 : Pop
for j = 1 : N _{ p }
x _{ ij } ^{ t } = rand(Min, Max)
v _{ ij } ^{ t } = rand(Min, Max)
end
Evaluate F(x _{ i })
end
E _{ t } = []
2) Convert constraint functions
Calculate φ(x) for particles
Generate new objective function E(x)
3) Evaluate solutions
t = t + 1
Sort the particles using quick sort method
E _{ t } = E _{ t } ∪ nondominated solutions
for i = 1 : N _{ E }
Evaluate the particles in E _{ t }
end
Choose Gb ^{t} from E _{ t } for the particles
4) Update particles
for i = 1 : Pop
for j = 1 : Np
Update v ^{t} and x ^{t} by (A4) and (A5)
end
end
5) Check termination conditions; if not met, go back to 3)
The parameters used in algorithm are described as follows.
rand(Min, Max): random real value uniformly distributed between Min and Max, v ^{t}: velocity matrix of the particles, x ^{t}: position matrix of the particles, E _{ t }: external archive, F(x): objective function, φ(x): satisfactory summation function, E(x): new objective function, Pop: initial population size, N _{ p }: number of decision variables, N _{ E }: size of the external archive.
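Putting the steps together, the following self-contained sketch runs the skeleton of Algorithm 1 on a toy unconstrained bi-objective problem. It is an illustration under assumptions: the random choice of Gb from the archive stands in for the preference-based selection of Section II.B, and all names, bounds and parameter values are invented for the example:

```python
import numpy as np

def toy_mopso(pop=30, n_p=2, iters=50, seed=0):
    """Skeleton of Algorithm 1 on a toy problem:
    minimize f1 = sum(x^2) and f2 = sum((x - 2)^2)."""
    rng = np.random.default_rng(seed)
    lo, hi = -4.0, 4.0
    x = rng.uniform(lo, hi, (pop, n_p))    # 1) initialize positions
    v = rng.uniform(-1.0, 1.0, (pop, n_p))  # ...and velocities
    pb = x.copy()                           # personal bests

    def F(p):  # two objectives for one particle
        return np.array([np.sum(p ** 2), np.sum((p - 2.0) ** 2)])

    def dominates(a, b):  # Pareto dominance, minimization
        return np.all(a <= b) and np.any(a < b)

    for t in range(iters):
        # inertia weight decreases linearly from 0.9 to 0.1
        w = 0.9 - (0.9 - 0.1) * t / max(iters - 1, 1)
        objs = [F(p) for p in x]
        # 3) keep nondominated solutions as the external archive
        archive = [x[i].copy() for i in range(pop)
                   if not any(dominates(objs[j], objs[i])
                              for j in range(pop) if j != i)]
        # random archive member as Gb (preference-based pick in the paper)
        gb = archive[rng.integers(len(archive))]
        # 4) update particles by Eqs. (A4)-(A5)
        for i in range(pop):
            if dominates(F(x[i]), F(pb[i])):
                pb[i] = x[i].copy()
            r1, r2 = rng.random(n_p), rng.random(n_p)
            v[i] = w * v[i] + 2.0 * r1 * (pb[i] - x[i]) + 2.0 * r2 * (gb - x[i])
            x[i] = np.clip(x[i] + v[i], lo, hi)
    # 5) termination: fixed iteration budget here
    return archive
```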
Cite this article
Feng, L., Mao, Z., Yuan, P. et al. Multiobjective particle swarm optimization with preference information and its application in electric arc furnace steelmaking process. Struct Multidisc Optim 52, 1013–1022 (2015). https://doi.org/10.1007/s00158-015-1276-2
Keywords
 Multiobjective optimization problem
 Particle swarm optimization (PSO)
 Preference information
 Information entropy
 Power supply curve