
A Multi-objective Particle Swarm Optimization Based on Decomposition

  • Yanmin Liu
  • Ben Niu
Part of the Communications in Computer and Information Science book series (CCIS, volume 375)

Abstract

Decomposition is a classic method for traditional multi-objective optimization problems (MOPs). However, it has so far not been widely used in multi-objective particle swarm optimization (MOPSO). This paper proposes a MOPSO based on a decomposition strategy (MOPSO-D), in which a MOP is decomposed into a number of scalar optimization sub-problems by a set of evenly spread weight vectors, and each sub-problem is optimized in a single run by a particle (viewed here as a sub-swarm) using its personal historical best position (pbest) and the global best position among all of its neighbors (gbest). The neighborhood of each particle is identified by computing the Euclidean distances between the weight vectors assigned to any two particles. The decomposition strategy inherits the merits of the traditional method and gives MOPSO-D lower computational complexity per generation than NSMOPSO and OMOPSO. Simulation experiments on multi-objective 0-1 knapsack problems and continuous multi-objective optimization problems show that MOPSO-D outperforms or performs similarly to NSMOPSO and OMOPSO.
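
As a rough illustration of the decomposition idea outlined above, the following Python sketch generates evenly spread weight vectors for a bi-objective problem, assigns each particle a neighborhood from the Euclidean distances between weight vectors, and scalarizes an objective vector with a weighted sum. The function names, the two-objective setting, and the weighted-sum scalarization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weight_vectors(n_subproblems):
    """Evenly spread weight vectors (w, 1 - w) for a bi-objective problem (assumed spread)."""
    w = np.linspace(0.0, 1.0, n_subproblems)
    return np.stack([w, 1.0 - w], axis=1)

def neighborhoods(weights, T):
    """Indices of the T closest weight vectors to each weight vector (Euclidean distance)."""
    dist = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=2)
    return np.argsort(dist, axis=1)[:, :T]

def scalarize(objective_values, w):
    """Weighted-sum value of one objective vector under weight vector w (assumed scalarization)."""
    return float(np.dot(w, objective_values))

# Particle i optimizes sub-problem i: its pbest is its best-so-far position under
# scalarize(f(x), W[i]); its gbest is the best pbest found within neighborhood B[i].
W = weight_vectors(50)    # one weight vector (and sub-problem) per particle
B = neighborhoods(W, 10)  # distance-based neighborhood of each particle
```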

Keywords

Particle swarm optimizer · Decomposition · Multi-objective optimization problems


References

  1. Coello, C.A.C., Lechuga, M.S.: MOPSO: A Proposal for Multiple Objective Particle Swarm Optimization. In: IEEE Congress on Evolutionary Computation, Piscataway, New Jersey, pp. 1051–1056. IEEE Press, New York (2002)
  2. Hu, X., Eberhart, R.C.: Multiobjective Optimization Using Dynamic Neighborhood Particle Swarm Optimization. In: Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, HI, pp. 1677–1681. IEEE Press, New York (2002)
  3. Kennedy, J., Eberhart, R.C.: Particle Swarm Optimization. In: Proceedings of the IEEE International Conference on Neural Networks, Piscataway, pp. 1942–1948. IEEE Press, New York (1995)
  4. Li, X.: A Non-dominated Sorting Particle Swarm Optimizer for Multi-objective Optimization. In: Cantú-Paz, E., et al. (eds.) GECCO 2003. LNCS, vol. 2723, pp. 37–48. Springer, Heidelberg (2003)
  5. Mostaghim, S., Teich, J.: Strategies for Finding Good Local Guides in Multi-objective Particle Swarm Optimization (MOPSO). In: IEEE Swarm Intelligence Symposium, Indianapolis, pp. 26–33. IEEE Press, New York (2003)
  6. Mostaghim, S., Teich, J.: The Role of ε-dominance in Multi-Objective Particle Swarm Optimization Methods. In: IEEE Congress on Evolutionary Computation, Canberra, Australia, pp. 1764–1771. IEEE Press, New York (2003)
  7. Jin, Y., Okabe, T., Sendhoff, B.: Adapting Weighted Aggregation for Multiobjective Evolution Strategies. In: Zitzler, E., Deb, K., Thiele, L., Coello Coello, C.A., Corne, D.W. (eds.) EMO 2001. LNCS, vol. 1993, pp. 96–110. Springer, Heidelberg (2001)
  8. Liu, Y.M., Niu, B.: A Novel PSO Model Based on Simulating Human Social Communication Behavior. Discrete Dynamics in Nature and Society, 1–21 (2012)
  9. Bazgan, C., Hugot, H., Vanderpooten, D.: Solving Efficiently the 0-1 Multi-objective Knapsack Problem. Computers & Operations Research 36, 260–279 (2009)
  10. Hughes, E.J.: Multiple Single Objective Pareto Sampling. In: IEEE Congress on Evolutionary Computation, Canberra, Australia, pp. 2678–2684. IEEE Press, New York (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Yanmin Liu (1, 2)
  • Ben Niu (3)
  1. School of Mathematics and Computer Science, Zunyi Normal College, Zunyi, China
  2. School of Economics and Management, Tongji University, Shanghai, China
  3. College of Management, Shenzhen University, Shenzhen, China
