
Two Improvement Strategies for Logistic Dynamic Particle Swarm Optimization

  • Qingjian Ni
  • Jianming Deng
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6593)

Abstract

This paper introduces a new variant of particle swarm optimization, Logistic Dynamic Particle Swarm Optimization (LDPSO). LDPSO generates new particle positions from a sampling distribution built on the particles' historical information, and it exhibits better search capability than the canonical method. Furthermore, two improvement strategies are designed according to the characteristics of LDPSO: a mutation strategy is employed to prevent premature convergence of the particles, and a selection strategy is adopted to maintain the diversity of the swarm. Experimental results demonstrate the efficiency of LDPSO and the effectiveness of the two improvement strategies.
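
The following is a minimal illustrative sketch, not the authors' implementation: it assumes a dynamic-probabilistic ("bare bones") style update in which each candidate position is sampled from a logistic distribution whose location and scale are derived from a particle's personal best and the global best, with simple mutation and greedy-selection steps standing in for the paper's two strategies. The objective function, bounds, and all parameter values are assumptions made purely for the example.

```python
import numpy as np

def sphere(x):
    """Assumed benchmark objective for illustration: minimize the sum of squares."""
    return np.sum(x ** 2)

def ldpso(obj, dim=10, swarm=20, iters=1000, mut_prob=0.01, seed=0):
    """Hypothetical sketch of a logistic dynamic PSO with mutation and selection.

    Candidate positions are drawn from a logistic distribution centred at the
    midpoint of a particle's personal best and the global best, with a scale
    proportional to their distance (the paper's exact generation rule and
    parameters are not reproduced here).
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (swarm, dim))
    pbest = pos.copy()
    pbest_val = np.array([obj(p) for p in pbest])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        for i in range(swarm):
            loc = (pbest[i] + gbest) / 2.0            # centre from historical information
            scale = np.abs(pbest[i] - gbest) + 1e-12  # spread from historical information
            cand = rng.logistic(loc, scale)           # logistic-distributed candidate

            # Mutation strategy (assumed form): re-randomize a few dimensions
            # to counter premature convergence.
            mut = rng.random(dim) < mut_prob
            cand[mut] = rng.uniform(-5.0, 5.0, mut.sum())

            # Selection strategy (assumed form): keep the candidate only if it
            # improves the particle's personal best.
            val = obj(cand)
            if val < pbest_val[i]:
                pos[i], pbest[i], pbest_val[i] = cand, cand.copy(), val
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_f = ldpso(sphere)
print(best_f)
```

The sketch only conveys the overall structure (logistic sampling around historical bests, plus mutation and selection hooks); the concrete strategies evaluated in the paper may differ.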

Keywords

Logistic dynamic particle swarm optimization · Mutation · Selection



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Qingjian Ni (1)
  • Jianming Deng (1)
  1. School of Computer Science & Engineering, Southeast University, Nanjing, China
