
Abstract

This paper proposes an adaptive particle swarm optimization (APSO) algorithm that couples adaptive parameter control and an elitist learning strategy (ELS) with an evolutionary state estimation (ESE) approach. The ESE approach derives an ‘evolutionary factor’ from the population distribution and relative particle fitness information in each generation, and estimates the evolutionary state through a fuzzy classification method. According to the identified state, and taking into account the various effects of the algorithm-controlling parameters, adaptive control strategies are developed for the inertia weight and the acceleration coefficients to speed up convergence. Further, an adaptive elitist learning strategy is designed to help the best particle jump out of likely local optima and/or refine its accuracy, substantially improving the quality of the global solutions. The APSO algorithm is tested on six unimodal and multimodal functions, and the experimental results demonstrate that APSO generally outperforms the compared PSO variants in terms of solution accuracy, convergence speed, and algorithm reliability.
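The abstract describes the ESE-driven adaptation pipeline only at a high level. The sketch below illustrates one plausible realization in Python: an evolutionary factor computed from the swarm's distribution (here assumed to be the normalized mean distance of the globally best particle to the rest of the swarm), a sigmoid mapping of that factor to the inertia weight, and a Gaussian-perturbation form of elitist learning. The specific formulas and constants are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def evolutionary_factor(positions, best_idx):
    """Evolutionary factor f in [0, 1].

    Assumed definition: mean Euclidean distance from the globally best
    particle to all others, normalized by the min/max mean distances
    over the whole swarm. Large f suggests exploration; small f
    suggests convergence.
    """
    n = len(positions)
    # d[i] = mean distance from particle i to every other particle
    d = np.array([
        np.mean([np.linalg.norm(positions[i] - positions[j])
                 for j in range(n) if j != i])
        for i in range(n)
    ])
    d_g, d_min, d_max = d[best_idx], d.min(), d.max()
    return float((d_g - d_min) / (d_max - d_min + 1e-12))

def adaptive_inertia(f):
    """Map f to an inertia weight in roughly [0.4, 0.9] via a sigmoid,
    so exploratory states (large f) get a large w and convergent
    states (small f) get a small w. Constants are illustrative."""
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))

def elitist_learning(gbest, lo, hi, sigma, rng):
    """ELS sketch (assumed form): perturb one randomly chosen dimension
    of the best particle with Gaussian noise scaled by the search range,
    to help it escape a possible local optimum."""
    trial = gbest.copy()
    dim = rng.integers(len(gbest))
    trial[dim] += (hi - lo) * sigma * rng.standard_normal()
    return np.clip(trial, lo, hi)

# Demo on a small random swarm with the sphere function (minimization).
rng = np.random.default_rng(0)
pos = rng.uniform(-10.0, 10.0, size=(20, 5))
fitness = np.sum(pos**2, axis=1)
f = evolutionary_factor(pos, int(np.argmin(fitness)))
w = adaptive_inertia(f)
trial = elitist_learning(pos[int(np.argmin(fitness))], -10.0, 10.0, 0.1, rng)
```

In a full APSO loop, `w` would replace a fixed inertia weight in the velocity update each generation, and `trial` would replace the global best only if it attains a better fitness.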

Keywords

Particle swarm optimization; Inertia weight; Multimodal function; Convergence speed



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Zhi-hui Zhan¹
  • Jun Zhang¹
  1. Department of Computer Science, Sun Yat-sen University, China
