Opposition Based Salp Swarm Algorithm for Numerical Optimization

  • Divya Bairathi
  • Dinesh Gopalani
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 941)

Abstract

In this paper, an improved optimization algorithm called the Opposition Based Salp Swarm Algorithm (OSSA) is proposed. It is an improved version of the recently proposed Salp Swarm Algorithm (SSA), which mimics the swarming behaviour of salps while foraging and navigating in the ocean. To improve the performance of SSA, opposition-based learning (OBL) is incorporated into the Salp Swarm Algorithm. The algorithm is evaluated on several standard numerical benchmark functions and compared with some well-known optimization algorithms.
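The sketch below illustrates how such a combination could look. It is a minimal, hypothetical illustration only, assuming the standard SSA update rules (leader salps moving around the food source with the coefficient c1 = 2·exp(−(4t/T)²), followers averaging with the preceding salp) and the usual OBL opposite point lb + ub − x; the exact way OSSA applies the opposition step (e.g. at initialisation only, every generation, or with a jumping rate) is not stated in the abstract, so applying it each generation here is an assumption. The objective function obj and all parameter names are illustrative.

    import numpy as np

    def ossa(obj, lb, ub, dim, pop_size=30, max_iter=500):
        """Hypothetical sketch of an opposition-based SSA: standard SSA updates
        plus an OBL step that keeps the fitter of each salp and its opposite
        point lb + ub - x. Not the authors' exact formulation."""
        lb, ub = np.full(dim, lb, float), np.full(dim, ub, float)
        salps = lb + np.random.rand(pop_size, dim) * (ub - lb)

        def obl_select(pop):
            # Opposition-based learning: evaluate each salp and its opposite,
            # keep the best pop_size of the combined set.
            opposite = lb + ub - pop
            both = np.vstack([pop, opposite])
            fitness = np.apply_along_axis(obj, 1, both)
            return both[np.argsort(fitness)[:pop.shape[0]]]

        salps = obl_select(salps)                      # opposition-based initialisation
        food = min(salps, key=obj).copy()              # best solution so far (food source)

        for t in range(1, max_iter + 1):
            c1 = 2 * np.exp(-(4 * t / max_iter) ** 2)  # balances exploration/exploitation
            for i in range(pop_size):
                if i == 0:   # leader salp moves around the food source
                    c2, c3 = np.random.rand(dim), np.random.rand(dim)
                    step = c1 * ((ub - lb) * c2 + lb)
                    salps[i] = np.where(c3 < 0.5, food - step, food + step)
                else:        # follower salps average with the preceding salp
                    salps[i] = (salps[i] + salps[i - 1]) / 2
            salps = np.clip(salps, lb, ub)
            salps = obl_select(salps)                  # per-generation opposition step (assumed)
            best = min(salps, key=obj)
            if obj(best) < obj(food):
                food = best.copy()
        return food, obj(food)

For example, ossa(lambda x: np.sum(x ** 2), lb=-100, ub=100, dim=30) would run this sketch on the sphere function; in practice the opposition step is what distinguishes it from plain SSA, since the opposite candidates give the swarm a second chance to sample far from poorly placed salps.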

Keywords

Optimization · Metaheuristics · Salp Swarm Algorithm · Opposition based learning · Opposition based Salp Swarm Algorithm

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Malaviya National Institute of Technology Jaipur, Jaipur, India