
Applied Intelligence, Volume 48, Issue 2, pp 445–464

Improved monarch butterfly optimization for unconstrained global search and neural network training

  • Hossam Faris
  • Ibrahim Aljarah
  • Seyedali Mirjalili

Abstract

This work is a seminal attempt to address the drawbacks of the recently proposed monarch butterfly optimization (MBO) algorithm. This algorithm suffers from premature convergence, which makes it less suitable for solving real-world problems. The position updating of MBO is modified to involve previous solutions in addition to the best solution obtained thus far. To prove the efficiency of the Improved MBO (IMBO), a set of 23 well-known test functions is employed. The statistical results show that IMBO benefits from high local optima avoidance and fast convergence, which helps it outperform the basic MBO and another recent variant of the algorithm called greedy strategy and self-adaptive crossover operator MBO (GCMBO). For verification, the results of the proposed algorithm are compared with nine other approaches in the literature. The comparative analysis shows that IMBO provides very competitive results and tends to outperform current algorithms. To demonstrate its applicability to challenging practical problems, IMBO is also employed to train neural networks. The IMBO-based trainer is tested on 15 popular classification datasets obtained from the University of California at Irvine (UCI) Machine Learning Repository. The results are compared with a variety of techniques in the literature, including the original MBO and GCMBO. It is observed that IMBO improves the learning of neural networks significantly, proving the merits of this algorithm for solving challenging problems.
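The paper's exact update equations are not reproduced on this page, so the following is a minimal sketch of the idea stated above: a butterfly's new position is pulled both toward the best solution found so far and toward a remembered previous solution. The random blending coefficients and the specific form of the memory term are illustrative assumptions, not the authors' formula.

```python
import numpy as np

def imbo_position_update(x, x_prev, x_best, lb, ub, rng):
    """Hypothetical IMBO-style update: combine attraction to the best
    solution found so far with a term based on a previous solution.
    The random weighting below is an illustrative assumption."""
    r1 = rng.random(x.size)
    r2 = rng.random(x.size)
    step = r1 * (x_best - x) + r2 * (x_prev - x)  # best-so-far pull + memory
    return np.clip(x + step, lb, ub)              # stay inside search bounds
```

In a complete optimizer, an update of this kind would replace the corresponding MBO migration/adjusting step inside the generational loop, with `x_prev` taken from the population's history.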
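Training a multilayer perceptron with a population-based algorithm such as IMBO conventionally means encoding every weight and bias as a single real-valued vector and letting the optimizer minimize a training error. The sketch below shows that standard encoding for a one-hidden-layer network; the sigmoid activations and MSE fitness are common choices assumed here, not details quoted from the paper.

```python
import numpy as np

def unpack(vec, n_in, n_hid, n_out):
    """Split a flat candidate solution into MLP weights and biases."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid);   i += n_in * n_hid
    b1 = vec[i:i + n_hid];                               i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fitness(vec, X, y, n_in, n_hid, n_out):
    """Mean squared error of the MLP encoded by `vec` on (X, y);
    the metaheuristic treats this as the objective to minimize."""
    W1, b1, W2, b2 = unpack(vec, n_in, n_hid, n_out)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))
```

Each candidate in the population is then simply a vector of length `n_in*n_hid + n_hid + n_hid*n_out + n_out`, and the final classification accuracy is computed from the best vector found.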

Keywords

MBO · Global optimization · Multilayer perceptron · Neural network · Optimization


Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Business Information Technology Department, King Abdullah II School for Information Technology, The University of Jordan, Amman, Jordan
  2. School of Information and Communication Technology, Griffith University, Nathan, Australia
