
How Meta-heuristic Algorithms Contribute to Deep Learning in the Hype of Big Data Analytics

  • Simon Fong
  • Suash Deb
  • Xin-she Yang
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 518)

Abstract

Deep learning (DL) is one of the most prominent emerging contemporary machine learning techniques; it mimics the cognitive patterns of the animal visual cortex to learn abstract features automatically through deep, hierarchical layers. DL is believed to be a suitable tool for extracting insights from the very large volumes of so-called big data. Nevertheless, one of the three "V"s of big data is velocity, which implies that learning must be incremental as data accumulate rapidly: DL must be both fast and accurate. By design, DL extends the feed-forward artificial neural network with many hidden layers of neurons, yielding the deep neural network (DNN). Training a DNN is inefficient because of the very long training time required; obtaining the most accurate DNN within a reasonable run-time is a challenge, given the potentially many parameters in the DNN model configuration and the high dimensionality of the feature space in the training dataset. Meta-heuristics have a history of successfully optimizing machine learning models. How well meta-heuristics can be used to optimize DL in the context of big data analytics is the thematic question we ponder in this paper. As a position paper, we review recent advances in applying meta-heuristics to DL, discuss their pros and cons, and point out some feasible research directions for bridging the gaps between meta-heuristics and DL.
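To make the idea concrete, the sketch below shows how a population-based meta-heuristic such as particle swarm optimization (PSO) can search a DNN's hyper-parameter space. This is a minimal illustration, not code from the paper: the objective validation_loss is a toy surrogate standing in for training a DNN and measuring its validation error, and the search bounds, swarm size, and coefficients are illustrative assumptions.

    import numpy as np

    # Minimal PSO sketch for tuning two DNN hyper-parameters:
    # log10(learning rate) and hidden-layer width.
    # The objective is a toy stand-in; in practice it would train a
    # DNN and return its validation error (hypothetical placeholder).

    rng = np.random.default_rng(42)

    def validation_loss(params):
        """Toy surrogate: smooth bowl with optimum near lr=1e-3, width=128."""
        log_lr, width = params
        return (log_lr + 3.0) ** 2 + 0.0005 * (width - 128.0) ** 2

    # Search bounds: log_lr in [-5, -1], width in [16, 512] (assumed).
    low = np.array([-5.0, 16.0])
    high = np.array([-1.0, 512.0])

    n_particles, n_iters = 20, 50
    pos = rng.uniform(low, high, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    # Personal and global bests.
    pbest_pos = pos.copy()
    pbest_val = np.array([validation_loss(p) for p in pos])
    gbest_pos = pbest_pos[pbest_val.argmin()].copy()

    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = (w * vel
               + c1 * r1 * (pbest_pos - pos)
               + c2 * r2 * (gbest_pos - pos))
        pos = np.clip(pos + vel, low, high)
        vals = np.array([validation_loss(p) for p in pos])
        improved = vals < pbest_val
        pbest_pos[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest_pos = pbest_pos[pbest_val.argmin()].copy()

    print(f"best log10(lr) = {gbest_pos[0]:.3f}, best width = {gbest_pos[1]:.0f}")

The same loop applies to any derivative-free meta-heuristic: only the position-update rule changes, while the expensive fitness evaluation (here, a full DNN training run) dominates the cost, which is why run-time is the central concern raised above.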

Keywords

Deep learning · Meta-heuristic algorithm · Neural network training · Nature-inspired computing algorithms · Algorithm design

Notes

Acknowledgements

The authors are thankful for the financial support from the research grant "A Scalable Data Stream Mining Methodology: Stream-based Holistic Analytics and Reasoning in Parallel," Grant no. FDCT/126/2014/A3, offered by the University of Macau (FST and RDAO) and the FDCT of the Macau SAR government.


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Department of Computer and Information Science, University of Macau, Macau SAR, China
  2. INNS-India Regional Chapter, Ranchi, India
  3. School of Science and Technology, Middlesex University, London, UK
