A GPU Implementation of a Bat Algorithm Trained Neural Network

  • Amit Roy Choudhury (corresponding author)
  • Rishabh Jain
  • Kapil Sharma
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9949)

Abstract

In recent times, Neural Networks (NNs) have seen exponential growth in their viability as a Machine Learning tool. Most standard training algorithms for NNs, such as gradient descent and its variants, are prone to getting trapped in local optima. Metaheuristics have been found to be a viable alternative to these traditional training methods, and among them the Bat Algorithm (BA) has been shown to be superior. However, because BA is a population-based metaheuristic, it requires maintaining many Neural Networks and evaluating them on nearly every iteration, making the already computationally expensive task of training a NN even more so. To overcome this problem, we exploit the inherent concurrency of both NNs and BA to design a framework that utilizes the massively parallel architecture of Graphics Processing Units (GPUs). Our framework offers speed-ups of up to 47× depending on the architecture of the NN.
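The abstract gives no implementation details, but the core idea it describes, scoring every bat's candidate network in parallel on the GPU, can be illustrated with a minimal CUDA sketch. Everything below is an assumption for illustration, not a detail taken from the paper: the names (evaluate_population, NUM_BATS), the single-hidden-layer topology with sigmoid activations, and the MSE fitness. One thread block is launched per bat; threads stride over the training samples and tree-reduce the squared error in shared memory.

// Minimal CUDA sketch (illustrative, not from the paper): evaluate every
// bat's candidate weight vector in parallel, one thread block per bat.
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cuda_runtime.h>

#define NUM_BATS    64     // population size (hypothetical)
#define NUM_SAMPLES 1024   // training samples
#define NUM_IN      4      // inputs per sample
#define NUM_HID     8      // hidden neurons (single hidden layer assumed)
#define BLOCK       256    // threads per block (power of two for the reduction)

__device__ float sigmoidf(float x) { return 1.0f / (1.0f + expf(-x)); }

// Per-bat weight layout: [NUM_HID*NUM_IN hidden weights | NUM_HID output weights]
__global__ void evaluate_population(const float *weights, const float *x,
                                    const float *y, float *fitness)
{
    const float *w = weights + blockIdx.x * (NUM_HID * NUM_IN + NUM_HID);
    __shared__ float err[BLOCK];
    float local = 0.0f;

    // Each thread accumulates error over a strided subset of the samples.
    for (int s = threadIdx.x; s < NUM_SAMPLES; s += blockDim.x) {
        float out = 0.0f;
        for (int h = 0; h < NUM_HID; ++h) {
            float a = 0.0f;
            for (int i = 0; i < NUM_IN; ++i)
                a += w[h * NUM_IN + i] * x[s * NUM_IN + i];
            out += w[NUM_HID * NUM_IN + h] * sigmoidf(a);
        }
        float d = out - y[s];
        local += d * d;
    }
    err[threadIdx.x] = local;
    __syncthreads();

    // Standard tree reduction in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) err[threadIdx.x] += err[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) fitness[blockIdx.x] = err[0] / NUM_SAMPLES;  // MSE
}

int main()
{
    const int wlen = NUM_BATS * (NUM_HID * NUM_IN + NUM_HID);
    std::vector<float> hw(wlen), hx(NUM_SAMPLES * NUM_IN), hy(NUM_SAMPLES), hf(NUM_BATS);
    for (float &v : hw) v = rand() / (float)RAND_MAX - 0.5f;  // random bat positions
    for (float &v : hx) v = rand() / (float)RAND_MAX;         // dummy inputs
    for (float &v : hy) v = rand() / (float)RAND_MAX;         // dummy targets

    float *dw, *dx, *dy, *df;
    cudaMalloc(&dw, wlen * sizeof(float));
    cudaMalloc(&dx, hx.size() * sizeof(float));
    cudaMalloc(&dy, hy.size() * sizeof(float));
    cudaMalloc(&df, NUM_BATS * sizeof(float));
    cudaMemcpy(dw, hw.data(), wlen * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, hx.data(), hx.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), hy.size() * sizeof(float), cudaMemcpyHostToDevice);

    // One kernel launch scores the whole population for this BA iteration.
    evaluate_population<<<NUM_BATS, BLOCK>>>(dw, dx, dy, df);
    cudaMemcpy(hf.data(), df, NUM_BATS * sizeof(float), cudaMemcpyDeviceToHost);
    printf("fitness of bat 0: %f\n", hf[0]);

    cudaFree(dw); cudaFree(dx); cudaFree(dy); cudaFree(df);
    return 0;
}

Launching one block per bat turns the per-iteration fitness evaluation, which dominates the cost of any population-based trainer, into a single kernel call instead of a sequential loop over candidate networks.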

Keywords

Neural Networks · Bat Algorithm · GPU

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Amit Roy Choudhury¹ (corresponding author)
  • Rishabh Jain¹
  • Kapil Sharma¹

  1. Department of Computer Engineering, Delhi Technological University, New Delhi, India
