Experimental Platform to Accelerate the Training of ANNs with Genetic Algorithms and Embedded Systems on FPGA

  • Jorge Fe
  • R. J. Aliaga
  • R. Gadea
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7931)

Abstract

When implementing an artificial neural network (ANN), both the topology and the initial weights of each synaptic connection must be determined, and computing these variables is computationally very expensive. This paper presents a scalable experimental platform that accelerates the training of ANNs using genetic algorithms and embedded systems with hardware accelerators implemented on an FPGA (Field-Programmable Gate Array). The platform achieves a 3x-4x speedup over an Intel Xeon Quad-Core at 2.83 GHz and a 6x-7x speedup over an AMD Opteron Quad-Core 2354 at 2.2 GHz.
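The paper targets FPGA hardware accelerators; purely as an illustration of the underlying idea, the sketch below shows, in plain Python, how a genetic algorithm might jointly search an MLP's hidden-layer size and initial weights against a toy fitness function. All names (`evolve`, `mlp_forward`, the XOR task, the parameter values) are hypothetical choices for this sketch, not the authors' implementation.

```python
import math
import random

# Toy task: learn XOR with a 2-input, n_hidden-unit, 1-output MLP.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def mlp_forward(weights, n_hidden, x):
    """Evaluate the MLP; `weights` is a flat list of length 4*n_hidden + 1."""
    idx, hidden = 0, []
    for _ in range(n_hidden):
        s = weights[idx] * x[0] + weights[idx + 1] * x[1] + weights[idx + 2]
        idx += 3
        hidden.append(math.tanh(s))
    return sum(weights[idx + i] * h for i, h in enumerate(hidden)) + weights[idx + n_hidden]

def fitness(ind):
    """Negative sum of squared errors over the XOR set (higher is better)."""
    n_hidden, weights = ind
    return -sum((mlp_forward(weights, n_hidden, x) - y) ** 2 for x, y in XOR)

def random_individual():
    """Random topology (hidden-layer size) plus random initial weights."""
    n_hidden = random.randint(2, 5)
    return (n_hidden, [random.uniform(-1, 1) for _ in range(4 * n_hidden + 1)])

def mutate(ind, rate=0.1):
    """Gaussian weight perturbation; topology here varies only via the initial population."""
    n_hidden, w = ind
    return (n_hidden, [wi + random.gauss(0, 0.3) if random.random() < rate else wi
                       for wi in w])

def evolve(pop_size=30, generations=50, seed=0):
    """Elitist GA: keep the fitter half, refill by mutating survivors."""
    random.seed(seed)
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)
```

The fitness evaluation loop is the expensive part, which is what the paper offloads to FPGA accelerators; the selection and mutation steps remain cheap and stay on the embedded processor.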

Keywords

Genetic Algorithm · Optimal Topology · Embedded System · Convolutional Neural Network · Experimental Platform
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Jorge Fe (1)
  • R. J. Aliaga (1)
  • R. Gadea (1)

  1. Universidad Politécnica de Valencia, Spain