Parallel Batch Pattern Training Algorithm for MLP with Two Hidden Layers on Many-Core System

  • Conference paper
Distributed Computing and Artificial Intelligence, 11th International Conference

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 290)

Abstract

This paper presents the development of a parallel batch pattern back-propagation training algorithm for a multilayer perceptron with two hidden layers, together with a study of its parallelization efficiency on a many-core system. The multilayer perceptron model and the batch pattern training algorithm are described theoretically, and an algorithmic description of the parallel batch pattern training method is given. Our results show high parallelization efficiency of the developed algorithm on a many-core parallel system with 48 CPUs using MPI technology.
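
The abstract describes a data-parallel scheme in which the batch of training patterns is divided among processes, each process computes partial gradients for the two-hidden-layer MLP on its own patterns, and the gradients are summed across all processes before a single synchronous weight update. The sketch below, written in Python with mpi4py and NumPy, illustrates that general idea only; the network sizes, logistic activations, learning rate, and synthetic data are assumptions for illustration and do not reproduce the paper's actual implementation or its 48-CPU experiments.

```python
# Minimal sketch of synchronous batch pattern training for an MLP with two
# hidden layers, parallelized over the training patterns with MPI.
# All layer sizes, activations, the learning rate, and the synthetic data
# are illustrative assumptions, not the configuration used in the paper.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(0)  # same seed -> identical initial weights on every process
n_in, n_h1, n_h2, n_out = 4, 8, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_h1))
W2 = rng.normal(scale=0.5, size=(n_h1, n_h2))
W3 = rng.normal(scale=0.5, size=(n_h2, n_out))

# Synthetic training batch, split so that each process owns its own chunk of patterns.
X_all = rng.normal(size=(size * 32, n_in))
Y_all = (X_all.sum(axis=1, keepdims=True) > 0).astype(float)
X, Y = X_all[rank::size], Y_all[rank::size]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(200):
    # Local forward and backward pass over this process's share of the batch.
    h1 = sigmoid(X @ W1)
    h2 = sigmoid(h1 @ W2)
    out = sigmoid(h2 @ W3)

    d_out = (out - Y) * out * (1.0 - out)      # MSE loss with logistic output
    d_h2 = (d_out @ W3.T) * h2 * (1.0 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1.0 - h1)
    grads = [X.T @ d_h1, h1.T @ d_h2, h2.T @ d_out]

    # Batch pattern step: only partial gradients are exchanged; every process
    # then applies the same averaged update, keeping the weights in sync.
    summed = [np.empty_like(g) for g in grads]
    for g, s in zip(grads, summed):
        comm.Allreduce(g, s, op=MPI.SUM)
    n_total = comm.allreduce(len(X), op=MPI.SUM)

    W1 -= lr * summed[0] / n_total
    W2 -= lr * summed[1] / n_total
    W3 -= lr * summed[2] / n_total

if rank == 0:
    print("rank 0 local MSE after training:", float(np.mean((out - Y) ** 2)))
```

Run with, for example, `mpirun -np 4 python mlp_batch_pattern.py`. Because every process starts from identical weights and applies identical summed gradients, the result matches sequential full-batch training (up to floating-point rounding) while the per-pattern gradient work is spread across cores; only the gradient all-reduce is communicated each epoch.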

Author information

Corresponding author

Correspondence to Volodymyr Turchenko.

Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Turchenko, V. (2014). Parallel Batch Pattern Training Algorithm for MLP with Two Hidden Layers on Many-Core System. In: Omatu, S., Bersini, H., Corchado, J., Rodríguez, S., Pawlewski, P., Bucciarelli, E. (eds) Distributed Computing and Artificial Intelligence, 11th International Conference. Advances in Intelligent Systems and Computing, vol 290. Springer, Cham. https://doi.org/10.1007/978-3-319-07593-8_62

  • DOI: https://doi.org/10.1007/978-3-319-07593-8_62

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-07592-1

  • Online ISBN: 978-3-319-07593-8

  • eBook Packages: Engineering (R0)
