
Parallel Learning of Feedforward Neural Networks Without Error Backpropagation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9692)

Abstract

A parallel architecture of the steepest descent algorithm for training fully connected feedforward neural networks is presented. The approach rests on a new idea: learning neural networks without error backpropagation. Completely new parallel structures are proposed to substantially reduce the high computational load of the algorithm, and detailed parallel 2D and 3D neural network learning structures are discussed explicitly.
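The paper's own parallel 2D/3D structures are only available in the full text. As a loose, hypothetical illustration of the general point that steepest-descent weight updates do not require the layer-by-layer backpropagation recursion, the sketch below trains a tiny feedforward network with gradients obtained by central finite differences on the loss (the network shape, the XOR task, and all names are this sketch's own assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative sketch only (NOT the paper's parallel method): steepest-
# descent training of a small feedforward network where each partial
# derivative is obtained by perturbing one weight at a time, instead of
# the backpropagation recursion.

rng = np.random.default_rng(0)

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)          # hidden layer
    return h @ W2 + b2                # linear output layer

def loss(params, x, y):
    return 0.5 * np.mean((forward(params, x) - y) ** 2)

def numeric_grad(params, x, y, eps=1e-6):
    """Central-difference gradient of the loss w.r.t. every weight."""
    grads = []
    for p in params:
        g = np.zeros_like(p)
        for idx in np.ndindex(p.shape):
            old = p[idx]
            p[idx] = old + eps
            lp = loss(params, x, y)
            p[idx] = old - eps
            lm = loss(params, x, y)
            p[idx] = old                  # restore the weight
            g[idx] = (lp - lm) / (2 * eps)
        grads.append(g)
    return grads

# XOR toy problem
x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

params = [rng.normal(0.0, 0.5, (2, 4)), np.zeros(4),
          rng.normal(0.0, 0.5, (4, 1)), np.zeros(1)]

l0 = loss(params, x, y)               # initial loss, for comparison
lr = 0.5
for _ in range(2000):
    for p, g in zip(params, numeric_grad(params, x, y)):
        p -= lr * g                   # steepest-descent update
```

Each of the per-weight perturbations is independent of the others, which is the kind of structure that lends itself to the parallel realisation the paper pursues; the finite-difference gradient itself is merely a stand-in here for whatever backpropagation-free gradient computation the full text develops.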



Author information

Correspondence to Jarosław Bilski.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Bilski, J., Wilamowski, B.M. (2016). Parallel Learning of Feedforward Neural Networks Without Error Backpropagation. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L., Zurada, J. (eds) Artificial Intelligence and Soft Computing. ICAISC 2016. Lecture Notes in Computer Science, vol 9692. Springer, Cham. https://doi.org/10.1007/978-3-319-39378-0_6

  • DOI: https://doi.org/10.1007/978-3-319-39378-0_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-39377-3

  • Online ISBN: 978-3-319-39378-0
