
Parallel Computation of a New Data Driven Algorithm for Training Neural Networks

  • Conference paper
Advances in Neural Networks – ISNN 2013 (ISNN 2013)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7951)


Abstract

In contrast to earlier learning algorithms such as backpropagation (BP) and radial basis function (RBF) training, a new data-driven algorithm for training neural networks is proposed. In this data-driven methodology for training feedforward neural networks, system modeling is performed directly on the input-output data collected from real processes. To improve efficiency, a parallel computation method is introduced, and the performance of parallel computing for the new data-driven algorithm is analyzed. The results show that the parallel computing mechanism substantially increases training speed.
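To make the data-parallel idea concrete, here is a minimal Python sketch of one common way such training can be parallelized. It is not the paper's implementation: the polynomial basis, the partitioning scheme, and the worker count are all assumptions made for illustration. The input-output samples are split across worker processes, each worker computes partial normal-equation sums for a linear-in-parameters model, and the master process combines them into a single solve.

```python
# Illustrative sketch only (not the paper's algorithm): data-parallel
# least-squares training of a linear-in-parameters model. Samples are
# partitioned across processes; each worker returns small fixed-size
# partial sums, which the master combines and solves once.
import numpy as np
from multiprocessing import Pool

def partial_sums(chunk):
    """Compute partial A^T A and A^T y for one data partition."""
    X, y = chunk
    # Feature map: a simple polynomial basis stands in here for
    # whatever basis the trained model actually uses (an assumption).
    A = np.column_stack([X**k for k in range(4)])
    return A.T @ A, A.T @ y

def parallel_fit(X, y, n_workers=4):
    # Split the input-output data into one chunk per worker.
    chunks = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers)))
    with Pool(n_workers) as pool:
        results = pool.map(partial_sums, chunks)
    # Combine the partial sums and solve on the master process.
    AtA = sum(r[0] for r in results)
    Aty = sum(r[1] for r in results)
    return np.linalg.solve(AtA, Aty)

if __name__ == "__main__":
    X = np.linspace(0.0, 1.0, 10_000)
    y = np.sin(2 * np.pi * X) + 0.01 * np.random.randn(X.size)
    print(parallel_fit(X, y))
```

Because each worker returns only small fixed-size matrices, communication cost stays constant while the per-worker data scan shrinks roughly linearly with the number of workers, which illustrates the kind of training speedup the abstract describes.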





Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, D. (2013). Parallel Computation of a New Data Driven Algorithm for Training Neural Networks. In: Guo, C., Hou, ZG., Zeng, Z. (eds) Advances in Neural Networks – ISNN 2013. ISNN 2013. Lecture Notes in Computer Science, vol 7951. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39065-4_19


  • DOI: https://doi.org/10.1007/978-3-642-39065-4_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-39064-7

  • Online ISBN: 978-3-642-39065-4

  • eBook Packages: Computer Science, Computer Science (R0)
