A Review of the Theory and Methods of Newly Developed Feedforward Neural Networks

  • Daiyuan Zhang
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1074)

Abstract

Two theories and methods for feedforward neural networks developed in recent years are systematically reviewed: the algebraic algorithm and the spline weight function algorithm. The network structure, underlying theory and algorithm, and basic performance of the two networks are described. The algebraic algorithm can attain the global optimum, which is difficult for the BP algorithm, and it offers high accuracy and fast training. For engineering applications, the algebraic algorithm provides a formula that determines the number of hidden-layer neurons exactly; the BP algorithm offers no such formula. However, like the traditional BP algorithm, the algebraic algorithm yields constant weights, which makes it difficult to reflect the information contained in the training samples. The spline weight function algorithm retains the advantages of the algebraic algorithm while overcoming this shortcoming: each weight of the trained network is a function of the input sample (called a weight function) composed of spline functions, rather than a constant as in traditional methods. The trained weight function can extract useful information features from the training samples. The topology of the spline weight function network is very simple: the number of its neurons is independent of the number of samples and equals the number of input and output nodes. Computation is fast, and the generalization ability of the network improves as the number of samples increases.
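The contrast drawn above between the algebraic algorithm's exact weight solution and BP's iterative descent can be made concrete. The paper's actual algebraic algorithm, including its formula for the number of hidden-layer neurons, is not reproduced in this abstract, so the following Python sketch only illustrates the general idea it alludes to: with the hidden layer fixed, the output weights of a single-hidden-layer network are the global optimum of a linear least-squares problem and can be obtained in one algebraic step. The function and parameter names and the choice of tanh activation are illustrative assumptions, not the paper's notation.

    import numpy as np

    def algebraic_train(X, Y, n_hidden, seed=0):
        """Solve the output weights of a single-hidden-layer network
        algebraically (one least-squares solve) instead of iterating BP.
        X: (n_samples, n_in) inputs; Y: (n_samples, n_out) targets.
        Illustrative stand-in only: Zhang's algebraic algorithm and its
        exact hidden-neuron formula are given in the paper, not here."""
        rng = np.random.default_rng(seed)
        # Hidden-layer parameters are fixed, not trained by descent.
        W_in = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W_in + b)                 # hidden activations
        # Output weights: the global optimum of min ||H @ W_out - Y||^2.
        W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)
        return W_in, b, W_out

    def predict(model, X):
        W_in, b, W_out = model
        return np.tanh(X @ W_in + b) @ W_out

Fitting, say, noisy samples of sin(x) this way takes a single solve, with no learning rate or epoch count to tune, which is the sense in which an algebraic solution sidesteps BP's iterative search and its local minima for the subproblem being solved.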
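The spline weight function idea, in which a trained weight is a function w(x) of the input sample rather than a constant, can likewise be sketched. The reading below, with one input node, one output node, and output y = w(x)·x, is an assumption made purely for illustration, as are the y_i/x_i construction and the requirement that the x_i be distinct and nonzero; the full algorithm is developed in the paper.

    import numpy as np
    from scipy.interpolate import CubicSpline

    class SplineWeightNeuron:
        """Single-input, single-output neuron whose weight is a cubic
        spline function w(x) of the input sample, not a constant.
        Hypothetical reading of the abstract: output y = w(x) * x, so
        training reduces to interpolating the points (x_i, y_i / x_i).
        Assumes distinct, nonzero x_i."""

        def fit(self, x, y):
            x = np.asarray(x, dtype=float)
            y = np.asarray(y, dtype=float)
            order = np.argsort(x)        # CubicSpline needs increasing x
            self.w = CubicSpline(x[order], (y / x)[order])
            return self

        def predict(self, x):
            x = np.asarray(x, dtype=float)
            return self.w(x) * x         # weight function times input

Note how even this toy version echoes the abstract's claims: the network has exactly one neuron per input and output node regardless of sample count, and adding samples refines the spline w(x) rather than enlarging the network, which is the stated source of its improving generalization.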

Keywords

Feedforward neural network · Algebraic algorithm · Spline weight function algorithm

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. College of Computer, Nanjing University of Posts and Telecommunications, Nanjing, People's Republic of China