Convergence rate of minimization learning for neural networks
In this paper, we present the convergence rate of the error in a neural network trained by a constructive method. The constructive method trains the network by successively adding hidden units. The main idea of this work is to find the eigenvalues of the transformation matrix relating the error before and after a hidden unit is added. Using these eigenvalues, we show the relation between the convergence rates of networks with and without thresholds in the output layer.
Keywords: Neural Network, Hidden Layer, Convergence Rate, Output Layer, Connection Weight
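The idea in the abstract can be illustrated numerically: if a linear map T carries the error vector before a hidden unit is added to the error vector afterwards, then the eigenvalues of T govern how fast the error shrinks. The sketch below is purely illustrative (the matrix T is a made-up contraction, not the paper's construction); it checks that the error norm decays at least as fast as the spectral radius predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a symmetric matrix so its 2-norm equals its spectral radius,
# then rescale it to have spectral radius 0.5 (an assumed contraction;
# the actual T in the paper depends on the learning rule).
S = rng.standard_normal((4, 4))
S = (S + S.T) / 2
T = 0.5 * S / np.max(np.abs(np.linalg.eigvalsh(S)))

rho = np.max(np.abs(np.linalg.eigvalsh(T)))  # spectral radius of T

e = np.ones(4)                # error vector before any unit is added
e0_norm = np.linalg.norm(e)
for k in range(1, 21):
    e = T @ e                 # error after adding the k-th hidden unit
    # for symmetric T, ||T^k e0|| <= rho^k ||e0||
    assert np.linalg.norm(e) <= rho**k * e0_norm + 1e-12
```

In this toy setting the error norm is bounded by rho**k times the initial norm, which is the sense in which the eigenvalues determine the convergence rate of the constructive learning process.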