
Speed up training of the recurrent neural network based on constrained optimization techniques

Regular Papers · Journal of Computer Science and Technology

Abstract

In this paper, a constrained optimization technique is explored for a substantial problem: accelerating the training of the globally recurrent neural network. Unlike most previous methods, which target feedforward neural networks, the authors adopt a constrained optimization technique to improve the gradient-based training algorithm of the globally recurrent neural network by adapting the learning rate during training. Using the recurrent network with the improved algorithm, experiments have been performed on two real-world problems: filtering additive noise from acoustic data and classifying temporal signals for speaker identification. The experimental results show that the recurrent neural network with the improved learning algorithm trains significantly faster while achieving satisfactory performance.
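The paper's exact constrained-optimization formulation is not reproduced on this page, so the following is only a minimal sketch of the general idea the abstract describes: a gradient-based training loop for a small fully recurrent network in which the learning rate is re-selected at every epoch by a one-dimensional search over an admissible (constrained) interval. The toy denoising task, the candidate learning rates, and the bound ETA_MAX are illustrative assumptions, not values from the paper.

```python
# Sketch only: adaptive learning rate via a constrained 1-D search,
# applied to a tiny fully recurrent network on a toy denoising task.
# The constraint set, candidate rates, and task are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy task loosely modelled on the paper's first experiment:
# recover a clean sine wave from a noisy version of it.
T = 200
clean = np.sin(0.1 * np.arange(T))
noisy = clean + 0.3 * rng.standard_normal(T)

H = 8                                  # hidden units
W_in = 0.1 * rng.standard_normal((H, 1))
W_rec = 0.1 * rng.standard_normal((H, H))
W_out = 0.1 * rng.standard_normal(H)

def loss_of(W_in, W_rec, W_out):
    """Run the recurrent net over the whole sequence; return mean squared error."""
    h = np.zeros(H)
    y = np.empty(T)
    for t in range(T):
        h = np.tanh(W_in[:, 0] * noisy[t] + W_rec @ h)
        y[t] = W_out @ h
    return 0.5 * np.mean((y - clean) ** 2)

def numerical_grads(params, eps=1e-5):
    """Central-difference gradients: slow, but keeps the sketch dependency-free."""
    grads = []
    for P in params:
        g = np.zeros_like(P)
        for idx in np.ndindex(P.shape):
            old = P[idx]
            P[idx] = old + eps
            lp = loss_of(*params)
            P[idx] = old - eps
            lm = loss_of(*params)
            P[idx] = old
            g[idx] = (lp - lm) / (2 * eps)
        grads.append(g)
    return grads

ETA_MAX = 1.0                          # constraint: admissible rates lie in (0, ETA_MAX]
candidates = [0.01, 0.05, 0.1, 0.5, ETA_MAX]

for epoch in range(20):
    params = [W_in, W_rec, W_out]
    base = loss_of(*params)
    grads = numerical_grads(params)
    # Constrained 1-D search: among the admissible learning rates,
    # keep the one whose gradient step reduces the loss the most.
    best_eta, best_loss = 0.0, base
    for eta in candidates:
        trial = [P - eta * G for P, G in zip(params, grads)]
        if loss_of(*trial) < best_loss:
            best_eta, best_loss = eta, loss_of(*trial)
    for P, G in zip(params, grads):
        P -= best_eta * G              # in-place update keeps the global weights current
    print(f"epoch {epoch:2d}  loss {base:.5f}  chosen eta {best_eta:.2f}")
```

The point of the sketch is the constrained search step: instead of a fixed global learning rate, each epoch uses the admissible step size that most reduces the loss, which is the kind of adaptive-rate behaviour the abstract attributes to the improved algorithm.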



Author information

Additional information

This work was partially supported by the National Natural Science Foundation of China under Grant 69571002 and by the Climbing Program (National Key Project for Fundamental Research in China) under Grant NSC 92097.

Chen Ke received his B.S. and M.S. degrees in Computer Science from Nanjing University in 1984 and 1987, respectively, and his Ph.D. degree in Computer Science and Engineering from Harbin Institute of Technology in 1990. From 1990 to 1992 he was a postdoctoral fellow at Tsinghua University. During 1992–1993 he held a postdoctoral fellowship from the Japan Society for the Promotion of Science and worked at the Kyushu Institute of Technology in Japan. He is currently an Associate Professor of Information Science at Peking University. His current research interests are neural computation and its applications to machine perception. Dr. Chen is a member of INNS and a senior member of CEI.

Bao Weiquan received his B.S. degree in Electrical Engineering from Peking University in 1994. He is currently a postgraduate student at Peking University. His research interests are speech signal processing and neural computation.

Chi Huisheng graduated from the Department of Radioelectronics of Peking University in 1964 (six-year system) and has worked at the university ever since. His major research interests are satellite communications, digital communications and speech signal processing. In recent years, the research projects he has conducted have involved neural network auditory models and speaker identification systems. He is now a Vice President and a Professor of Peking University. Prof. Chi is a Senior Member of IEEE, a Member of INNS, a member of the appraisal groups of NSFC and SEC, a Fellow of CEI and CIC, and a Vice Chairman of CNNC and CAGIS.


About this article

Cite this article

Chen, K., Bao, W. & Chi, H. Speed up training of the recurrent neural network based on constrained optimization techniques. J. of Comput. Sci. & Technol. 11, 581–588 (1996). https://doi.org/10.1007/BF02951621

