Time Window Width Influence on Dynamic BPTT(h) Learning Algorithm Performances: Experimental Study

  • Conference paper
Artificial Neural Networks – ICANN 2006 (ICANN 2006)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4131)


Abstract

This paper studies the influence of the time window width in dynamic truncated BackPropagation Through Time, BPTT(h), learning algorithms. Statistical experiments based on the identification of a real biped robot balancing mechanism are carried out to reveal the link between the window width and the stability, speed, and accuracy of learning. The choice of window width is shown to be crucial for the convergence speed of the learning process and for the generalization ability of the network. Furthermore, particular attention is paid to a divergence problem (gradient blow-up) observed under the assumption that the network parameters are constant along the window. The limit of this assumption is demonstrated, and the storage of the parameters' evolution, used as a solution to this problem, is detailed.
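The two quantities the abstract revolves around, the truncation window h and the gradient blow-up it can trigger, can be illustrated with a minimal scalar recurrence. This is a hedged sketch, not the paper's robot model or its network: the recurrence, function names, and parameter values below are all hypothetical, chosen only to show how widening the BPTT(h) window amplifies the backpropagated gradient when the recurrent weight exceeds one in magnitude.

```python
def run_recurrence(w, xs, s0=0.0):
    """Forward pass of a toy scalar linear recurrence s[t] = w*s[t-1] + x[t]."""
    states = [s0]
    for x in xs:
        states.append(w * states[-1] + x)
    return states

def tbptt_grad_w(w, states, h):
    """Gradient ds_T/dw truncated to the last h steps (BPTT(h)):
    the state at the window boundary is treated as a constant.
    Unrolling ds_T/dw = s[T-1] + w * ds_[T-1]/dw down to depth h gives
    sum_{k=0}^{h-1} w**k * s[T-1-k]; each extra step multiplies the
    backpropagated signal by w, so for |w| > 1 the gradient grows
    with the window width h (the blow-up the paper discusses)."""
    T = len(states) - 1
    g = 0.0
    for k in range(min(h, T)):
        g += (w ** k) * states[T - 1 - k]
    return g

# A wider window amplifies the gradient when |w| > 1:
states = run_recurrence(1.2, [1.0] * 20)
g_short = tbptt_grad_w(1.2, states, 2)   # narrow window
g_long = tbptt_grad_w(1.2, states, 10)   # wide window
```

With `w = 1.2`, `g_long` is far larger than `g_short`, which mirrors the trade-off studied in the paper: a wider window uses more temporal context but risks divergence of the learning process.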




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Scesa, V., Henaff, P., Ouezdou, F.B., Namoun, F. (2006). Time Window Width Influence on Dynamic BPTT(h) Learning Algorithm Performances: Experimental Study. In: Kollias, S.D., Stafylopatis, A., Duch, W., Oja, E. (eds) Artificial Neural Networks – ICANN 2006. ICANN 2006. Lecture Notes in Computer Science, vol 4131. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11840817_10

  • DOI: https://doi.org/10.1007/11840817_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-38625-4

  • Online ISBN: 978-3-540-38627-8

  • eBook Packages: Computer Science (R0)
