RNN with a Recurrent Output Layer for Learning of Naturalness

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4984)

Abstract

The behavior of recurrent neural networks with a recurrent output layer (ROL) is described mathematically, and it is shown that using an ROL is not merely advantageous but crucial to obtaining satisfactory performance in the proposed naturalness learning. Conventional wisdom holds that an ROL often substantially degrades a network's performance or renders it unstable, so ROLs are rarely used. The objective of this paper is to demonstrate that there are cases in which an ROL is necessary. As a concrete example, the naturalness of handwritten letters is modeled.
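
To make the architecture concrete, the sketch below shows a minimal Elman-style RNN extended with a recurrent output layer: the hidden state has the usual self-recurrence, and the output layer additionally feeds its previous output back into itself. This is an illustrative assumption of what an ROL looks like; the class name, dimensions, tanh nonlinearity, and initialization are hypothetical and not taken from the paper, which also specifies its own training procedure.

```python
import numpy as np

class ROLRNN:
    """Sketch of an RNN with a recurrent output layer (ROL).

    Hypothetical illustration: hidden state h_t has standard Elman
    recurrence; output y_t also receives its own previous value y_{t-1}.
    """

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1  # small random initialization scale (assumption)
        self.W_ih = rng.normal(0, s, (n_hidden, n_in))      # input  -> hidden
        self.W_hh = rng.normal(0, s, (n_hidden, n_hidden))  # hidden -> hidden (recurrent)
        self.W_ho = rng.normal(0, s, (n_out, n_hidden))     # hidden -> output
        self.W_oo = rng.normal(0, s, (n_out, n_out))        # output -> output (the ROL feedback)
        self.b_h = np.zeros(n_hidden)
        self.b_o = np.zeros(n_out)

    def forward(self, xs):
        """Run a sequence xs of shape (T, n_in); return outputs of shape (T, n_out)."""
        h = np.zeros_like(self.b_h)
        y = np.zeros_like(self.b_o)
        ys = []
        for x in xs:
            # Standard Elman-style hidden recurrence.
            h = np.tanh(self.W_ih @ x + self.W_hh @ h + self.b_h)
            # Recurrent output layer: the previous output feeds back into the output.
            y = np.tanh(self.W_ho @ h + self.W_oo @ y + self.b_o)
            ys.append(y)
        return np.stack(ys)

# Usage sketch: e.g. pen-trajectory features in, a "naturalness" signal out
# (the feature and output dimensions here are made up for illustration).
net = ROLRNN(n_in=4, n_hidden=16, n_out=2)
outputs = net.forward(np.random.default_rng(1).normal(size=(50, 4)))
print(outputs.shape)  # (50, 2)
```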





Editor information

Masumi Ishikawa, Kenji Doya, Hiroyuki Miyamoto, Takeshi Yamakawa


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dolinský, J., Takagi, H. (2008). RNN with a Recurrent Output Layer for Learning of Naturalness. In: Ishikawa, M., Doya, K., Miyamoto, H., Yamakawa, T. (eds) Neural Information Processing. ICONIP 2007. Lecture Notes in Computer Science, vol 4984. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69158-7_27


  • DOI: https://doi.org/10.1007/978-3-540-69158-7_27

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-69154-9

  • Online ISBN: 978-3-540-69158-7

  • eBook Packages: Computer Science, Computer Science (R0)
