Abstract
The behavior of recurrent neural networks with a recurrent output layer (ROL) is described mathematically, and it is shown that using an ROL is not merely advantageous but essential to obtaining satisfactory performance for the proposed naturalness learning. Conventional belief holds that an ROL often substantially degrades a network's performance or renders it unstable, and the ROL is consequently rarely used. The objective of this paper is to demonstrate that there are cases where an ROL is necessary. The concrete example models naturalness in handwritten letters.
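To make the architecture concrete, the following is a minimal sketch of a single time step of an RNN whose output layer feeds its previous output back into itself. All names, dimensions, and the tanh nonlinearity are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
n_in, n_hid, n_out = 3, 8, 2

# Small random weights; variable names are assumptions for this sketch
W_xh = rng.normal(scale=0.3, size=(n_hid, n_in))   # input  -> hidden
W_hh = rng.normal(scale=0.3, size=(n_hid, n_hid))  # hidden -> hidden (standard recurrence)
W_hy = rng.normal(scale=0.3, size=(n_out, n_hid))  # hidden -> output
W_yy = rng.normal(scale=0.3, size=(n_out, n_out))  # output -> output (the ROL feedback)

def step(x, h_prev, y_prev):
    """One time step; the output depends on the previous output, not only on the hidden state."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)
    y = np.tanh(W_hy @ h + W_yy @ y_prev)  # recurrent output layer
    return h, y

# Drive the network with a short random input sequence
h = np.zeros(n_hid)
y = np.zeros(n_out)
for x in rng.normal(size=(5, n_in)):
    h, y = step(x, h, y)

print(y.shape)
```

Without the `W_yy @ y_prev` term this reduces to an ordinary Elman-style RNN; the extra output-to-output weights are what make the output layer itself recurrent.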
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Dolinský, J., Takagi, H. (2008). RNN with a Recurrent Output Layer for Learning of Naturalness. In: Ishikawa, M., Doya, K., Miyamoto, H., Yamakawa, T. (eds) Neural Information Processing. ICONIP 2007. Lecture Notes in Computer Science, vol 4984. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69158-7_27
Print ISBN: 978-3-540-69154-9
Online ISBN: 978-3-540-69158-7