Abstract
Training first-order recurrent neural networks to predict symbol sequences from context-free or context-sensitive languages is known to be a hard task. A prototype software system has been implemented that can train these networks and evaluate their performance after training. A special version of the (1+1)-ES algorithm is employed that allows both incremental and non-incremental training. The system provides advanced analysis tools that take into account not only the final solution but the whole sequence of intermediate solutions. For each of these solutions, a qualitative analysis of hidden unit activity and a quantitative evaluation of generalisation ability can be performed.
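The training scheme the abstract describes can be sketched in a few lines. The snippet below is an illustrative assumption, not the paper's implementation: it pairs a minimal (1+1)-ES hill climber (one parent, one Gaussian-mutated offspring per step, accept if no worse) with a small Elman-style first-order recurrent network predicting the next symbol of a^n b^n c^n sequences. All names, the hidden-layer size, and the mutation strength `sigma` are hypothetical choices.

```python
import numpy as np

def make_sequence(n):
    # a^n b^n c^n encoded as integers: a=0, b=1, c=2
    return [0] * n + [1] * n + [2] * n

def one_hot(i, k=3):
    v = np.zeros(k)
    v[i] = 1.0
    return v

def rnn_loss(params, seqs, hidden=5, k=3):
    # Unpack one flat parameter vector into Elman-style RNN weights:
    # W maps (input + recurrent state + bias) -> hidden,
    # V maps (hidden + bias) -> output logits.
    i1 = hidden * (k + hidden + 1)
    W = params[:i1].reshape(hidden, k + hidden + 1)
    V = params[i1:].reshape(k, hidden + 1)
    loss, count = 0.0, 0
    for seq in seqs:
        h = np.zeros(hidden)
        for t in range(len(seq) - 1):
            x = np.concatenate([one_hot(seq[t], k), h, [1.0]])
            h = np.tanh(W @ x)
            o = V @ np.concatenate([h, [1.0]])
            p = np.exp(o - o.max()); p /= p.sum()   # softmax over next symbol
            loss -= np.log(p[seq[t + 1]] + 1e-12)   # cross-entropy on the prediction
            count += 1
    return loss / count

def one_plus_one_es(loss_fn, dim, sigma=0.1, steps=2000, seed=0):
    # (1+1)-ES: a single parent, one mutant per generation,
    # keep the mutant whenever it is not worse than the parent.
    rng = np.random.default_rng(seed)
    parent = rng.normal(0.0, 0.5, dim)
    best = loss_fn(parent)
    for _ in range(steps):
        child = parent + rng.normal(0.0, sigma, dim)
        f = loss_fn(child)
        if f <= best:
            parent, best = child, f
    return parent, best
```

Incremental training, as used in the paper, would present short sequences first and grow `n` over the run; the non-incremental variant simply trains on the full set of sequences from the start.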
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Chalup, S.K., Blair, A.D. (2002). Software for Analysing Recurrent Neural Nets That Learn to Predict Non-regular Languages. In: Adriaans, P., Fernau, H., van Zaanen, M. (eds.) Grammatical Inference: Algorithms and Applications. ICGI 2002. Lecture Notes in Computer Science, vol. 2484. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45790-9_25
Print ISBN: 978-3-540-44239-4
Online ISBN: 978-3-540-45790-9