© 2000

Learning with recurrent neural networks

  • Authors: Barbara Hammer


  • The book details folding networks, a new approach that enables neural networks to deal with symbolic data

  • It presents both practical applications and a precise theoretical foundation


Part of the Lecture Notes in Control and Information Sciences book series (LNCIS, volume 254)

Table of contents

  1. Front Matter
    Pages I-X
  2. Barbara Hammer
    Pages 1-4
  3. Barbara Hammer
    Pages 5-18
  4. Barbara Hammer
    Pages 19-49
  5. Barbara Hammer
    Pages 51-101
  6. Barbara Hammer
    Pages 103-131
  7. Barbara Hammer
    Pages 133-135
  8. Back Matter
    Pages 137-150

About this book


Folding networks, a generalisation of recurrent neural networks to tree-structured inputs, are investigated as a mechanism to learn regularities on, for example, classical symbolic data. The architecture, the training mechanism, and several applications in different areas are explained. Afterwards a theoretical foundation is presented, proving that the approach is appropriate as a learning mechanism in principle: The universal approximation ability is investigated, including several new results for standard recurrent neural networks such as explicit bounds on the required number of neurons and the super-Turing capability of sigmoidal recurrent networks. The information-theoretical learnability is examined, including several contributions to distribution-dependent learnability, an answer to an open question posed by Vidyasagar, and a generalisation of the recent luckiness framework to function classes. Finally, the complexity of training is considered, including new results on the loading problem for standard feedforward networks with an arbitrary multilayered architecture, a correlated number of neurons and training-set size, a varying number of hidden neurons but fixed input dimension, or the sigmoidal activation function, respectively.
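The core idea described above, encoding a tree bottom-up with one shared network step and reading the result off at the root, can be illustrated with a minimal sketch. This is not the book's own formulation or code; all names, dimensions, and the fixed fan-out are illustrative assumptions.

```python
import numpy as np

# Hypothetical minimal folding (recursive) network, assuming a fixed maximum
# fan-out k: each node's code is computed from its label and its children's
# codes by one shared feedforward step; the empty tree gets a fixed code.
rng = np.random.default_rng(0)

LABEL_DIM = 3   # dimension of node labels (illustrative)
CODE_DIM = 4    # dimension of the recursively computed codes
FANOUT = 2      # assumed maximum number of children k

# Shared weights: code = tanh(W_label @ label + sum_i W_child[i] @ code_i + b)
W_label = rng.normal(size=(CODE_DIM, LABEL_DIM))
W_child = rng.normal(size=(FANOUT, CODE_DIM, CODE_DIM))
b = rng.normal(size=CODE_DIM)
w_out = rng.normal(size=CODE_DIM)

EMPTY = np.zeros(CODE_DIM)  # fixed code assigned to the empty tree

def encode(tree):
    """Encode a tree (label, [children]) into a CODE_DIM vector, bottom-up."""
    if tree is None:
        return EMPTY
    label, children = tree
    pre = W_label @ np.asarray(label) + b
    for i in range(FANOUT):
        child = children[i] if i < len(children) else None
        pre += W_child[i] @ encode(child)
    return np.tanh(pre)

def predict(tree):
    """Map the root code to a scalar output, e.g. a class score."""
    return float(w_out @ encode(tree))

# A small binary tree over 3-dimensional labels:
leaf = ([1.0, 0.0, 0.0], [])
tree = ([0.0, 1.0, 0.0], [leaf, ([0.0, 0.0, 1.0], [leaf, None])])
score = predict(tree)
```

Because the same weights are applied at every node, trees of arbitrary size and shape map to a fixed-dimensional code, which is what lets standard gradient-based training and the approximation and learnability analyses apply to symbolic, tree-structured inputs.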


Keywords: Approximation capability; Folding networks; Learnability; Artificial intelligence; Artificial neural networks; Neural networks

Bibliographic information

  • Book Title Learning with recurrent neural networks
  • Authors Barbara Hammer
  • Series Title Lecture Notes in Control and Information Sciences
  • Series Abbreviated Title Lect Notes Control Inf Sci
  • DOI
  • Copyright Information Springer-Verlag London 2000
  • Publisher Name Springer, London
  • eBook Packages Springer Book Archive
  • Softcover ISBN 978-1-85233-343-0
  • eBook ISBN 978-1-84628-567-7
  • Series ISSN 0170-8643
  • Series E-ISSN 1610-7411
  • Edition Number 1
  • Number of Pages X, 150
  • Number of Illustrations 0 b/w illustrations, 0 illustrations in colour
  • Topics Control, Robotics, Mechatronics