Generalization of Elman networks

  • Part III: Learning: Theory and Algorithms
  • Conference paper

Artificial Neural Networks — ICANN'97 (ICANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1327)

Included in the conference series: ICANN

Abstract

The Vapnik-Chervonenkis dimension of Elman networks is infinite. Here we present constructions that yield lower bounds on the fat-shattering dimension which are linear, respectively of order log², in the input length, even in the case of bounded weights and inputs. Since finiteness of this quantity is equivalent to learnability, there is no a priori guarantee for the generalization capability of Elman networks.
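For orientation, a minimal sketch of the two objects involved, using standard learning-theory notation that is assumed here rather than taken from the paper: an Elman (simple recurrent) network updates a hidden state h_t from an input sequence x_1, \dots, x_T via

  h_t = \sigma(W_{\mathrm{in}} x_t + W_{\mathrm{rec}} h_{t-1} + b), \qquad y_t = \sigma(W_{\mathrm{out}} h_t), \qquad t = 1, \dots, T,

with the same weight matrices reused at every time step, so one fixed network processes inputs of arbitrary length T. For a class F of real-valued functions and a scale \gamma > 0, a set \{x_1, \dots, x_d\} is \gamma-shattered by F if there exist thresholds r_1, \dots, r_d such that for every sign pattern b \in \{-1, +1\}^d some f \in F satisfies b_i (f(x_i) - r_i) \ge \gamma for all i; the fat-shattering dimension \mathrm{fat}_F(\gamma) is the largest such d. Finiteness of \mathrm{fat}_F(\gamma) at every scale \gamma is the property that characterizes learnability, which is why lower bounds growing with the input length rule out length-independent generalization guarantees.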

Author information

Authors: B. Hammer

Editor information

Editors: Wulfram Gerstner, Alain Germond, Martin Hasler, Jean-Daniel Nicoud

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hammer, B. (1997). Generalization of Elman networks. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, JD. (eds) Artificial Neural Networks — ICANN'97. ICANN 1997. Lecture Notes in Computer Science, vol 1327. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0020189

  • DOI: https://doi.org/10.1007/BFb0020189

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63631-1

  • Online ISBN: 978-3-540-69620-9

  • eBook Packages: Springer Book Archive
