
A Brief Account of Statistical Theories of Learning and Generalization in Neural Networks

Chapter in: Statistical Physics, Automata Networks and Dynamical Systems

Part of the book series: Mathematics and Its Applications (MAIA, volume 75)

Abstract

A network of automata is made up of distinct units (or automata) i, i = 1, …, N. One assumes that each automaton may be in one of a (generally finite) number of internal states σ_i. At (discrete) time v, the overall state I(v) of the network is defined as the set of states taken by the units at that time: I(v) = {σ_i(v)}. The time evolution of the network is driven by a dynamics that depends on a set of parameters {J}. These parameters, which determine the structure of the network, define how the units influence one another and how information propagates through the system.
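To make this setup concrete, here is a minimal sketch in Python (an illustration, not code from the chapter). It assumes binary states σ_i ∈ {−1, +1}, a random Gaussian coupling matrix for the parameters {J}, and a synchronous sign-threshold update rule; all three are choices made for this example rather than specifics given in the abstract.

```python
import numpy as np

# Minimal network-of-automata sketch. Assumptions (not from the abstract):
# binary states sigma_i in {-1, +1}, Gaussian couplings J, and a
# synchronous sign-threshold dynamics.
rng = np.random.default_rng(0)

N = 8                                     # number of units (automata)
J = rng.normal(size=(N, N)) / np.sqrt(N)  # parameters {J}: pairwise influences
np.fill_diagonal(J, 0.0)                  # no self-coupling

sigma = rng.choice([-1, 1], size=N)       # overall state I(0) = {sigma_i(0)}

for v in range(5):                        # discrete time v
    sigma = np.sign(J @ sigma)            # each unit thresholds its weighted input
    sigma[sigma == 0] = 1                 # break ties deterministically
    print(f"I({v + 1}) = {sigma}")
```

Other dynamics fit the same framework: asynchronous (one unit at a time) updates, multi-valued states, or stochastic rules change only the update line, while the roles of the overall state I(v) and the parameters {J} stay the same.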




Copyright information

© 1992 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Peretto, P., Gordon, M., Rodriguez-Girones, M. (1992). A Brief Account of Statistical Theories of Learning and Generalization in Neural Networks. In: Goles, E., Martínez, S. (eds) Statistical Physics, Automata Networks and Dynamical Systems. Mathematics and Its Applications, vol 75. Springer, Dordrecht. https://doi.org/10.1007/978-94-011-2578-9_5

  • DOI: https://doi.org/10.1007/978-94-011-2578-9_5

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-94-010-5137-8

  • Online ISBN: 978-94-011-2578-9
