Abstract
A network of automata is made up of distinct units (or automata) i, i = 1,…,N. One assumes that an automaton may be in one of a (generally finite) number of internal states σ_i. At (discrete) time v, the overall state I(v) of the network is defined as the set of states that the units take at that time: I(v) = {σ_i(v)}. The time evolution of the network is driven by a dynamics that depends on a set of parameters {J}. These parameters, which determine the structure of the network, define how the units influence one another and how information propagates through the system.
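The model described above can be sketched in a few lines of code. This is a minimal illustration under assumed details not fixed by the abstract: binary states σ_i ∈ {−1, +1}, couplings J drawn at random, and a synchronous threshold update rule; the chapter itself considers more general dynamics.

```python
import numpy as np

# Minimal sketch (assumptions: binary states, random couplings {J},
# synchronous threshold dynamics). sigma holds the global state
# I(v) = {sigma_i(v)} of the N units at time v.
rng = np.random.default_rng(0)
N = 8
J = rng.normal(size=(N, N))          # coupling parameters {J}: the network structure
np.fill_diagonal(J, 0.0)             # no self-coupling (assumption)
sigma = rng.choice([-1, 1], size=N)  # initial global state I(0)

def step(sigma, J):
    """One synchronous update: sigma_i(v+1) = sign(sum_j J_ij sigma_j(v))."""
    h = J @ sigma                    # local field felt by each unit
    return np.where(h >= 0, 1, -1)

for v in range(5):                   # iterate the dynamics for a few time steps
    sigma = step(sigma, J)
```

Here the matrix J plays the role of the parameters {J}: changing its entries changes how the units influence one another, and learning amounts to adjusting J so that the dynamics realizes a desired input-output behavior.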
Copyright information
© 1992 Springer Science+Business Media Dordrecht
Citation
Peretto, P., Gordon, M., Rodriguez-Girones, M. (1992). A Brief Account of Statistical Theories of Learning and Generalization in Neural Networks. In: Goles, E., Martínez, S. (eds) Statistical Physics, Automata Networks and Dynamical Systems. Mathematics and Its Applications, vol 75. Springer, Dordrecht. https://doi.org/10.1007/978-94-011-2578-9_5
Print ISBN: 978-94-010-5137-8
Online ISBN: 978-94-011-2578-9