Abstract
Evolutive architectures are neural networks whose structure can be modified during learning by adding or pruning neurons or connections. In this paper, through a synthesis of various works, we point out that learning in evolutive architectures involves many tricks, because of the indeterminacy and suboptimality of solutions, which are characteristic of ill-posed problems. We also emphasize the importance of stopping criteria, which are essential both to control the adding and pruning procedures and to avoid overfitting. Finally, we suggest another formulation of learning in evolutive architectures, based on more realistic “hardware” constraints.
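To make the grow-then-prune idea concrete, the following is a minimal toy sketch, not the paper's algorithm: units are added greedily to a small kernel model while a held-out validation error keeps improving (the stopping criterion), then units whose removal does not hurt validation error are pruned. All names, the Gaussian kernel, the target function, and the tolerances are invented for this illustration.

```python
# Toy grow-then-prune loop with a validation-based stopping criterion.
# Purely illustrative; every design choice here is an assumption.
import math
import random

def gauss(x, c, width=0.5):
    # Gaussian bump centered at c (width chosen arbitrarily for the demo)
    return math.exp(-((x - c) / width) ** 2)

def predict(units, x):
    # units: list of (center, weight) pairs
    return sum(w * gauss(x, c) for c, w in units)

def mse(units, data):
    return sum((y - predict(units, x)) ** 2 for x, y in data) / len(data)

def grow(train, valid, max_units=20, tol=1e-4):
    """Add units greedily; stop when validation error stops improving."""
    units = []
    best = mse(units, valid)
    while len(units) < max_units:
        # place a new unit at the training point with the largest residual
        x, y = max(train, key=lambda p: abs(p[1] - predict(units, p[0])))
        units.append((x, y - predict(units, x)))
        err = mse(units, valid)
        if best - err < tol:      # stopping criterion: no real improvement
            units.pop()
            break
        best = err
    return units

def prune(units, valid, tol=1e-4):
    """Remove units whose deletion does not hurt validation error."""
    kept = list(units)
    for u in list(kept):
        trial = [v for v in kept if v is not u]
        if mse(trial, valid) <= mse(kept, valid) + tol:
            kept = trial
    return kept

random.seed(0)
xs = [random.uniform(0, 6) for _ in range(60)]
data = [(x, math.sin(x)) for x in xs]
train, valid = data[:40], data[40:]

units = prune(grow(train, valid), valid)
```

The held-out validation set plays the role of the stopping criterion discussed in the abstract: without it, both the adding loop (which can always reduce training error) and the pruning loop would have no principled point at which to stop, which is exactly the overfitting risk the paper highlights.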
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
Cite this paper
Jutten, C. (1995). Learning in evolutive neural architectures: An ill-posed problem?. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_197
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-59497-0
Online ISBN: 978-3-540-49288-7