Learning in evolutive neural architectures: An ill-posed problem?

  • Conference paper
From Natural to Artificial Neural Computation (IWANN 1995)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 930)

Abstract

Evolutive architectures are networks whose topology can be modified during learning by adding or pruning neurons or connections. In this paper, through a synthesis of various works, we point out that learning in evolutive architectures involves many tricks because of the indeterminacy and suboptimality of solutions, which are characteristic of ill-posed problems. We also emphasize the interest of stopping criteria, which are essential for controlling both adding and pruning procedures and for avoiding overfitting. Finally, we suggest another formulation of learning in evolutive architectures based on more realistic “hardware” constraints.
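
To make the setting concrete, here is a minimal sketch, in Python, of the kind of grow-then-prune loop with a validation-based stopping criterion that the abstract alludes to. It is an illustrative toy (a small RBF network grown by adding one Gaussian unit at the point of largest training residual, then pruned by removing small-weight units), not the procedure proposed in the paper; the kernel width, the growth rule, and the 5% pruning tolerance are all assumptions made for exposition.

```python
# Sketch of a grow-then-prune loop with a validation-based stopping
# criterion, in the spirit of the evolutive architectures the abstract
# discusses. Illustrative toy only, NOT the paper's algorithm: the RBF
# model, kernel width, growth rule, and 5% pruning tolerance are all
# assumptions made for exposition.
import numpy as np

rng = np.random.default_rng(0)

def design(x, centers, width=0.3):
    """Gaussian kernel design matrix: one column per hidden unit."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

def fit_weights(x, y, centers):
    """Least-squares output weights for the current architecture."""
    return np.linalg.lstsq(design(x, centers), y, rcond=None)[0]

def val_error(centers, w, x, y):
    return float(np.mean((design(x, centers) @ w - y) ** 2))

# Toy data: noisy sine, split into training and validation halves.
x = np.sort(rng.uniform(-1.0, 1.0, 200))
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(200)
xt, yt, xv, yv = x[::2], y[::2], x[1::2], y[1::2]

# --- Growing: add units while the validation error keeps improving. ---
centers = np.array([xt[0]])
w = fit_weights(xt, yt, centers)
best = val_error(centers, w, xv, yv)
while centers.size < 30:
    resid = np.abs(design(xt, centers) @ w - yt)
    cand = np.append(centers, xt[np.argmax(resid)])  # grow at worst-fit point
    cw = fit_weights(xt, yt, cand)
    err = val_error(cand, cw, xv, yv)
    if err >= best:              # stopping criterion: no validation gain
        break
    centers, w, best = cand, cw, err

# --- Pruning: drop small-weight units while validation tolerates it. ---
ref = best                       # reference error from the growing phase
while centers.size > 1:
    k = int(np.argmin(np.abs(w)))                    # weakest unit
    cand = np.delete(centers, k)
    cw = fit_weights(xt, yt, cand)
    err = val_error(cand, cw, xv, yv)
    if err > 1.05 * ref:         # assumed 5% tolerance on validation error
        break
    centers, w, best = cand, cw, err

print(f"final architecture: {centers.size} hidden units, val MSE {best:.4f}")
```

The point of the sketch is the two stopping tests: without them, the growing phase would keep adding units until it fits noise, and the pruning phase would keep deleting units until the model degrades, which is exactly the control and overfitting issue the abstract raises.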

Editor information

José Mira, Francisco Sandoval

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jutten, C. (1995). Learning in evolutive neural architectures: An ill-posed problem? In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_197

  • DOI: https://doi.org/10.1007/3-540-59497-3_197

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59497-0

  • Online ISBN: 978-3-540-49288-7

  • eBook Packages: Springer Book Archive
