
Pruning Neural Networks with Distribution Estimation Algorithms

  • Conference paper
Genetic and Evolutionary Computation — GECCO 2003 (GECCO 2003)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2723)


Abstract

This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. In addition to a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs offer advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feedforward neural network trained with standard backpropagation and 15 public-domain and artificial data sets. In most cases, the pruned networks appeared to match or exceed the accuracy of the original fully connected networks. We found few differences in the accuracy of the networks pruned by the four EAs, but large differences in execution time. The results suggest that a simple GA with a small population may be the best algorithm for pruning networks on the data sets we tested.
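To make the setting concrete, the sketch below shows one common way to pose connection-level pruning as an evolutionary search: a binary mask over the weights of an already trained network is evolved with a simple GA, and fitness is the masked network's accuracy on held-out data. This is a hedged illustration, not the paper's implementation; the toy network, data, and GA settings are assumptions chosen only for readability.

```python
# Minimal sketch (not the paper's implementation): prune a trained
# feedforward network by evolving a binary mask over its connections
# with a simple GA. Fitness = accuracy of the masked network on a
# held-out set. All sizes, data, and GA settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" 2-layer network (weights would come from backprop).
n_in, n_hid, n_out = 4, 6, 2
W1 = rng.normal(size=(n_in, n_hid))   # assumed pre-trained weights
W2 = rng.normal(size=(n_hid, n_out))

# Toy validation set standing in for one of the benchmark data sets.
X = rng.normal(size=(200, n_in))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def accuracy(mask):
    """Apply a 0/1 mask to every connection and measure accuracy."""
    m1 = mask[:W1.size].reshape(W1.shape)
    m2 = mask[W1.size:].reshape(W2.shape)
    h = np.tanh(X @ (W1 * m1))
    out = h @ (W2 * m2)
    return np.mean(out.argmax(axis=1) == y)

n_genes = W1.size + W2.size          # one bit per connection

def simple_ga(pop_size=20, gens=30, p_mut=1.0 / n_genes):
    pop = rng.integers(0, 2, size=(pop_size, n_genes))
    for _ in range(gens):
        fit = np.array([accuracy(ind) for ind in pop])
        # Binary tournament selection.
        parents = []
        for _ in range(pop_size):
            a, b = rng.integers(0, pop_size, size=2)
            parents.append(pop[a] if fit[a] >= fit[b] else pop[b])
        parents = np.array(parents)
        # Uniform crossover followed by bit-flip mutation.
        children = []
        for i in range(0, pop_size, 2):
            p1, p2 = parents[i], parents[(i + 1) % pop_size]
            swap = rng.random(n_genes) < 0.5
            children.append(np.where(swap, p2, p1))
            children.append(np.where(swap, p1, p2))
        pop = np.array(children[:pop_size])
        flips = rng.random(pop.shape) < p_mut
        pop = np.where(flips, 1 - pop, pop)
    fit = np.array([accuracy(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()

best_mask, best_acc = simple_ga()
print(f"kept {best_mask.sum()}/{n_genes} connections, accuracy {best_acc:.2f}")
```

A DEA such as the compact GA, extended compact GA, or BOA would keep the same bit-string encoding and fitness function but replace the selection/crossover/mutation step with building a probabilistic model of good masks and sampling new candidates from it.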





Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cantú-Paz, E. (2003). Pruning Neural Networks with Distribution Estimation Algorithms. In: Cantú-Paz, E., et al. Genetic and Evolutionary Computation — GECCO 2003. GECCO 2003. Lecture Notes in Computer Science, vol 2723. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45105-6_93


  • DOI: https://doi.org/10.1007/3-540-45105-6_93


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40602-0

  • Online ISBN: 978-3-540-45105-1

  • eBook Packages: Springer Book Archive
