
A New Adaptive Strategy for Pruning and Adding Hidden Neurons during Training Artificial Neural Networks

  • Conference paper
  • 1769 Accesses

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 5326)

Abstract

This paper presents a new strategy for designing artificial neural networks (ANNs), called the adaptive merging and growing strategy (AMGS). Unlike most previous strategies for designing ANNs, AMGS emphasizes autonomous functioning in the design process. The strategy reduces or increases an ANN's size during training based on the learning ability of its hidden neurons and on the training progress of the network, respectively: it merges correlated hidden neurons to shrink the network and splits existing hidden neurons to grow it. AMGS has been tested on designing ANNs for five benchmark classification problems: Australian credit card assessment, diabetes, heart, iris, and thyroid. The experimental results show that the proposed strategy can design compact ANNs with good generalization ability.
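The merge and split operations outlined in the abstract can be illustrated with a minimal sketch. This is not the paper's actual AMGS procedure: the correlation threshold, the weight-folding merge rule, and the (1+α)/−α split (a common cell-division scheme for growing networks) are assumptions chosen so that both operations approximately preserve the network's output, and all function names are our own.

```python
import numpy as np

def merge_most_correlated(W_in, W_out, H, threshold=0.95):
    """Merge the most activation-correlated pair of hidden neurons, if any.

    W_in  : (n_hidden, n_in)      input-to-hidden weights
    W_out : (n_out, n_hidden)     hidden-to-output weights
    H     : (n_samples, n_hidden) hidden activations over the training set

    Assumed merge rule: keep neuron i, fold neuron j's outgoing weights
    into i's, and delete j. When h_i ~ +/- h_j, the network output is
    approximately preserved.
    """
    corr = np.corrcoef(H, rowvar=False)          # pairwise activation correlation
    np.fill_diagonal(corr, 0.0)
    i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    if abs(corr[i, j]) < threshold:
        return W_in, W_out                       # no pair is correlated enough
    W_out = W_out.copy()
    W_out[:, i] += np.sign(corr[i, j]) * W_out[:, j]
    keep = [k for k in range(H.shape[1]) if k != j]
    return W_in[keep], W_out[:, keep]

def split_neuron(W_in, W_out, i, alpha=0.4):
    """Split hidden neuron i into two children that share its incoming
    weights; the outgoing weights become (1+alpha)*w and -alpha*w, so
    their summed contribution -- and hence the network output -- is
    unchanged. In practice the incoming weights would also be perturbed
    slightly to break the symmetry between the two children."""
    W_in_new = np.vstack([W_in, W_in[i:i + 1]])           # child copies inputs
    W_out_new = np.hstack([W_out, -alpha * W_out[:, i:i + 1]])
    W_out_new[:, i] *= (1 + alpha)
    return W_in_new, W_out_new
```

A training loop would alternate these two moves: merge when a pair of hidden neurons becomes redundant (highly correlated activations), split when training progress stalls and more capacity is needed.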




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Islam, M.M., Sattar, M.A., Amin, M.F., Murase, K. (2008). A New Adaptive Strategy for Pruning and Adding Hidden Neurons during Training Artificial Neural Networks. In: Fyfe, C., Kim, D., Lee, SY., Yin, H. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2008. IDEAL 2008. Lecture Notes in Computer Science, vol 5326. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-88906-9_6


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-88905-2

  • Online ISBN: 978-3-540-88906-9

