Trained Neural Networks Ensembles Weight Connections Analysis

  • Conference paper
Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 723))

Abstract

The randomization-based ensemble creation technique known as Bagging is widely used to construct trained ensembles of base classifiers. The computational power of Neural Networks (NNs) is well established in both research and applications, and it is the weight connections of a trained NN that give the model its ability to perform efficiently. This paper analyzes the weight connections of a trained ensemble of NNs and investigates their statistical distribution, presenting a framework that estimates the best-fit parametric distribution for the weights. To the best of our knowledge, this is the first attempt to explore and analyze the weight-connection distribution of a trained NN ensemble. The results show that the t location-scale distribution is approximately the best fit to the weights of the trained NN ensemble; in future work we aim to exploit this outcome by drawing weight-connection values from the estimated best-fit distribution instead of training each classifier from scratch.
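The framework described above can be sketched as follows. This is a minimal illustration of my reading of the abstract, not the authors' code: the pooled weight connections of a trained ensemble are stood in for by synthetic heavy-tailed data, several candidate parametric distributions are fitted by maximum likelihood, and the best fit is selected by AIC (the criterion of Akaike, 1974). The candidate set and the synthetic data are assumptions for illustration.

```python
# Hypothetical sketch of the paper's distribution-fitting framework.
# Synthetic Student's-t draws stand in for the pooled weight connections
# of a trained NN ensemble; real use would pool weights from bagged NNs.
import numpy as np
from scipy import stats

# Stand-in for pooled trained weights: genuinely heavy-tailed data.
weights = stats.t.rvs(df=5, loc=0.0, scale=0.1, size=5000, random_state=0)

# Candidate parametric families (scipy's stats.t with loc/scale IS the
# t location-scale distribution reported as best fit in the paper).
candidates = {
    "normal": stats.norm,
    "t location-scale": stats.t,
    "logistic": stats.logistic,
}

def aic(dist, data):
    """Fit dist by maximum likelihood and return its AIC = 2k - 2*logL."""
    params = dist.fit(data)
    log_likelihood = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * log_likelihood

scores = {name: aic(dist, weights) for name, dist in candidates.items()}
best = min(scores, key=scores.get)   # lowest AIC wins
```

With heavy-tailed input the t location-scale family should achieve the lowest AIC despite its extra degrees-of-freedom parameter; swapping `weights` for the flattened weight matrices of actual bagged NNs reproduces the analysis the abstract describes.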



Corresponding author

Correspondence to Muhammad Atta Othman Ahmed.

Copyright information

© 2018 Springer International Publishing AG

About this paper

Cite this paper

Ahmed, M.A.O. (2018). Trained Neural Networks Ensembles Weight Connections Analysis. In: Hassanien, A., Tolba, M., Elhoseny, M., Mostafa, M. (eds) The International Conference on Advanced Machine Learning Technologies and Applications (AMLTA2018). AMLTA 2018. Advances in Intelligent Systems and Computing, vol 723. Springer, Cham. https://doi.org/10.1007/978-3-319-74690-6_24

  • DOI: https://doi.org/10.1007/978-3-319-74690-6_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-74689-0

  • Online ISBN: 978-3-319-74690-6
