A Study of Ensemble of Hybrid Networks with Strong Regularization

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2709)

Abstract

We study various ensemble methods for hybrid neural networks. The hybrid networks are composed of radial and projection units and are trained using a deterministic algorithm that completely defines the parameters of the network for a given data set. Thus, there is no random selection of the initial (and final) parameters as in other training algorithms. Network independence is instead achieved by using bootstrap and boosting methods as well as random input sub-space sampling. The fusion methods are evaluated on several classification benchmark data sets. A novel MDL-based fusion method appears to reduce the variance of the classification scheme and is sometimes superior in its overall performance.
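
The construction the abstract describes can be made concrete with a short sketch. The paper's hybrid radial/projection networks and their deterministic training algorithm are not reproduced here, so the code below is a minimal illustration, assuming scikit-learn's MLPClassifier as a stand-in ensemble member and a BIC-style two-part code as the MDL score; the exp(-L) fusion weighting is a hypothetical reading of "MDL-based fusion", not the authors' exact method.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X, y = load_iris(return_X_y=True)
    n_members, n_features = 5, X.shape[1]
    k_sub = max(1, int(0.75 * n_features))  # size of each random sub-space

    members = []
    for _ in range(n_members):
        # Bootstrap resampling of the training set (bagging).
        idx = rng.integers(0, len(X), len(X))
        # Random input sub-space sampling: each member sees a feature subset.
        feats = rng.choice(n_features, size=k_sub, replace=False)
        clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                            random_state=0)
        clf.fit(X[idx][:, feats], y[idx])
        # Crude two-part description length (the BIC approximation to MDL):
        # training negative log-likelihood plus a parameter-count penalty.
        # Assumes the bootstrap sample contains every class, so the columns
        # of predict_proba line up with the label values.
        proba = clf.predict_proba(X[:, feats])
        nll = -np.log(proba[np.arange(len(y)), y] + 1e-12).sum()
        n_params = (sum(w.size for w in clf.coefs_)
                    + sum(b.size for b in clf.intercepts_))
        members.append((clf, feats, nll + 0.5 * n_params * np.log(len(y))))

    # Fuse the members' posteriors with weights that decrease in
    # description length; uniform weights would recover plain bagging.
    scores = np.array([L for (_, _, L) in members])
    weights = np.exp(-(scores - scores.min()) / (scores.std() + 1e-12))
    weights /= weights.sum()
    fused = sum(w * clf.predict_proba(X[:, feats])
                for (clf, feats, _), w in zip(members, weights))
    print("ensemble training accuracy:", (fused.argmax(1) == y).mean())

A real evaluation would of course score the fused posterior on a held-out split rather than on the training data; the point here is only the shape of the pipeline: resample, sub-sample features, train, score each member's description length, and fuse.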




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cohen, S., Intrator, N. (2003). A Study of Ensemble of Hybrid Networks with Strong Regularization. In: Windeatt, T., Roli, F. (eds) Multiple Classifier Systems. MCS 2003. Lecture Notes in Computer Science, vol 2709. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44938-8_23

  • DOI: https://doi.org/10.1007/3-540-44938-8_23

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40369-2

  • Online ISBN: 978-3-540-44938-6

  • eBook Packages: Springer Book Archive
