
Part of the book series: Studies in Fuzziness and Soft Computing (STUDFUZZ, volume 152)


Summary

Classifier ensembles, or multiple classifier systems, have been established in the literature as a means of achieving higher classification accuracy by combining the decisions of several classifiers. This interest has been marked by the introduction of a variety of combining methods that improve overall accuracy. However, it has been noted that for a multiple classifier approach to be useful, its members should demonstrate error independence. The ideal ensemble is a set of classifiers that exhibit no coincident errors: each classifier generalizes well, and the errors it does make on the test set are not shared with any other classifier. Such independence is generally pursued by training each member in isolation while manipulating the training data. Although these approaches have been shown to be useful, they may not be sufficient. In this work we present an algorithm that trains the members of an ensemble concurrently. The algorithm is applied both to an ensemble based on the weighted average and to the feature-based aggregation architecture. An empirical evaluation shows a reduction in the number of training cycles when the algorithm is applied to the overall architecture, while maintaining the same or improved performance. These approaches are also compared with standard approaches proposed in the literature. The results substantiate the use of adaptive training for both the ensemble and the aggregation architecture.
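The summary describes, but does not detail, the concurrent training procedure. As an illustrative sketch only, and not the authors' algorithm, the fragment below shows one way adaptive, concurrent training of a weighted-average ensemble might look: each epoch, examples that the combined ensemble misclassifies are re-emphasised in every member's update, targeting error independence directly instead of training members in isolation. The data, feature subsets, and re-weighting rules here are all assumptions made for the example.

```python
# Hypothetical sketch of concurrent adaptive ensemble training; this is
# NOT the chapter's algorithm, only one plausible instance of the idea.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Member:
    """One ensemble member: a logistic classifier on a feature subset."""
    def __init__(self, features):
        self.features = list(features)
        self.w = np.zeros(len(self.features) + 1)   # weights + bias

    def predict(self, X):
        Xs = np.c_[X[:, self.features], np.ones(len(X))]
        return sigmoid(Xs @ self.w)

    def update(self, X, y, sample_weight, lr=0.5):
        # Weighted gradient step: emphasised examples count for more.
        Xs = np.c_[X[:, self.features], np.ones(len(X))]
        grad = Xs.T @ (sample_weight * (self.predict(X) - y)) / len(y)
        self.w -= lr * grad

# Synthetic two-class problem (stand-in for the paper's benchmarks).
X = rng.normal(size=(400, 6))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(float)

# Three members on overlapping feature subsets (an assumed decomposition).
members = [Member([0, 1]), Member([1, 2]), Member([0, 2, 3])]
alpha = np.ones(len(members)) / len(members)        # combination weights

for epoch in range(200):
    preds = np.array([m.predict(X) for m in members])   # (n_members, N)
    ensemble = alpha @ preds                            # weighted average

    # Concurrent, adaptive step: emphasise examples the *ensemble*
    # currently misclassifies, steering members away from making the
    # same (coincident) errors.
    sample_weight = 1.0 + (ensemble.round() != y)
    sample_weight = sample_weight / sample_weight.mean()
    for m in members:
        m.update(X, y, sample_weight)

    # One simple choice: re-weight members by individual accuracy.
    acc = (preds.round() == y).mean(axis=1)
    alpha = acc / acc.sum()

final = alpha @ np.array([m.predict(X) for m in members])
print("ensemble training accuracy:", (final.round() == y).mean())
```

The key contrast with bagging- or boosting-style schemes is that here all members and the combination weights are updated in the same loop, so the emphasis signal reflects the current state of the whole ensemble rather than of any single member.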


Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Wanas, N.M., Kamel, M.S. (2004). Adaptive Training for Combining Classifier Ensembles. In: Rajapakse, J.C., Wang, L. (eds) Neural Information Processing: Research and Development. Studies in Fuzziness and Soft Computing, vol 152. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39935-3_15


  • DOI: https://doi.org/10.1007/978-3-540-39935-3_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-53564-2

  • Online ISBN: 978-3-540-39935-3

