
Combination of Classifiers

  • Chapter
Pattern Recognition

Part of the book series: Undergraduate Topics in Computer Science (UTICS)

Abstract

A combination, or ensemble, of classifiers is a set of classifiers whose individual decisions are combined to classify new examples. A combination of classifiers is often much more accurate than the individual classifiers that make it up. One reason for this is that the training data may not provide sufficient information for choosing a single best classifier, and a combination is the best compromise. Another reason is that the learning algorithms used may not be able to solve the difficult search problem posed exactly, so suitable heuristics are used in the search. As a consequence, even when a unique best hypothesis exists given the training examples and prior knowledge, we may not be able to find it. A combination of classifiers is a way of compensating for such imperfect classifiers. The learning algorithms we use may give good approximations to the true function without any single one being the right hypothesis; by taking a weighted combination of these approximations, we may be able to represent the true hypothesis. In fact, the resulting combination can be equivalent to a very complex decision tree.
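To make the weighted-combination idea concrete, the following is a minimal sketch in Python (not taken from the chapter) of a weighted-vote ensemble. The choice of base classifiers, the Iris data set and the weight values are illustrative assumptions; in practice the weights would be chosen on a validation set or by a method such as boosting.

# Minimal weighted-vote ensemble sketch (illustrative only; base learners,
# data set and weights are assumptions, not values prescribed by the chapter).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Three imperfect base classifiers, each an approximation to the true hypothesis.
base_learners = [
    DecisionTreeClassifier(max_depth=2, random_state=0),
    GaussianNB(),
    LogisticRegression(max_iter=1000),
]
weights = np.array([0.2, 0.3, 0.5])  # assumed weights; could be tuned on held-out data

for clf in base_learners:
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Weighted vote: each classifier adds its weight to the class it predicts;
# the ensemble outputs the class with the largest total weight.
n_classes = len(np.unique(y))
votes = np.zeros((len(X_te), n_classes))
for w, clf in zip(weights, base_learners):
    votes[np.arange(len(X_te)), clf.predict(X_te)] += w
ensemble_pred = votes.argmax(axis=1)
print("Weighted ensemble accuracy:", accuracy_score(y_te, ensemble_pred))

Each base classifier contributes its weight to the class it predicts, and the ensemble returns the class with the largest accumulated weight; this is one simple instance of the weighted combination of approximations described above.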



Author information

Correspondence to M. Narasimha Murty.

Copyright information

© 2011 Universities Press (India) Pvt. Ltd.

About this chapter

Cite this chapter

Murty, M.N., Devi, V.S. (2011). Combination of Classifiers. In: Pattern Recognition. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-0-85729-495-1_8


  • DOI: https://doi.org/10.1007/978-0-85729-495-1_8

  • Publisher Name: Springer, London

  • Print ISBN: 978-0-85729-494-4

  • Online ISBN: 978-0-85729-495-1

  • eBook Packages: Computer Science, Computer Science (R0)
