Reducing Rankings of Classifiers by Eliminating Redundant Classifiers

  • Conference paper
Progress in Artificial Intelligence (EPIA 2001)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2258)

Abstract

Several methods have been proposed to generate rankings of supervised classification algorithms based on their previous performance on other datasets [8],[4]. Like any other prediction method, ranking methods sometimes err; for instance, they may not place the best algorithm in the first position. Often the user is willing to try more than one algorithm to increase the chance of identifying the best one. The information provided by the ranking methods mentioned above is not adequate for this purpose: they do not identify the algorithms in the ranking that have a reasonable chance of performing best. In this paper, we describe a method for that purpose. We compare our method to the strategy of executing all algorithms and to a very simple reduction method that consists of running the top three algorithms. Throughout this work we take both execution time and accuracy into account. As expected, our method performs better than the simple reduction method and shows more stable behavior than running all the algorithms.
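The "run the top three" baseline mentioned in the abstract can be illustrated with a minimal sketch. Everything below — the algorithm names, scores, and the accuracy-then-time tie-break — is a hypothetical illustration, not the paper's actual method, which eliminates redundant classifiers rather than simply truncating the ranking at a fixed depth.

```python
# Hypothetical sketch of the simple reduction baseline: execute only
# the top-k algorithms of a ranking, then keep the best performer.
# Algorithm names and the (accuracy, runtime) figures are invented.

def reduce_ranking(ranking, k=3):
    """Simple reduction baseline: keep only the top-k ranked algorithms."""
    return ranking[:k]

def pick_best(candidates, results):
    """Among the executed candidates, keep the most accurate one,
    breaking ties in favour of the faster algorithm.
    `results` maps algorithm name -> (accuracy, runtime_seconds)."""
    return max(candidates, key=lambda a: (results[a][0], -results[a][1]))

# A recommended ranking (best first) and hypothetical test results.
ranking = ["c5.0", "ripper", "ltree", "ib1", "naive_bayes"]
results = {
    "c5.0":        (0.86, 12.0),
    "ripper":      (0.84, 30.0),
    "ltree":       (0.87, 45.0),
    "ib1":         (0.88, 300.0),
    "naive_bayes": (0.80, 2.0),
}

top3 = reduce_ranking(ranking, k=3)  # only these three are executed
best = pick_best(top3, results)      # most accurate of the three
```

The trade-off the paper studies is visible even in this toy setup: running all five algorithms would find the most accurate one at a much higher total cost, while truncating at a fixed k risks discarding it.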


References

  1. P. Brazdil and C. Soares. A comparison of ranking methods for classification algorithm selection. In R.L. de Mántaras and E. Plaza, editors, Machine Learning: Proceedings of the 11th European Conference on Machine Learning (ECML 2000), pages 63–74. Springer, 2000.

  2. W.W. Cohen. Fast effective rule induction. In A. Prieditis and S. Russell, editors, Proceedings of the 12th International Conference on Machine Learning, pages 115–123. Morgan Kaufmann, 1995.

  3. J. Gama. Probabilistic linear tree. In D. Fisher, editor, Proceedings of the 14th International Machine Learning Conference (ICML97), pages 134–142. Morgan Kaufmann, 1997.

  4. J. Keller, I. Paterson, and H. Berrer. An integrated concept for multi-criteria ranking of data-mining algorithms. In J. Keller and C. Giraud-Carrier, editors, Meta-Learning: Building Automatic Advice Strategies for Model Selection and Method Combination, 2000.

  5. R. Kohavi, G. John, R. Long, D. Manley, and K. Pfleger. MLC++: A machine learning library in C++. Technical report, Stanford University, 1994.

  6. D. Michie, D.J. Spiegelhalter, and C.C. Taylor. Machine Learning, Neural and Statistical Classification. Ellis Horwood, 1994.

  7. R. Quinlan. C5.0: An Informal Tutorial. RuleQuest, 1998. http://www.rulequest.com/see5-unix.html.

  8. C. Soares and P. Brazdil. Zoomed ranking: Selection of classification algorithms based on relevant performance information. In D.A. Zighed, J. Komorowski, and J. Zytkow, editors, Proceedings of the Fourth European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD2000), pages 126–135. Springer, 2000.

  9. C. Soares, P. Brazdil, and J. Costa. Measures to compare rankings of classification algorithms. In H.A.L. Kiers, J.-P. Rasson, P.J.F. Groenen, and M. Schader, editors, Data Analysis, Classification and Related Methods, Proceedings of the Seventh Conference of the International Federation of Classification Societies (IFCS), pages 119–124. Springer, 2000.

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Brazdil, P., Soares, C., Pereira, R. (2001). Reducing Rankings of Classifiers by Eliminating Redundant Classifiers. In: Brazdil, P., Jorge, A. (eds) Progress in Artificial Intelligence. EPIA 2001. Lecture Notes in Computer Science, vol 2258. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45329-6_5

  • DOI: https://doi.org/10.1007/3-540-45329-6_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43030-8

  • Online ISBN: 978-3-540-45329-1

  • eBook Packages: Springer Book Archive
