Measures to Evaluate Rankings of Classification Algorithms

  • Carlos Soares
  • Pavel Brazdil
  • Joaquim Costa
Conference paper
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Abstract

Due to the wide variety of supervised classification algorithms originating from several research areas, selecting one to apply to a given problem is not a trivial task. Recently, several methods have been developed to create rankings of classification algorithms based on their past performance. It is therefore necessary to develop techniques to evaluate and compare these ranking methods. We present three measures to evaluate rankings of classification algorithms, give examples of their use, and discuss their characteristics.
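
As a concrete illustration of the kind of evaluation discussed here, the sketch below computes Spearman's rank correlation between a recommended ranking of algorithms and the ranking actually observed on a dataset. This is a minimal, assumed example: the function and data are hypothetical and are not taken from the paper, which defines its own three measures.

    # Minimal sketch (hypothetical, not the paper's exact measures):
    # Spearman's rank correlation between a recommended ranking and the
    # ranking observed when the algorithms are actually run on a dataset.

    def spearman(rank_a, rank_b):
        """Spearman's rank correlation for two rankings without ties."""
        n = len(rank_a)
        d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
        return 1 - 6 * d_squared / (n * (n ** 2 - 1))

    # Hypothetical ranks of four algorithms (1 = best).
    recommended = [1, 2, 3, 4]  # ranking proposed by the ranking method
    observed = [2, 1, 3, 4]     # ranking obtained from measured accuracies

    print(spearman(recommended, observed))  # 0.8: close, but imperfect, agreement

Averaging such a coefficient over several datasets, or weighting its terms by observed performance differences, leads to average- and weighted-correlation style measures of the kind the keywords below refer to.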

Keywords

Classification Algorithm · Average Correlation · Ranking Method · Supervised Classification · Weighted Correlation

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Carlos Soares (1)
  • Pavel Brazdil (1)
  • Joaquim Costa (2)
  1. LIACC/FEP, University of Porto, Porto, Portugal
  2. LIACC/DMA-FCUP, University of Porto, Porto, Portugal
