Evaluating and Comparing Classifiers: Review, Some Recommendations and Limitations

Conference paper

DOI: 10.1007/978-3-319-59162-9_2

Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 578)
Cite this paper as:
Stąpor K. (2018) Evaluating and Comparing Classifiers: Review, Some Recommendations and Limitations. In: Kurzynski M., Wozniak M., Burduk R. (eds) Proceedings of the 10th International Conference on Computer Recognition Systems CORES 2017. CORES 2017. Advances in Intelligent Systems and Computing, vol 578. Springer, Cham

Abstract

Evaluating the performance of a supervised classification learning method, i.e., its predictive ability on independent data, is very important in machine learning. It is also almost unthinkable to carry out research without comparing a newly proposed classifier with already existing ones. This paper reviews the most important aspects of the classifier evaluation process, including the choice of evaluation metrics (scores) as well as the statistical comparison of classifiers. A critical view, recommendations, and limitations of the reviewed methods are presented. The article provides a quick guide to understanding the complexity of the classifier evaluation process and warns the reader about common bad habits.
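To illustrate the two ingredients the abstract names, an evaluation metric and a statistical comparison of classifiers, here is a minimal, self-contained Python sketch. It is not taken from the paper: it computes the F1 score from binary predictions and runs a normal-approximation Wilcoxon signed-rank test on paired per-fold scores of two hypothetical classifiers, one common (nonparametric) choice for such comparisons.

```python
import math

def f1_score(y_true, y_pred, positive=1):
    """F1 score for binary labels: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def wilcoxon_signed_rank(scores_a, scores_b):
    """Wilcoxon signed-rank test on paired scores (e.g., per-fold accuracies).

    Returns (z, p) using the large-sample normal approximation;
    zero differences are dropped, tied |differences| get average ranks.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b) if a != b]
    n = len(diffs)
    if n == 0:
        return 0.0, 1.0
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p

# Hypothetical per-fold accuracies of two classifiers over 10 CV folds.
folds_a = [0.90 + 0.01 * i for i in range(10)]
folds_b = [0.80] * 10
z, p = wilcoxon_signed_rank(folds_a, folds_b)
```

In practice one would use a vetted implementation such as `scipy.stats.wilcoxon`; the hand-rolled version above only makes the mechanics of ranking and the normal approximation explicit.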

Keywords

Supervised classification · Classifier evaluation · Performance metrics · Statistical classifier comparison

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Institute of Computer Science, Silesian Technical University, Gliwice, Poland