The previous chapters pretended that performance evaluation in machine learning is a fairly straightforward matter. All it takes is to apply the induced classifier to a set of examples whose classes are known, and then count the number of errors the classifier has made. In reality, things are not so simple. Error rate rarely paints the whole picture, and there are situations in which it can even be misleading. This is why the conscientious engineer wants to be acquainted with other criteria for assessing a classifier's performance. This knowledge will enable her to choose the criterion that best captures the behavioral aspects of interest.
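The error-counting procedure described above takes only a few lines. The sketch below uses an invented, heavily imbalanced test set to illustrate how a low error rate can hide a useless classifier; the class distribution and the trivial majority-class predictor are illustrative assumptions, not examples from the text:

```python
def error_rate(y_true, y_pred):
    """Fraction of test examples on which the classifier erred."""
    errors = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return errors / len(y_true)

# Hypothetical test set: 95 negative examples, 5 positive ones.
y_true = ["neg"] * 95 + ["pos"] * 5

# A trivial classifier that always predicts the majority class.
y_pred = ["neg"] * 100

print(error_rate(y_true, y_pred))  # 0.05 -- a mere 5% error rate,
# yet the classifier never identifies a single positive example.
```

A classifier with a seemingly impressive 5% error rate here is worthless for recognizing the positive class, which is exactly the kind of situation that motivates the alternative criteria introduced in this chapter.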