Performance Evaluation

  • Miroslav Kubat
Chapter

Abstract

The previous chapters pretended that performance evaluation in machine learning is a fairly straightforward matter: apply the induced classifier to a set of examples whose classes are known, and count the errors the classifier makes. In reality, things are not that simple. Error rate rarely paints the whole picture, and there are situations in which it can even be misleading. This is why the conscientious engineer wants to be acquainted with other criteria for assessing a classifier’s performance. This knowledge will enable her to choose the criterion that best captures the behavioral aspects of interest.
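The claim that error rate can be misleading is easy to illustrate on a class-imbalanced test set. The following sketch (plain Python, with hypothetical data) shows a trivial classifier that always predicts the majority class: its error rate looks excellent, yet it never detects a single example of the rare class.

```python
# Hypothetical illustration: error rate vs. recall on imbalanced data.

def error_rate(y_true, y_pred):
    """Fraction of misclassified examples."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    """Fraction of the positive examples the classifier actually found."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    pos = sum(t == positive for t in y_true)
    return tp / pos if pos else 0.0

# Hypothetical test set: 98 negative examples, 2 positive (rare-class) ones.
y_true = [0] * 98 + [1] * 2
y_majority = [0] * 100          # a "classifier" that always says "negative"

print(error_rate(y_true, y_majority))  # 0.02 -- only 2% error
print(recall(y_true, y_majority))      # 0.0  -- yet it misses every positive
```

Judged by error rate alone, this degenerate classifier seems nearly perfect; its zero recall on the rare class reveals how little it has actually learned.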

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Miroslav Kubat
  1. Department of Electrical and Computer Engineering, University of Miami, Coral Gables, USA
