
Making Early Predictions of the Accuracy of Machine Learning Classifiers

  • James Edward Smith
  • Muhammad Atif Tahir
  • Davy Sannen
  • Hendrik Van Brussel
Chapter

Abstract

The accuracy of machine learning systems is a widely studied research topic. Established techniques such as cross-validation predict the accuracy on unseen data of the classifier produced by applying a given learning method to a given training data set. However, they do not predict whether incurring the cost of obtaining more data and undergoing further training will lead to higher accuracy. In this chapter, we investigate techniques for making such early predictions. We note that when a machine learning algorithm is presented with a training set, the classifier produced, and hence its error, will depend on the characteristics of the algorithm, on the training set’s size, and also on its specific composition. In particular, we hypothesize that if a number of classifiers are produced, and their observed error is decomposed into bias and variance terms, then although these components may behave differently, their behavior may be predictable. Experimental results confirm this hypothesis and show that our predictions are strongly correlated with the values observed after undertaking the extra training. This has particular relevance to learning in nonstationary environments, since we can use our characterization of bias and variance to detect whether perceived changes in the data stream arise from sampling variability or because the underlying data distributions have changed, the latter manifesting as changes in bias.
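
To make the approach concrete, here is a minimal Python sketch (not the authors' implementation) of the two ingredients the abstract describes: estimating the bias and variance components of 0/1 loss by training many classifiers on random subsamples of the available data, in the spirit of the Kohavi–Wolpert and Domingos decompositions, and then fitting a simple curve to each component so it can be extrapolated to a training-set size that has not yet been collected. The synthetic data, the decision-tree learner, the subsampling scheme, and the power-law form a + b·n^(−c) are all illustrative assumptions, not the chapter's experimental setup.

    import numpy as np
    from scipy.optimize import curve_fit
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    def bias_variance(make_clf, X_pool, y_pool, X_test, y_test, n_sub, n_rounds=50):
        """Train n_rounds classifiers on random subsamples of size n_sub and
        decompose the average 0/1 loss into bias and variance terms."""
        preds = np.empty((n_rounds, len(X_test)), dtype=int)
        for r in range(n_rounds):
            idx = rng.choice(len(X_pool), size=n_sub, replace=True)
            preds[r] = make_clf().fit(X_pool[idx], y_pool[idx]).predict(X_test)
        # "Main" prediction at each test point: majority vote across rounds.
        main = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
        bias = np.mean(main != y_test)              # systematic part of the error
        variance = np.mean(preds != main[None, :])  # spread around the main prediction
        return bias, variance

    # Illustrative data: a synthetic pool standing in for the labelled data
    # gathered so far, plus a held-out test set.
    X, y = make_classification(n_samples=5000, n_features=20, random_state=1)
    X_pool, X_test = X[:4000], X[4000:]
    y_pool, y_test = y[:4000], y[4000:]

    sizes = np.array([100, 200, 400, 800, 1600])
    curves = [bias_variance(DecisionTreeClassifier, X_pool, y_pool,
                            X_test, y_test, n_sub=n) for n in sizes]
    bias_curve, var_curve = map(np.array, zip(*curves))

    # Fit each component with a power law err(n) = a + b * n**(-c) and
    # extrapolate it to a training-set size we have not yet paid to collect.
    def power_law(n, a, b, c):
        return a + b * n ** (-c)

    for name, vals in (("bias", bias_curve), ("variance", var_curve)):
        (a, b, c), _ = curve_fit(power_law, sizes, vals,
                                 p0=(0.05, 1.0, 0.5), maxfev=10000)
        print(f"{name}: predicted at n=3200 -> {power_law(3200, a, b, c):.3f}")

If the extrapolated total error (bias plus variance) at an affordable larger n is not materially lower than the current estimate, paying for more data is unlikely to help. And since the variance term captures sampling variability while the bias term reflects the underlying distributions, movement in the estimated bias over a data stream can be read, as the abstract notes, as evidence that the distributions themselves have changed.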

Keywords

Data Item · Total Error · Variance Term · True Error · Probably Approximately Correct

Acknowledgements

This work was supported by the European Commission (project Contract No. STRP016429, acronym DynaVis). This publication reflects only the authors’ views.


Copyright information

© Springer Science+Business Media New York 2012

Authors and Affiliations

  • James Edward Smith (1)
  • Muhammad Atif Tahir (2)
  • Davy Sannen (3)
  • Hendrik Van Brussel (3)

  1. Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK
  2. School of Computing, Engineering and Information Sciences, University of Northumbria, Newcastle, UK
  3. Department of Mechanical Engineering, Katholieke Universiteit Leuven, Leuven, Belgium
