
Assessing the Impact of Changing Environments on Classifier Performance

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5032)

Abstract

The purpose of this paper is to test the hypothesis that simple classifiers are more robust to changing environments than complex ones. We propose a strategy for generating artificial but realistic domains, which allows us to control the changing environment and test a variety of situations. Our results suggest that evaluating classifiers on such tasks is not straightforward, since the changed environment can yield a simpler or a more complex domain. We propose a metric capable of taking this issue into consideration and evaluate our classifiers with it. We conclude that in mild cases of population drift, simple classifiers deteriorate more than complex ones, and that in more severe cases, as well as under class definition changes, all classifiers deteriorate to about the same extent. This means that in all cases complex classifiers remain more accurate than simpler ones, thus challenging the hypothesis that simple classifiers are more robust to changing environments than complex ones.
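The kind of controlled experiment the abstract describes can be sketched in miniature. The snippet below is a hypothetical illustration, not the authors' actual generation strategy or metric: it builds a synthetic two-class Gaussian domain, applies a population drift by shifting one class's mean at test time, and compares how a simple classifier (nearest class centroid) and a more complex one (1-nearest-neighbour) fare on the drifted data. Note that, exactly as the abstract cautions, the sign of the shift can make the drifted domain easier or harder than the training one, so raw accuracy drops must be interpreted with care.

```python
# Hypothetical sketch of a controlled population-drift experiment
# (illustrative only; not the paper's generation strategy or metric).
import numpy as np

rng = np.random.default_rng(0)

def make_domain(n, shift=0.0):
    """Two Gaussian classes in 2-D; `shift` moves class 1's mean,
    emulating a population drift between training and deployment."""
    x0 = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

def centroid_fit(X, y):
    # "Simple" classifier: one prototype per class.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def centroid_predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def knn1_predict(Xtr, ytr, X):
    # "Complex" classifier: 1-nearest-neighbour over the training set.
    d = np.linalg.norm(X[:, None, :] - Xtr[None, :, :], axis=2)
    return ytr[d.argmin(axis=1)]

Xtr, ytr = make_domain(200)             # original environment
Xte, yte = make_domain(200, shift=1.0)  # drifted test environment

cent = centroid_fit(Xtr, ytr)
acc_simple = (centroid_predict(cent, Xte) == yte).mean()
acc_complex = (knn1_predict(Xtr, ytr, Xte) == yte).mean()
print(f"simple: {acc_simple:.2f}  complex: {acc_complex:.2f}")
```

Varying `shift` (including negative values, which push the classes together) gives the family of mild-to-severe drift scenarios against which deterioration can be measured.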

Supported by the Natural Sciences and Engineering Research Council of Canada and the Spanish MEC project DPI2006-02550.





Editor information

Sabine Bergler


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Alaiz-Rodríguez, R., Japkowicz, N. (2008). Assessing the Impact of Changing Environments on Classifier Performance. In: Bergler, S. (eds) Advances in Artificial Intelligence. Canadian AI 2008. Lecture Notes in Computer Science, vol 5032. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-68825-9_2


  • DOI: https://doi.org/10.1007/978-3-540-68825-9_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-68821-1

  • Online ISBN: 978-3-540-68825-9

  • eBook Packages: Computer Science (R0)
