
Combining One-Class Classifiers

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2096)

Included in the conference series: Multiple Classifier Systems (MCS 2001)

Abstract

In one-class classification, target objects must be distinguished from outlier objects. It is assumed that only information about the target class is available, while nothing is known about the outlier class. Like standard two-class classifiers, one-class classifiers hardly ever fit the data distribution perfectly. Using only the best classifier and discarding the classifiers with poorer performance may waste valuable information. To improve performance, the results of different classifiers (which may differ in complexity or training algorithm) can be combined. This can increase not only the accuracy but also the robustness of the classification. Because one-class classifiers provide information about only one of the classes, combining them is more difficult than combining standard classifiers. In this paper we investigate whether, and how, one-class classifiers can best be combined, using a handwritten digit recognition problem.
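The paper itself is not reproduced on this page, but the idea in the abstract can be sketched concretely. The following is a minimal illustration in plain NumPy, not the authors' actual method: the function names, the percentile normalisation, and the 5% rejection threshold are all assumptions made here for the example. It trains three dissimilar one-class models on target examples only, maps their incomparable raw outputs onto a common [0, 1] scale, and fuses them with the mean combining rule, one of the standard fixed fusion rules (mean, product, maximum) from the classifier-combining literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target-class training data: a 2-D Gaussian blob.
# Note: only target examples are available, as in one-class classification.
X_train = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

def gauss_score(X, mean, cov):
    """Log-density under a fitted Gaussian model (first one-class classifier)."""
    d = X - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', d, inv, d)
    return -0.5 * (quad + logdet + X.shape[1] * np.log(2 * np.pi))

def parzen_score(X, X_tr, h=0.5):
    """Log of an (unnormalised) Parzen kernel density estimate with fixed width h."""
    d2 = ((X[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    return np.log(np.exp(-d2 / (2 * h * h)).mean(axis=1) + 1e-300)

def nn_score(X, X_tr):
    """Negative distance to the nearest training object (distance-based model)."""
    d2 = ((X[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    return -np.sqrt(d2.min(axis=1))

def to_percentile(s_test, s_train):
    """Map raw scores to [0, 1] by their rank among the training scores, so the
    outputs of dissimilar classifiers become comparable before averaging."""
    return np.searchsorted(np.sort(s_train), s_test) / len(s_train)

models = [
    lambda X: gauss_score(X, X_train.mean(0), np.cov(X_train.T)),
    lambda X: parzen_score(X, X_train),
    lambda X: nn_score(X, X_train),
]

# Test set: 50 target-like objects followed by 50 uniform outliers.
X_test = np.vstack([rng.normal(size=(50, 2)),
                    rng.uniform(-6, 6, size=(50, 2))])

# Mean combining rule: average the normalised scores of all ensemble members.
combined = np.mean([to_percentile(m(X_test), m(X_train)) for m in models], axis=0)

# Accept an object as target above a threshold chosen so that roughly 5% of
# training objects would be rejected (an illustrative choice, not the paper's).
threshold = 0.05
print(f"targets accepted:  {(combined[:50] >= threshold).mean():.2f}")
print(f"outliers rejected: {(combined[50:] < threshold).mean():.2f}")
```

The design point the sketch highlights is the normalisation step: a log-density, a kernel estimate, and a distance live on entirely different scales, so some mapping onto a common domain is needed before averaging is meaningful. Since no outlier examples exist to calibrate against, this mapping must be derived from the target class alone, which is precisely why combining one-class classifiers is harder than combining two-class ones.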

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tax, D.M.J., Duin, R.P.W. (2001). Combining One-Class Classifiers. In: Kittler, J., Roli, F. (eds) Multiple Classifier Systems. MCS 2001. Lecture Notes in Computer Science, vol 2096. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48219-9_30

  • DOI: https://doi.org/10.1007/3-540-48219-9_30

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42284-6

  • Online ISBN: 978-3-540-48219-2

  • eBook Packages: Springer Book Archive
