Optimal “Anti-Bayesian” Parametric Pattern Classification Using Order Statistics Criteria

  • A. Thomas
  • B. John Oommen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7441)

Abstract

The gold standard for a classifier is the condition of optimality attained by the Bayesian classifier. Within a Bayesian paradigm, if we are allowed to compare the testing sample with only a single point in the feature space from each class, the optimal Bayesian strategy would be to compare it, based on the (Mahalanobis) distance, with the corresponding means. The reader should observe that, in this context, the mean is, in one sense, the most central point in the respective distribution. In this paper, we shall show that we can obtain optimal results by operating in a diametrically opposite way, i.e., in a so-called “anti-Bayesian” manner. Indeed, we shall show the completely counter-intuitive result that by working with very few (sometimes as few as two) points distant from the mean, one can obtain remarkable classification accuracies. Further, if these points are determined by the Order Statistics of the distributions, the accuracy of our method, referred to as Classification by Moments of Order Statistics (CMOS), attains the optimal Bayes’ bound! This claim, which is totally counter-intuitive, has been proven for many uni-dimensional and some multi-dimensional distributions within the exponential family, and the theoretical results have been verified by rigorous experimental testing. Apart from the fact that these results are quite fascinating and pioneering in their own right, they also give a theoretical foundation for the families of Border Identification (BI) algorithms reported in the literature.
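The core idea above can be illustrated with a minimal sketch. For two equal-variance, equal-prior uni-dimensional Gaussian classes, the expectations of the two order statistics of a sample of size 2 from N(μ, σ²) lie at μ ± σ/√π; a CMOS-style rule classifies a test point by the nearer of the two non-central points that face each other across the class boundary. The distributions, parameters, and function names below are illustrative assumptions, not the paper’s experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two unit-variance Gaussian classes with equal priors.
mu1, mu2, sigma, n = 0.0, 2.0, 1.0, 10000
x1 = rng.normal(mu1, sigma, n)   # samples from class 1
x2 = rng.normal(mu2, sigma, n)   # samples from class 2

# Expected order statistics of a 2-sample from N(mu, sigma^2): mu -/+ sigma/sqrt(pi).
d = sigma / np.sqrt(np.pi)
p1, p2 = mu1 + d, mu2 - d        # the two non-central points facing each other

def cmos(x):
    # anti-Bayesian rule: nearer of the two order-statistic points
    return 1 if abs(x - p1) <= abs(x - p2) else 2

def bayes(x):
    # equal-variance, equal-prior Bayesian rule: nearer class mean
    return 1 if abs(x - mu1) <= abs(x - mu2) else 2

acc_cmos = (np.mean([cmos(v) == 1 for v in x1]) +
            np.mean([cmos(v) == 2 for v in x2])) / 2
acc_bayes = (np.mean([bayes(v) == 1 for v in x1]) +
             np.mean([bayes(v) == 2 for v in x2])) / 2
print(f"CMOS accuracy:  {acc_cmos:.4f}")
print(f"Bayes accuracy: {acc_bayes:.4f}")
```

Because the two order-statistic points are placed symmetrically about the class means, their midpoint coincides with the midpoint of the means, so both rules induce the same decision boundary here and the accuracies agree exactly, which is the symmetric special case of the claim.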

Keywords

  • Classification using Order Statistics (OS)
  • Moments of OS


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • A. Thomas (1)
  • B. John Oommen (1)

  1. School of Computer Science, Carleton University, Ottawa, Canada
