Nearest neighbors in random subspaces

  • Tin Kam Ho
Statistical Classification Techniques
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1451)


Recent studies have shown that the random subspace method can be used to create multiple independent tree classifiers that can be combined to improve accuracy. We apply the procedure to k-nearest-neighbor classifiers and show that it achieves similar results. We examine the effects of several parameters of the method in experiments on data from a digit recognition problem. We show that the combined accuracy tends to increase with the number of component classifiers, and that with an appropriate subspace dimensionality the method can be superior to simple k-nearest-neighbor classification. The method's superiority is maintained when fewer training prototypes are available, i.e., when conventional k-NN classifiers suffer most heavily from the curse of dimensionality.
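The procedure the abstract describes can be sketched as follows: each component classifier is a k-NN rule restricted to a random subset of the feature dimensions, and the components' votes are combined by majority. This is an illustrative sketch only (function name, parameter defaults, and the plain-NumPy distance computation are my own choices, not taken from the paper):

```python
import numpy as np

def rs_knn_ensemble(X_train, y_train, X_test, n_subspaces=20,
                    subspace_dim=8, k=3, seed=0):
    """Random-subspace k-NN sketch: each component classifier votes
    using only a randomly chosen subset of the feature dimensions."""
    rng = np.random.default_rng(seed)
    n_classes = int(y_train.max()) + 1
    votes = np.zeros((len(X_test), n_classes), dtype=int)
    for _ in range(n_subspaces):
        # Draw a random subspace of the given dimensionality
        dims = rng.choice(X_train.shape[1], size=subspace_dim, replace=False)
        # Euclidean distances restricted to the chosen subspace
        d = np.linalg.norm(X_test[:, None, dims] - X_train[None, :, dims], axis=2)
        nn = np.argsort(d, axis=1)[:, :k]            # k nearest prototypes
        for i, idx in enumerate(nn):
            labels, counts = np.unique(y_train[idx], return_counts=True)
            votes[i, labels[np.argmax(counts)]] += 1  # this component's vote
    return votes.argmax(axis=1)                       # majority over components
```

Increasing `n_subspaces` corresponds to the trend of increasing combined accuracy reported in the abstract, while `subspace_dim` is the subspace dimensionality whose choice the paper examines.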


Keywords: Support Vector Machine, Subspace Method, Weak Classifier, Component Classifier, Random Subspace


References

  1. Chandrasekaran, B., Jain, A.K.: On balancing decision functions. J. of Cybernetics and Information Science 2, 3 (1979) 12–15
  2. Cover, T.M., Hart, P.E.: Nearest neighbor pattern classification. IEEE Transactions on Information Theory IT-13, 1 (1967) 21–27
  3. Fukunaga, K., Hummels, D.M.: Bias of nearest neighbor error estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence 9 (1987) 103–112
  4. Fukunaga, K., Hummels, D.M.: Bayes error estimation using Parzen and k-NN procedures. IEEE Transactions on Pattern Analysis and Machine Intelligence 9 (1987) 634–643
  5. Hamamoto, Y., Uchimura, S., Tomita, S.: A bootstrap technique for nearest neighbor classifier design. IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 1 (1997) 73–79
  6. Ho, T.K.: Random decision forests. Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, Canada, August 14–18 (1995) 278–282
  7. Ho, T.K.: C4.5 decision forests. Proceedings of the 14th International Conference on Pattern Recognition, Brisbane, Australia, August 17–20 (1998)
  8. Ho, T.K., Hull, J.J., Srihari, S.N.: Decision combination in multiple classifier systems. IEEE Transactions on Pattern Analysis and Machine Intelligence 16, 1 (1994) 66–75
  9. Ho, T.K., Baird, H.S.: Large-scale simulation studies in image pattern recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 10 (1997) 1067–1079
  10. Ho, T.K., Kleinberg, E.M.: Building projectable classifiers of arbitrary complexity. Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, August 25–30 (1996) 880–885
  11. Kleinberg, E.M.: Stochastic discrimination. Annals of Mathematics and Artificial Intelligence 1 (1990) 207–239
  12. Kleinberg, E.M.: An overtraining-resistant stochastic modeling method for pattern recognition. Annals of Statistics 24, 6 (1996) 2319–2349
  13. Vapnik, V.: The Nature of Statistical Learning Theory. Springer-Verlag (1995)

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Tin Kam Ho
    Bell Laboratories, Lucent Technologies, Murray Hill, USA
