
Multi-class Leveraged k-NN for Image Classification

  • Paolo Piro
  • Richard Nock
  • Frank Nielsen
  • Michel Barlaud
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6494)

Abstract

The k-nearest neighbors (k-NN) classification rule remains an essential tool for computer vision applications such as scene recognition. However, k-NN suffers from a major drawback: it gives uniform voting weight to all of the nearest prototypes in the feature space, regardless of how reliable each prototype is.
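
For concreteness, here is a minimal sketch of the uniform voting rule in question (Python/NumPy; the function and variable names are ours, not the paper's):

```python
import numpy as np

def knn_vote_uniform(x, prototypes, labels, k=5):
    """Classic k-NN: each of the k nearest prototypes casts one equal vote,
    regardless of how reliable that prototype is."""
    d = np.linalg.norm(prototypes - x, axis=1)  # Euclidean distances to x
    nn = np.argsort(d)[:k]                      # indices of the k nearest
    return np.bincount(labels[nn]).argmax()     # majority class wins
```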

In this paper, we propose a new method that learns the “relevance” of prototypes and classifies test data using a weighted k-NN rule. Our algorithm, called Multi-class Leveraged k-nearest neighbor (MLNN), learns the prototype weights in a boosting framework by minimizing a surrogate exponential risk over the training data. We make two main contributions for improving computational speed and accuracy. On the one hand, we implement learning in an inherently multi-class way, which significantly reduces computation time compared to one-versus-all approaches; furthermore, the leveraging weights enable effective data selection, reducing the cost of k-NN search at classification time. On the other hand, we propose a kernel generalization of our approach that takes real-valued similarities between data in the feature space into account, enabling more accurate estimation of the local class density.
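
Below is a hedged sketch of what such a leveraged, kernelized k-NN rule and its boosting-style training could look like. It illustrates the ingredients named in the abstract (prototype leverage weights, multi-class membership codes, an optional Gaussian similarity, and a confidence-rated AdaBoost-style update), but it is not the authors' exact derivation; all names and the specific update formula are our assumptions:

```python
import numpy as np

def class_codes(labels, n_classes):
    """Multi-class membership codes: +1 for a prototype's own class and
    -1/(C-1) for every other class (a standard multi-class boosting encoding)."""
    y = np.full((len(labels), n_classes), -1.0 / (n_classes - 1))
    y[np.arange(len(labels)), labels] = 1.0
    return y

def mlnn_predict(x, prototypes, labels, alpha, n_classes, k=5, sigma=None):
    """Weighted k-NN vote: each of the k nearest prototypes contributes its
    learned leverage alpha_j, optionally modulated by a Gaussian similarity
    (the kernel generalization). Prototypes with alpha_j == 0 never vote,
    so they can be discarded, which shrinks the k-NN search structure."""
    d = np.linalg.norm(prototypes - x, axis=1)
    nn = np.argsort(d)[:k]
    w = alpha[nn]
    if sigma is not None:                        # real-valued similarity
        w = w * np.exp(-d[nn] ** 2 / (2.0 * sigma ** 2))
    y = class_codes(labels, n_classes)
    return (w[:, None] * y[nn]).sum(axis=0).argmax()

def mlnn_fit(X, labels, k=5, n_iters=100):
    """Boosting-style leveraging (illustrative): keep one weight per training
    example, repeatedly pick the prototype whose vote has the largest weighted
    'edge' r, leverage it with the confidence-rated formula
    alpha = 1/2 * log((1 + r) / (1 - r)), then down-weight the examples it
    classifies correctly, i.e. greedily decrease an exponential surrogate risk."""
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(d, np.inf)                  # leave-one-out neighborhoods
    reach = np.zeros((n, n), dtype=bool)         # reach[i, j]: j in kNN(x_i)?
    reach[np.arange(n)[:, None], np.argsort(d, axis=1)[:, :k]] = True
    agree = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)
    w, alpha = np.ones(n) / n, np.zeros(n)
    for _ in range(n_iters):
        r = (reach * agree * w[:, None]).sum(axis=0)  # per-prototype edge
        j = int(np.argmax(np.abs(r)))
        rj = float(np.clip(r[j], -0.999, 0.999))      # numerical safety
        a = 0.5 * np.log((1.0 + rj) / (1.0 - rj))
        alpha[j] += a
        w *= np.exp(-a * agree[:, j] * reach[:, j])   # re-weight examples
        w /= w.sum()
    return alpha
```

At classification time one would call mlnn_predict(x, X, labels, alpha, n_classes, sigma=...) while keeping only the prototypes whose alpha is non-zero, which is how leveraging doubles as prototype selection. Again, this is a sketch under stated assumptions; the paper's actual multi-class update and kernel weighting may differ.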

We tested MLNN on three datasets of natural images. Results show that MLNN significantly outperforms both classic k-NN and weighted k-NN voting, and that using an adaptive Gaussian kernel provides a further significant performance improvement. The best results are obtained when combining MLNN with an appropriately learned distance metric.

Keywords

Vote Rule · Scene Recognition · Spatial Pyramid Match · Prototype Selection · Surrogate Risk



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Paolo Piro (University of Nice-Sophia Antipolis / CNRS, France)
  • Richard Nock (CEREGMIA, University of Antilles-Guyane, France)
  • Frank Nielsen (Sony CSL / LIX, Ecole Polytechnique, France)
  • Michel Barlaud (University of Nice-Sophia Antipolis / CNRS, France)
