
Methodology and Computing in Applied Probability, Volume 21, Issue 4, pp 1251–1258

An Efficient Algorithm for Bayesian Nearest Neighbours

  • Giuseppe Nuti
Open Access Article

Abstract

K-Nearest Neighbours (k-NN) is a popular classification and regression algorithm, yet one of its main limitations is the difficulty of choosing the number of neighbours. We present a Bayesian algorithm to compute the posterior probability distribution for k given a target point within a data-set, efficiently and without the use of Markov Chain Monte Carlo (MCMC) methods or simulation, alongside an exact solution for distributions within the exponential family. The central idea is that the data points around the target are generated by the same probability distribution, extending outwards over the appropriate, though unknown, number of neighbours. Once the data is projected onto a distance metric of choice, the choice of k becomes a change-point detection problem, for which there is an efficient solution: we recursively compute the probability of the last change-point as we move towards the target, and thereby obtain, de facto, the posterior probability distribution over k. Applying this approach to a classification and a regression UCI data-set, our method compares favourably and, most importantly, by removing the need for simulation, computes the posterior probability of k exactly and rapidly. As an example, the computational time for the Ripley data-set is a few milliseconds, compared to a few hours with an MCMC approach.
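The full recursion is given in the paper; as a rough illustration of the underlying idea only, the sketch below reduces it to its simplest case. It assumes binary labels, a Beta-Bernoulli model with uniform priors, and a single change-point splitting the distance-sorted neighbours into two regimes; these simplifications, and the function names `posterior_over_k` and `beta_bernoulli_marginal`, are illustrative choices, not taken from the paper.

```python
import math

def beta_bernoulli_marginal(successes, trials, a=1.0, b=1.0):
    """Log marginal likelihood of a Bernoulli sequence under a Beta(a, b) prior."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + successes) + math.lgamma(b + trials - successes)
            - math.lgamma(a + b + trials))

def posterior_over_k(labels):
    """Posterior P(k | labels) for binary labels sorted by distance to the
    target (nearest first). The first k labels form one Beta-Bernoulli
    regime and the remainder a second regime, with a uniform prior over
    the change-point k; posterior_over_k[i] corresponds to k = i + 1."""
    n = len(labels)
    prefix = [0]                      # prefix[i] = number of 1s among the i nearest labels
    for y in labels:
        prefix.append(prefix[-1] + y)
    log_post = []
    for k in range(1, n + 1):
        inside = beta_bernoulli_marginal(prefix[k], k)                  # k nearest points
        outside = beta_bernoulli_marginal(prefix[n] - prefix[k], n - k)  # the rest
        log_post.append(inside + outside)
    m = max(log_post)                 # normalise in log space for stability
    weights = [math.exp(lp - m) for lp in log_post]
    z = sum(weights)
    return [w / z for w in weights]
```

With a run of four 1s followed by four 0s, the posterior peaks at k = 4, where the regime around the target ends; the paper's actual algorithm handles general exponential-family likelihoods and the full recursive change-point computation.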

Keywords

K-nearest neighbour · Non-parametric classification · Bayesian classification

Mathematics Subject Classification (2010)

62F15 (Bayesian inference) · 60G25 (Prediction theory)


Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Computer Science, University College London, London, UK
