K Nearest Neighbor Classification with Local Induction of the Simple Value Difference Metric

  • Andrzej Skowron
  • Arkadiusz Wojna
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3066)

Abstract

Classical k nearest neighbor (k-nn) classification assumes that a single global metric is defined in advance and that the search for nearest neighbors is always based on this metric. In this paper we present a model with local induction of a metric: each test object induces a local metric from its own neighborhood and selects the k nearest neighbors according to this locally induced metric. Both the global and the local metric are induced with the weighted Simple Value Difference Metric (SVDM). Experimental results show that the proposed classification model with local metric induction reduces the classification error by up to several times in comparison with the classical k-nn method.
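To illustrate the two-stage scheme described above, the following is a minimal sketch (not the authors' implementation), assuming purely categorical attributes and an unweighted SVDM, i.e. the attribute weights of the weighted SVDM are omitted for brevity; the names `svdm_tables`, `svdm_dist`, and `classify_local_knn` are hypothetical.

```python
from collections import Counter, defaultdict

def svdm_tables(data, labels):
    """Estimate P(class | attribute value) for each attribute from data."""
    classes = set(labels)
    tables = []
    for a in range(len(data[0])):
        counts = defaultdict(Counter)          # attribute value -> class counts
        for row, c in zip(data, labels):
            counts[row[a]][c] += 1
        tables.append({v: {c: cc[c] / sum(cc.values()) for c in classes}
                       for v, cc in counts.items()})
    return tables, classes

def svdm_dist(x, y, tables, classes):
    """Unweighted SVDM: sum over attributes and classes of
    |P(c | x_a) - P(c | y_a)|."""
    d = 0.0
    for a, probs in enumerate(tables):
        px, py = probs.get(x[a], {}), probs.get(y[a], {})
        d += sum(abs(px.get(c, 0.0) - py.get(c, 0.0)) for c in classes)
    return d

def classify_local_knn(x, data, labels, k=3, m=20):
    """k-nn with local metric induction: (1) induce a global SVDM from the
    whole training set, (2) take the m nearest neighbors of x under it,
    (3) re-induce the SVDM from that neighborhood only, (4) majority-vote
    among the k nearest neighbors under the local metric."""
    g_tables, g_classes = svdm_tables(data, labels)
    order = sorted(range(len(data)),
                   key=lambda i: svdm_dist(x, data[i], g_tables, g_classes))
    nbhd = order[:m]
    l_tables, l_classes = svdm_tables([data[i] for i in nbhd],
                                      [labels[i] for i in nbhd])
    local = sorted(nbhd, key=lambda i: svdm_dist(x, data[i], l_tables, l_classes))
    return Counter(labels[i] for i in local[:k]).most_common(1)[0][0]
```

With the neighborhood size m equal to the training set size and the local metric equal to the global one, the procedure degenerates to classical k-nn; the interesting regime is m much smaller than the training set, where the local conditional probabilities can differ substantially from the global ones.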


Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Andrzej Skowron (1)
  • Arkadiusz Wojna (1)

  1. Faculty of Mathematics, Informatics and Mechanics, Warsaw University, Warsaw, Poland
