Pattern Recognition and Image Analysis

Volume 28, Issue 1, pp. 87–96

Wrong Matching Points Elimination after Scale Invariant Feature Transform and Its Application to Image Matching

Representation, Processing, Analysis, and Understanding of Images


When images are rotated or scaled, or when they contain similar objects, wrong matching points easily arise in the scale invariant feature transform (SIFT). To address this problem, this paper proposes an algorithm for eliminating wrong SIFT matching points. The voting mechanism of the Generalized Hough Transform (GHT) is introduced to recover the rotation and scaling of the image and to locate where the template image appears in the scene, so that unmatched points can be rejected completely. Based on the observation that the neighborhood diameter ratio and the orientation angle difference of correct matching pairs are quantitatively related to the image's rotation and scaling, the remaining mismatched points are removed accurately. To improve matching efficiency, a method for finding the optimal scaling level is also proposed: a scaling multiple is learned from training sample images and then applied to all images to be matched. Experimental results demonstrate that the proposed algorithm eliminates wrong matching points more effectively than three other commonly used methods. Image matching tests on the Inria BelgaLogos database show that the proposed method achieves a higher correct matching rate and higher matching efficiency.
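The core voting idea can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the bin widths, and the synthetic match data below are illustrative assumptions. Each SIFT match pair contributes its keypoint scale ratio and orientation difference as a vote in a coarse 2-D histogram; correct matches agree on the image's true rotation and scaling and therefore pile up in one dominant bin, while mismatches scatter.

```python
# Hypothetical sketch of GHT-style vote filtering for SIFT matches.
# A match is (template_scale, template_angle, scene_scale, scene_angle);
# angles are in degrees. Bin widths are illustrative, not from the paper.
from collections import defaultdict

def filter_matches_by_voting(matches, ratio_bin=0.25, angle_bin=15.0):
    """Return indices of matches in the dominant (scale ratio, angle diff) bin."""
    votes = defaultdict(list)
    for i, (ts, ta, ss, sa) in enumerate(matches):
        ratio = ss / ts                 # scale ratio between scene and template keypoint
        dangle = (sa - ta) % 360.0      # orientation difference, wrapped to [0, 360)
        key = (round(ratio / ratio_bin), round(dangle / angle_bin))
        votes[key].append(i)
    # The most-voted bin corresponds to the consistent rotation/scaling hypothesis.
    return max(votes.values(), key=len)

# Synthetic example: four consistent matches (scale x2, rotation 30 degrees)
# plus two mismatches that vote into other bins.
matches = [
    (1.0, 10.0, 2.0, 40.0),
    (1.5, 100.0, 3.0, 130.0),
    (0.8, 200.0, 1.6, 230.0),
    (2.0, 350.0, 4.0, 20.0),   # orientation difference wraps past 360 degrees
    (1.0, 50.0, 0.5, 300.0),   # mismatch: wrong scale ratio and angle
    (1.2, 80.0, 5.0, 10.0),    # mismatch
]
inliers = filter_matches_by_voting(matches)
```

The dominant bin here collects the first four matches, and the recovered bin center also yields the rotation and scaling estimate that the paper uses for the subsequent diameter-ratio and angle-difference check.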


Keywords: scale invariant feature transform, image matching, voting mechanism, optimal scaling level, wrong matching points





Copyright information

© Pleiades Publishing, Ltd. 2018

Authors and Affiliations

1. School of Automation, Wuhan University of Technology, Wuhan, China
