F-SORT: An Alternative for Faster Geometric Verification

  • Jacob Chan
  • Jimmy Addison Lee
  • Kemao Qian
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10111)


This paper presents a novel geometric verification approach coined the Fast Sequence Order Re-sorting Technique (F-SORT), capable of rapidly validating matches between images under arbitrary viewing conditions. By re-sorting image features into local sequence groups and performing geometric validation along different orientations, we simulate the enforcement of geometric constraints within each sequence group across various views and rotations. While conventional geometric verification (e.g. RANSAC) and state-of-the-art fully affine invariant image matching approaches (e.g. ASIFT) are high in computational cost, our approach is several times less computationally expensive. We evaluate F-SORT on the Stanford Mobile Visual Search (SMVS) and Zurich Buildings (ZuBuD) image databases, comprising 9 image categories in total, and report competitive performance with respect to PROSAC, RANSAC and ASIFT. Out of the 9 categories, F-SORT outperforms PROSAC in all 9, RANSAC in 8 and ASIFT in 7, while reducing computational cost by over nine-fold, thirty-fold and a hundred-fold, respectively.
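The core idea described above, re-sorting matched features into local sequence groups and checking order consistency along a chosen orientation, can be sketched as follows. This is a minimal illustration only: the function name, the projection-based ordering, and the grouping parameters are assumptions for exposition, not the authors' actual F-SORT implementation.

```python
import numpy as np

def sequence_order_inliers(pts_a, pts_b, angle=0.0, n_groups=4):
    """Hedged sketch of sequence-order verification: project matched
    keypoints onto a direction, re-sort the pairs into local sequence
    groups, and keep pairs whose rank order agrees in both images.
    pts_a, pts_b: (N, 2) arrays of matched keypoint coordinates."""
    d = np.array([np.cos(angle), np.sin(angle)])
    proj_a = pts_a @ d           # 1-D ordering key along the direction, image A
    proj_b = pts_b @ d           # same direction, image B
    order = np.argsort(proj_a)   # re-sort all pairs by their image-A position
    inliers = []
    # split the re-sorted sequence into local groups and verify each group
    for g in np.array_split(order, n_groups):
        # within a local group, a geometrically consistent match set should
        # preserve (roughly) the same ordering in image B
        ranks_b = np.argsort(np.argsort(proj_b[g]))
        expected = np.arange(len(g))
        inliers.extend(int(i) for i, rb, e in zip(g, ranks_b, expected)
                       if rb == e)
    return inliers
```

In practice one would repeat this test over several orientations (the "different orientations" of the abstract) and aggregate the surviving pairs; order checks like this cost a sort per group rather than the repeated model fitting of RANSAC-style hypothesize-and-verify loops.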


Keywords: Image category · Image match · Sequence group · Feature pair · Visual odometry



This research was partially supported by National Research Foundation, Prime Minister’s Office, Singapore under its IDM Futures Funding Initiative and AcRF Tier 1 (RG28/15).



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. School of Computer Engineering (SCE), Nanyang Technological University, Singapore
  2. Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore
