
Interest Point Detectors Stability Evaluation on ApolloScape Dataset

  • Jacek Komorowski
  • Konrad Czarnota
  • Tomasz Trzcinski
  • Lukasz Dabala
  • Simon Lynen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11133)

Abstract

In recent years, a number of novel, deep-learning-based interest point detectors, such as LIFT, DELF, SuperPoint or LF-Net, have been proposed. However, there is no standard benchmark for evaluating the suitability of these novel keypoint detectors for real-life applications such as autonomous driving. Traditional benchmarks (e.g. Oxford VGG) are rather limited, as they consist of relatively few images of mostly planar scenes taken in favourable conditions. In this paper we verify whether the recent, deep-learning-based interest point detectors have an advantage over the traditional, hand-crafted keypoint detectors. To this end, we evaluate the stability of a number of hand-crafted and recent, learning-based interest point detectors on the street-level view ApolloScape dataset.
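Stability evaluations of this kind typically rest on keypoint repeatability: the fraction of detections in one image that reappear, within a pixel tolerance, at the geometrically corresponding location in another image of the same scene. The sketch below is illustrative only, not the paper's exact protocol; the function name, the homography-based correspondence, and the 2-pixel tolerance are assumptions.

```python
import numpy as np

def repeatability(kpts_a, kpts_b, H, eps=2.0):
    """Fraction of keypoints detected in image A that are re-detected in image B.

    kpts_a, kpts_b: (N, 2) and (M, 2) arrays of (x, y) keypoint locations.
    H: 3x3 homography mapping image-A coordinates into image-B coordinates.
    eps: pixel tolerance for counting a keypoint as re-detected.
    """
    if len(kpts_a) == 0 or len(kpts_b) == 0:
        return 0.0
    # Project A's keypoints into B's frame using homogeneous coordinates.
    pts = np.hstack([kpts_a, np.ones((len(kpts_a), 1))])
    proj = pts @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    # Distance from each projected point to its nearest detection in B.
    dists = np.linalg.norm(proj[:, None, :] - kpts_b[None, :, :], axis=2)
    nearest = dists.min(axis=1)
    return float((nearest <= eps).mean())
```

With an identity homography and identical keypoint sets this returns 1.0; detections displaced beyond `eps` pixels count as lost, so the score degrades as a detector becomes less stable across viewpoint or illumination changes.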

Keywords

Keypoint detectors · Interest point stability

Notes

Acknowledgement

This research was supported by Google Sponsor Research Agreement under the project “Efficient visual localization on mobile devices”.

The Titan X Pascal used for this research was donated by the NVIDIA Corporation.

References

  1. Agarwal, S., et al.: Building Rome in a day. Commun. ACM 54(10), 105–112 (2011)
  2. Brown, M., Lowe, D.G.: Automatic panoramic image stitching using invariant features. IJCV 74(1), 59–73 (2007)
  3. Lynen, S., Sattler, T., Bosse, M., Hesch, J.A., Pollefeys, M., Siegwart, R.: Get out of my lab: large-scale, real-time visual-inertial localization. In: Robotics: Science and Systems (2015)
  4. Yi, K.M., Trulls, E., Lepetit, V., Fua, P.: LIFT: learned invariant feature transform. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9910, pp. 467–483. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46466-4_28
  5. Noh, H., Araujo, A., Sim, J., Weyand, T., Han, B.: Large-scale image retrieval with attentive deep local features. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3456–3465 (2017)
  6. DeTone, D., Malisiewicz, T., Rabinovich, A.: SuperPoint: self-supervised interest point detection and description. arXiv preprint arXiv:1712.07629 (2017)
  7. Harris, C., Stephens, M.: A combined corner and edge detector. In: Alvey Vision Conference, pp. 147–151 (1988)
  8. Lindeberg, T.: Feature detection with automatic scale selection. Int. J. Comput. Vis. 30(2), 79–116 (1998)
  9. Lowe, D.G.: Object recognition from local scale-invariant features. In: The Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, pp. 1150–1157. IEEE (1999)
  10. Rosten, E., Drummond, T.: Machine learning for high-speed corner detection. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 430–443. Springer, Heidelberg (2006). https://doi.org/10.1007/11744023_34
  11. Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: an efficient alternative to SIFT or SURF. In: 2011 IEEE International Conference on Computer Vision (ICCV), pp. 2564–2571. IEEE (2011)
  12. Savinov, N., Seki, A., Ladicky, L., Sattler, T., Pollefeys, M.: Quad-networks: unsupervised learning to rank for interest point detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
  13. Ono, Y., Trulls, E., Fua, P., Yi, K.M.: LF-Net: learning local features from images. arXiv preprint arXiv:1805.09662 (2018)
  14. Mikolajczyk, K., Schmid, C.: Scale & affine invariant interest point detectors. Int. J. Comput. Vis. 60(1), 63–86 (2004)
  15. Tuytelaars, T., Mikolajczyk, K., et al.: Local invariant feature detectors: a survey. Found. Trends Comput. Graph. Vis. 3(3), 177–280 (2008)
  16. Huang, X., et al.: The ApolloScape dataset for autonomous driving. arXiv preprint arXiv:1803.06184 (2018)
  17. Strecha, C., Lindner, A., Ali, K., Fua, P.: Training for task specific keypoint detection. In: Denzler, J., Notni, G., Süße, H. (eds.) DAGM 2009. LNCS, vol. 5748, pp. 151–160. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03798-6_16
  18. Verdie, Y., Yi, K., Fua, P., Lepetit, V.: TILDE: a temporally invariant learned detector. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5279–5288 (2015)
  19. Altwaijry, H., Veit, A., Belongie, S.J.: Learning to detect and match keypoints with deep architectures. In: BMVC (2016)
  20. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  21. Schmid, C., Mohr, R., Bauckhage, C.: Comparing and evaluating interest points. In: Sixth International Conference on Computer Vision, pp. 230–235. IEEE (1998)
  22. Balntas, V., Lenc, K., Vedaldi, A., Mikolajczyk, K.: HPatches: a benchmark and evaluation of handcrafted and learned local descriptors. In: Conference on Computer Vision and Pattern Recognition (CVPR), vol. 4, p. 6 (2017)
  23. Alcantarilla, P.F., Nuevo, J., Bartoli, A.: Fast explicit diffusion for accelerated features in nonlinear scale spaces. In: British Machine Vision Conference (BMVC) (2013)
  24. Mair, E., Hager, G.D., Burschka, D., Suppa, M., Hirzinger, G.: Adaptive and generic corner detection based on the accelerated segment test. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6312, pp. 183–196. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15552-9_14
  25. Aldana-Iuit, J., Mishkin, D., Chum, O., Matas, J.: In the saddle: chasing fast and repeatable features. In: 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 675–680. IEEE (2016)
  26. Maddern, W., Pascoe, G., Linegar, C., Newman, P.: 1 year, 1000 km: the Oxford RobotCar dataset. Int. J. Robot. Res. (IJRR) 36(1), 3–15 (2017)
  27. Yu, F., et al.: BDD100K: a diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687 (2018)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Warsaw University of Technology, Warsaw, Poland
  2. Tooploox, Wrocław, Poland
  3. Google, Mountain View, USA
