Information Theoretic Rotationwise Robust Binary Descriptor Learning

  • Youssef El Rhabi
  • Loic Simon
  • Luc Brun
  • Josep Llados Canet
  • Felipe Lumbreras
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10029)


In this paper, we propose a new data-driven approach for binary descriptor selection. In order to provide a clear analysis of common designs, we present a general information-theoretic selection paradigm. It encompasses several standard binary descriptor construction schemes, including a recent state-of-the-art one named BOLD. Like BOLD, we aim to increase the stability of the produced descriptors with respect to rotations. To achieve this goal, we have designed a novel offline selection criterion that is better adapted to the online matching procedure. The effectiveness of our approach is demonstrated on two standard datasets, where our descriptor is compared to BOLD and to several classical descriptors. In particular, our approach achieves performance equivalent to, if not better than, BOLD while relying on descriptors half as long. Such an improvement can be decisive for real-time applications.
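The abstract does not spell out the selection criterion itself, but the general idea of information-theoretic binary descriptor selection can be illustrated generically. The sketch below (hypothetical helper names `select_bits` and `hamming`; this is NOT the paper's criterion) greedily picks binary tests that have high entropy while penalizing redundancy with already chosen bits, in the spirit of max-relevance/min-redundancy criteria, and shows the Hamming-distance matching that makes shorter binary descriptors directly cheaper online:

```python
import numpy as np

def select_bits(responses, k):
    """Greedy information-theoretic bit selection (generic sketch only).

    responses: (n_patches, n_tests) 0/1 array of binary test outcomes
    k: number of bits to keep
    Returns the indices of the k selected tests.
    """
    n_patches, n_tests = responses.shape
    # Marginal entropy of each candidate bit: highest when p is near 0.5.
    p = responses.mean(axis=0)
    entropy = -(p * np.log2(p + 1e-12) + (1 - p) * np.log2(1 - p + 1e-12))
    selected = [int(np.argmax(entropy))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_tests):
            if j in selected:
                continue
            # Redundancy proxy: strongest absolute correlation with any
            # already-selected bit.
            redundancy = max(
                abs(np.corrcoef(responses[:, j], responses[:, s])[0, 1])
                for s in selected
            )
            score = entropy[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

def hamming(a, b):
    """Hamming distance between two binary descriptors (0/1 arrays).
    Matching cost scales linearly with descriptor length, hence the
    interest of descriptors half as long."""
    return int(np.count_nonzero(np.asarray(a) != np.asarray(b)))
```

In this toy form the selection runs offline over a set of training patches, while `hamming` is the only operation needed online, which is why halving the descriptor length directly halves the matching cost.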


Feature Selection · Natural Transformation · Binary Feature · Information Quantity · Visual Odometry
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Alahi, A., Ortiz, R., Vandergheynst, P.: FREAK: fast retina keypoint. In: CVPR, pp. 510–517. IEEE (2012)
  2. Alcantarilla, P.F., Nuevo, J., Bartoli, A.: Fast explicit diffusion for accelerated features in nonlinear scale spaces. In: BMVC (2013)
  3. Balntas, V., Tang, L., Mikolajczyk, K.: BOLD - binary online learned descriptor for efficient image matching. In: CVPR, pp. 2367–2375 (2015)
  4. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: speeded up robust features. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 404–417. Springer, Heidelberg (2006). doi:10.1007/11744023_32
  5. Brown, M., Lowe, D.G.: Automatic panoramic image stitching using invariant features. IJCV 74(1), 59–73 (2007)
  6. Calonder, M., Lepetit, V., Strecha, C., Fua, P.: BRIEF: binary robust independent elementary features. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6314, pp. 778–792. Springer, Heidelberg (2010). doi:10.1007/978-3-642-15561-1_56
  7. Fan, B., Kong, Q., Sui, W., Wang, Z., Wang, X., Xiang, S., Pan, C., Fua, P.: Do we need binary features for 3D reconstruction? arXiv:1602.04502 (2016)
  8. Hua, G., Brown, M., Winder, S.: Discriminant embedding for local image descriptors. In: ICCV, pp. 1–8. IEEE (2007)
  9. Indyk, P., Motwani, R.: Approximate nearest neighbors: towards removing the curse of dimensionality. In: Proceedings of the 30th Annual ACM Symposium on Theory of Computing, pp. 604–613. ACM (1998)
  10. Ke, Y., Sukthankar, R.: PCA-SIFT: a more distinctive representation for local image descriptors. In: CVPR, vol. 2, pp. II-506. IEEE (2004)
  11. Leutenegger, S., Chli, M., Siegwart, R.Y.: BRISK: binary robust invariant scalable keypoints. In: ICCV, pp. 2548–2555. IEEE (2011)
  12. Lowe, D.G.: Object recognition from local scale-invariant features. In: ICCV, vol. 2, pp. 1150–1157. IEEE (1999)
  13. Mikolajczyk, K., Schmid, C.: Scale & affine invariant interest point detectors. IJCV 60(1), 63–86 (2004)
  14. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. PAMI 27(10), 1615–1630 (2005)
  15. Moulon, P., Monasse, P., Marlet, R.: Global fusion of relative motions for robust, accurate and scalable structure from motion. In: ICCV, pp. 3248–3255 (2013)
  16. Muja, M., Lowe, D.G.: Fast matching of binary features. In: CRV. IEEE (2012)
  17. Ojala, T., Pietikäinen, M., Harwood, D.: A comparative study of texture measures with classification based on featured distributions. Pattern Recogn. 29(1), 51–59 (1996)
  18. Peng, H., Long, F., Ding, C.: Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. PAMI 27(8), 1226–1238 (2005)
  19. Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: an efficient alternative to SIFT or SURF. In: ICCV, pp. 2564–2571. IEEE (2011)
  20. Sechidis, K., Nikolaou, N., Brown, G.: Information theoretic feature selection in multi-label data through composite likelihood. In: S+SSPR, pp. 143–152 (2014)
  21. Trzcinski, T., Christoudias, M., Fua, P., Lepetit, V.: Boosting binary keypoint descriptors. In: CVPR, pp. 2874–2881 (2013)
  22. Wang, Z., Fan, B., Wu, F.: Local intensity order pattern for feature description. In: ICCV, pp. 603–610. IEEE (2011)
  23. Yang, X., Cheng, K.T.: LDB: an ultra-fast feature for scalable augmented reality on mobile devices. In: ISMAR, pp. 49–57. IEEE (2012)
  24. Zabih, R., Woodfill, J.: Non-parametric local transforms for computing visual correspondence. In: Eklundh, J.-O. (ed.) ECCV 1994. LNCS, vol. 801, pp. 151–158. Springer, Heidelberg (1994). doi:10.1007/BFb0028345
  25. Zagoruyko, S., Komodakis, N.: Learning to compare image patches via convolutional neural networks. In: CVPR, pp. 4353–4361 (2015)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Youssef El Rhabi (2)
  • Loic Simon (1)
  • Luc Brun (1)
  • Josep Llados Canet (3)
  • Felipe Lumbreras (3)

  1. Groupe de Recherche en Informatique, Image, Automatique et Instrumentation de Caen (GREYC), Normandie Univ, UNICAEN, ENSICAEN, CNRS, Caen, France
  2. 44screens, Paris, France
  3. Computer Vision Center, Dep. Informàtica, Universitat Autònoma de Barcelona, Bellaterra (Barcelona), Spain
