LSQ++: Lower Running Time and Higher Recall in Multi-codebook Quantization

  • Julieta Martinez
  • Shobhit Zakhmi
  • Holger H. Hoos
  • James J. Little
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11220)

Abstract

Multi-codebook quantization (MCQ) is the task of expressing a set of vectors as accurately as possible in terms of discrete entries in multiple bases. Work in MCQ is heavily focused on lowering quantization error, thereby improving distance estimation and recall on benchmarks of visual descriptors at a fixed memory budget. However, recent studies and methods in this area are hard to compare against each other, because they use different datasets, different protocols, and, perhaps most importantly, different computational budgets. In this work, we first benchmark a series of MCQ baselines on an equal footing and provide an analysis of their recall-vs-running-time performance. We observe that local search quantization (LSQ) is in practice much faster than its competitors, but is not the most accurate method in all cases. We then introduce two novel improvements that render LSQ (i) more accurate and (ii) faster. These improvements are easy to implement, and define a new state of the art in MCQ.
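
To make the setup concrete, the sketch below illustrates the general MCQ formulation, assuming synthetic data and a plain ICM-style (iterated conditional modes) local search for the encoding step. This is an illustrative sketch, not the LSQ++ algorithm introduced in the paper; the function names (`decode`, `encode_icm`) and all sizes are assumptions. MCQ approximates each vector x as the sum of one entry from each of M codebooks, x ≈ Σ_m C_m[b_m], so only the M code indices need to be stored.

```python
# Minimal MCQ sketch: synthetic data, fixed random codebooks, and an
# ICM-style local search over codes. Illustrative only; not the exact
# LSQ/LSQ++ method from the paper.
import numpy as np

rng = np.random.default_rng(0)

D, M, K = 32, 4, 256   # vector dimension, number of codebooks, entries each
N = 1000               # number of database vectors

X = rng.standard_normal((N, D)).astype(np.float32)     # vectors to compress
C = rng.standard_normal((M, K, D)).astype(np.float32)  # M codebooks of K entries

def decode(codes, C):
    """Reconstruct vectors as the sum of one entry per codebook.
    codes: (N, M) integer array; returns an (N, D) array."""
    return sum(C[m][codes[:, m]] for m in range(C.shape[0]))

def encode_icm(X, C, codes, n_iters=3):
    """Iterated conditional modes: update one codebook's codes at a time,
    holding the others fixed, so each update is an exact 1-of-K choice."""
    M = C.shape[0]
    for _ in range(n_iters):
        for m in range(M):
            # Residual with the contribution of codebook m removed.
            resid = X - decode(codes, C) + C[m][codes[:, m]]
            # Pick the entry of codebook m closest to each residual.
            d2 = ((resid[:, None, :] - C[m][None, :, :]) ** 2).sum(-1)
            codes[:, m] = d2.argmin(axis=1)
    return codes

codes = rng.integers(0, K, size=(N, M))   # random initial codes
codes = encode_icm(X, C, codes)
err = ((X - decode(codes, C)) ** 2).sum() / N
print(f"mean squared quantization error: {err:.4f}")
```

With K = 256 entries per codebook, each code fits in one byte, so a compressed vector costs M bytes; methods in this family differ mainly in how accurately and how quickly they solve the encoding step above, which is the axis the paper's benchmark compares.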

Acknowledgements

We thank NVIDIA for the donation of GPUs used in this project. Shobhit was supported by a Mitacs Globalink research internship while at UBC. We also thank Ioan Andrei Bârsan for proofreading our work, and anonymous reviewers for multiple comments that improved this project. This research was supported in part by NSERC.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Julieta Martinez (1, 2)
  • Shobhit Zakhmi (1)
  • Holger H. Hoos (1, 3)
  • James J. Little (1)
  1. University of British Columbia (UBC), Vancouver, Canada
  2. Uber ATG, Toronto, Canada
  3. Leiden Institute of Advanced Computer Science (LIACS), Leiden, Netherlands