Abstract
Many current applications need to organize data with respect to mutual similarity between data objects. A typical general strategy for retrieving objects similar to a given sample is to access and then refine a candidate set of objects. We propose an indexing and search technique that can significantly reduce the candidate set size by combining several space partitionings. Specifically, we propose a mapping of objects from a generic metric space onto main-memory codes using several pivot spaces; our search algorithm first ranks objects within each pivot space and then aggregates these rankings, producing a candidate set reduced by two orders of magnitude while keeping the same answer quality. Our approach is designed to exploit contemporary hardware well: (1) larger main memories allow us to use a rich and fast index, (2) multi-core CPUs suit our parallel search algorithm, and (3) SSDs without mechanical seeks enable efficient selective retrieval of candidate objects. The gain from the significant candidate set reduction is paid for by the overhead of the candidate ranking algorithm, so our approach is most advantageous for datasets with expensive candidate set refinement, i.e., large data objects or an expensive similarity function. On real-life datasets, our approach achieves a search time speedup by a factor of two to five.
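The overall scheme described above can be illustrated by a small sketch. The snippet below is not the paper's actual aggregation function; it uses median rank as one simple instance of rank aggregation, and all names and data are illustrative.

```python
# Sketch of the search scheme: objects are ranked independently in several
# pivot spaces and the per-space rankings are aggregated (here by median
# rank) to produce a small candidate set for refinement.
import statistics

def candidate_set(rankings, k):
    """rankings[j] lists object IDs of pivot space j, ordered from most
    to least similar to the query; returns the k objects with the lowest
    aggregated (median) rank."""
    position = [{obj: rank for rank, obj in enumerate(r)} for r in rankings]
    objects = set().union(*(p.keys() for p in position))
    agg = {o: statistics.median(p[o] for p in position) for o in objects}
    return sorted(objects, key=lambda o: agg[o])[:k]

rankings = [
    ["a", "b", "c", "d"],
    ["b", "a", "d", "c"],
    ["a", "c", "b", "d"],
]
print(candidate_set(rankings, 2))  # → ['a', 'b']
```

Only the few top-aggregated objects are then retrieved from disk and refined by the exact similarity function, which is where the two-orders-of-magnitude candidate reduction pays off.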
Keywords
- Search Time
- Voronoi Cell
- Probe Depth
- Candidate Object
- Query Object
Acknowledgments
This work was supported by Czech Research Foundation project P103/12/G084.
Appendices
Appendix I: Correctness of Algorithm 2
Lemma 1
If d maintains Eq. (8), then Algorithm 2 GetNextIDs(q, j) returns the \(\mathrm {ID}\)s of objects with the lowest j-th ranking \(\psi _q^j(x)\), \(j\in \varLambda \), that have not been returned so far.
Proof
The algorithm returns \(\mathrm {ID}\)s from Q, which contains both nodes and \(\mathrm {ID}\)s from the j-th PPP-Tree. Because every node and \(\mathrm {ID}\) is inserted into Q at most once, the algorithm always returns something unless all \(\mathrm {ID}\)s have already been returned. Q is ordered by \(d(q,\langle i_1,\ldots ,i_{l'}\rangle )\), where \(\langle i_1,\ldots ,i_{l'}\rangle \) is either a path to a node or equal to \(\varPi _x^j(1..l)\) for \(\mathrm {ID}_x\) (recall that \(d(q,\varPi _x^j(1..l))\) generates \(\psi _q^j(x)\)). Let \(\mathrm {ID}_x\) be returned by the algorithm; we prove the lemma by contradiction. Assume there exists \(\mathrm {ID}_y\) such that \(d(q,\varPi _y^j(1..l))<d(q,\varPi _x^j(1..l))\) and \(\mathrm {ID}_y\) was not returned by the algorithm. If \(\mathrm {ID}_y\) is in Q then it must be ahead of \(\mathrm {ID}_x\) (contradiction). Thus \(\mathrm {ID}_y\) is not in Q, but Q must contain a node with path \(\langle i_1,\ldots ,i_{l'}\rangle \), \(l'\le l\), such that \(\langle i_1,\ldots ,i_{l'}\rangle =\varPi _y^j(1..l')\), because Q initially contains the root of the j-th PPP-Tree and then, recursively, all child nodes are inserted into Q (line 8). Because of (8), \(d(q,\langle i_1,\ldots ,i_{l'}\rangle )\le d(q,\varPi _y^j(1..l))<d(q,\varPi _x^j(1..l))\), which contradicts the fact that \(\mathrm {ID}_x\) was on top of Q.
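The best-first traversal that the proof reasons about can be sketched as follows. This is not the actual PPP-Tree implementation: the tree layout and distances are simplified stand-ins, and the only property exercised is the monotonicity condition (8), i.e. a node's distance lower-bounds the distances of everything beneath it.

```python
# Best-first traversal with a single priority queue holding both tree
# nodes and IDs; condition (8) guarantees that when an ID reaches the top
# of the queue, nothing cheaper can still be hidden inside an unexpanded
# node, so IDs are yielded in nondecreasing distance order.
import heapq

def ids_in_order(root):
    """root: node as dict {'d': lower_bound, 'ids': [(dist, id), ...],
    'children': [child, ...]} with child['d'] >= node['d'].
    Yields (dist, id) pairs in nondecreasing dist order."""
    q = [(root["d"], 0, root)]
    n = 1  # insertion counter: breaks ties so dicts are never compared
    while q:
        d, _, item = heapq.heappop(q)
        if not isinstance(item, dict):
            yield d, item                       # an ID: safe to report
        else:                                   # a node: expand lazily
            for dist, ident in item["ids"]:
                heapq.heappush(q, (dist, n, ident)); n += 1
            for child in item["children"]:
                heapq.heappush(q, (child["d"], n, child)); n += 1

tree = {"d": 0, "ids": [(5, "x")], "children": [
    {"d": 1, "ids": [(2, "y"), (7, "w")], "children": []},
    {"d": 3, "ids": [(3, "z")], "children": []},
]}
print([i for _, i in ids_in_order(tree)])  # → ['y', 'z', 'x', 'w']
```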
Appendix II: Optimizations of Algorithm 2
The complexity of the GetNextIDs routine is \(O(|Q|\cdot \log |Q|)\), and the length of Q depends on the “tightness” of Eq. (8). We propose the following optimizations for the \(d_{\varDelta }\) distance.
Optimization 1. Distance \(d_{\varDelta }(q,\varPi (1..l'))\) between q and a PP prefix on level \(l'<l\) is corrected so that it returns the minimum theoretical distance to PPP-Codes on level l with prefix \(\varPi (1..l')\): the prefix is virtually extended to \(\varPi (1..l')\oplus \varPi _q(1..l\!-\!l')\), where \(\oplus \,\varPi _q(1..l\!-\!l')\) denotes concatenation of the pivot indexes closest to the query. This correction does not break condition (8) but, in our test cases, it reduced the queue length to 0.4–0.7 of its original size.
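A minimal sketch of this correction, assuming (as for \(d_{\varDelta }\)) that the distance sums an independent per-level addend; the `q_addends` structure and all values are illustrative, not the paper's actual formulas.

```python
# Optimization 1 as a lower bound: a prefix at level l' < l is scored as
# if completed with the query's closest pivot at every remaining level,
# which is the smallest distance any PPP-Code below that node can attain.

def corrected_prefix_distance(q_addends, prefix):
    """q_addends[level] maps pivot index -> per-level addend for the query.
    Returns the distance of `prefix` plus, for each missing level, the
    minimum addend (i.e. the pivot closest to the query at that level)."""
    l = len(q_addends)
    d = sum(q_addends[lev][p] for lev, p in enumerate(prefix))
    d += sum(min(q_addends[lev].values()) for lev in range(len(prefix), l))
    return d

q_addends = [{1: 0.0, 2: 0.5}, {1: 0.25, 2: 0.125}, {1: 0.25, 2: 0.5}]
print(corrected_prefix_distance(q_addends, [2]))  # 0.5 + 0.125 + 0.25 = 0.875
```

Because the corrected value never exceeds the true distance of any code below the node, the priority queue ordering of Algorithm 2 remains valid; it is merely tighter, so fewer nodes linger in Q.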
Optimization 2. This optimization is relatively straightforward: a leaf of the PPP-Tree at level \(l'<l\) can keep \(\mathrm {ID}\)s with the same PP suffix together as \(\langle \varPi (l'\!+\!1..l);\mathrm {ID}_{x_1},\ldots ,\mathrm {ID}_{x_m}\rangle \) (see Sect. 4.1 for the original proposal); the list of \(\mathrm {ID}\)s can be further compressed, e.g. using delta encoding. This reduces the index memory and, especially, slightly reduces the size of Q, because such an entry is inserted into Q only once.
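The delta encoding mentioned here is standard; a minimal sketch (with made-up IDs) looks like this:

```python
# Delta encoding of a sorted ID list stored in a leaf entry: consecutive
# IDs are replaced by their gaps, which are small numbers and therefore
# compress well with variable-length integer codes.

def delta_encode(ids):
    return [ids[0]] + [b - a for a, b in zip(ids, ids[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

ids = [1004, 1007, 1012, 1050]
print(delta_encode(ids))                # → [1004, 3, 5, 38]
assert delta_decode(delta_encode(ids)) == ids
```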
Optimization 3. Another important cost component of the GetNextIDs algorithm is the evaluation of distances \(d(q,\langle i_1,\ldots ,i_{l'},i_{l'+1}\rangle )\) for each item added into Q (lines 8 and 12). If the formula of distance d is a sum of independent values for each level from 1 to \(l'\!+\!1\) (as is the case for the \(d_{\varDelta }\) distance \((4^{\prime })\)), then the value of \(d(q,\langle i_1,\ldots ,i_{l'},i_{l'+1}\rangle )\) can be calculated as the distance of its parent node, \( dist =d(q, \langle i_1,\ldots ,i_{l'}\rangle )\), plus the addend for level \(l'+1\). In this way, the distances are calculated stepwise and no calculation is repeated.
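Under the same per-level-addend assumption as above, the incremental computation is a one-liner; the `q_addends` structure is again illustrative.

```python
# Optimization 3: when a child is pushed into Q, its distance is the
# parent's already-known distance plus a single addend, so each per-level
# addend is computed exactly once along any root-to-leaf path.

def child_distance(parent_dist, q_addends, level, pivot_index):
    """Distance of the child path <i_1,...,i_l',i_{l'+1}>, computed from
    the parent's distance instead of re-summing the whole prefix."""
    return parent_dist + q_addends[level][pivot_index]

q_addends = [{1: 0.0, 2: 0.5}, {1: 0.25, 2: 0.125}]
d_root = 0.0
d1 = child_distance(d_root, q_addends, 0, 2)  # path <2>:    0.5
d2 = child_distance(d1, q_addends, 1, 1)      # path <2,1>:  0.75
```

Pushing a child thus costs O(1) distance work on top of the heap operation, instead of O(l') for re-summing the prefix.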
Copyright information
© 2016 Springer-Verlag Berlin Heidelberg
Cite this chapter
Novak, D., Zezula, P. (2016). PPP-Codes for Large-Scale Similarity Searching. In: Hameurlain, A., Küng, J., Wagner, R., Decker, H., Lhotska, L., Link, S. (eds) Transactions on Large-Scale Data- and Knowledge-Centered Systems XXIV. Lecture Notes in Computer Science(), vol 9510. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-49214-7_2
Print ISBN: 978-3-662-49213-0
Online ISBN: 978-3-662-49214-7