
Optimal Subspace Dimensionality for k-NN Search on Clustered Datasets

  • Yue Li
  • Alexander Thomasian
  • Lijuan Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3180)

Abstract

Content-based retrieval is an important paradigm in multimedia applications. It relies heavily on k-Nearest-Neighbor (k-NN) queries applied to high-dimensional feature vectors representing objects. Dimensionality Reduction (DR) of high-dimensional datasets via Principal Component Analysis (PCA) is an effective method to reduce the cost of processing k-NN queries on multi-dimensional indices. The distance information loss is quantified by the Normalized Mean Square Error (NMSE), which is determined by the number of retained dimensions (n). For smaller n the cost of accessing the index (an SR-tree in our study) for k-NN search is lower, but the postprocessing cost to achieve exact query processing is higher. The optimum value n_opt can be determined experimentally by considering cost as a function of n. We concern ourselves with a local DR method, which applies DR to clusters of the original dataset. Clusters are obtained via a PCA-friendly clustering method, which also determines the number of clusters. For a given NMSE we use an algorithm developed in conjunction with the Clustered SVD (CSVD) method to determine the vector of the numbers of dimensions retained in the clusters (n). The NMSE is varied to determine the optimum n, which minimizes the number of accessed pages. To verify the robustness of our methodology we experimented with one synthetic and three real-world datasets. We observe that the NMSE yielding the optimum n varies over a narrow range, and that the optimal value is expected to be applicable to datasets with similar characteristics.
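As a rough illustration of the ideas above, the following sketch (not the authors' implementation; all function names are hypothetical) uses PCA via the singular value decomposition to reduce dimensionality, computes the NMSE incurred by retaining n dimensions, and finds the smallest n meeting a target NMSE, which is the per-cluster step a CSVD-style local DR method would perform:

```python
import numpy as np

# Illustrative sketch only, not the paper's code: PCA-based dimensionality
# reduction, the NMSE of retaining n dimensions, and the smallest n whose
# NMSE stays below a target. Function names are hypothetical.

def pca_reduce(X, n):
    """Project the rows of X onto the top-n principal components."""
    Xc = X - X.mean(axis=0)                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n].T                         # reduced-dimension data

def nmse(X, n):
    """Fraction of total variance discarded by retaining n dimensions."""
    Xc = X - X.mean(axis=0)
    S = np.linalg.svd(Xc, compute_uv=False)
    return np.sum(S[n:] ** 2) / np.sum(S ** 2)

def dims_for_target(X, target_nmse):
    """Smallest n whose discarded variance does not exceed target_nmse."""
    Xc = X - X.mean(axis=0)
    S = np.linalg.svd(Xc, compute_uv=False)
    retained = np.cumsum(S ** 2) / np.sum(S ** 2)
    return int(np.searchsorted(retained, 1.0 - target_nmse) + 1)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))
errs = [nmse(X, n) for n in range(1, 21)]        # NMSE shrinks as n grows
```

In the local setting described in the abstract, the same computation would be applied to each cluster separately, yielding a vector of retained dimensions; sweeping the target NMSE while measuring the number of index pages accessed would then locate n_opt experimentally.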

Keywords

Subspace Dimensionality, Normalized Mean Square Error, Query Cost, Very Large Data Base, High-Dimensional Feature Vector


References

  1. Böhm, C.: A cost model for query processing in high-dimensional data space. ACM Trans. on Database Systems (TODS) 25(2), 129–178 (2000)
  2. Singh, A.K., Lang, C.A.: Modeling high-dimensional index structures using sampling. In: Proc. ACM SIGMOD Int'l Conf. on Management of Data, Santa Barbara, CA, pp. 389–400 (2001)
  3. Castelli, V., Thomasian, A., Li, C.S.: CSVD: Clustering and singular value decomposition for approximate similarity search in high dimensional spaces. IEEE Trans. on Knowledge and Data Engineering (TKDE) 14(3), 671–685 (2003)
  4. Chakrabarti, K., Mehrotra, S.: Local dimensionality reduction: A new approach to indexing high dimensional space. In: Proc. Int'l Conf. on Very Large Data Bases (VLDB), Cairo, Egypt, pp. 89–100 (2000)
  5. Faloutsos, C.: Searching Multimedia Databases by Content. Kluwer Academic Publishers, Boston (1996)
  6. Faloutsos, C., Kamel, I.: Beyond uniformity and independence: Analysis of the R-tree using the concept of fractal dimension. In: Proc. ACM Symp. on Principles of Database Systems (PODS), Minneapolis, MN, pp. 4–13 (1994)
  7. Hjaltason, G.R., Samet, H.: Ranking in spatial databases. In: Egenhofer, M.J., Herring, J.R. (eds.) SSD 1995. LNCS, vol. 951, pp. 83–95. Springer, Heidelberg (1995)
  8. Katayama, N., Satoh, S.: The SR-tree: An index structure for high dimensional nearest neighbor queries. In: Proc. ACM SIGMOD Conf. on Management of Data, Tucson, AZ, pp. 369–380 (1997)
  9. Korn, F., Jagadish, H.V., Faloutsos, C.: Efficiently supporting ad hoc queries in large datasets of time sequences. In: Proc. ACM SIGMOD Conf. on Management of Data, Tucson, AZ, pp. 289–300 (1997)
  10. Korn, F., Pagel, B., Faloutsos, C.: On the "dimensionality curse" and the "self-similarity blessing". IEEE Trans. on Knowledge and Data Engineering (TKDE) 13(1), 96–111 (2001)
  11. Korn, F., Sidiropoulos, N., Faloutsos, C., Siegel, E., Protopapas, Z.: Fast nearest neighbor search in medical image databases. In: Proc. 22nd Int'l Conf. on Very Large Data Bases (VLDB), Mumbai, India, pp. 215–226 (1996)
  12. Li, Y., Thomasian, A., Zhang, L.: An exact search algorithm for CSVD. Technical Report ISL-2003-02, Integrated Systems Lab, Computer Science Dept., New Jersey Institute of Technology (2003)
  13. Roussopoulos, N., Kelley, S., Vincent, F.: Nearest neighbor queries. In: Proc. ACM SIGMOD Conf. on Management of Data, pp. 71–79 (1995)
  14. Seidl, T., Kriegel, H.P.: Optimal multi-step k-nearest neighbor search. In: Proc. ACM SIGMOD Int'l Conf. on Management of Data, Seattle, WA, pp. 154–165 (1998)
  15. Theodoridis, Y., Sellis, T.: A model for the prediction of R-tree performance. In: Proc. ACM Symp. on Principles of Database Systems (PODS), Montreal, Canada, pp. 161–171 (1996)
  16. Thomasian, A., Castelli, V., Li, C.S.: RCSVD: Recursive clustering and singular value decomposition for approximate high-dimensionality indexing. In: Proc. Conf. on Information and Knowledge Management (CIKM), Baltimore, MD, pp. 267–272 (1998)
  17. Thomasian, A., Li, Y., Zhang, L.: Performance comparison of local dimensionality reduction methods. Technical Report ISL-2003-01, Integrated Systems Lab, Computer Science Dept., New Jersey Institute of Technology (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Yue Li (1)
  • Alexander Thomasian (1)
  • Lijuan Zhang (1)
  1. Computer Science Department, New Jersey Institute of Technology (NJIT), Newark, USA
