
Fast Algorithms for DR Approximation


Abstract

In nonlinear dimensionality reduction (DR), the size of the kernel matrix is the square of the number of vectors in the data set, and in many applications the number of data vectors is very large. The spectral decomposition of such a large kernel encounters difficulties in at least three respects: large memory usage, high computational complexity, and numerical instability. Although the kernels in some nonlinear DR methods are sparse matrices, which partially overcomes the memory and complexity difficulties, it is not clear that the instability issue can be settled this way. In this chapter, we study fast algorithms that avoid the spectral decomposition of large kernels in DR processing, dramatically reducing memory usage and computational complexity while improving numerical stability. In Section 15.1, we introduce the concept of rank revealing. In Section 15.2, we present randomized low-rank approximation algorithms. In Section 15.3, we introduce greedy rank-revealing algorithms (GAT) and randomized anisotropic transformation algorithms (RAT), which approximate the leading eigenvalues and eigenvectors of DR kernels. Numerical experiments illustrating the validity of these algorithms are presented in Section 15.4, and the justification of the RAT algorithms is given in Section 15.5.
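
As a concrete illustration of the randomized low-rank approximation idea surveyed in this chapter, the minimal sketch below (in Python/NumPy; it is not the chapter's GAT or RAT algorithms themselves, and the function name and parameters are illustrative) approximates the k leading eigenpairs of a symmetric DR kernel by sampling its range with a Gaussian test matrix and solving a small projected eigenproblem, so the full N x N spectral decomposition is never formed.

    import numpy as np

    def randomized_eigs(K, k, oversample=10, seed=0):
        # Approximate the k leading eigenpairs of a symmetric kernel K
        # via a randomized range finder followed by a Rayleigh-Ritz step.
        rng = np.random.default_rng(seed)
        n = K.shape[0]
        Omega = rng.standard_normal((n, k + oversample))  # Gaussian test matrix
        Y = K @ Omega                  # sample the range: O(n^2 (k+p)) work, not O(n^3)
        Q, _ = np.linalg.qr(Y)        # orthonormal basis for the sampled range
        B = Q.T @ K @ Q               # small (k+p) x (k+p) projected kernel
        w, V = np.linalg.eigh(B)      # cheap spectral decomposition of the small matrix
        idx = np.argsort(w)[::-1][:k] # keep the k largest Ritz values
        return w[idx], Q @ V[:, idx]  # lift the Ritz vectors back to R^n

    # Toy usage: a 500 x 500 kernel of numerical rank 5.
    X = np.random.default_rng(1).standard_normal((500, 5))
    K = X @ X.T
    vals, vecs = randomized_eigs(K, k=5)
    print(np.round(vals, 2))

With a modest oversampling parameter, a sketch of this kind typically recovers the leading eigenpairs of a numerically low-rank kernel to high accuracy; Section 15.5 gives the rigorous justification for the related RAT algorithms.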





Copyright information

© 2012 Higher Education Press, Beijing and Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Wang, J. (2012). Fast Algorithms for DR Approximation. In: Geometric Structure of High-Dimensional Data and Dimensionality Reduction. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-27497-8_15



  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-27496-1

  • Online ISBN: 978-3-642-27497-8
