
CUR LRA at Sublinear Cost Based on Volume Maximization

  • Qi Luan
  • Victor Y. Pan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11989)

Abstract

A matrix algorithm runs at sublinear cost if it uses far fewer memory cells and arithmetic operations than the input matrix has entries. Such algorithms are indispensable for Big Data Mining and Analysis, where input matrices are so immense that one can only access a small fraction of their entries. Typically, however, such matrices admit a Low Rank Approximation (LRA), which one can access and process at sublinear cost. But can we compute an LRA at sublinear cost? An adversary argument shows that no algorithm running at sublinear cost can output an accurate LRA of worst-case input matrices, or even of the matrices in the small families of our Appendix A, but we prove that some sublinear cost algorithms output a reasonably close LRA of a matrix W if (i) this matrix is sufficiently close to a low rank matrix or (ii) it is a Symmetric Positive Semidefinite (SPSD) matrix that admits LRA. In both cases the supporting algorithms are deterministic and output LRA in its particularly memory-efficient special form of CUR LRA. The design of our algorithms and the proof of their correctness rely on the results of the extensive previous study of CUR LRA in Numerical Linear Algebra using volume maximization. In case (i) we apply Cross-Approximation (C-A) iterations, which run at sublinear cost and have been computing accurate LRA worldwide for more than a decade. We provide the first formal support for this long-known empirical efficiency, assuming non-degeneracy of the initial submatrix of at least one C-A iteration. We cannot ensure such non-degeneracy at sublinear cost for a worst-case input, but we prove that it holds with high probability (whp) for any initialization in the case of a random or randomized input. Empirically we can replace randomization with sparse multiplicative preprocessing of the input matrix, performed at sublinear cost. In case (ii) we make no additional assumptions about the input class of SPSD matrices admitting LRA or about the initialization of our sublinear cost algorithms for CUR LRA, which promise to be practically valuable. We hope that a proper combination of our deterministic techniques with the randomized LRA methods popular among Computer Science researchers will lead to further progress in LRA.
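To make the central notions concrete: a CUR LRA of an m × n matrix W consists of a submatrix C of k columns of W, a submatrix R of k rows of W, and a generator G, in the pseudo-skeleton case the (pseudo-)inverse of the k × k submatrix on the intersection of those rows and columns [GTZ97], such that the product C·G·R closely approximates W; volume maximization seeks an intersection submatrix of maximal |determinant| among all k × k submatrices. The following minimal Python sketch is our illustration rather than the authors' algorithm (the function name ca_cur, the greedy pivoting rule, and the tolerance are hypothetical choices of ours): it performs k steps of Cross-Approximation with partial pivoting and then assembles the CUR factors, reading only O((m + n)k) of the mn entries and hence running at sublinear cost.

    import numpy as np

    def ca_cur(entry, m, n, k, i0=0, tol=1e-12):
        """Greedy Cross-Approximation sketch (illustrative, not the paper's
        algorithm). entry(i, j) returns W[i, j]; only O((m + n) * k) entries
        are read. Returns C, G, R with W approximately equal to C @ G @ R."""
        rows, cols, us, vs = [], [], [], []
        i = i0                                    # arbitrary starting row
        for _ in range(k):
            # Residual of row i under the current rank-len(us) approximant.
            r = np.array([entry(i, j) for j in range(n)], dtype=float)
            for u, v in zip(us, vs):
                r -= u[i] * v
            j = int(np.argmax(np.abs(r)))         # greedy column pivot
            if abs(r[j]) < tol:                   # degenerate pivot: stop
                break
            # Residual of column j; scale it so the pivot entry equals 1.
            c = np.array([entry(p, j) for p in range(m)], dtype=float)
            for u, v in zip(us, vs):
                c -= u * v[j]
            us.append(c / r[j]); vs.append(r)
            rows.append(i); cols.append(j)
            # Next row pivot: largest residual in column j among unused rows.
            scores = np.abs(c); scores[rows] = -1.0
            i = int(np.argmax(scores))
        # Assemble the CUR factors from the selected index sets.
        C = np.array([[entry(p, j) for j in cols] for p in range(m)])
        R = np.array([[entry(i, q) for q in range(n)] for i in rows])
        G = np.linalg.pinv(C[rows, :])            # pseudo-inverse of W[I, J]
        return C, G, R

    # Demo on a synthetic rank-3 matrix: the residual norm is near zero.
    m, n, k = 500, 400, 3
    rng = np.random.default_rng(1)
    W = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
    C, G, R = ca_cur(lambda i, j: W[i, j], m, n, k)
    print(np.linalg.norm(W - C @ G @ R) / np.linalg.norm(W))

On this synthetic low rank input the printed relative residual is near machine precision; for a worst-case input no such guarantee is possible at sublinear cost, in line with the adversary argument above.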

Keywords

Low Rank Approximation (LRA) · CUR LRA · Sublinear cost · Symmetric Positive Semidefinite (SPSD) matrices · Cross-Approximation (C-A) · Maximal volume

2000 Math. Subject Classification:

65Y20 · 65F30 · 68Q25 · 15A52

Notes

Acknowledgements

Our research has been supported by NSF Grants CCF-1116736, CCF-1563942, and CCF-133834 and PSC CUNY Award 69813 00 48. We also thank A. Cortinovis, A. Osinsky, and N. L. Zamarashkin for pointers to their papers [CKM19] and [OZ18], S. A. Goreinov for reprints of his papers, and E. E. Tyrtyshnikov for pointers to the bibliography and for the challenge of formally supporting the empirical power of C-A algorithms.

References

  1. [B00] Bebendorf, M.: Approximation of boundary element matrices. Numer. Math. 86(4), 565–589 (2000)
  2. [CI94] Chandrasekaran, S., Ipsen, I.: On rank revealing QR factorizations. SIAM J. Matrix Anal. Appl. 15, 592–622 (1994)
  3. [CKM19] Cortinovis, A., Kressner, D., Massei, S.: On maximum volume submatrices and cross approximation for symmetric semidefinite and diagonally dominant matrices. MATHICSE technical report, 12 February 2019
  4. [CM09] Çivril, A., Magdon-Ismail, M.: On selecting a maximum volume sub-matrix of a matrix and related problems. Theor. Comput. Sci. 410(47–49), 4801–4811 (2009)
  5. [DMM08] Drineas, P., Mahoney, M.W., Muthukrishnan, S.: Relative-error CUR matrix decompositions. SIAM J. Matrix Anal. Appl. 30(2), 844–881 (2008)
  6. [GE96] Gu, M., Eisenstat, S.C.: An efficient algorithm for computing a strong rank revealing QR factorization. SIAM J. Sci. Comput. 17, 848–869 (1996)
  7. [GL13] Golub, G.H., Van Loan, C.F.: Matrix Computations, 4th edn. The Johns Hopkins University Press, Baltimore (2013)
  8. [GT01] Goreinov, S.A., Tyrtyshnikov, E.E.: The maximal-volume concept in approximation by low rank matrices. Contemp. Math. 208, 47–51 (2001)
  9. [GTZ97] Goreinov, S.A., Tyrtyshnikov, E.E., Zamarashkin, N.L.: A theory of pseudo-skeleton approximations. Linear Algebra Appl. 261, 1–21 (1997)
  10. [LPa] Luan, Q., Pan, V.Y.: Low rank approximation of a matrix at sublinear cost, 21 July 2019. arXiv:1907.10481
  11. [MD09] Mahoney, M.W., Drineas, P.: CUR matrix decompositions for improved data analysis. Proc. Natl. Acad. Sci. USA 106, 697–702 (2009)
  12. [MW17] Musco, C., Woodruff, D.P.: Sublinear time low-rank approximation of positive semidefinite matrices. In: IEEE 58th FOCS, pp. 672–683 (2017)
  13. [O17] Osinsky, A.I.: Probabilistic estimation of the rank 1 cross approximation accuracy, submitted on 30 June 2017. arXiv:1706.10285
  14. [OZ18] Osinsky, A.I., Zamarashkin, N.L.: Pseudo-skeleton approximations with better accuracy estimates. Linear Algebra Appl. 537, 221–249 (2018)
  15. [PLa] Pan, V.Y., Luan, Q.: Refinement of low rank approximation of a matrix at sub-linear cost, submitted on 10 June 2019. arXiv:1906.04223
  16. [PLSZ16] Pan, V.Y., Luan, Q., Svadlenka, J., Zhao, L.: Primitive and cynical low rank approximation, preprocessing and extensions, submitted on 3 November 2016. arXiv:1611.01391v1
  17. [PLSZ17] Pan, V.Y., Luan, Q., Svadlenka, J., Zhao, L.: Superfast accurate approximation of low rank matrices, submitted on 22 October 2017. arXiv:1710.07946v1
  18. [PLSZa] Pan, V.Y., Luan, Q., Svadlenka, J., Zhao, L.: CUR low rank approximation at sub-linear cost, submitted on 10 June 2019. arXiv:1906.04112
  19. [PQY15] Pan, V.Y., Qian, G., Yan, X.: Random multipliers numerically stabilize Gaussian and block Gaussian elimination: proofs and an extension to low-rank approximation. Linear Algebra Appl. 481, 202–234 (2015)
  20. [PZ17a] Pan, V.Y., Zhao, L.: New studies of randomized augmentation and additive preprocessing. Linear Algebra Appl. 527, 256–305 (2017)
  21. [PZ17b] Pan, V.Y., Zhao, L.: Numerically safe Gaussian elimination with no pivoting. Linear Algebra Appl. 527, 349–383 (2017)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Mathematics, The Graduate Center of the City University of New York, New York, USA
  2. Computer Science and Mathematics, The Graduate Center of the City University of New York, New York, USA
  3. Computer Science, Lehman College of the City University of New York, Bronx, USA
