Journal of Scientific Computing, Volume 70, Issue 2, pp 478–499

Subspace Methods with Local Refinements for Eigenvalue Computation Using Low-Rank Tensor-Train Format

  • Junyu Zhang
  • Zaiwen Wen
  • Yin Zhang


Computing a few eigenpairs of a large-scale symmetric eigenvalue problem is beyond the reach of classic eigensolvers when the eigenvectors cannot even be stored in the classical way. We consider a tractable case in which both the coefficient matrix and its eigenvectors can be represented in the low-rank tensor-train (TT) format. We propose a subspace optimization method combined with suitable truncation steps that keep the iterates in the given low-rank TT format. Its performance can be further improved by using an alternating minimization method to refine the intermediate solutions locally. Preliminary numerical experiments show that our algorithm is competitive with state-of-the-art methods on problems arising from the discretization of the stationary Schrödinger equation.
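The subspace optimization idea in the abstract (expand a block of iterates, project via Rayleigh–Ritz, then truncate back to a fixed size) can be illustrated with a minimal dense-algebra sketch. This is not the authors' algorithm: in the paper the iterates live in the TT format and "truncation" means TT-rank rounding, whereas here ordinary matrices stand in and truncation simply keeps the best few Ritz vectors. The function name `subspace_eigs` and all parameters are illustrative.

```python
import numpy as np

def subspace_eigs(A, k, iters=100, extra=2):
    """Schematic subspace optimization for the k smallest eigenpairs
    of a symmetric matrix A (dense stand-in for the TT-format method)."""
    n = A.shape[0]
    p = k + extra                                  # small over-sampling
    X = np.linalg.qr(np.random.randn(n, p))[0]     # random orthonormal start
    for _ in range(iters):
        # expand the subspace with the gradient directions A @ X
        S = np.linalg.qr(np.hstack([X, A @ X]))[0]
        # Rayleigh-Ritz projection onto the expanded subspace
        H = S.T @ A @ S
        _, V = np.linalg.eigh(H)
        # "truncation" step: keep only the p smallest Ritz vectors
        X = S @ V[:, :p]
    w, V = np.linalg.eigh(X.T @ A @ X)
    return w[:k], X @ V[:, :k]
```

Because span{X, AX} contains the steepest-descent directions of the block Rayleigh quotient, the smallest Ritz values decrease monotonically; the TT version of such a scheme must additionally control the tensor ranks after every expansion, which is exactly where the paper's truncation and local alternating-minimization refinements enter.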


Keywords: High-dimensional eigenvalue problem · Tensor-train format · Alternating least squares method · Subspace optimization method



We thank D. Kressner, M. Steinlechner and A. Uschmajew for sharing online their MATLAB codes for EVAMEn and the TT/MPS tensor toolbox TTeMPS. The authors would also like to thank the associate editor Prof. Wotao Yin and two anonymous referees for their detailed and valuable comments and suggestions.



Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Department of Industrial and Systems Engineering, University of Minnesota, Minneapolis, USA
  2. Beijing International Center for Mathematical Research, Peking University, Beijing, China
  3. Department of Computational and Applied Mathematics, Rice University, Houston, USA
