
On Polynomial Time Methods for Exact Low-Rank Tensor Completion

  • Dong Xia
  • Ming Yuan

Abstract

In this paper, we investigate the sample size requirement for exact recovery of a high-order tensor of low rank from a subset of its entries. We show that a gradient descent algorithm with initial value obtained from a spectral method can, in particular, reconstruct a \(d\times d\times d\) tensor of multilinear ranks \((r,r,r)\) with high probability from as few as \(O(r^{7/2}d^{3/2}\log^{7/2}d+r^7d\log^6 d)\) entries. In the case when the rank \(r=O(1)\), our sample size requirement matches those for nuclear norm minimization (Yuan and Zhang in Found Comput Math 1031–1068, 2016) or alternating least squares assuming orthogonal decomposability (Jain and Oh in Advances in Neural Information Processing Systems, pp 1431–1439, 2014). Unlike these earlier approaches, however, our method is computationally efficient, easy to implement, and does not impose extra structure on the tensor. Numerical results are presented to further demonstrate the merits of the proposed approach.
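The two-stage procedure described above, a spectral initialization followed by gradient descent on the squared error over observed entries, can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the algorithm analyzed in the paper: the paper's gradient descent operates on Grassmann manifolds with retractions, whereas the code below takes plain unconstrained gradient steps on the Tucker factors and core; the step size lr, iteration count, and toy sampling model are illustrative assumptions.

    import numpy as np

    def unfold(T, mode):
        # Matricize a 3-way tensor along the given mode.
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def spectral_init(Y, mask, r):
        # Top-r left singular vectors of each unfolding of the rescaled,
        # zero-filled observations give the initial Tucker factors.
        p = mask.mean()
        T0 = (Y * mask) / p
        U = [np.linalg.svd(unfold(T0, m), full_matrices=False)[0][:, :r]
             for m in range(3)]
        # Initial core: project the rescaled data onto the estimated subspaces.
        G = np.einsum('ijk,ia,jb,kc->abc', T0, U[0], U[1], U[2])
        return U, G

    def reconstruct(U, G):
        return np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])

    def complete(Y, mask, r, steps=2000, lr=None):
        # Unconstrained gradient descent on the Tucker factors and core,
        # minimizing squared error over the observed entries only.
        p = mask.mean()
        U, G = spectral_init(Y, mask, r)
        lr = (0.1 / p) if lr is None else lr  # heuristic step size; tune as needed
        for _ in range(steps):
            R = mask * (reconstruct(U, G) - Y)  # residual on observed entries
            gU = [np.einsum('ijk,jb,kc,abc->ia', R, U[1], U[2], G),
                  np.einsum('ijk,ia,kc,abc->jb', R, U[0], U[2], G),
                  np.einsum('ijk,ia,jb,abc->kc', R, U[0], U[1], G)]
            gG = np.einsum('ijk,ia,jb,kc->abc', R, U[0], U[1], U[2])
            for m in range(3):
                U[m] = U[m] - lr * gU[m]
            G = G - lr * gG
        return reconstruct(U, G)

    # Toy run: a 20 x 20 x 20 tensor of multilinear ranks (2,2,2), 35% observed.
    rng = np.random.default_rng(0)
    d, r, p = 20, 2, 0.35
    U_true = [np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(3)]
    G_true = rng.standard_normal((r, r, r))
    T = reconstruct(U_true, G_true)
    mask = (rng.random((d, d, d)) < p).astype(float)
    T_hat = complete(T * mask, mask, r)
    print(np.linalg.norm(T_hat - T) / np.linalg.norm(T))  # relative recovery error

Under the incoherence and sample-size conditions the paper requires, the relative error of such a scheme should decrease well below that of the spectral initialization alone; a faithful implementation would additionally use the manifold retractions and stopping rules of the paper.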

Keywords

Concentration inequality · Matrix completion · Nonconvex optimization · Polynomial time complexity · Tensor completion · Tensor rank · U-statistics

Mathematics Subject Classification

Primary 90C25; Secondary 90C59, 15A52

References

  1. P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2008.
  2. Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(1):2773–2832, 2014.
  3. Boaz Barak and Ankur Moitra. Noisy tensor completion via the sum-of-squares hierarchy. In 29th Annual Conference on Learning Theory, pages 417–445, 2016.
  4. Olivier Bousquet. A Bennett concentration inequality and its application to suprema of empirical processes. Comptes Rendus Mathématique, 334(6):495–500, 2002.
  5. Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
  6. Emmanuel J. Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
  7. S. Cohen and M. Collins. Tensor decomposition for fast parsing with latent-variable PCFGs. In Advances in Neural Information Processing Systems, 2012.
  8. Víctor de la Peña and Evarist Giné. Decoupling: From Dependence to Independence. Springer Science & Business Media, 1999.
  9. Víctor H. de la Peña and Stephen J. Montgomery-Smith. Decoupling inequalities for the tail probabilities of multivariate U-statistics. The Annals of Probability, pages 806–816, 1995.
  10. Vin de Silva and Lek-Heng Lim. Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM Journal on Matrix Analysis and Applications, 30(3):1084–1127, 2008.
  11. Alan Edelman, Tomás A. Arias, and Steven T. Smith. The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications, 20(2):303–353, 1998.
  12. Lars Eldén and Berkant Savas. A Newton–Grassmann method for computing the best multilinear rank-\((r_1,r_2,r_3)\) approximation of a tensor. SIAM Journal on Matrix Analysis and Applications, 31(2):248–271, 2009.
  13. Silvia Gandy, Benjamin Recht, and Isao Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
  14. David Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548–1566, 2011.
  15. C. Hillar and Lek-Heng Lim. Most tensor problems are NP-hard. Journal of the ACM, 60(6):45, 2013.
  16. Prateek Jain and Sewoong Oh. Provable tensor factorization with missing data. In Advances in Neural Information Processing Systems, pages 1431–1439, 2014.
  17. Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. In 2009 IEEE International Symposium on Information Theory, pages 324–328. IEEE, 2009.
  18. Daniel Kressner, Michael Steinlechner, and Bart Vandereycken. Low-rank tensor completion by Riemannian optimization. BIT Numerical Mathematics, 54(2):447–468, 2014.
  19. N. Li and B. Li. Tensor completion for on-board compression of hyperspectral images. In 17th IEEE International Conference on Image Processing (ICIP), pages 517–520, 2010.
  20. Ji Liu, Przemyslaw Musialski, Peter Wonka, and Jieping Ye. Tensor completion for estimating missing values in visual data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):208–220, 2013.
  21. David G. Luenberger and Yinyu Ye. Linear and Nonlinear Programming, volume 228. Springer, 2015.
  22. Andrea Montanari and Nike Sun. Spectral algorithms for tensor completion. Communications on Pure and Applied Mathematics, 2016.
  23. Cun Mu, Bo Huang, John Wright, and Donald Goldfarb. Square deal: Lower bounds and improved convex relaxations for tensor recovery. In Proceedings of the 31st International Conference on Machine Learning, pages 73–81, 2014.
  24. Holger Rauhut and Željka Stojanac. Tensor theta norms and low rank recovery. arXiv preprint arXiv:1505.05175, 2015.
  25. Holger Rauhut, Reinhold Schneider, and Željka Stojanac. Low rank tensor recovery via iterative hard thresholding. arXiv preprint arXiv:1602.05217, 2016.
  26. Benjamin Recht. A simpler approach to matrix completion. Journal of Machine Learning Research, 12:3413–3430, 2011.
  27. Berkant Savas and Lek-Heng Lim. Quasi-Newton methods on Grassmannians and multilinear approximations of tensors. SIAM Journal on Scientific Computing, 32(6):3352–3393, 2010.
  28. O. Semerci, N. Hao, M. Kilmer, and E. Miller. Tensor-based formulation and nuclear norm regularization for multienergy computed tomography. IEEE Transactions on Image Processing, 23:1678–1693, 2014.
  29. D. Nion and N. D. Sidiropoulos. Tensor algebra and multidimensional harmonic retrieval in signal processing for MIMO radar. IEEE Transactions on Signal Processing, 58:5693–5705, 2010.
  30. Ryota Tomioka, Kohei Hayashi, and Hisashi Kashima. Estimation of low-rank tensors via convex optimization. arXiv preprint arXiv:1010.0789, 2010.
  31. Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
  32. Yi Yu, Tengyao Wang, and Richard J. Samworth. A useful variant of the Davis–Kahan theorem for statisticians. Biometrika, 102(2):315–323, 2015.
  33. Ming Yuan and Cun-Hui Zhang. On tensor completion via nuclear norm minimization. Foundations of Computational Mathematics, 16(4):1031–1068, 2016.
  34. Ming Yuan and Cun-Hui Zhang. Incoherent tensor norms and their applications in higher order tensor completion. IEEE Transactions on Information Theory, 63(10):6753–6766, 2017.

Copyright information

© SFoCM 2018

Authors and Affiliations

  1. Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
  2. Department of Statistics, Columbia University, New York, USA
