An approximation method of CP rank for third-order tensor completion

Abstract

We study the problem of third-order tensor completion based on low CP rank recovery. Because computing the CP rank is NP-hard, we propose an approximation method that uses the sum of the ranks of a few matrices as an upper bound on the CP rank. We show that this upper bound lies between the CP rank and the square of the CP rank of the tensor, so the approximation is useful when the CP rank is very small. Numerical algorithms are developed, and examples are presented to demonstrate that the proposed method achieves better tensor completion performance than existing methods.
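
In symbols, writing \(\text {rank}_{\text {cp}}(\mathcal {A})\) for the CP rank of a third-order tensor \(\mathcal {A}\) and \(r_{\Sigma }(\mathcal {A})\) for the proposed sum-of-matrix-ranks surrogate (this notation is ours and is used only for illustration), the bound stated above reads

$$\begin{aligned} \text {rank}_{\text {cp}}(\mathcal {A})\le r_{\Sigma }(\mathcal {A})\le \big (\text {rank}_{\text {cp}}(\mathcal {A})\big )^2 , \end{aligned}$$

so the surrogate overestimates the CP rank by at most a squaring, which is a mild gap in the low-CP-rank regime that the completion model targets.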

Notes

  1. Let \(\mathbf {Y}=\sum _{\ell =1}^{I_1}m_{\ell j}\mathcal {A}(\ell ,:,:)\) and let the SVD of \(\mathbf {Y}\) be \(\mathbf {U}\varvec{\Sigma }\mathbf {V}^T\). The chain rule gives

    $$\begin{aligned} \frac{\partial \mathscr {T}}{\partial \mathbf {M}}(i,j)=\bigg \{\text {tr}\left( (\mathbf {U}\mathbf {V}^T+\mathbf {W})^T \mathcal {A}(i,:,:)\right) : \mathbf {W}\in \mathbb {R}^{I_2\times I_3},\mathbf {U}^T\mathbf {W}=0,\mathbf {W}\mathbf {V}=0, \Vert \mathbf {W}\Vert _2\le 1 \bigg \}, \end{aligned}$$

    where \(\Vert \mathbf {W}\Vert _2\) is the spectral norm of \(\mathbf {W}\) and \(\text {tr}(\cdot )\) is the trace of a matrix. A numerical sketch that evaluates one element of this set (with \(\mathbf {W}=0\)) is given after these notes.

  2. The data are available at http://peterwonka.net/Publications/code/LRTC_Package_Ji.zip and have been used in [44].

  3. The data are from BrainWeb [12] and available at http://brainweb.bic.mni.mcgill.ca/brainweb/selection_normal.html.

  4. The data are from the video trace library [37] and available at http://trace.eas.asu.edu/yuv/.

  5. To be more accurate, the slices that we utilize are submatrices of the unfolding matrix of the original tensor after some linear transform; see Corollary 2.6. A sketch of one such construction is given after these notes.
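
The following Python sketch illustrates the two computational footnotes above. It is a minimal numerical illustration under stated assumptions, not the paper's implementation: slice_rank_surrogate computes one plausible reading of Note 5, namely the sum of the ranks of the frontal slices obtained after compressing mode 3 onto the row space of the mode-3 unfolding (a linear transform), while subgradient_entry evaluates the single element of the set in Note 1 obtained by taking \(\mathbf {W}=0\). All function names, variable names and sizes are ours.

```python
import numpy as np


def random_cp_tensor(shape, r, rng):
    """Random I1 x I2 x I3 tensor of CP rank at most r (generically exactly r)."""
    I1, I2, I3 = shape
    A = rng.standard_normal((I1, r))
    B = rng.standard_normal((I2, r))
    C = rng.standard_normal((I3, r))
    # T(i,j,k) = sum_l A(i,l) * B(j,l) * C(k,l)
    return np.einsum('il,jl,kl->ijk', A, B, C)


def slice_rank_surrogate(T, tol=1e-8):
    """Sum of the ranks of the frontal slices after compressing mode 3 onto the
    row space of the mode-3 unfolding (one possible reading of Note 5)."""
    I1, I2, I3 = T.shape
    unfold3 = T.reshape(I1 * I2, I3)                    # column k is vec(T(:,:,k))
    _, s, Vt = np.linalg.svd(unfold3, full_matrices=False)
    r3 = int(np.sum(s > tol * s[0]))                    # numerical mode-3 rank
    compressed = np.einsum('ijk,pk->ijp', T, Vt[:r3])   # I1 x I2 x r3 tensor
    return sum(np.linalg.matrix_rank(compressed[:, :, p], tol=tol)
               for p in range(r3))


def subgradient_entry(A, M, i, j):
    """The W = 0 element of the set in Note 1:
    tr((U V^T)^T A(i,:,:)) with Y = sum_l M[l, j] A(l,:,:) and Y = U Sigma V^T."""
    Y = np.einsum('l,lmn->mn', M[:, j], A)
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return np.trace((U @ Vt).T @ A[i])


rng = np.random.default_rng(0)
r = 3
T = random_cp_tensor((20, 20, 20), r, rng)
bound = slice_rank_surrogate(T)
print(r, bound, r ** 2)                  # expect r <= bound <= r**2

M = rng.standard_normal((20, 4))         # hypothetical coefficient matrix with I1 rows
print(subgradient_entry(T, M, i=0, j=0))
```

For a random tensor of small CP rank the printed surrogate falls between \(r\) and \(r^2\), which is consistent with the bound stated in the abstract; the construction actually used in the paper is given in Corollary 2.6.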

References

  1. Ashraphijuo, M., Wang, X.: Fundamental conditions for low-CP-rank tensor completion. J. Mach. Learn. Res. 18(1), 2116–2145 (2017)

  2. Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Łojasiewicz inequality. Math. Oper. Res. 35(2), 438–457 (2010)

  3. Bader, B.W., Kolda, T.G. et al.: MATLAB Tensor Toolbox Version 3.0-dev. https://www.tensortoolbox.org (2017)

  4. Barak, B., Moitra, A.: Noisy tensor completion via the sum-of-squares hierarchy. In: Conference on Learning Theory, pp. 417–445 (2016)

  5. Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 146(1–2), 459–494 (2014)

  6. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends® Mach. Learn. 3(1), 1–122 (2011)

  7. Breiding, P., Vannieuwenhoven, N.: A Riemannian trust region method for the canonical tensor rank approximation problem. SIAM J. Optim. 28(3), 2435–2465 (2018)

  8. Breiding, P., Vannieuwenhoven, N.: The condition number of join decompositions. SIAM J. Matrix Anal. Appl. 39(1), 287–309 (2018)

  9. Cai, J.-F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)

  10. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717 (2009)

  11. Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010)

  12. Cocosco, C.A., Kollokian, V., Kwan, R.K.-S., Pike, G.B., Evans, A.C.: BrainWeb: online interface to a 3D MRI simulated brain database. In: NeuroImage (1997)

  13. De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21(4), 1253–1278 (2000)

  14. De Silva, V., Lim, L.-H.: Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM J. Matrix Anal. Appl. 30(3), 1084–1127 (2008)

  15. Edelman, A., Arias, T.A., Smith, S.T.: The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl. 20(2), 303–353 (1998)

  16. Friedland, S., Lim, L.-H.: Nuclear norm of higher-order tensors. Math. Comput. 87(311), 1255–1281 (2018)

  17. Gandy, S., Recht, B., Yamada, I.: Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Prob. 27(2), 025010 (2011)

  18. Goldfarb, D., Qin, Z.: Robust low-rank tensor recovery: models and algorithms. SIAM J. Matrix Anal. Appl. 35(1), 225–253 (2014)

  19. Håstad, J.: Tensor rank is NP-complete. J. Algorithms 11(4), 644–654 (1990)

  20. Hillar, C.J., Lim, L.-H.: Most tensor problems are NP-hard. J. ACM (JACM) 60(6), 45 (2013)

  21. Holtz, S., Rohwedder, T., Schneider, R.: The alternating linear scheme for tensor optimization in the tensor train format. SIAM J. Sci. Comput. 34(2), A683–A713 (2012)

  22. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (2012)

  23. Jain, P., Oh, S.: Provable tensor factorization with missing data. In: Advances in Neural Information Processing Systems, pp. 1431–1439 (2014)

  24. Jiang, B., Ma, S., Zhang, S.: Tensor principal component analysis via convex optimization. Math. Program. 150(2), 423–457 (2015)

  25. Jiang, B., Ma, S., Zhang, S.: Low-M-rank tensor completion and robust tensor PCA. IEEE J. Sel. Top. Signal Process. 12(6), 1390–1404 (2018)

  26. Jiang, B., Yang, F., Zhang, S.: Tensor and its Tucker core: the invariance relationships. Numer. Linear Algebra Appl. 24(3), e2086 (2017)

  27. Jiang, Q., Ng, M.: Robust low-tubal-rank tensor completion via convex optimization. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, pp. 2649–2655 (2019)

  28. Kilmer, M.E., Braman, K., Hao, N., Hoover, R.C.: Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 34(1), 148–172 (2013)

  29. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)

  30. Kressner, D., Steinlechner, M., Vandereycken, B.: Low-rank tensor completion by Riemannian optimization. BIT Numer. Math. 54(2), 447–468 (2014)

  31. Kruskal, J.B.: Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra Appl. 18(2), 95–138 (1977)

  32. Landsberg, J.M.: Tensors: Geometry and Applications, vol. 128. American Mathematical Society, Providence (2012)

  33. Liu, J., Musialski, P., Wonka, P., Ye, J.: Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 208–220 (2013)

  34. Mu, C., Huang, B., Wright, J., Goldfarb, D.: Square deal: lower bounds and improved relaxations for tensor recovery. In: International Conference on Machine Learning, pp. 73–81 (2014)

  35. Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(3), 471–501 (2010)

  36. Rohwedder, T., Uschmajew, A.: On local convergence of alternating schemes for optimization of convex problems in the tensor train format. SIAM J. Numer. Anal. 51(2), 1134–1162 (2013)

  37. Seeling, P., Reisslein, M.: Video transport evaluation with H.264 video traces. IEEE Commun. Surv. Tutor. 14(4), 1142–1165 (2011)

  38. Steinlechner, M.: Riemannian optimization for high-dimensional tensor completion. SIAM J. Sci. Comput. 38(5), S461–S484 (2016)

  39. Uschmajew, A.: Local convergence of the alternating least squares algorithm for canonical tensor approximation. SIAM J. Matrix Anal. Appl. 33(2), 639–652 (2012)

  40. Vannieuwenhoven, N.: Condition numbers for the tensor rank decomposition. Linear Algebra Appl. 535, 35–86 (2017)

  41. Wen, Z., Yin, W.: A feasible method for optimization with orthogonality constraints. Math. Program. 142(1–2), 397–434 (2013)

  42. Wen, Z., Yin, W., Zhang, Y.: Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 4(4), 333–361 (2012)

  43. Wright, S.J.: Coordinate descent algorithms. Math. Program. 151(1), 3–34 (2015)

  44. Xu, Y., Hao, R., Yin, W., Su, Z.: Parallel matrix factorization for low-rank tensor completion. Inverse Problems Imag. 9(2), 601–624 (2015)

  45. Xu, Y., Yin, W.: A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM J. Imag. Sci. 6(3), 1758–1789 (2013)

  46. Yang, J., Yuan, X.: Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput. 82(281), 301–329 (2013)

  47. Yang, Y., Feng, Y., Huang, X., Suykens, J.A.: Rank-1 tensor properties with applications to a class of tensor optimization problems. SIAM J. Optim. 26(1), 171–196 (2016)

  48. Yokota, T., Zhao, Q., Cichocki, A.: Smooth PARAFAC decomposition for tensor completion. IEEE Trans. Signal Process. 64(20), 5423–5436 (2016)

  49. Yuan, M., Zhang, C.-H.: On tensor completion via nuclear norm minimization. Found. Comput. Math. 16(4), 1031–1068 (2016)

  50. Zhang, Z., Aeron, S.: Exact tensor completion using t-SVD. IEEE Trans. Signal Process. 65(6), 1511–1526 (2017)

  51. Zhao, Q., Zhang, L., Cichocki, A.: Bayesian CP factorization of incomplete tensors with automatic rank determination. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1751–1763 (2015)

Acknowledgements

We are extremely grateful to two anonymous referees for their valuable feedback, which improved this paper significantly.

Author information

Corresponding author

Correspondence to Chao Zeng.

Additional information

T.-X. Jiang’s research is supported in part by the National Natural Science Foundation of China (12001446) and the Fundamental Research Funds for the Central Universities (JBK2102001). M. Ng’s research is supported in part by the HKRGC GRF 12306616, 12200317, 12300218 and 12300519, and HKU 104005583.

Cite this article

Zeng, C., Jiang, T.-X. & Ng, M.K. An approximation method of CP rank for third-order tensor completion. Numer. Math. 147, 727–757 (2021). https://doi.org/10.1007/s00211-021-01185-9
