
A Factorization Strategy for Tensor Robust PCA

  • Andong Wang
  • Zhong Jin
  • Jingyu Yang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12046)

Abstract

Many kinds of real-world data, such as color images and videos, are naturally represented as tensors and are often corrupted by outliers. Tensor robust principal component analysis (TRPCA) serves as a tensor extension of the classical principal component analysis (PCA) that remains effective in the presence of outliers. The recently proposed TRPCA model [12] based on the tubal nuclear norm (TNN) has attracted much attention due to its superiority in many applications. However, TNN is computationally expensive, which limits the application of TRPCA to large tensors. To address this issue, we first propose a new TRPCA model that adopts a factorization strategy within the framework of the tensor singular value decomposition (t-SVD). An algorithm based on the non-convex augmented Lagrangian method (ALM) is then developed with a convergence guarantee. The effectiveness and efficiency of the proposed algorithm are demonstrated through extensive experiments on both synthetic and real datasets.
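
For context, the TNN-based TRPCA model of [12] estimates a low-tubal-rank component L and a sparse outlier component S from an observation X by solving min_{L,S} ||L||_TNN + lambda ||S||_1 subject to X = L + S. The snippet below is a minimal illustrative sketch (not the authors' implementation; the function name, toy tensor sizes, and random data are ours) of how the TNN is evaluated under the t-SVD framework of [9, 12]: an FFT along the third mode followed by one full matrix SVD per frontal slice. Repeating this on the full-size tensor at every iteration is the cost that a factorization strategy aims to avoid.

import numpy as np

def tubal_nuclear_norm(X):
    # Tubal nuclear norm of a 3-way tensor X of size n1 x n2 x n3:
    # take the FFT along the third mode, sum the singular values of every
    # frontal slice in the Fourier domain, and divide by n3 (the
    # normalization used in [12]).
    n3 = X.shape[2]
    Xf = np.fft.fft(X, axis=2)
    total = 0.0
    for k in range(n3):  # one full matrix SVD per frontal slice
        s = np.linalg.svd(Xf[:, :, k], compute_uv=False)
        total += s.sum()
    return float(total) / n3

# Toy usage with hypothetical sizes: build a low-tubal-rank tensor as the
# t-product A * B, computed slice-wise in the Fourier domain.
np.random.seed(0)
n1, n2, n3, r = 30, 30, 20, 3
A = np.random.randn(n1, r, n3)
B = np.random.randn(r, n2, n3)
Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
Lf = np.stack([Af[:, :, k] @ Bf[:, :, k] for k in range(n3)], axis=2)
L = np.fft.ifft(Lf, axis=2).real  # low-tubal-rank component (tubal rank <= r)
print(tubal_nuclear_norm(L))

A factorization approach, by contrast, maintains small factors of the low-rank component (as in the t-product construction of L above) rather than computing per-slice SVDs of the full tensor at every iteration, which is the source of the efficiency gains targeted in this paper.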

Keywords

Robust tensor principal component analysis · Tensor SVD · Non-convex ALM

References

  1. Candès, E.J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM 58(3), 11 (2011)
  2. Fazel, M.: Matrix rank minimization with applications. Ph.D. thesis, Stanford University (2002)
  3. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing, vol. 1. Birkhäuser, Basel (2013)
  4. Friedland, S., Lim, L.: Nuclear norm of higher-order tensors. Math. Comput. 87(311), 1255–1281 (2017)
  5. Goldfarb, D., Qin, Z.: Robust low-rank tensor recovery: models and algorithms. SIAM J. Matrix Anal. Appl. 35(1), 225–253 (2014)
  6. Harshman, R.A.: Foundations of the PARAFAC procedure: models and conditions for an "explanatory" multi-modal factor analysis (1970)
  7. Hillar, C.J., Lim, L.: Most tensor problems are NP-hard. J. ACM 60(6), 45 (2013)
  8. Huang, B., Mu, C., Goldfarb, D., Wright, J.: Provable models for robust low-rank tensor completion. Pac. J. Optim. 11(2), 339–364 (2015)
  9. Kilmer, M.E., Braman, K., Hao, N., Hoover, R.C.: Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 34(1), 148–172 (2013)
  10. Liu, J., Musialski, P., Wonka, P., Ye, J.: Tensor completion for estimating missing values in visual data. IEEE TPAMI 35(1), 208–220 (2013)
  11. Lu, C., Feng, J., Chen, Y., Liu, W., Lin, Z., Yan, S.: Tensor robust principal component analysis: exact recovery of corrupted low-rank tensors via convex optimization. In: CVPR, pp. 5249–5257 (2016)
  12. Lu, C., Feng, J., Liu, W., Lin, Z., Yan, S., et al.: Tensor robust principal component analysis with a new tensor nuclear norm. IEEE TPAMI (2019)
  13. Moosmann, F., Stiller, C.: Joint self-localization and tracking of generic objects in 3D range data. In: ICRA, pp. 1138–1144, Karlsruhe, Germany, May 2013
  14. Romera-Paredes, B., Pontil, M.: A new convex relaxation for tensor completion. In: NIPS, pp. 2967–2975 (2013)
  15. Tucker, L.R.: Some mathematical notes on three-mode factor analysis. Psychometrika 31(3), 279–311 (1966)
  16. Wang, A., Jin, Z.: Near-optimal noisy low-tubal-rank tensor completion via singular tube thresholding. In: ICDM Workshop, pp. 553–560 (2017)
  17. Wang, A., Lai, Z., Jin, Z.: Noisy low-tubal-rank tensor completion. Neurocomputing 330, 267–279 (2019)
  18. Wang, A., Wei, D., Wang, B., Jin, Z.: Noisy low-tubal-rank tensor completion through iterative singular tube thresholding. IEEE Access 6, 35112–35128 (2018)
  19. Wu, T., Bajwa, W.U.: A low tensor-rank representation approach for clustering of imaging data. IEEE Signal Process. Lett. 25(8), 1196–1200 (2018)
  20. Xie, Y., Tao, D., Zhang, W., Liu, Y., Zhang, L., Qu, Y.: On unifying multi-view self-representations for clustering by tensor multi-rank minimization. Int. J. Comput. Vis. 126(11), 1157–1179 (2018)
  21. Xu, Y., Hao, R., Yin, W., Su, Z.: Parallel matrix factorization for low-rank tensor completion. Inverse Prob. Imaging 9(2), 601–624 (2015)
  22. Zhang, Z., Aeron, S.: Exact tensor completion using t-SVD. IEEE TSP 65(6), 1511–1526 (2017)
  23. Zhang, Z., Ely, G., Aeron, S., Hao, N., Kilmer, M.: Novel methods for multilinear data completion and de-noising based on tensor-SVD. In: CVPR, pp. 3842–3849 (2014)
  24. Zhou, P., Feng, J.: Outlier-robust tensor PCA. In: CVPR (2017)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  2. Key Laboratory of Intelligent Perception and System for High-Dimensional Information of Ministry of Education, Nanjing University of Science and Technology, Nanjing, China
