
Low-Rank Tensor Recovery and Alignment Based on \(\ell _p\) Minimization

  • Kaifei Zhang
  • Di Wang (corresponding author)
  • Xiaoqin Zhang
  • Nannan Gu
  • Hongxing Jiang
  • Xiuzi Ye
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10116)

Abstract

In this paper, we propose a framework for non-convex low-rank recovery and alignment of arbitrary tensor data. Specifically, by using the Schatten-p norm (\(0<p<1\), the same below) and the \(\ell _p\) norm to relax the rank function and the \(\ell _0\) norm respectively, the model requires much weaker incoherence conditions to guarantee successful recovery than the commonly used nuclear norm and \(\ell _1\) norm. At the same time, we adopt a set of transformations acting on the images of the tensor data to compensate for possible misalignments. By solving for the optimal transformations, strict alignment of the images is achieved within the low-rank recovery process. Furthermore, we propose an efficient algorithm based on the Alternating Direction Method of Multipliers (ADMM) for the resulting non-convex optimization problem. Extensive experiments on synthetic data sets and real image data sets show the superiority of our method in image alignment and denoising.
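To make the relaxation concrete, the following is a minimal sketch of the kind of objective the abstract describes; the exact formulation, weighting scheme, and constraints used in the paper are not reproduced here, so the symbols \(\mathcal{D}\), \(\mathcal{A}\), \(\mathcal{E}\), \(\tau\), and \(\lambda\) are illustrative assumptions:

\[
\min_{\mathcal{A},\,\mathcal{E},\,\tau}\;\sum_{i}\bigl\Vert \mathcal{A}_{(i)}\bigr\Vert_{S_p}^{p}\;+\;\lambda\,\bigl\Vert \mathcal{E}\bigr\Vert_{\ell_p}^{p}
\quad\text{s.t.}\quad \mathcal{D}\circ\tau=\mathcal{A}+\mathcal{E},
\]

where \(\mathcal{D}\) is the observed tensor, \(\tau\) the per-image transformations, \(\mathcal{A}_{(i)}\) the mode-\(i\) unfolding of the low-rank component, \(\Vert\cdot\Vert_{S_p}\) the Schatten-p norm, and \(\Vert\cdot\Vert_{\ell_p}\) the elementwise \(\ell _p\) norm applied to the sparse error \(\mathcal{E}\).

Within an ADMM solver for such a model, the \(\ell _p\) term typically enters through an elementwise proximal (shrinkage) step. The Python sketch below shows one standard choice, generalized soft-thresholding for \(0<p<1\); the function name, iteration count, and interface are hypothetical and not taken from the paper.

    import numpy as np

    def lp_shrinkage(y, lam, p, iters=10):
        # Elementwise solution of  min_x 0.5*(x - y)^2 + lam*|x|^p  for 0 < p < 1,
        # computed by the generalized soft-thresholding fixed-point iteration.
        y = np.asarray(y, dtype=float)
        # Magnitudes at or below this threshold shrink exactly to zero.
        tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
              + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
        x = np.zeros_like(y)
        mask = np.abs(y) > tau
        z = np.abs(y[mask])                     # start the iteration at |y|
        for _ in range(iters):
            z = np.abs(y[mask]) - lam * p * z ** (p - 1.0)
        x[mask] = np.sign(y[mask]) * z
        return x

The analogous Schatten-p step applies the same scalar shrinkage to the singular values of each mode-\(i\) unfolding before re-folding the tensor.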

Keywords

Face Image · Reconstruction Error · Rank Function · Pepper Noise · Nuclear Norm

Notes

Acknowledgement

This work is supported by NSFC (Grant nos. 61305035, 61472285, 61511130084, and 61503263), the Zhejiang Provincial Natural Science Foundation (Grant nos. LY17F030004, LR17F030001, LY16F020023, and LY12F03016), the Project of Science and Technology Plans of Zhejiang Province (Grant nos. 2014C31062 and 2015C31168), and the Project of Science and Technology Plans of Wenzhou (Grant no. G20150017).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Kaifei Zhang (1)
  • Di Wang (1, corresponding author)
  • Xiaoqin Zhang (1)
  • Nannan Gu (2)
  • Hongxing Jiang (1)
  • Xiuzi Ye (1)
  1. College of Mathematics and Information Science, Wenzhou University, Zhejiang, China
  2. Capital University of Economics and Business, Beijing, China
