Completion of High Order Tensor Data with Missing Entries via Tensor-Train Decomposition

  • Conference paper

Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10634)

Abstract

In this paper, we address the completion problem for high-order tensor data with missing entries. Existing tensor factorization and completion methods suffer from the curse of dimensionality when the tensor order \(N \gg 3\). To overcome this problem, we propose an efficient algorithm, TT-WOPT (Tensor-Train Weighted OPTimization), which finds the latent core tensors of the data and recovers the missing entries. Our algorithm employs tensor-train decomposition, which offers powerful representation ability and scales linearly with the tensor order. Experimental results on synthetic data and natural-image completion demonstrate that our method significantly outperforms related methods. In particular, when the missing rate is very high, e.g., 85% to 99%, our algorithm achieves much better performance than other state-of-the-art algorithms.
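
The abstract describes TT-WOPT as fitting latent tensor-train (TT) cores to the observed entries by minimizing a weighted (masked) reconstruction error. As a rough illustration of that idea, and not the authors' implementation, the following NumPy sketch fits TT cores by plain gradient descent on the masked squared error; the function names, the initialization scale, the fixed learning rate, and the choice of plain gradient descent (rather than a more elaborate optimizer) are all assumptions made here for clarity.

```python
import numpy as np


def tt_reconstruct(cores):
    """Contract TT cores G_k (shape r_{k-1} x I_k x r_k) back into a full tensor."""
    full = cores[0]                                   # (1, I1, r1)
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))                 # boundary ranks are 1


def tt_wopt_gradients(cores, residual):
    """Gradients of 0.5 * ||residual||_F^2 w.r.t. each core,
    where residual = mask * (tt_reconstruct(cores) - data)."""
    grads = []
    n = len(cores)
    for k in range(n):
        # Left interface: contraction of cores 0..k-1, shape (prod(I_<k), r_{k-1}).
        left = np.ones((1, 1))
        for c in cores[:k]:
            left = np.tensordot(left, c, axes=([-1], [0]))
        left = left.reshape(-1, cores[k].shape[0])
        # Right interface: contraction of cores k+1..N-1, shape (r_k, prod(I_>k)).
        if k + 1 < n:
            right = cores[k + 1]
            for c in cores[k + 2:]:
                right = np.tensordot(right, c, axes=([-1], [0]))
            right = right.reshape(cores[k].shape[2], -1)
        else:
            right = np.ones((1, 1))
        # Fold the residual so its modes line up with (left, core k, right).
        e = residual.reshape(left.shape[0], cores[k].shape[1], right.shape[1])
        grads.append(np.einsum('pa,piq,bq->aib', left, e, right))
    return grads


def tt_wopt(data, mask, ranks, n_iters=500, lr=1e-2, seed=0):
    """Fit TT cores to the observed entries of `data` (mask == 1) and
    return the completed tensor. `ranks` = [r_1, ..., r_{N-1}]."""
    rng = np.random.default_rng(seed)
    dims = data.shape
    r = [1] + list(ranks) + [1]
    cores = [0.1 * rng.standard_normal((r[k], dims[k], r[k + 1]))
             for k in range(len(dims))]
    for _ in range(n_iters):
        # Weighted error: only observed entries contribute to the objective.
        residual = mask * (tt_reconstruct(cores) - data)
        for core, grad in zip(cores, tt_wopt_gradients(cores, residual)):
            core -= lr * grad
    return tt_reconstruct(cores)
```

On a small synthetic tensor with a random binary mask, `tt_wopt(data, mask, ranks)` returns a completed tensor whose error on the masked-out entries can be compared against the ground truth, mirroring the kind of evaluation described in the abstract.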


Acknowledgments

This work is supported by JSPS KAKENHI (Grant No. 17K00326) and KAKENHI (Grant No. 15H04002).

Author information


Corresponding author

Correspondence to Qibin Zhao.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Yuan, L., Zhao, Q., Cao, J. (2017). Completion of High Order Tensor Data with Missing Entries via Tensor-Train Decomposition. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, ES. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10634. Springer, Cham. https://doi.org/10.1007/978-3-319-70087-8_24

  • DOI: https://doi.org/10.1007/978-3-319-70087-8_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70086-1

  • Online ISBN: 978-3-319-70087-8

  • eBook Packages: Computer Science, Computer Science (R0)
