Matrix Completion

  • Ke-Lin Du
  • M. N. S. Swamy
Chapter

Abstract

The recovery of a data matrix from a subset of its entries extends the ideas of compressed sensing and sparse approximation. This chapter introduces matrix completion and matrix recovery, and then extends these ideas to tensor factorization and completion.
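To make the entry-recovery problem concrete, below is a minimal Python sketch of one classical approach, singular value thresholding (SVT), which alternates soft-thresholding of singular values with a gradient correction on the observed entries. The threshold tau, step size delta, iteration count, and the toy rank-2 example are illustrative assumptions, not tuned or chapter-specific choices.

    import numpy as np

    def svt_complete(M_obs, mask, tau=5.0, delta=1.2, n_iters=200):
        """Recover a low-rank matrix from the entries where mask == 1.
        Parameters are illustrative, not tuned values."""
        Y = np.zeros_like(M_obs)  # dual variable
        X = np.zeros_like(M_obs)
        for _ in range(n_iters):
            # Shrinkage step: soft-threshold the singular values of Y.
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
            # Dual update: correct the residual on the observed entries only.
            Y += delta * mask * (M_obs - X)
        return X

    # Toy usage: a 40 x 40 rank-2 matrix with roughly half the entries observed.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
    mask = (rng.random(M.shape) < 0.5).astype(float)
    X_hat = svt_complete(M * mask, mask, tau=2.0)
    print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))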


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
  2. Xonlink Inc., Hangzhou, China
