Journal of Global Optimization, Volume 62, Issue 4, pp 811–832

On optimal low rank Tucker approximation for tensors: the case for an adjustable core size

  • Bilian Chen
  • Zhening Li
  • Shuzhong Zhang


Approximating high order tensors by low Tucker-rank tensors has applications in psychometrics, chemometrics, computer vision, and biomedical informatics, among other fields. Traditionally, solution methods for finding a low Tucker-rank approximation presume that the size of the core tensor is specified in advance, which may not be a realistic assumption in many applications. In this paper we propose a new computational model in which the configuration and the size of the core become part of the decisions to be optimized. Our approach is based on the so-called maximum block improvement method for non-convex block optimization. Numerical tests on various real data sets from gene expression analysis and image compression are reported, showing the promising performance of the proposed algorithms.
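The maximum block improvement (MBI) idea mentioned in the abstract can be illustrated on the classical fixed-core-size Tucker problem: at each iteration, every factor-matrix block is tentatively updated (here by a truncated SVD of a partially contracted tensor, as in HOOI-style alternating schemes), but only the update yielding the largest objective gain is committed. The sketch below, in Python/NumPy, is our own illustration under those assumptions, not the paper's algorithm; in particular it keeps the core size fixed, whereas the paper treats the core configuration itself as a decision variable, and all function names are ours.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the given mode becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Mode-n product T x_mode M, where M has shape (J, I_mode)."""
    Tm = M @ unfold(T, mode)
    shape = [M.shape[0]] + [s for i, s in enumerate(T.shape) if i != mode]
    return np.moveaxis(Tm.reshape(shape), 0, mode)

def core_norm(X, U):
    """Objective ||X x_1 U1' x_2 U2' ... x_N UN'||_F, to be maximized."""
    G = X
    for n, Un in enumerate(U):
        G = mode_product(G, Un.T, n)
    return np.linalg.norm(G)

def mbi_tucker(X, ranks, max_iter=100, tol=1e-10):
    """Rank-(r1,...,rN) Tucker approximation via maximum block improvement."""
    N = X.ndim
    # HOSVD initialization: leading left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(X, n))[0][:, :r] for n, r in enumerate(ranks)]
    f = core_norm(X, U)
    for _ in range(max_iter):
        best = None
        for n in range(N):  # tentative update of every block
            Y = X
            for m in range(N):
                if m != n:
                    Y = mode_product(Y, U[m].T, m)
            Un = np.linalg.svd(unfold(Y, n))[0][:, :ranks[n]]
            trial = list(U)
            trial[n] = Un
            fn = core_norm(X, trial)
            if best is None or fn > best[0]:
                best = (fn, n, Un)
        if best[0] <= f + tol:  # no block improves: stationary point
            break
        f, n_star, U_star = best
        U[n_star] = U_star  # commit only the best block
    return U, f
```

Because only the best-improving block is committed per iteration, the objective is monotonically non-decreasing; on data of exact multilinear rank the HOSVD initialization is already optimal and the loop terminates immediately.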


Keywords: Multiway array · Tucker decomposition · Low-rank approximation · Maximum block improvement



This work was partially supported by the National Science Foundation of China (Grants 11301436 and 11371242), the National Science Foundation of the USA (Grant CMMI-1161242), the Natural Science Foundation of Shanghai (Grant 12ZR1410100), and the Ph.D. Programs Foundation of the Chinese Ministry of Education (Grant 20123108120002). We would like to thank the anonymous referee for the insightful suggestions.



Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Department of Automation, Xiamen University, Xiamen, China
  2. Department of Mathematics, University of Portsmouth, Portsmouth, UK
  3. Department of Industrial and Systems Engineering, University of Minnesota, Minneapolis, USA
