Low Rank Tensor Manifold Learning

  • Chapter in Low-Rank and Sparse Modeling for Visual Analysis

Abstract

Unlike vector representations, the direct objects of human cognition are generally high-order tensors, such as 2D images and 3D textures. From this fact, two interesting questions naturally arise: how does the human brain represent these tensor perceptions in a “manifold” way, and how can they be recognized on the “manifold”? In this chapter, we present a supervised model that learns the intrinsic structure of tensors embedded in a high-dimensional Euclidean space. Using fixed point continuation procedures, our model automatically and jointly discovers the optimal dimensionality and the representations of the low-dimensional embeddings, making it an effective simulation of the cognitive process of the human brain. Furthermore, the generalization of our model, which is based on similarity between the learned low-dimensional embeddings, can be viewed as the counterpart of recognition in the human brain. Experiments on object recognition and face recognition demonstrate the superiority of our proposed model over state-of-the-art approaches.

© 2014 MIT Press. Reprinted, with permission, from Guoqiang Zhong and Mohamed Cheriet, “Large Margin Low Rank Tensor Analysis”, Neural Computation, Vol. 26, No. 4: 761–780.
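The chapter body is not reproduced here, but the two mechanisms the abstract names — a fixed point continuation (FPC) procedure that selects rank and embeddings jointly, and similarity-based recognition on the learned embeddings — can be sketched in miniature. The block below is not the authors' algorithm (which operates on tensors with large-margin constraints); it is a minimal illustration of the FPC-style singular value shrinkage step on a simple trace-norm-regularized matrix problem, followed by a 1-nearest-neighbour rule standing in for similarity-based recognition. All function names and parameter values (shrink, fpc_low_rank, tau, step) are our own illustrative choices.

```python
import numpy as np

def shrink(M, tau):
    """Singular value shrinkage: the fixed-point step used in FPC
    schemes for trace-norm problems. Soft-thresholds the singular
    values of M by tau; the surviving rank is selected automatically."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    r = int(np.count_nonzero(s))
    return (U[:, :r] * s[:r]) @ Vt[:r], r

def fpc_low_rank(X, tau=1.0, step=0.5, n_iter=300):
    """Solve min_W 0.5*||W - X||_F^2 + tau*||W||_* by fixed point
    continuation: a gradient step on the smooth term, then shrinkage.
    The rank of W plays the role of the discovered dimensionality."""
    W = np.zeros_like(X)
    r = 0
    for _ in range(n_iter):
        W, r = shrink(W - step * (W - X), step * tau)
    return W, r

def nn_classify(train_emb, train_labels, test_emb):
    """1-nearest-neighbour recognition on embeddings, mimicking the
    similarity-based generalization described in the abstract.
    train_labels must be a NumPy array."""
    d = np.linalg.norm(test_emb[:, None, :] - train_emb[None, :, :], axis=2)
    return train_labels[np.argmin(d, axis=1)]

# Toy usage: recover the rank of a noisy rank-2 data matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 50))
X += 0.05 * rng.standard_normal(X.shape)
W, r = fpc_low_rank(X, tau=2.0)
print("recovered rank:", r)  # expected: 2, since the shrinkage kills noise singular values
```

Under these assumptions the shrinkage threshold tau controls the trade-off between fidelity and rank, which is how FPC-style methods avoid fixing the target dimensionality in advance.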


Notes

  1. http://sedumi.ie.lehigh.edu/.

  2. http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php.

  3. http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.

  4. https://github.com/rasmusbergpalm/DeepLearnToolbox.


Acknowledgments

This work is partially supported by the Social Sciences and Humanities Research Council of Canada (SSHRC), the Natural Sciences and Engineering Research Council of Canada (NSERC), the National Natural Science Foundation of China (NSFC) under Grant No. 61403353, and the Fundamental Research Funds for the Central Universities of China. We thank the MIT Press for permission to reuse parts of our paper published in Neural Computation.

Author information

Correspondence to Guoqiang Zhong.


Copyright information

© 2014 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Zhong, G., Cheriet, M. (2014). Low Rank Tensor Manifold Learning. In: Fu, Y. (eds) Low-Rank and Sparse Modeling for Visual Analysis. Springer, Cham. https://doi.org/10.1007/978-3-319-12000-3_7

  • DOI: https://doi.org/10.1007/978-3-319-12000-3_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11999-1

  • Online ISBN: 978-3-319-12000-3

  • eBook Packages: Computer Science, Computer Science (R0)
