Abstract
Beyond vector representations, the direct objects of human cognition are often higher-order tensors, such as 2D images and 3D textures. Two interesting questions naturally arise from this fact: how does the human brain represent these tensor percepts on a “manifold,” and how can they be recognized on that “manifold”? In this chapter, we present a supervised model that learns the intrinsic structure of tensors embedded in a high-dimensional Euclidean space. Using fixed-point continuation procedures, our model automatically and jointly discovers the optimal dimensionality and the representations of the low-dimensional embeddings, making it an effective simulation of the human brain's cognitive process. Furthermore, the generalization of our model, based on similarity between the learned low-dimensional embeddings, can be viewed as a counterpart of recognition in the human brain. Experiments on object recognition and face recognition demonstrate the superiority of the proposed model over state-of-the-art approaches.
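The fixed-point continuation idea mentioned above is commonly realized as iterative singular-value shrinkage: each pass takes a gradient step on a smooth fitting term, then soft-thresholds the singular values, so small ones are driven exactly to zero and the rank (i.e., the embedding dimensionality) emerges from the data rather than being fixed in advance. The following is a minimal sketch of that mechanism on a toy denoising problem; the function names, step sizes, and the simple quadratic fitting term are illustrative assumptions, not the chapter's exact formulation.

```python
import numpy as np

def svt_shrink(M, tau):
    """Singular-value soft-thresholding (the proximal step of the
    nuclear norm): shrink each singular value by tau, clipping at zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt, int(np.sum(s > 0))

def fixed_point_low_rank(X, mu=0.5, step=1.0, n_iter=100):
    """Fixed-point iteration for min_Z 0.5*||Z - X||_F^2 + mu*||Z||_*.
    Alternates a gradient step on the smooth term with singular-value
    shrinkage; the recovered rank is discovered automatically."""
    Z = np.zeros_like(X)
    rank = 0
    for _ in range(n_iter):
        G = Z - X                                 # gradient of the fit term
        Z, rank = svt_shrink(Z - step * G, step * mu)
    return Z, rank

# Toy data: noisy observations of a rank-2 matrix.
rng = np.random.default_rng(0)
L = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))
X = L + 0.01 * rng.normal(size=L.shape)
Z, rank = fixed_point_low_rank(X)
print("recovered rank:", rank)  # the iteration finds rank 2 on its own
```

Once low-dimensional embeddings are learned this way, the recognition step the abstract describes reduces to comparing a test point's embedding against the training embeddings, e.g. by nearest-neighbor similarity.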
© 2014 MIT Press. Reprinted, with permission, from Guoqiang Zhong and Mohamed Cheriet, “Large Margin Low Rank Tensor Analysis”, Neural Computation, Vol. 26, No. 4: 761–780.
Acknowledgments
This work is partially supported by the Social Sciences and Humanities Research Council of Canada (SSHRC), the Natural Sciences and Engineering Research Council of Canada (NSERC), the National Natural Science Foundation of China (NSFC) under Grant No. 61403353, and the Fundamental Research Funds for the Central Universities of China. We thank MIT Press for permission to reuse parts of our paper published in Neural Computation.
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this chapter
Zhong, G., Cheriet, M. (2014). Low Rank Tensor Manifold Learning. In: Fu, Y. (eds) Low-Rank and Sparse Modeling for Visual Analysis. Springer, Cham. https://doi.org/10.1007/978-3-319-12000-3_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-11999-1
Online ISBN: 978-3-319-12000-3