Abstract

A critical aspect of non-linear dimensionality reduction techniques is the construction of the adjacency graph. The difficulty lies in finding its optimal parameters, a process that is generally driven by heuristics. Recently, sparse representation has been proposed as a non-parametric solution to this problem. In this paper, we demonstrate that this approach not only serves for graph construction but also offers an efficient and accurate alternative for out-of-sample embedding. Taking Laplacian Eigenmaps as a case study, we apply our method to the face recognition problem. Experimental results on several challenging datasets confirm the robustness of our approach and its superiority over existing techniques.
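
The abstract compresses the method into two steps: build the adjacency graph by sparsely coding each training sample over the remaining samples, then reuse the same sparse coding to place unseen samples in the learned embedding. The following is a minimal sketch of that pipeline, not the authors' implementation: it assumes scikit-learn's Lasso as a stand-in L1 solver, and the function names (sparse_affinity, laplacian_eigenmaps, embed_out_of_sample) are illustrative.

```python
# Hedged sketch: sparse-representation adjacency graph for Laplacian
# Eigenmaps, plus an out-of-sample step that reuses the sparse codes.
# The Lasso solver and all function names are assumptions, not the paper's.
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def sparse_affinity(X, alpha=0.01):
    """Code each sample x_i over the other samples with an L1 penalty;
    the absolute coefficients serve as non-parametric graph weights."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(X[idx].T, X[i])            # x_i ~ X_{-i}^T w, w sparse
        W[i, idx] = np.abs(coder.coef_)
    return (W + W.T) / 2                     # symmetrize the graph

def laplacian_eigenmaps(W, dim=2):
    """Bottom non-trivial generalized eigenvectors of L y = lambda D y,
    where L = D - W is the graph Laplacian."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, vecs = eigh(L, D)                     # ascending eigenvalues
    return vecs[:, 1:dim + 1]                # skip the constant eigenvector

def embed_out_of_sample(x_new, X_train, Y_train, alpha=0.01):
    """Embed an unseen sample as the sparse-weighted combination of the
    training embeddings, mirroring the graph-construction step."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(X_train.T, x_new)
    w = np.abs(coder.coef_)
    return w @ Y_train / max(w.sum(), 1e-12)
```

Because the same L1 coding defines both the graph weights and the out-of-sample weights, no neighborhood size or kernel bandwidth has to be tuned, which is the non-parametric property the abstract emphasizes.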

Keywords

Recognition Rate · Sparse Representation · Latent Variable Model · Random Projection · Locally Linear Embedding


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Bogdan Raducanu (1)
  • Fadi Dornaika (2, 3)

  1. Computer Vision Center, Bellaterra, Spain
  2. University of the Basque Country (UPV/EHU), San Sebastian, Spain
  3. IKERBASQUE, Basque Foundation for Science, Bilbao, Spain