Riemannian Manifold Clustering and Dimensionality Reduction for Vision-Based Analysis

Part of the Advances in Computer Vision and Pattern Recognition book series (ACVPR)


Segmentation is a fundamental aspect of vision-based motion analysis and has therefore been extensively studied. Its goal is to group the data into clusters based on image properties such as intensity, color, texture, or motion. Most existing segmentation algorithms proceed by associating a feature vector with each pixel in the image or video and then segmenting the data by clustering these feature vectors. This process can be phrased as a manifold learning and clustering problem, in which the objective is to learn a low-dimensional representation of the underlying data structure and to segment the data points into different groups. Over the past few years, various techniques have been developed for learning a low-dimensional representation of a nonlinear manifold embedded in a high-dimensional space. Unfortunately, most of these techniques are limited to the analysis of a single connected nonlinear manifold. In addition, all of these manifold learning algorithms assume that the feature vectors are embedded in a Euclidean space and make use, at least locally, of the Euclidean metric or a variant of it to perform dimensionality reduction. While this may be appropriate in some cases, there are several computer vision problems where it is more natural to consider features that live in a Riemannian space. To address these problems, algorithms for performing simultaneous nonlinear dimensionality reduction and clustering of data sampled from multiple submanifolds of a Riemannian manifold have recently been proposed. In this book chapter, we give a summary of these newly developed algorithms as described in Goh and Vidal (Conference on Computer Vision and Pattern Recognition, 2007 and 2008; European Conference on Machine Learning, 2008; and European Conference on Computer Vision, 2008) and demonstrate their applications to vision-based analysis.
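
To make the pipeline concrete, the following is a minimal, illustrative sketch (in Python, using NumPy, SciPy, and scikit-learn) of the Euclidean version of this idea, in the spirit of locally linear embedding [35] and its use for clustering [33]: LLE for dimensionality reduction, followed by k-means on the low-dimensional coordinates to obtain the groups. This is not the chapter's algorithm; the Riemannian methods summarized here replace the Euclidean neighborhoods, distances, and means used below with geodesic distances and Fréchet means on the appropriate manifold, and the parameter choices (k, d, number of clusters) are illustrative assumptions.

# Minimal Euclidean sketch: LLE embedding followed by k-means clustering.
# Illustrative only; parameters and data are assumptions, not the authors' settings.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def lle_embed(X, k=10, d=2, reg=1e-3):
    """Embed the N rows of X into d dimensions with standard LLE."""
    N = X.shape[0]
    # k nearest neighbors under the Euclidean metric (index 0 is the point itself)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbors = np.argsort(dists, axis=1)[:, 1:k + 1]

    # Reconstruction weights: write each point as an affine combination of its neighbors
    W = np.zeros((N, N))
    for i in range(N):
        Z = X[neighbors[i]] - X[i]            # neighbors centered at x_i
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(k)    # regularize for numerical stability
        w = np.linalg.solve(C, np.ones(k))
        W[i, neighbors[i]] = w / w.sum()

    # Bottom eigenvectors of M = (I - W)^T (I - W), skipping the constant eigenvector
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    _, vecs = eigh(M)
    return vecs[:, 1:d + 1]

# Hypothetical usage: two well-separated groups of 5-D points
X = np.vstack([np.random.randn(100, 5), np.random.randn(100, 5) + 4.0])
Y = lle_embed(X, k=10, d=2)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(Y)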


Keywords: Riemannian manifold · Diffusion tensor image · Dimensionality reduction · Subspace clustering · Locally linear embedding


References

1. Agarwal, S., Lim, J., Zelnik-Manor, L., Perona, P., Kriegman, D., Belongie, S.: Beyond pairwise clustering. In: Computer Vision and Pattern Recognition, vol. 2, pp. 838–845 (2005)
2. Amari, S.: Differential-Geometrical Methods in Statistics. Springer, Berlin (1985)
3. Arsigny, V., Fillard, P., Pennec, X., Ayache, N.: Log-Euclidean metrics for fast and simple calculus on diffusion tensors. Magn. Reson. Med. 56, 411–421 (2006)
4. Barbará, D., Chen, P.: Using the fractal dimension to cluster datasets. In: KDD '00: Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 260–264. ACM, New York (2000)
5. Belkin, M., Niyogi, P.: Laplacian eigenmaps and spectral techniques for embedding and clustering. In: Advances in Neural Information Processing Systems, pp. 585–591. MIT Press, Cambridge (2002)
6. Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 15(6), 1373–1396 (2003)
7. Brand, M., Huang, K.: A unifying theorem for spectral embedding and clustering. In: International Workshop on Artificial Intelligence and Statistics (2003)
8. Burges, C.: Geometric methods for feature extraction and dimensional reduction—a guided tour. In: The Data Mining and Knowledge Discovery Handbook, pp. 59–92. Kluwer Academic, Norwell (2005)
9. Cencov, N.N.: Statistical decision rules and optimal inference. In: Translations of Mathematical Monographs, vol. 53. AMS, Providence (1982)
10. Chen, G., Lerman, G.: Spectral curvature clustering (SCC). Int. J. Comput. Vis. 81(3), 317–330 (2009)
11. Cox, T.F., Cox, M.A.A.: Multidimensional Scaling. Chapman & Hall, London (1994)
12. Cremers, D., Soatto, S.: Motion competition: a variational framework for piecewise parametric motion segmentation. Int. J. Comput. Vis. 62(3), 249–265 (2005)
13. do Carmo, M.P.: Riemannian Geometry. Birkhäuser, Boston (1992)
14. Donoho, D., Grimes, C.: Hessian eigenmaps: locally linear embedding techniques for high-dimensional data. Proc. Natl. Acad. Sci. 100(10), 5591–5596 (2003)
15. Fletcher, P.T., Joshi, S.: Riemannian geometry for the statistical analysis of diffusion tensor data. Signal Process. 87(2), 250–262 (2007)
16. Fréchet, M.: Les éléments aléatoires de nature quelconque dans un espace distancié. Ann. Inst. Henri Poincaré 10, 235–310 (1948)
17. Gionis, A., Hinneburg, A., Papadimitriou, S., Tsaparas, P.: Dimension induced clustering. In: KDD '05: Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, pp. 51–60. ACM, New York (2005)
18. Goh, A., Vidal, R.: Segmenting motions of different types by unsupervised manifold clustering. In: Conference on Computer Vision and Pattern Recognition (2007)
19. Goh, A., Vidal, R.: Segmenting fiber bundles in diffusion tensor images. In: European Conference on Computer Vision (2008)
20. Goh, A., Vidal, R.: Unsupervised Riemannian clustering of probability density functions. In: European Conference on Machine Learning (2008)
21. Goh, A., Vidal, R.: Clustering and dimensionality reduction on Riemannian manifolds. In: Conference on Computer Vision and Pattern Recognition, pp. 238–250 (2008)
22. Govindu, V.: A tensor decomposition for geometric grouping and segmentation. In: Computer Vision and Pattern Recognition, vol. 1, pp. 1150–1157 (2005)
23. Ham, J., Lee, D.D., Mika, S., Schölkopf, B.: A kernel view of the dimensionality reduction of manifolds. In: International Conference on Machine Learning, vol. 69, p. 47 (2004)
24. Haro, G., Randall, G., Sapiro, G.: Translated Poisson mixture model for stratification learning. Int. J. Comput. Vis. 80, 358–374 (2008)
25. Ho, J., Yang, M.H., Lim, J., Lee, K.C., Kriegman, D.: Clustering appearances of objects under varying illumination conditions. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 11–18 (2003)
26. Hotelling, H.: Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 24, 417–441 (1933)
27. Karcher, H.: Riemannian center of mass and mollifier smoothing. Commun. Pure Appl. Math. 30(5), 509–541 (1977)
28. Kindlmann, G., Estepar, R.S.J., Niethammer, M., Haker, S., Westin, C.F.: Geodesic-loxodromes for diffusion tensor interpolation and difference measurement. In: Medical Image Computing and Computer-Assisted Intervention (2007)
29. Levina, E., Bickel, P.J.: Maximum likelihood estimation of intrinsic dimension. In: Advances in Neural Information Processing Systems (2004)
30. Melonakos, J., Mohan, V., Niethammer, M., Smith, K., Kubicki, M., Tannenbaum, A.: Finsler tractography for white matter connectivity analysis of the cingulum bundle. In: Medical Image Computing and Computer-Assisted Intervention (2007)
31. Mordohai, P., Medioni, G.G.: Unsupervised dimensionality estimation and manifold learning in high-dimensional spaces by tensor voting. In: International Joint Conference on Artificial Intelligence, pp. 798–803 (2005)
32. Pennec, X., Fillard, P., Ayache, N.: A Riemannian framework for tensor computing. Int. J. Comput. Vis. 66(1), 41–46 (2006)
33. Polito, M., Perona, P.: Grouping and dimensionality reduction by locally linear embedding. In: Advances in Neural Information Processing Systems. MIT Press, Cambridge (2002)
34. Rao, C.R.: Information and accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 37, 81–89 (1945)
35. Roweis, S., Saul, L.: Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500), 2323–2326 (2000)
36. Roweis, S., Saul, L.: Think globally, fit locally: unsupervised learning of low dimensional manifolds. J. Mach. Learn. Res. 4, 119–155 (2003)
37. Schmid, C.: Constructing models for content-based image retrieval. In: IEEE Conference on Computer Vision and Pattern Recognition (2001)
38. Schölkopf, B., Smola, A.: Learning with Kernels. MIT Press, Cambridge (2002)
39. Sha, F., Saul, L.: Analysis and extension of spectral methods for nonlinear dimensionality reduction. In: International Conference on Machine Learning, pp. 784–791 (2005)
40. Souvenir, R., Pless, R.: Manifold clustering. In: IEEE International Conference on Computer Vision, vol. I, pp. 648–653 (2005)
41. Srivastava, A., Jermyn, I., Joshi, S.: Riemannian analysis of probability density functions with applications in vision. In: IEEE Conference on Computer Vision and Pattern Recognition (2007)
42. Tenenbaum, J.B., de Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290(5500), 2319–2323 (2000)
43. Tipping, M., Bishop, C.: Mixtures of probabilistic principal component analyzers. Neural Comput. 11(2), 443–482 (1999)
44. Varma, M., Zisserman, A.: A statistical approach to texture classification from single images. Int. J. Comput. Vis. 62(1–2), 61–81 (2005)
45. Vidal, R., Ma, Y., Sastry, S.: Generalized principal component analysis (GPCA). IEEE Trans. Pattern Anal. Mach. Intell. 27(12), 1–15 (2005)
46. Wang, Z., Vemuri, B.: An affine invariant tensor dissimilarity measure and its applications to tensor-valued image segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 228–233 (2004)
47. Weinberger, K.Q., Saul, L.: Unsupervised learning of image manifolds by semidefinite programming. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 988–995 (2004)
48. Wright, J., Ma, Y.: Dense error correction via l1-minimization. IEEE Trans. Inf. Theory 56(7), 3540–3560 (2010). doi:10.1109/TIT.2010.2048473
49. Yan, J., Pollefeys, M.: A factorization approach to articulated motion recovery. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. II, pp. 815–821 (2005)
50. Zhang, Z., Zha, H.: Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM J. Sci. Comput. 26(1), 313–338 (2005)

Copyright information

© Springer-Verlag London Limited 2011
