Part of the book series: Advances in Pattern Recognition (ACVPR)

Abstract

Manifold learning methods are among the most exciting developments in machine learning in recent years. The central idea underlying these methods is that, although natural data are typically represented in very high-dimensional spaces, the process generating the data often has relatively few degrees of freedom. A natural mathematical characterization of this intuition is to model the data as lying on or near a low-dimensional manifold.
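
As a concrete illustration of this intuition, the minimal sketch below (assuming NumPy and scikit-learn are available; the dataset and algorithm choice are illustrative rather than prescribed by the chapter) samples points from a two-dimensional "Swiss roll" surface embedded in three dimensions and recovers a two-dimensional parameterization with Isomap, one well-known manifold learning algorithm.

# Minimal sketch: points that lie on a 2-D surface embedded in 3-D,
# recovered with Isomap (assumes scikit-learn is installed).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 2000 points sampled from a Swiss roll: ambient dimension 3,
# intrinsic degrees of freedom 2 (position along the roll and height).
X, t = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)

# Isomap approximates geodesic distances on a k-nearest-neighbor graph
# and embeds the points into 2-D by classical MDS on those distances.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print("ambient shape:", X.shape)           # (2000, 3)
print("embedded shape:", embedding.shape)  # (2000, 2)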

Recently, manifold learning has also been applied to classification using both labeled and unlabeled data, that is, to semi-supervised learning. For example, once the manifold has been estimated, the Laplace–Beltrami operator can be used to provide a basis for maps intrinsically defined on this manifold, and the appropriate classifier (map) is then estimated from the labeled examples.
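
In the discrete setting this recipe can be sketched as follows (a minimal illustration assuming NumPy and SciPy, not the chapter's own implementation): the Laplace–Beltrami operator is approximated by the graph Laplacian of a k-nearest-neighbor graph built over all points, labeled and unlabeled alike; its smallest eigenvectors serve as a basis of smooth functions on the estimated manifold; and the classifier is obtained by a least-squares fit to the labeled examples in that basis.

# Semi-supervised classification with a graph-Laplacian eigenbasis
# (a sketch in the spirit of Laplacian eigenmaps; assumes numpy and scipy).
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh, lstsq

def laplacian_eigenbasis(X, k_neighbors=10, n_basis=10):
    """Smallest eigenvectors of a k-NN graph Laplacian built on all points."""
    D = cdist(X, X)                                # pairwise distances
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[1:k_neighbors + 1]  # k nearest neighbors, skip self
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)                         # symmetrize the graph
    L = np.diag(W.sum(axis=1)) - W                 # unnormalized graph Laplacian
    _, vecs = eigh(L)                              # eigenvalues in ascending order
    return vecs[:, :n_basis]                       # smoothest functions on the graph

def fit_predict(X, labeled_idx, y_labeled, k_neighbors=10, n_basis=10):
    """Least-squares fit of one-hot class indicators in the eigenbasis."""
    E = laplacian_eigenbasis(X, k_neighbors, n_basis)
    classes = np.unique(y_labeled)
    Y = (y_labeled[:, None] == classes[None, :]).astype(float)  # one-hot targets
    coef, *_ = lstsq(E[labeled_idx], Y)            # fit using labeled rows only
    scores = E @ coef                              # evaluate the map everywhere
    return classes[np.argmax(scores, axis=1)]

# Toy usage: two well-separated clusters, only four labeled points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(3.0, 0.3, (100, 2))])
y_true = np.repeat([0, 1], 100)
labeled_idx = np.array([0, 1, 100, 101])
# Keep the basis no larger than the number of labeled examples.
y_pred = fit_predict(X, labeled_idx, y_true[labeled_idx], n_basis=2)
print("accuracy on all 200 points:", (y_pred == y_true).mean())

Keeping the basis no larger than the number of labeled examples keeps the least-squares fit well posed; with more labeled data, a larger basis can capture finer structure on the manifold.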

In this chapter, we discuss the manifold perspective on visual pattern representation, dimensionality reduction, and classification, and survey manifold learning concepts, technical mechanisms, and algorithms.




Author information


Corresponding author

Correspondence to Nanning Zheng.


Copyright information

© 2009 Springer-Verlag London Limited

About this chapter

Cite this chapter

Zheng, N., Xue, J. (2009). Manifold Learning. In: Statistical Learning and Pattern Analysis for Image and Video Processing. Advances in Pattern Recognition. Springer, London. https://doi.org/10.1007/978-1-84882-312-9_4

  • DOI: https://doi.org/10.1007/978-1-84882-312-9_4

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84882-311-2

  • Online ISBN: 978-1-84882-312-9

  • eBook Packages: Computer Science (R0)
