Dimensionality Reduction

Living reference work entry in: Computer Vision

Synonyms

Dimension reduction; Dimensional compression; Dimensional embedding

Related Concepts

Feature Selection

Definition

Dimensionality reduction is the process of reducing the dimension of the vector space spanned by feature vectors (pattern vectors). The reduction is achieved by defining a map from the original space into a lower-dimensional space; different choices of map yield different kinds of reduction.

Background

The feature space, i.e., the vector space spanned by d-dimensional feature vectors (pattern vectors), can be transformed into a vector space of lower dimension d′ (< d), spanned by d′-dimensional feature vectors, through a linear or nonlinear transformation. This transformation allows feature vectors to be represented by lower-dimensional vectors, so that vector operations and statistical analyses, such as multivariate analysis, machine learning, clustering, and classification, become less expensive to perform. Moreover, it tackles the “curse of dimensionality,” the various...
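The classic linear instance of such a transformation is principal component analysis (PCA): project the centered feature vectors onto the d′ eigenvectors of the sample covariance matrix with the largest eigenvalues. The following is a minimal sketch in Python with NumPy; the function name `pca_reduce` and the toy data are illustrative assumptions, not part of the entry.

```python
import numpy as np

def pca_reduce(X, d_prime):
    """Project the d-dimensional rows of X onto the d' principal directions."""
    X_centered = X - X.mean(axis=0)
    # Sample covariance matrix of the features (d x d)
    cov = np.cov(X_centered, rowvar=False)
    # eigh handles the symmetric case; eigenvalues come back in ascending order
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Keep the d' eigenvectors with the largest eigenvalues
    W = eigvecs[:, ::-1][:, :d_prime]
    # Map each feature vector into the d'-dimensional space
    return X_centered @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 feature vectors with d = 5
Y = pca_reduce(X, 2)            # reduced representation with d' = 2
print(Y.shape)                  # (100, 2)
```

Nonlinear variants replace the linear map with, e.g., a kernel eigenvalue problem (kernel PCA) or a neighborhood-graph embedding (Isomap, LLE, Laplacian eigenmaps), but the interface is the same: d-dimensional vectors in, d′-dimensional vectors out.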



Author information

Correspondence to Eisaku Maeda.


Copyright information

© 2020 Springer Nature Switzerland AG

About this entry

Cite this entry

Maeda, E. (2020). Dimensionality Reduction. In: Computer Vision. Springer, Cham. https://doi.org/10.1007/978-3-030-03243-2_652-1

  • DOI: https://doi.org/10.1007/978-3-030-03243-2_652-1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03243-2

  • Online ISBN: 978-3-030-03243-2

  • eBook Packages: Springer Reference Computer Sciences, Reference Module Computer Science and Engineering
