Mixtures of Normalized Linear Projections

  • Conference paper
Advanced Concepts for Intelligent Vision Systems (ACIVS 2009)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 5807)

Abstract

High-dimensional spaces pose a challenge to any classification task. Such spaces typically contain substantial redundancy, so reducing the dimensionality of the data becomes crucial for analysis, density modelling, and classification. In this paper, we present a method for dimensionality reduction in mixture models and its use in classification. For each component of the mixture, the data are projected by a linear transformation onto a lower-dimensional space. The projection matrices and the densities in these compressed spaces are then learned by means of an Expectation-Maximization (EM) algorithm. Two main issues arise from this approach: 1) the scale of the densities can differ across the mixture components, and 2) a singularity problem may occur. We propose solutions to both problems and validate the resulting method on three image data sets from the UCI Machine Learning Repository. The classification performance is compared with that of a mixture of probabilistic principal component analysers (MPPCA). Across the three data sets, our method is consistently more accurate, with improvements ranging from 2.5% to 35.4%.
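The first issue raised in the abstract, that densities computed in differently-scaled projected spaces are not directly comparable, can be illustrated with a small numeric sketch. The projection matrices `A` and `B` and the square-root-determinant normalization below are illustrative assumptions for this demo, not necessarily the exact normalization scheme used in the paper:

```python
import numpy as np

# Two linear projections of the same 2-D standard normal x ~ N(0, I_2) onto 1-D.
A = np.array([[2.0, 0.0]])   # projection that also scales by 2
B = np.array([[1.0, 0.0]])   # plain coordinate projection

def projected_density_at_origin(W):
    """Density of z = W x at z = 0, where x ~ N(0, I).
    Since z ~ N(0, W W^T), the peak density is 1 / sqrt(det(2*pi*W W^T)),
    so it depends on the scale of the projection matrix W."""
    S = W @ W.T
    return 1.0 / np.sqrt(np.linalg.det(2 * np.pi * S))

pA = projected_density_at_origin(A)
pB = projected_density_at_origin(B)

# Raw projected densities differ even though the underlying model is
# identical; multiplying by sqrt(det(W W^T)) removes the scale factor
# and makes the two components' densities comparable again.
nA = pA * np.sqrt(np.linalg.det(A @ A.T))
nB = pB * np.sqrt(np.linalg.det(B @ B.T))
```

In a mixture, each component owns its own projection matrix, so without some such normalization the component with the "smaller" projected space would systematically dominate the responsibilities in the E-step.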




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Otoom, A.F., Perez Concha, O., Gunes, H., Piccardi, M. (2009). Mixtures of Normalized Linear Projections. In: Blanc-Talon, J., Philips, W., Popescu, D., Scheunders, P. (eds) Advanced Concepts for Intelligent Vision Systems. ACIVS 2009. Lecture Notes in Computer Science, vol 5807. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04697-1_7

  • DOI: https://doi.org/10.1007/978-3-642-04697-1_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04696-4

  • Online ISBN: 978-3-642-04697-1

  • eBook Packages: Computer Science, Computer Science (R0)
