
Vector Quantization and Mixture Estimation

  • Gernot A. Fink
Part of the Advances in Computer Vision and Pattern Recognition book series (ACVPR)

Abstract

The goal of a so-called vector quantizer is to compute a compact representation of a set of data vectors. It maps vectors from some input data space onto a finite set of typical reproduction vectors. Ideally, no information relevant for the further processing of the data should be lost in this transformation. In this way, the effort for storing and transmitting vector-valued data is reduced by eliminating the redundant information it contains.
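The mapping described above can be illustrated with a minimal NumPy sketch (not part of the chapter; the names `quantize`, `data`, and `codebook` are ours): each input vector is replaced by the index of its nearest reproduction vector under squared Euclidean distance.

```python
import numpy as np

def quantize(data, codebook):
    """Map each input vector to the index of its nearest
    reproduction (codebook) vector (squared Euclidean distance)."""
    # Pairwise squared distances, shape (n_vectors, n_codewords)
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Toy example: three 2-D vectors quantized with a 2-vector codebook
data = np.array([[0.1, 0.0], [0.9, 1.1], [0.0, 0.2]])
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
print(quantize(data, codebook))  # prints [0 1 0]
```

Transmitting the indices instead of the original vectors is what yields the saving in storage and bandwidth; only the codebook and the index sequence need to be kept.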

The goal of finding a compact representation for the distribution of some data can also be considered from the viewpoint of statistics. Then the task can be described as trying to find a suitable probability distribution that adequately represents the input data. This is usually achieved by means of mixture densities.
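As a sketch of this statistical view (our illustration, not the chapter's algorithm), a one-dimensional two-component Gaussian mixture can be estimated with a plain EM loop; the function name `em_gmm_1d` and the deterministic min/max initialization of the means are our assumptions.

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    """Estimate a 1-D, 2-component Gaussian mixture with EM
    (a minimal sketch for well-separated data)."""
    k = 2
    w = np.full(k, 1.0 / k)            # mixture weights
    mu = np.array([x.min(), x.max()])  # deterministic, well-spread init
    var = np.full(k, x.var())          # shared initial variances
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each x
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
              / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

# Two well-separated Gaussian clusters around 0 and 5
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
w, mu, var = em_gmm_1d(x)
```

The estimated means should land near 0 and 5, with weights close to 0.5 each; the soft responsibilities in the E-step are what distinguish mixture estimation from the hard nearest-neighbor assignment of a vector quantizer.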

In this chapter we will first formally define the concept of a vector quantizer and derive conditions for its optimality. Subsequently, the most important algorithms for building vector quantizers will be presented. Finally, the unsupervised estimation of mixture densities will be treated as a generalization of the vector quantization problem.
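The best-known codebook-construction scheme treated in this context is Lloyd-style (k-means) training, which alternates nearest-neighbor assignment with centroid updates. The following sketch is our illustration under that assumption; the function name `lloyd` and the random initialization from the data are ours.

```python
import numpy as np

def lloyd(data, k=2, iters=20, seed=0):
    """Lloyd-style codebook training (a minimal k-means sketch):
    alternate nearest-neighbor assignment and centroid update."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with k distinct data vectors
    codebook = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest reproduction vector per sample
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)
        # Update step: each prototype becomes the centroid of its cell
        for j in range(k):
            if (idx == j).any():
                codebook[j] = data[idx == j].mean(axis=0)
    return codebook

# Two point clouds at (0, 0) and (5, 5); the trained codebook
# should recover one prototype per cloud
pts = np.vstack([np.zeros((50, 2)), np.full((50, 2), 5.0)])
cb = lloyd(pts)
```

Each iteration can only decrease the total quantization error, which is why this alternation converges to a locally optimal codebook; the EM algorithm for mixture densities generalizes the same two-step structure to soft assignments.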

Keywords

Gaussian Mixture Model · Vector Quantizer · Quantization Error · Prototype Vector · Codebook Vector


Copyright information

© Springer-Verlag London 2014

Authors and Affiliations

  • Gernot A. Fink
  1. Department of Computer Science, TU Dortmund University, Dortmund, Germany
