Learning-Theoretic Methods in Vector Quantization

Chapter in Principles of Nonparametric Learning

Part of the book series: International Centre for Mechanical Sciences (CISM, volume 434)

Abstract

The principal goal of data compression (also known as source coding) is to replace data with a compact representation from which the original data can be reconstructed either perfectly or with sufficiently high accuracy. Typically, this representation takes the form of a sequence of binary digits (bits) suitable for efficient digital transmission or storage.
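
To make the vector quantization setting concrete, here is a minimal illustrative sketch, not taken from the chapter: a codebook is designed from training vectors by a Lloyd-style (k-means) iteration, and fresh vectors are encoded by nearest-neighbor search, so that only the codeword indices (about log2(k) bits per vector) need to be stored or transmitted. All function names, parameters, and the toy Gaussian source below are assumptions chosen for illustration.

    import numpy as np

    def design_codebook(train, k, iters=50, seed=0):
        # Lloyd-style (k-means) codebook design on a training set.
        # train: (n, d) array of training vectors; k: codebook size.
        rng = np.random.default_rng(seed)
        # Initialize with k distinct training vectors (fancy indexing copies).
        codebook = train[rng.choice(len(train), size=k, replace=False)]
        for _ in range(iters):
            # Nearest-neighbor condition: assign each training vector
            # to its closest codeword under squared Euclidean distance.
            d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(axis=1)
            # Centroid condition: move each codeword to the mean of its
            # cell; empty cells are left unchanged.
            for j in range(k):
                cell = train[labels == j]
                if len(cell):
                    codebook[j] = cell.mean(axis=0)
        return codebook

    def encode(x, codebook):
        # Map each vector to the index of its nearest codeword; the
        # indices need only about log2(len(codebook)) bits per vector.
        d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        return d2.argmin(axis=1)

    # Toy usage: design on Gaussian training data, then measure the
    # per-vector squared-error distortion on held-out test data.
    train = np.random.default_rng(1).normal(size=(1000, 2))
    test = np.random.default_rng(2).normal(size=(500, 2))
    cb = design_codebook(train, k=8)
    reconstruction = cb[encode(test, cb)]
    print("test distortion:", ((test - reconstruction) ** 2).sum(-1).mean())

Measuring the distortion on held-out rather than training data matters here: a codebook fit to a finite training set looks better on that set than on the source itself, and quantifying that gap is precisely the learning-theoretic question the chapter addresses.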

Copyright information

© 2002 Springer-Verlag Wien

Cite this chapter

Linder, T. (2002). Learning-Theoretic Methods in Vector Quantization. In: Györfi, L. (eds) Principles of Nonparametric Learning. International Centre for Mechanical Sciences, vol 434. Springer, Vienna. https://doi.org/10.1007/978-3-7091-2568-7_4

  • DOI: https://doi.org/10.1007/978-3-7091-2568-7_4

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83688-0

  • Online ISBN: 978-3-7091-2568-7

  • eBook Packages: Springer Book Archive
