
Lossy Compression, Classification, and Regression

  • Robert M. Gray
Part of The IMA Volumes in Mathematics and its Applications book series (IMA, volume 107)

Abstract

The traditional goal of data compression is to speed transmission or to minimize the storage requirements of a signal while preserving the best possible quality of reproduction. This is usually formalized as minimizing the average distortion between the input and output, in the sense of mean squared error (MSE) or a similar measure, subject to a constraint on the average bit rate. Distortion measures are intended to ensure that low average distortion means the reconstructed signal will “look like” or “sound like” the original uncompressed signal. We refer to the MSE throughout, but we mean it in the general sense of any applicable distortion measure. The constrained optimization problem leads to theoretical analysis, providing optimality properties and performance bounds, and to design algorithms for a variety of code structures. Numerous methods can be used to design an MSE-minimizing vector quantizer (VQ), including clustering algorithms such as the Lloyd or k-means algorithm and their tree-structured extensions [1]; algorithms based on neural-net ideas such as competitive learning, backpropagation, and self-organizing feature maps; and deterministic annealing algorithms.
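
The Lloyd (k-means) design loop mentioned above alternates two steps: a nearest-neighbor step that maps every training vector to its minimum-distortion codeword, and a centroid step that replaces each codeword with the mean of the vectors assigned to it. The sketch below illustrates that loop for Euclidean (MSE) distortion. It is a minimal illustration, not code from this chapter or from [1]; the function names (lloyd_vq, vq_encode), the random initialization, and the fixed iteration count are assumptions made for the example.

    import numpy as np

    def lloyd_vq(training, codebook_size, iterations=50, seed=0):
        """Sketch of generalized Lloyd / k-means design of an MSE VQ codebook.

        training: (N, d) array of training vectors.
        Returns a (codebook_size, d) array of codewords.
        """
        rng = np.random.default_rng(seed)
        # Initialize codewords from randomly chosen training vectors
        # (an assumption; splitting-based initialization is also common).
        idx = rng.choice(len(training), size=codebook_size, replace=False)
        codebook = training[idx].astype(float)
        for _ in range(iterations):
            # Nearest-neighbor condition: assign each training vector to its
            # minimum-MSE codeword.
            dists = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            # Centroid condition: replace each codeword by the mean of its cell,
            # leaving empty cells unchanged.
            for i in range(codebook_size):
                cell = training[labels == i]
                if len(cell) > 0:
                    codebook[i] = cell.mean(axis=0)
        return codebook

    def vq_encode(vectors, codebook):
        """Encode each vector as the index of its nearest (MSE) codeword."""
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return dists.argmin(axis=1)

For example, codebook = lloyd_vq(np.random.randn(1000, 4), 16) designs a 16-codeword quantizer for 4-dimensional vectors, and vq_encode then produces the 4-bit index stream a compressor would transmit.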

Keywords

Mean Square Error · Minimum Mean Square Error · Near Neighbor · Vector Quantization · Lossy Compression

References

  [1] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression, Kluwer Academic Press, 1992.
  [2] A. Gersho and B. Ramamurthi, Image coding using vector quantization, in International Conference on Acoustics, Speech, and Signal Processing, vol. 1, Paris, pp. 428–431, April 1982.
  [3] M. Effros, P. A. Chou, and R. M. Gray, Variable dimension weighted universal vector quantization and noiseless coding, Proceedings of the 1994 Data Compression Conference, J. Storer and M. Cohn, eds., IEEE Computer Society Press, Snowbird, Utah, pp. 1–11, March 1994.
  [4] M. Effros, P. A. Chou, and R. M. Gray, One-pass adaptive universal vector quantization, Proceedings of the 1994 International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Adelaide, Australia, April 1994.
  [5] M. Effros, P. A. Chou, and R. M. Gray, Rates of convergence in adaptive universal vector quantization, 1994 IEEE International Symposium on Information Theory, Trondheim, Norway, June 1994.
  [6] T. M. Cover and P. E. Hart, Nearest neighbor pattern classification, IEEE Trans. Inform. Theory, vol. IT-13, pp. 21–27, 1967.
  [7] Q. Xie, C. A. Laszlo, and R. K. Ward, Vector quantization technique for nonparametric classifier design, IEEE Trans. Pattern Anal. and Mach. Int., vol. 15, pp. 1326–1330, Dec. 1993.
  [8] K. Popat and R. W. Picard, Novel cluster-based probability model for texture synthesis, classification, and compression, in Proc. SPIE Visual Communications and Image Processing, Boston, MA, Nov. 1993.
  [9] K. Popat and R. W. Picard, Cluster-based probability model applied to image restoration and compression, in ICASSP, Adelaide, Australia, April 1994.
  [10] T. Kohonen, Self-Organization and Associative Memory, Berlin: Springer-Verlag, third ed., 1989.
  [11] T. Kohonen, G. Barna, and R. Chrisley, Statistical pattern recognition with neural networks: benchmarking studies, in IEEE International Conference on Neural Networks, vol. I, pp. 61–68, July 1988.
  [12] A. B. Nobel, Histogram regression estimation using data-dependent partitions, Annals of Statistics, vol. 24, pp. 1084–1105, June 1996.
  [13] G. Lugosi and A. Nobel, Consistency of data-driven histogram methods for density estimation and classification, Annals of Statistics, vol. 24, pp. 687–706, April 1996.
  [14] T. Linder, G. Lugosi, and K. Zeger, Rates of convergence in the source coding theorem, empirical quantizer design, and universal lossy source coding, IEEE Transactions on Information Theory, vol. 40, pp. 1728–1740, 1994.
  [15] A. Nobel and R. A. Olshen, Almost sure consistency for a variable rate lossy code, Proceedings of the 1993 IEEE International Symposium on Information Theory, San Antonio, Texas, January 1993.
  [16] A. B. Nobel and R. A. Olshen, Termination and greedy growing for tree-structured vector quantizers, IEEE Transactions on Information Theory, vol. 42, pp. 191–206, January 1996.
  [17] E. A. Riskin and R. M. Gray, A greedy tree growing algorithm for the design of variable rate vector quantizers, Proceedings of the 1990 Picture Coding Symposium, pp. 11.4.1–11.4.3, Cambridge, MA, March 1990.
  [18] E. A. Riskin and R. M. Gray, A greedy tree growing algorithm for the design of variable rate vector quantizers, IEEE Trans. Signal Processing, vol. 39, pp. 2500–2507, 1991.
  [19] Optimal nonlinear interpolative vector quantization, IEEE Trans. Commun., vol. COM-38, no. 9, pp. 1285–1287.
  [20] D. Miller, A. Rao, K. Rose, and A. Gersho, A global optimization technique for statistical classifier design, IEEE Trans. Signal Processing, December 1996.
  [21] J. E. Shore and D. K. Burton, Discrete utterance speech recognition without time alignment, in International Conference on Acoustics, Speech, and Signal Processing, p. 907, May 1982.
  [22] J. E. Shore, D. Burton, and J. Buck, A generalization of isolated word recognition using vector quantization, in International Conference on Acoustics, Speech, and Signal Processing, pp. 1021–1024, April 1983.
  [23] J. E. Shore and D. K. Burton, Discrete utterance speech recognition without time alignment, IEEE Trans. Inform. Theory, vol. IT-29, pp. 473–491, July 1983.
  [24] G. F. McLean, Vector quantization for texture classification, IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, pp. 637–649, May/June 1993.
  [25] K. O. Perlmutter, R. M. Gray, R. A. Olshen, and S. M. Perlmutter, Bayes risk weighted vector quantization with CART estimated posteriors, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 4, pp. 2435–2438, May 1995.
  [26] K. Oehler and R. Gray, Combining image classification and image compression using vector quantization, in Proceedings of the 1993 IEEE Data Compression Conference (DCC), J. Storer and M. Cohn, eds., IEEE Computer Society Press, Snowbird, Utah, pp. 2–11, March 1993.
  [27] R. M. Gray, K. L. Oehler, K. O. Perlmutter, and R. A. Olshen, Combining tree-structured vector quantization with classification and regression trees, Ibid., pp. 1494–1498.
  [28] K. L. Oehler and R. M. Gray, Combining image compression and classification using vector quantization, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, pp. 461–473, May 1995.
  [29] C. L. Nash, K. O. Perlmutter, and R. M. Gray, Evaluation of Bayes risk weighted vector quantization with posterior estimation in the detection of lesions in digitized mammograms, in Proceedings of the 28th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, vol. 1, pp. 716–720, October 1994.
  [30] K. O. Perlmutter, C. L. Nash, and R. M. Gray, A comparison of Bayes risk weighted vector quantization with posterior estimation with other VQ-based classifiers, in Proceedings of the IEEE 1994 International Conference on Image Processing (ICIP), vol. 2, pp. 217–221, Austin, TX, Nov. 1994.
  [31] K. O. Perlmutter, Compression and Classification of Images using Vector Quantization and Decision Trees, Ph.D. Thesis, December 1995.
  [32] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees, Belmont, California: Wadsworth, 1984.
  [33] L. Gordon and R. A. Olshen, Asymptotically efficient solutions to the classification problem, Annals of Statistics, vol. 6, pp. 515–533, 1978.
  [34] L. Gordon and R. A. Olshen, Consistent nonparametric regression from recursive partitioning schemes, Journal of Multivariate Analysis, vol. 10, pp. 611–627, 1980.
  [35] L. Gordon and R. A. Olshen, Almost surely consistent nonparametric regression from recursive partitioning schemes, Journal of Multivariate Analysis, vol. 15, pp. 147–163, 1984.
  [36] A. Nobel, Recursive partitioning to reduce distortion, Technical Report UIUC-BI-93-01, Beckman Institute, University of Illinois, Urbana-Champaign, 1995.
  [37] P. H. Westerink, J. Biemond, and D. E. Boekee, An optimal bit allocation algorithm for sub-band coding, in International Conference on Acoustics, Speech, and Signal Processing, pp. 757–760, 1988.
  [38] Y. Shoham and A. Gersho, Efficient bit allocation for an arbitrary set of quantizers, IEEE Trans. Acoust. Speech Signal Process., vol. ASSP-36, pp. 1445–1453, September 1988.
  [39] P. A. Chou, T. Lookabaugh, and R. M. Gray, Optimal pruning with applications to tree-structured source coding and modeling, IEEE Trans. Inform. Theory, pp. 299–315, March 1989.
  [40] E. A. Riskin, Optimal bit allocation via the generalized BFOS algorithm, IEEE Trans. Inform. Theory, vol. 37, pp. 400–402, March 1991.
  [41] K. Oehler, P. C. Cosman, R. M. Gray, and J. May, Classification using vector quantization, Proc. Twenty-Fifth Annual Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, Calif., pp. 439–445, Nov. 1991.
  [42] K. O. Perlmutter, S. M. Perlmutter, R. M. Gray, R. A. Olshen, and K. L. Oehler, Bayes risk weighted vector quantization with posterior estimation for image compression and classification, IEEE Transactions on Image Processing, vol. 5, no. 2, pp. 347–360, February 1996.
  [43] K. O. Perlmutter, C. L. Nash, and R. M. Gray, Bayes risk weighted tree-structured vector quantization with posterior estimation, Ibid., vol. 2, pp. 217–221, November 1994.
  [44] N. Chaddha, K. Perlmutter, and R. M. Gray, Joint image classification and compression using hierarchical table-lookup vector quantization, Proceedings of the Data Compression Conference (DCC '96), Snowbird, UT, USA, 31 March–3 April 1996, IEEE Computer Society Press, pp. 23–32.
  [45] N. Chaddha, P. A. Chou, and R. M. Gray, Constrained and recursive hierarchical table-lookup vector quantization, Ibid., pp. 220–229.
  [46] K. O. Perlmutter, N. Chaddha, J. Buckheit, R. A. Olshen, and R. M. Gray, Text segmentation in mixed mode images using classification trees and transform tree-structured vector quantization, Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing, vol. 4, pp. 2231–2234, May 1996.
  [47] P. C. Cosman, E. A. Riskin, K. L. Oehler, and R. M. Gray, Using vector quantization for image processing, Proceedings of the IEEE, vol. 81, no. 9, pp. 1326–1341, September 1993.
  [48] P. C. Cosman, R. M. Gray, and R. A. Olshen, Vector quantization: Clustering and classification trees, contributed chapter in Statistics and Images, K. V. Mardia, ed., Carfax Publishing Company, Abingdon, UK, pp. 93–108, 1994.
  [49] K. O. Perlmutter, R. M. Gray, K. L. Oehler, and R. A. Olshen, Bayes risk weighted tree-structured vector quantization with posterior estimation, Ibid., pp. 274–283.
  [50] R. D. Wesel and R. M. Gray, Bayes risk weighted VQ and learning VQ, Ibid., pp. 400–409.
  [51] K. O. Perlmutter, S. M. Perlmutter, M. Effros, and R. M. Gray, An iterative joint codebook and classifier improvement algorithm for finite-state vector quantization, Ibid., vol. 1, pp. 476–481.
  [52] M. de Guzmán, Differentiation of Integrals in R^n, Berlin: Springer-Verlag, 1975.

Copyright information

© Springer Science+Business Media New York 1999

Authors and Affiliations

  • Robert M. Gray
    Information Systems Laboratory, Department of Electrical Engineering, Stanford University, USA
