
Vector Quantization Using Mixture of Epsilon-Insensitive Components

  • Kazuho Watanabe
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8228)

Abstract

We consider mixture models consisting of ε-insensitive component distributions, which provide an extension of Laplacian mixture models. An EM-type learning algorithm is derived for maximum likelihood estimation of the mixture models. The derived algorithm is applied to approximate computation of rate-distortion functions associated with the ε-insensitive loss function. Then the robustness property of the mixture of ε-insensitive component distributions is demonstrated in a multi-dimensional mixture modelling problem.
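For readers less familiar with the component family, a minimal sketch of the one-dimensional ε-insensitive density follows, assuming a location parameter μ and a scale parameter b; this notation is illustrative and may differ from the paper's parameterization:

\[
  \rho_\varepsilon(x,\mu) \;=\; \max\bigl(0,\,|x-\mu| - \varepsilon\bigr),
  \qquad
  p(x \mid \mu, b, \varepsilon) \;=\; \frac{1}{2(b+\varepsilon)}\,
  \exp\!\left(-\frac{\rho_\varepsilon(x,\mu)}{b}\right).
\]

Setting ε = 0 recovers the Laplace density (1/2b) exp(−|x − μ|/b), which is the sense in which mixtures of these components extend Laplacian mixture models; the EM-type algorithm of the paper estimates the mixture weights and component parameters by maximum likelihood.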

Keywords

Mixture Model · Support Vector Regression · Vector Quantization · Laplace Distribution · Distortion Measure



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Kazuho Watanabe
    1. Graduate School of Information Science, Nara Institute of Science and Technology, Ikoma, Japan
