
Information Potential Variability for Hyperparameter Selection in the MMD Distance

  • Cristhian K. Valencia
  • Andrés Álvarez
  • Edgar A. Valencia
  • Mauricio A. Álvarez
  • Álvaro Orozco
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11401)

Abstract

Methodologies based on Reproducing Kernel Hilbert Space (RKHS) embeddings have been gaining importance in machine learning. In tasks such as time series classification, there is a tendency to construct classifiers based on RKHS metrics that promote separability among classes, which requires identifying an appropriate RKHS by tuning the hyperparameters of a characteristic kernel. In most applications, the characteristic kernel hyperparameter is adjusted by heuristic cross-validation: a grid of candidate values is constructed and the performance at each one is evaluated. This can yield inaccurate values, since the optimum is not necessarily contained in the grid, and it incurs a high computational cost. We propose instead to use the information potential variability (IPV) derived from a Parzen-based probability density estimator. Specifically, we search for an RKHS by optimizing the global kernel hyperparameter that describes the IPV. Our methodology is tested on time series classification using a well-known RKHS metric, the Maximum Mean Discrepancy (MMD), with a 1-NN classifier. Results show that our strategy estimates suitable RKHSs, favoring data separability and achieving competitive results in terms of average classification accuracy.
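
To make the pipeline above concrete, here is a minimal sketch (in Python, with hypothetical helper names such as `select_sigma` and `mmd2`) of how an IPV-style bandwidth selection could replace a cross-validation grid search before computing MMD. It assumes the IPV criterion is the variance of the Parzen kernel entries and uses the standard biased empirical MMD estimate; the paper's exact objective and estimator may differ.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.spatial.distance import cdist

def gaussian_gram(X, Y, sigma):
    """Gaussian (characteristic) kernel matrix between sample sets X and Y."""
    return np.exp(-cdist(X, Y, 'sqeuclidean') / (2.0 * sigma ** 2))

def ip_variability(X, sigma):
    """IPV-style criterion: variance of the Parzen/information-potential
    terms k_sigma(x_i, x_j). A degenerate sigma drives the off-diagonal
    entries toward 0 (too small) or all entries toward 1 (too large),
    so the variance shrinks at both extremes."""
    return np.var(gaussian_gram(X, X, sigma))

def select_sigma(X, lo=1e-3, hi=1e3):
    """Pick the global bandwidth by maximizing the IPV criterion,
    replacing a cross-validation grid search with a 1-D optimization."""
    res = minimize_scalar(lambda s: -ip_variability(X, s),
                          bounds=(lo, hi), method='bounded')
    return res.x

def mmd2(X, Y, sigma):
    """Biased empirical estimate of the squared MMD between samples X and Y."""
    return (gaussian_gram(X, X, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean()
            - 2.0 * gaussian_gram(X, Y, sigma).mean())

# Toy usage: two sample sets treated as draws from two distributions.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 1))
Y = rng.normal(0.5, 1.0, size=(100, 1))
sigma = select_sigma(np.vstack([X, Y]))
print(f"sigma = {sigma:.3f}, MMD^2 = {mmd2(X, Y, sigma):.4f}")
```

A 1-NN classifier would then label a test series with the class of the training series minimizing `mmd2`, matching the classification setup described in the abstract.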

Keywords

RKHS · MMD · Characteristic kernel · Information potential

Acknowledgments

This research was supported by project 1110-744-55778, funded by Colciencias. The authors would like to thank the Master's program in Electrical Engineering at Universidad Tecnológica de Pereira, Colombia, for partially funding this research.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Cristhian K. Valencia (1)
  • Andrés Álvarez (1)
  • Edgar A. Valencia (1)
  • Mauricio A. Álvarez (2)
  • Álvaro Orozco (1)

  1. Automatic Research Group, Universidad Tecnológica de Pereira, Pereira, Colombia
  2. Department of Computer Science, University of Sheffield, Sheffield, UK
