Training Mahalanobis Kernels by Linear Programming

  • Shigeo Abe
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7553)

Abstract

The covariance matrix in the Mahalanobis distance can be trained by semi-definite programming, but this is inefficient for large data sets. In this paper, we constrain the covariance matrix to be diagonal and train Mahalanobis kernels by linear programming (LP). Training can be formulated as a ν-LP SVM (support vector machine) or a regular LP SVM. We clarify how the solutions depend on the margin parameter. If a problem is not separable, a zero-margin solution, which does not appear in the LP SVM, appears in the ν-LP SVM; therefore, we use the LP SVM for kernel training. Using benchmark data sets, we show that the proposed method gives better generalization ability than RBF (radial basis function) kernels and than Mahalanobis kernels calculated directly from the training data, and that it selects input variables well, especially when the number of input variables is large.
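The kernel form described above can be sketched as follows. This is a minimal illustration, not the paper's training procedure: it only shows a Mahalanobis kernel whose covariance matrix is restricted to a diagonal `D = diag(d)`, where the vector `d` of per-feature weights is what the paper proposes to determine by linear programming (the variable names and the width parameter `delta` are illustrative assumptions).

```python
import numpy as np

def mahalanobis_kernel(x, y, d, delta=1.0):
    """Mahalanobis kernel with a diagonal covariance matrix:

        K(x, y) = exp(-delta * (x - y)^T diag(d) (x - y))

    With all entries of d equal, this reduces to an ordinary RBF kernel;
    a zero entry in d effectively removes that input variable, which is
    why training d can act as feature selection.
    """
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-delta * np.dot(d, diff * diff)))

x = np.array([1.0, 2.0])
y = np.array([2.0, 0.0])
d_uniform = np.array([0.5, 0.5])  # uniform weights: behaves like an RBF kernel
d_sparse = np.array([1.0, 0.0])   # zero weight drops the second feature
```

With `d_sparse`, only the first coordinate contributes to the distance, so the second input variable is ignored entirely; a sparse LP solution for `d` thus selects input variables as a by-product of kernel training.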

Keywords

Support Vector Machine · Radial Basis Function · Mahalanobis Distance · Radial Basis Function Kernel · Good Generalization Ability



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Shigeo Abe, Kobe University, Kobe, Japan
