Training Mahalanobis Kernels by Linear Programming
The covariance matrix in the Mahalanobis distance can be trained by semi-definite programming, but training on a large data set is inefficient. In this paper, we constrain the covariance matrix to be diagonal and train Mahalanobis kernels by linear programming (LP). Training can be formulated as a ν-LP SVM (support vector machine) or a regular LP SVM. We clarify the dependence of the solutions on the margin parameter. If a problem is not separable, a zero-margin solution, which does not appear in the LP SVM, appears in the ν-LP SVM. Therefore, we use the LP SVM for kernel training. Using benchmark data sets, we show that the proposed method gives better generalization ability than RBF (radial basis function) kernels and Mahalanobis kernels calculated from the training data, and that it selects input variables effectively, especially when the number of input variables is large.
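The diagonal-covariance Mahalanobis kernel at the core of the method can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the paper learns the diagonal weights by linear programming, whereas the inverse-variance weighting used here (computing the kernel directly from the training data) is only an assumed stand-in, corresponding to the baseline of Mahalanobis kernels calculated from the training data.

```python
import numpy as np

def diag_mahalanobis_kernel(x, y, a, delta=1.0):
    """Diagonal Mahalanobis kernel:
    K(x, y) = exp(-delta * sum_i a_i * (x_i - y_i)**2), a_i >= 0.
    With all a_i equal, this reduces to the usual RBF kernel."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-delta * np.sum(np.asarray(a) * d * d)))

def inverse_variance_weights(X):
    """Illustrative heuristic (our assumption): set the diagonal
    entries to the inverse per-feature variances of the training data,
    in place of the LP-trained weights the paper computes."""
    return 1.0 / np.var(np.asarray(X, dtype=float), axis=0)

# Toy training data: 3 samples, 2 input variables.
X = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
a = inverse_variance_weights(X)
k_same = diag_mahalanobis_kernel(X[0], X[0], a)  # 1.0 by construction
k_diff = diag_mahalanobis_kernel(X[0], X[1], a)  # strictly between 0 and 1
```

A weight `a_i` driven to zero removes input variable `i` from the kernel entirely, which is how LP training of the diagonal entries doubles as input-variable selection.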
Keywords: Support Vector Machine · Radial Basis Function · Mahalanobis Distance · Radial Basis Function Kernel · Good Generalization Ability