Fast Approximation of Support Vector Kernel Expansions, and an Interpretation of Clustering as Approximation in Feature Spaces
Kernel-based learning methods provide their solutions as expansions in terms of a kernel. We consider the problem of reducing the computational complexity of evaluating these expansions by approximating them using fewer terms. As a by-product, we point out a connection between clustering and approximation in reproducing kernel Hilbert spaces generated by a particular class of kernels.
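The abstract does not spell out the algorithm, but the idea it describes can be sketched as follows. Assume a Gaussian kernel k(x, y) = exp(-||x - y||² / (2σ²)) and an expansion Ψ = Σᵢ αᵢ Φ(xᵢ) in the induced feature space. Approximating Ψ by a single image Φ(z) and minimizing ||Ψ − Φ(z)||² over z leads to a fixed-point iteration in which z is repeatedly updated to a kernel-weighted mean of the xᵢ; the resemblance of this update to mean-shift-style clustering is the kind of connection between approximation and clustering the abstract alludes to. All names and parameters below (`reduced_set_point`, `sigma`, the toy data) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, z, sigma):
    # k(x_i, z) = exp(-||x_i - z||^2 / (2 sigma^2)) for each row x_i of X
    return np.exp(-np.sum((X - z) ** 2, axis=-1) / (2.0 * sigma ** 2))

def reduced_set_point(X, alpha, sigma, n_iter=100):
    """One-term approximation of Psi = sum_i alpha_i Phi(x_i).

    Minimizing ||Psi - Phi(z)||^2 over z (for a Gaussian kernel) gives
    the fixed-point update
        z <- sum_i alpha_i k(x_i, z) x_i / sum_i alpha_i k(x_i, z),
    i.e. z is pulled toward a kernel-weighted mean of the data --
    the clustering-flavoured step sketched above.
    """
    z = np.average(X, axis=0, weights=np.abs(alpha))  # start at weighted mean
    for _ in range(n_iter):
        w = alpha * gaussian_kernel(X, z, sigma)      # per-point weights
        z = w @ X / w.sum()                           # weighted-mean update
    return z

# Toy example with uniform positive coefficients
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
alpha = np.ones(20)
z = reduced_set_point(X, alpha, sigma=1.0)
```

For expansions with mixed-sign coefficients (as in a trained SVM), the same update applies but the denominator can vanish, so practical reduced-set methods add safeguards and use several terms Φ(z₁), …, Φ(zₘ) rather than one.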