Incorporating invariances in support vector learning machines

  • Oral Presentations: Theory II: Learning
  • Conference paper
Artificial Neural Networks — ICANN 96 (ICANN 1996)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1112)

Abstract

Developed only recently, support vector learning machines achieve high generalization ability by minimizing a bound on the expected test error; so far, however, there has been no way to add knowledge about invariances of the classification problem at hand. We present a method for incorporating prior knowledge about transformation invariances by applying the transformations to the support vectors, the training examples most critical for determining the classification boundary.
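The idea sketched in the abstract can be illustrated with a small numpy experiment: train a machine, keep only the examples on or inside the margin (the support vectors), apply a transformation the problem is assumed invariant under to those examples, and retrain on the augmented set. This is a minimal sketch only; the toy data, the simple subgradient SVM trainer, the translation transformation, and all function names below are our own illustrative assumptions, not the paper's algorithm or code.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    # Subgradient descent on the L2-regularized hinge loss (a stand-in
    # for a proper SVM solver, kept short for illustration).
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:    # inside the margin: hinge step
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                            # outside: only weight decay
                w -= lr * lam * w
    return w, b

def margin_mask(X, y, w, b, tol=0.1):
    # Examples on or inside the margin band; these play the role of the
    # support vectors in this sketch.
    return y * (X @ w + b) <= 1 + tol

# Toy separable 2-D problem: two Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)),
               rng.normal(+2.0, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

# Step 1: train an initial machine and extract its support vectors.
w, b = train_linear_svm(X, y)
sv = margin_mask(X, y, w, b)

# Step 2: build "virtual" examples by applying a transformation the
# problem is assumed to be invariant under (here, a small translation).
shift = np.array([0.3, 0.0])
X_aug = np.vstack([X, X[sv] + shift, X[sv] - shift])
y_aug = np.concatenate([y, y[sv], y[sv]])

# Step 3: retrain on the augmented set.
w2, b2 = train_linear_svm(X_aug, y_aug)
```

Because only the support vectors are transformed, the augmented training set stays small compared with transforming every training example, which is the appeal of restricting the augmentation to the examples that actually determine the boundary.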


Editor information

Christoph von der Malsburg, Werner von Seelen, Jan C. Vorbrüggen, Bernhard Sendhoff

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Schölkopf, B., Burges, C., Vapnik, V. (1996). Incorporating invariances in support vector learning machines. In: von der Malsburg, C., von Seelen, W., Vorbrüggen, J.C., Sendhoff, B. (eds) Artificial Neural Networks — ICANN 96. ICANN 1996. Lecture Notes in Computer Science, vol 1112. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61510-5_12

  • DOI: https://doi.org/10.1007/3-540-61510-5_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61510-1

  • Online ISBN: 978-3-540-68684-2
