Adaptive Hausdorff Distances and Tangent Distance Adaptation for Transformation Invariant Classification Learning

  • Sascha Saralajew
  • David Nebel
  • Thomas Villmann (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9949)


Tangent distances (TDs) are important concepts for describing data manifold distances in machine learning. In this paper we show that the Hausdorff distance is equivalent to the TD under certain conditions, and hence prove the metric properties of TDs. Thereafter, we use these TDs as a dissimilarity measure in learning vector quantization (LVQ) for classification learning of class distributions with high variability. In particular, we integrate the TD into the learning scheme of LVQ so that the TD is adapted during LVQ training. The TD approach extends the classical prototype concept to affine subspaces, which yields a much richer topology than prototypes as points in the data space. The manifold theory of TDs ensures that the affine subspaces are aligned along the directions of transformations that are invariant with respect to class discrimination. We demonstrate the superiority of this new approach with two examples.
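The core quantity in the abstract, the distance between a data point and a prototype's affine subspace spanned by tangent vectors, can be sketched as follows. This is a minimal NumPy illustration of a one-sided tangent distance, not the authors' implementation; the function and variable names are our own:

```python
import numpy as np

def tangent_distance(x, w, W):
    """Squared one-sided tangent distance between a data point x and the
    affine subspace {w + W @ theta}: the squared norm of the component of
    x - w orthogonal to the tangent space spanned by the columns of W."""
    # Orthonormalize the tangent vectors for a numerically stable projection.
    Q, _ = np.linalg.qr(W)
    diff = x - w
    # Subtract the projection onto the subspace; what remains is orthogonal.
    residual = diff - Q @ (Q.T @ diff)
    return float(residual @ residual)

# Toy example: a prototype at the origin of R^3 with one tangent direction.
w = np.zeros(3)
W = np.array([[1.0], [0.0], [0.0]])   # subspace = x-axis through the origin
x = np.array([2.0, 3.0, 0.0])
print(tangent_distance(x, w, W))      # the x-axis component of x is ignored
```

In an LVQ scheme of the kind described above, both the prototype `w` and the tangent matrix `W` would be adapted during training, so that the subspace aligns with class-invariant transformation directions.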


Hausdorff Distance · Classification Learning · Dissimilarity Measure · Learning Vector Quantization · Affine Subspace



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Sascha Saralajew (1)
  • David Nebel (2)
  • Thomas Villmann (2, 3), corresponding author
  1. Electrical/Electronics Engineering - Driver Assistance Platform/Systems, Dr. Ing. h.c. F. Porsche AG, Weissach, Germany
  2. Computational Intelligence Group, University of Applied Sciences Mittweida, Mittweida, Germany
  3. Institut für Computational Intelligence und Intelligente Datenanalyse Mittweida (CIID) e.V., Mittweida, Germany
