
Sequence Learning in Unsupervised and Supervised Vector Quantization Using Hankel Matrices

  • Conference paper
Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10245)

Abstract

In the present contribution we consider sequence learning by means of unsupervised and supervised vector quantization that is invariant with respect to shifts in the sequences. A mathematical tool to achieve such an invariant representation and comparison of sequences is the Hankel matrix, together with an appropriate dissimilarity measure based on subspace angles. We discuss the mathematical properties of Hankel matrices and show how they can be incorporated into prototype-based vector quantization schemes such as neural gas and self-organizing maps for clustering and data compression in the case of unsupervised learning. For classification learning we refer to the closely related supervised learning vector quantization scheme. In particular, median variants of these vector quantizers allow an easy application of Hankel matrices. A possible application of the Hankel matrix approach is the analysis of DNA sequences, since its invariance properties remove the need for sequence alignment.
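To make the abstract's core idea concrete, the following is a minimal sketch (not the authors' implementation; all function names are hypothetical) of how a scalar sequence can be embedded into a Hankel matrix and how two such matrices can be compared by the principal angles between their column spaces. A shifted copy of a sequence yields the same column space, which is the source of the shift invariance.

```python
import numpy as np

def hankel_matrix(seq, num_rows):
    """Stack shifted windows of `seq` row-wise: entry (i, j) = seq[i + j]."""
    seq = np.asarray(seq, dtype=float)
    num_cols = len(seq) - num_rows + 1
    return np.array([seq[i:i + num_cols] for i in range(num_rows)])

def column_space_basis(H, tol=1e-8):
    """Orthonormal basis of the column space of H, truncated at numerical rank."""
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))
    return U[:, :rank]

def subspace_angle_dissimilarity(H1, H2):
    """Sum of squared principal angles between the column spaces of H1 and H2.
    The singular values of Q1^T Q2 are the cosines of those angles."""
    Q1, Q2 = column_space_basis(H1), column_space_basis(H2)
    cosines = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), -1.0, 1.0)
    return float(np.sum(np.arccos(cosines) ** 2))

# A sinusoid and a shifted copy of it span the same Hankel column space,
# so their dissimilarity is numerically zero; a different frequency is not.
t = np.arange(20)
x = np.sin(0.5 * t)
d_shift = subspace_angle_dissimilarity(hankel_matrix(x, 4),
                                       hankel_matrix(np.sin(0.5 * (t + 3)), 4))
d_other = subspace_angle_dissimilarity(hankel_matrix(x, 4),
                                       hankel_matrix(np.sin(1.3 * t), 4))
```

Any dissimilarity of this kind can then be plugged into median variants of neural gas, self-organizing maps, or learning vector quantization, which only require pairwise dissimilarities rather than vector-space averages.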


Notes

  1.

    The dyadic product here is

    $$\mathbf{H}^{\circ}\left(\mathbf{y}\right)\otimes\mathbf{H}^{\circ}\left(\mathbf{y}\right)=\left(\begin{array}{ccc} \mathbf{H}_{\mathbf{y}}^{1}\cdot\left(\mathbf{H}_{\mathbf{y}}^{1}\right)^{T} & \cdots & \mathbf{H}_{\mathbf{y}}^{1}\cdot\left(\mathbf{H}_{\mathbf{y}}^{N}\right)^{T}\\ \vdots & \ddots & \vdots\\ \mathbf{H}_{\mathbf{y}}^{N}\cdot\left(\mathbf{H}_{\mathbf{y}}^{1}\right)^{T} & \cdots & \mathbf{H}_{\mathbf{y}}^{N}\cdot\left(\mathbf{H}_{\mathbf{y}}^{N}\right)^{T} \end{array}\right)$$

    i.e. the tensor is a matrix of matrices.
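Read literally, this "matrix of matrices" can be assembled blockwise; the sketch below (hypothetical helper name, not from the paper) forms the dyadic product of a collection of blocks H^1, ..., H^N exactly as the footnote describes. Since block (j, i) is the transpose of block (i, j), the assembled matrix is symmetric.

```python
import numpy as np

def block_dyadic_product(blocks):
    """Assemble the N x N block matrix whose (i, j) block is H^i (H^j)^T,
    i.e. the 'matrix of matrices' from the footnote."""
    return np.block([[Hi @ Hj.T for Hj in blocks] for Hi in blocks])

# Two 2 x 3 blocks standing in for Hankel sub-matrices H^1 and H^2
# (constant anti-diagonals, i.e. genuinely Hankel).
H1 = np.array([[1., 2., 3.],
               [2., 3., 4.]])
H2 = np.array([[0., 1., 2.],
               [1., 2., 3.]])
T = block_dyadic_product([H1, H2])  # 4 x 4, symmetric
```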


Author information

Corresponding author

Correspondence to Thomas Villmann.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Mohammadi, M., Biehl, M., Villmann, A., Villmann, T. (2017). Sequence Learning in Unsupervised and Supervised Vector Quantization Using Hankel Matrices. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L., Zurada, J. (eds) Artificial Intelligence and Soft Computing. ICAISC 2017. Lecture Notes in Computer Science, vol. 10245. Springer, Cham. https://doi.org/10.1007/978-3-319-59063-9_12

  • DOI: https://doi.org/10.1007/978-3-319-59063-9_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59062-2

  • Online ISBN: 978-3-319-59063-9
