Regularized Sparse Kernel Slow Feature Analysis

  • Wendelin Böhmer
  • Steffen Grünewälder
  • Hannes Nickisch
  • Klaus Obermayer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6911)

Abstract

This paper develops a kernelized slow feature analysis (SFA) algorithm. SFA is an unsupervised learning method that extracts features encoding the latent variables of a time series. The generative relationship between latent variables and observations is usually complex, and current algorithms are either not powerful enough or tend to over-fit. We use the kernel trick in combination with sparsification to provide a powerful function class for large data sets. Sparsity is achieved by a novel matching pursuit approach that can be applied to other tasks as well. For small but complex data sets, however, the kernel SFA approach leads to over-fitting and numerical instabilities. To enforce a stable solution, we introduce regularization into the SFA objective. The versatility and performance of our method are demonstrated on audio and video data sets.
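To make the construction the abstract describes concrete, below is a minimal sketch under stated assumptions: samples are expanded against a small support subset with a Gaussian kernel (the paper selects this subset by matching pursuit; the sketch takes it as given), the feature covariance is stabilized by a simple Tikhonov term lam standing in for the paper's regularized objective, and slow features are read off a generalized eigenproblem. All names here (rbf_kernel, sparse_kernel_sfa, sigma, lam) are illustrative, not from the paper.

import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def sparse_kernel_sfa(X, support, n_features=1, sigma=1.0, lam=1e-3):
    # Expand each sample against the sparse support set (n x m features).
    K = rbf_kernel(X, support, sigma)
    mean = K.mean(axis=0)
    Kc = K - mean                                  # zero-mean constraint
    dK = np.diff(Kc, axis=0)                       # temporal differences
    A = dK.T @ dK / (len(X) - 1)                   # derivative covariance
    B = Kc.T @ Kc / len(X) + lam * np.eye(Kc.shape[1])  # regularized covariance
    # Slow features solve the generalized eigenproblem A w = mu B w;
    # eigh returns eigenvalues in ascending order, so the leading columns
    # are the slowest directions.
    _, W = eigh(A, B)
    W = W[:, :n_features]
    return lambda Xnew: (rbf_kernel(Xnew, support, sigma) - mean) @ W

# Toy usage: a slow sinusoid corrupted by a fast one; the extracted
# feature should track the slow component.
t = np.linspace(0, 4 * np.pi, 500)
X = np.c_[np.sin(t) + 0.2 * np.sin(40 * t), np.cos(t)]
project = sparse_kernel_sfa(X, support=X[::25])    # every 25th sample as support
slow = project(X)

The eigenvector normalization of eigh enforces the unit-variance and decorrelation constraints of SFA with respect to the regularized covariance; setting lam = 0 recovers plain sparse kernel SFA but becomes unstable when the support kernel matrix is ill-conditioned, which is the regime the paper's regularization targets.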

Keywords

Kernel matrix, kernel parameter, subset size, matching pursuit, audio data

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Wendelin Böhmer (1)
  • Steffen Grünewälder (2)
  • Hannes Nickisch (3)
  • Klaus Obermayer (1)

  1. Neural Processing Group, Technische Universität Berlin, Germany
  2. Centre for Computational Statistics and Machine Learning, University College London, United Kingdom
  3. Philips Research Laboratories, Hamburg, Germany
