Regularized Principal Manifolds

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1572)

Abstract

Many settings of unsupervised learning can be viewed as quantization problems: the minimization of the expected quantization error subject to some restrictions. This allows tools from the theory of (supervised) risk minimization, such as regularization, to be carried over to unsupervised settings. Moreover, this setting is closely related to both principal curves and the generative topographic map.
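
A compact way to state this view (a sketch in our own notation, not quoted from the paper: \( f \) maps a low-dimensional latent space into the data space, \( P \) is a regularization operator, and \( \lambda > 0 \) a trade-off constant) is the regularized empirical quantization functional

\[ R_{\mathrm{reg}}[f] = \frac{1}{m}\sum_{i=1}^{m} \min_{z \in [0,1]^d} \left\| x_i - f(z) \right\|^2 + \lambda \left\| P f \right\|^2 , \]

where \( x_1, \dots, x_m \) are the observed data. Minimizing the first term alone recovers plain vector quantization; the second term restricts the class of admissible manifolds.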

We explore this connection in two ways: (1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways; experimental results demonstrate the feasibility of the approach; (2) we derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give good bounds on the covering numbers, which allow us to obtain a nearly optimal learning rate of order \( O(m^{-\frac{1}{2}+\alpha}) \) for certain types of regularization operators, where m is the sample size and α an arbitrary positive constant.
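
The abstract does not spell out the algorithm, but the quantization view above suggests an alternating scheme: project each sample to its closest point on the current manifold, then refit the manifold under the regularizer. The following NumPy sketch illustrates this for a one-dimensional manifold; the RBF parametrization, the grid search for projections, and the plain ridge penalty (standing in for \( \lambda \| P f \|^2 \)) are our assumptions, not the paper's construction.

    import numpy as np

    def fit_principal_curve(X, n_nodes=20, lam=1e-2, n_iter=25):
        """Alternating minimization for a 1-D regularized principal manifold.

        The manifold is f(z) = features(z) @ W, with Gaussian RBF features on
        a latent grid; the ridge term lam * ||W||^2 plays the regularizer's role.
        """
        m, N = X.shape
        centers = np.linspace(0.0, 1.0, n_nodes)   # latent RBF centers
        width = 1.0 / n_nodes

        def features(z):
            # Gaussian RBF design matrix, shape (len(z), n_nodes)
            return np.exp(-0.5 * ((z[:, None] - centers[None, :]) / width) ** 2)

        # Initialize latent coordinates by projecting onto the first principal axis.
        Xc = X - X.mean(axis=0)
        z = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][0]
        z = (z - z.min()) / (z.max() - z.min() + 1e-12)

        grid = np.linspace(0.0, 1.0, 200)          # candidate latent points
        Phi_grid = features(grid)
        for _ in range(n_iter):
            # Adaptation step: regularized least-squares fit of the weights W.
            Phi = features(z)
            W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_nodes), Phi.T @ X)
            # Projection step: reassign each x_i to its closest point f(z) on the curve.
            curve = Phi_grid @ W                   # points f(grid), shape (200, N)
            d2 = ((X[:, None, :] - curve[None, :, :]) ** 2).sum(axis=-1)
            z = grid[d2.argmin(axis=1)]
        return W, z

Calling W, z = fit_principal_curve(X) on data X of shape (m, N) returns the weights of the fitted map and the latent coordinate assigned to each sample.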

This work was supported in part by grants from the ARC and the DFG (Ja 379/71 and Ja 379/51). Moreover, we thank Balázs Kégl and Adam Krzyżak for helpful comments and discussions.

Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Smola, A.J., Williamson, R.C., Mika, S., Schölkopf, B. (1999). Regularized Principal Manifolds. In: Fischer, P., Simon, H.U. (eds) Computational Learning Theory. EuroCOLT 1999. Lecture Notes in Computer Science (LNAI), vol 1572. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49097-3_17

  • DOI: https://doi.org/10.1007/3-540-49097-3_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65701-9

  • Online ISBN: 978-3-540-49097-5

  • eBook Packages: Springer Book Archive
