Stable Transductive Learning

Conference paper · Learning Theory (COLT 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4005)

Abstract

We develop a new error bound for transductive learning algorithms. The slack term in the new bound is a function of a relaxed notion of transductive stability, which measures the sensitivity of the algorithm to most pairwise exchanges of training and test set points. Our bound is based on a novel concentration inequality for symmetric functions of permutations. We also present a simple sampling technique that can estimate, with high probability, the weak stability of transductive learning algorithms with respect to a given dataset. We demonstrate the usefulness of our estimation technique on a well-known transductive learning algorithm.
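As a rough illustration of the sampling idea, the following Python fragment estimates how sensitive a transductive learner is to single training/test exchanges via Monte Carlo sampling. This is a minimal sketch, not the paper's actual procedure: the callable `learner`, the function name, and the max-change statistic are hypothetical placeholders, assuming a transductive learner that, given the full sample `X` and the indices of the labeled points, returns a real-valued prediction for every point.

import random

def estimate_exchange_sensitivity(learner, X, train_idx, test_idx,
                                  n_samples=200, seed=0):
    """Monte Carlo estimate of sensitivity to training/test exchanges.

    Assumes `learner(X, labeled_idx)` returns one prediction per point
    in X (a transductive learner labels the entire given sample).
    Returns the sampled distribution of prediction changes.
    """
    rng = random.Random(seed)
    # Predictions of the learner on the original training/test split.
    base = learner(X, train_idx)
    changes = []
    for _ in range(n_samples):
        # Pick one training point and one test point, and exchange them.
        i = rng.choice(train_idx)
        j = rng.choice(test_idx)
        swapped = [j if k == i else k for k in train_idx]
        perturbed = learner(X, swapped)
        # Record the largest change any prediction suffered from this swap.
        changes.append(max(abs(a - b) for a, b in zip(base, perturbed)))
    return changes

Because weak stability constrains the learner's sensitivity only on most exchanges rather than all of them, a natural summary of the sampled `changes` is a high empirical quantile rather than the maximum; the number of swaps needed for a high-probability estimate follows from standard concentration arguments.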

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

El-Yaniv, R., Pechyony, D. (2006). Stable Transductive Learning. In: Lugosi, G., Simon, H.U. (eds) Learning Theory. COLT 2006. Lecture Notes in Computer Science (LNAI), vol 4005. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11776420_6

  • DOI: https://doi.org/10.1007/11776420_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-35294-5

  • Online ISBN: 978-3-540-35296-9

  • eBook Packages: Computer Science, Computer Science (R0)
