Exploiting Cluster-Structure to Predict the Labeling of a Graph

Conference paper: Algorithmic Learning Theory (ALT 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5254)

Abstract

The nearest neighbor and the perceptron algorithms are intuitively motivated by the aims of exploiting the “cluster” and “linear separation” structure of the data to be classified, respectively. We develop a new online perceptron-like algorithm, Pounce, to exploit both types of structure. We refine the usual margin-based analysis of a perceptron-like algorithm so that it additionally reflects the cluster-structure of the input space. We apply our methods to study the problem of predicting the labeling of a graph. We find that when both the quantity and extent of the clusters are small, we may improve arbitrarily over a purely margin-based analysis.
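
As a rough illustration of the setting only, and not of the paper's Pounce algorithm or its mistake-bound analysis, the Python sketch below runs an ordinary kernel perceptron over a graph, using the pseudoinverse of the graph Laplacian as its kernel (the standard choice in the earlier graph-perceptron work this paper builds on). The function names and the toy two-cluster graph are invented for this example.

    # Minimal sketch (not Pounce): an online kernel perceptron that predicts
    # graph vertex labels (+/-1) using the Laplacian-pseudoinverse kernel.
    import numpy as np

    def laplacian_kernel(adjacency):
        """Pseudoinverse of the graph Laplacian of a weighted adjacency matrix."""
        degree = np.diag(adjacency.sum(axis=1))
        laplacian = degree - adjacency
        return np.linalg.pinv(laplacian)          # n x n kernel matrix K

    def online_kernel_perceptron(K, trial_order, labels):
        """Predict vertex labels one at a time; return the number of mistakes."""
        alpha = np.zeros(K.shape[0])               # dual weights, one per vertex
        mistakes = 0
        for t in trial_order:                      # vertices queried in this order
            y_hat = np.sign(alpha @ K[:, t]) or 1.0   # predict; break ties as +1
            if y_hat != labels[t]:                 # mistake: perceptron update
                alpha[t] += labels[t]
                mistakes += 1
        return mistakes

    # Toy graph: two 3-cliques joined by a single edge, labeled by cluster.
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1.0
    y = np.array([1, 1, 1, -1, -1, -1], dtype=float)
    print(online_kernel_perceptron(laplacian_kernel(A), range(6), y))  # prints 1

On this toy graph the cluster structure means a single mistake, on the first vertex of the second clique, suffices: the kernel then labels the remaining vertices of that cluster correctly.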

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Herbster, M. (2008). Exploiting Cluster-Structure to Predict the Labeling of a Graph. In: Freund, Y., Györfi, L., Turán, G., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2008. Lecture Notes in Computer Science (LNAI), vol 5254. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87987-9_9

  • DOI: https://doi.org/10.1007/978-3-540-87987-9_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-87986-2

  • Online ISBN: 978-3-540-87987-9

  • eBook Packages: Computer Science, Computer Science (R0)
