Learning nested differences in the presence of malicious noise

  • Peter Auer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 997)

Abstract

We investigate the learnability of nested differences of intersection-closed classes in the presence of malicious noise. Examples of intersection-closed classes include axis-parallel rectangles, monomials, linear sub-spaces, and so forth. We present an on-line algorithm whose mistake bound is optimal in the sense that there are concept classes for which each learning algorithm (using nested differences as hypotheses) can be forced to make at least that many mistakes. We also present an algorithm for learning in the PAC model with malicious noise. Surprisingly enough, the noise rate tolerable by these algorithms does not depend on the complexity of the target class but depends only on the complexity of the underlying intersection-closed class.
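For readers unfamiliar with intersection-closed classes, the following minimal sketch illustrates the standard closure hypothesis for axis-parallel rectangles, one of the intersection-closed classes named above: the hypothesis is always the smallest rectangle containing all positive examples seen so far. This is only an illustration of the underlying concept, not the paper's noise-tolerant algorithm; the class and method names are invented for the example.

    # Illustrative sketch (not the paper's algorithm): the closure hypothesis
    # for axis-parallel rectangles. The hypothesis is the smallest rectangle
    # containing the positive examples seen so far.
    from typing import List, Optional, Tuple

    Point = Tuple[float, ...]

    class RectangleClosure:
        """Maintains the tightest axis-parallel rectangle around the positives."""

        def __init__(self) -> None:
            self.lower: Optional[List[float]] = None  # per-coordinate minima
            self.upper: Optional[List[float]] = None  # per-coordinate maxima

        def predict(self, x: Point) -> bool:
            """Predict positive iff x lies inside the current closure."""
            if self.lower is None:
                return False  # empty closure: predict negative
            return all(lo <= xi <= hi
                       for xi, lo, hi in zip(x, self.lower, self.upper))

        def update(self, x: Point, label: bool) -> None:
            """On a positive example, grow the closure just enough to cover it."""
            if not label:
                return  # the closure is determined by positive examples only
            if self.lower is None:
                self.lower, self.upper = list(x), list(x)
            else:
                self.lower = [min(lo, xi) for lo, xi in zip(self.lower, x)]
                self.upper = [max(hi, xi) for hi, xi in zip(self.upper, x)]

    if __name__ == "__main__":
        learner = RectangleClosure()
        for point, label in [((1.0, 2.0), True), ((3.0, 1.0), True), ((10.0, 10.0), False)]:
            learner.update(point, label)
        print(learner.predict((2.0, 1.5)))    # True: inside the closure
        print(learner.predict((10.0, 10.0)))  # False: outside the closure

Because the class of axis-parallel rectangles is closed under intersection, this smallest consistent hypothesis is well defined; the same closure idea applies to the other intersection-closed classes mentioned in the abstract.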

Keywords

Concept Class, Target Concept, Noise Rate, Hypothesis Class, Noise Tolerance

Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Peter Auer
    1. University of California at Santa Cruz, Santa Cruz, USA
