PAC learning with simple examples

  • François Denis
  • Cyrille D'Halluin
  • Rémi Gilleron
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1046)

Abstract

We define a new PAC learning model in which examples are drawn according to the Solomonoff-Levin universal distribution m(· | f), where f is the target concept. As a consequence, the simple examples of the target concept have a high probability of being provided to the learning algorithm. We prove an Occam's Razor theorem for this model, and we show that in it the class of poly-term DNF is learnable, and the class of k-reversible languages is learnable from positive data.
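The universal distribution assigns probability roughly 2^(-K(x)) to each string x, where K is Kolmogorov complexity; since K is uncomputable, any executable illustration must substitute a computable proxy. The sketch below (not from the paper; all names are illustrative) uses zlib-compressed length as a crude stand-in for K(x) to show how such a distribution concentrates mass on simple examples:

```python
import zlib

def complexity_proxy(s: str) -> int:
    # Kolmogorov complexity K(s) is uncomputable; the length of a
    # zlib-compressed encoding is a crude, computable stand-in used
    # here purely for illustration.
    return len(zlib.compress(s.encode("utf-8")))

def universal_like_weights(examples):
    # Weight each example x proportionally to 2**(-K(x)), mimicking
    # the shape of the Solomonoff-Levin universal distribution:
    # simpler (more compressible) strings get exponentially more mass.
    raw = {x: 2.0 ** (-complexity_proxy(x)) for x in examples}
    total = sum(raw.values())
    return {x: w / total for x, w in raw.items()}

# Positive examples of a toy target concept over the alphabet {a, b}.
examples = ["ab", "abab", "ababab", "abbaabba", "babbaabab"]
weights = universal_like_weights(examples)
# The shortest, most regular string receives the largest weight, so a
# learner sampling from these weights sees simple examples first.
```

This is only a sketch of the sampling intuition; the paper's results concern the true (uncomputable) distribution m(· | f), conditioned on the target concept, which no compressor-based proxy can faithfully realize.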

Keywords

Turing Machine, Regular Language, Positive Data, Target Concept, Boolean Formula



Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • François Denis (1)
  • Cyrille D'Halluin (1)
  • Rémi Gilleron (1)

  1. LIFL, URA 369 CNRS, IEEA, Université de Lille I, France
