Exploring Learnability between Exact and PAC

  • Nader H. Bshouty
  • Jeffrey C. Jackson
  • Christino Tamon
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2375)

Abstract

We study a model of Probably Exactly Correct (PExact) learning that can be viewed in two ways: as the Exact model (learning from equivalence queries only), relaxed so that counterexamples to equivalence queries are drawn from a distribution rather than chosen adversarially; or as the Probably Approximately Correct (PAC) model, strengthened to require a perfect hypothesis. We also introduce a model of Probably Almost Exactly Correct (PAExact) learning, which requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.
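
To make the contrast concrete, the following is a minimal Python sketch (our illustration, not code from the paper) of the two kinds of oracle the abstract distinguishes, assuming a finite instance space with the distribution D represented as a dict mapping points to probabilities; the function names pac_example and pexact_counterexample are hypothetical.

```python
import random

def pac_example(f, D):
    # PAC-style example oracle EX(f, D): returns one labeled
    # example (x, f(x)) with x drawn according to D.
    points, weights = zip(*D.items())
    x = random.choices(points, weights=weights)[0]
    return x, f(x)

def pexact_counterexample(f, h, D):
    # PExact-style equivalence query: if h disagrees with f
    # anywhere on the support of D, the counterexample is drawn
    # from D restricted to the disagreement region, rather than
    # chosen adversarially as in the Exact model.
    disagree = {x: p for x, p in D.items() if p > 0 and h(x) != f(x)}
    if not disagree:
        return None  # "equivalent": h agrees with f on D's support
    points, weights = zip(*disagree.items())
    return random.choices(points, weights=weights)[0]

# Toy usage: target "x >= 4", hypothesis "x >= 5", uniform D on {0,...,7}.
D = {x: 1 / 8 for x in range(8)}
f = lambda x: x >= 4
h = lambda x: x >= 5
print(pexact_counterexample(f, h, D))  # prints 4, the only disagreement point
```

In this picture, Exact learning lets an adversary pick any point of the disagreement region, PExact learning draws the point from D as above, and PAExact learning additionally accepts any hypothesis whose disagreement region has negligible probability under D rather than probability exactly zero.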

Keywords

Exact Model, Separation Result, Equivalence Query, Instance Space, Probably Approximately Correct



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Nader H. Bshouty (Technion, Haifa, Israel)
  • Jeffrey C. Jackson (Duquesne University, Pittsburgh, USA)
  • Christino Tamon (Clarkson University, Potsdam, USA)
