Stochastic Finite Learning

  • Thomas Zeugmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2264)

Abstract

Recently, we have developed a learning model, called stochastic finite learning, that connects concepts from PAC learning with models from inductive inference. The motivation for this work is as follows. Within Gold’s (1967) model of learning in the limit, many important learning problems can be formalized, and one can show that they are algorithmically solvable in principle. However, since a limit learner is only required to converge eventually, one never knows at any particular learning stage whether or not it has already been successful. Such uncertainty may be unacceptable in many applications. The present paper surveys this new approach to overcoming the uncertainty, which potentially has a wide range of applicability.
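
To make the contrast concrete, the following minimal Python sketch illustrates the idea for conjunctive concepts (monotone monomials), one of the concept classes surveyed here. It is an illustration under stated assumptions, not the construction from the paper: the standard elimination learner identifies a monomial in the limit by intersecting all positive examples seen so far but never signals convergence, whereas the stochastic finite variant below additionally assumes a lower bound p_min on the probability that any fixed irrelevant variable is 0 in a random positive example, precomputes a sufficient sample size from p_min and a confidence parameter delta, and then halts with a single hypothesis that is exactly correct with probability at least 1 − delta. All names (stochastic_finite_monomial_learner, sample_positive) and the parameter p_min are hypothetical.

    import math
    import random

    def stochastic_finite_monomial_learner(sample_positive, n, delta, p_min):
        # Assumption (illustrative, not from the paper): in a random positive
        # example, each irrelevant variable is 0 with probability >= p_min.
        # An irrelevant variable then survives m examples with probability
        # at most (1 - p_min)**m; choosing m with n * (1 - p_min)**m <= delta
        # bounds the total failure probability by delta (union bound).
        m = math.ceil(math.log(delta / n) / math.log(1.0 - p_min))
        relevant = set(range(n))  # hypothesis: monomial over all n variables
        for _ in range(m):
            x = sample_positive()
            relevant &= {i for i, b in enumerate(x) if b == 1}
        return relevant  # one final output -- no further mind changes

    # Toy usage: variables {0, 3} are relevant; irrelevant bits are uniform.
    TARGET = {0, 3}
    def sample_positive(n=10):
        return [1 if i in TARGET else random.randint(0, 1) for i in range(n)]

    print(stochastic_finite_monomial_learner(sample_positive, 10, 0.01, 0.5))

With these toy parameters the learner draws m = 10 examples and stops, whereas a limit learner would keep emitting hypotheses indefinitely. The union bound over the n variables is what converts the distributional assumption into a concrete stopping time, mirroring in miniature how stochastic finite learning trades knowledge about the underlying class of distributions for a confidence-bounded finite sample.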

Keywords

Inductive inference · average-case analysis · stochastic finite learning · conjunctive concepts · pattern languages

References

  1. D. Angluin, Finding Patterns common to a Set of Strings, Journal of Computer and System Sciences 21 (1980), 46–62.
  2. A. Blumer, A. Ehrenfeucht, D. Haussler and M. Warmuth, Learnability and the Vapnik-Chervonenkis Dimension, Journal of the ACM 36 (1989), 929–965.
  3. J. Case, S. Jain, S. Lange and T. Zeugmann, Incremental Concept Learning for Bounded Data Mining, Information and Computation 152, No. 1 (1999), 74–110.
  4. R. Daley and C.H. Smith, On the Complexity of Inductive Inference, Information and Control 69 (1986), 12–40.
  5. T. Erlebach, P. Rossmanith, H. Stadtherr, A. Steger and T. Zeugmann, Learning one-variable pattern languages very efficiently on average, in parallel, and by asking queries, Theoretical Computer Science 261, No. 1–2 (2001), 119–156.
  6. E.M. Gold, Language identification in the limit, Information and Control 10 (1967), 447–474.
  7. S.A. Goldman, M.J. Kearns and R.E. Schapire, Exact identification of circuits using fixed points of amplification functions, SIAM Journal on Computing 22 (1993), 705–726.
  8. D. Haussler, Bias, version spaces and Valiant’s learning framework, in “Proc. 8th National Conference on Artificial Intelligence,” pp. 564–569, Morgan Kaufmann, 1987.
  9. D. Haussler, M. Kearns, N. Littlestone and M.K. Warmuth, Equivalence of models for polynomial learnability, Information and Computation 95 (1991), 129–161.
  10. M. Kearns and L. Pitt, A polynomial-time algorithm for learning k-variable pattern languages from examples, in “Proc. Second Annual ACM Workshop on Computational Learning Theory,” pp. 57–71, Morgan Kaufmann, 1989.
  11. S. Lange and R. Wiehagen, Polynomial-time inference of arbitrary pattern languages, New Generation Computing 8 (1991), 361–370.
  12. S. Lange and T. Zeugmann, Set-driven and Rearrangement-independent Learning of Recursive Languages, Mathematical Systems Theory 29 (1996), 599–634.
  13. S. Lange and T. Zeugmann, Incremental Learning from Positive Data, Journal of Computer and System Sciences 53 (1996), 88–103.
  14. A. Mitchell, A. Sharma, T. Scheffer and F. Stephan, The VC-dimension of Subclasses of Pattern Languages, in “Proc. 10th International Conference on Algorithmic Learning Theory” (O. Watanabe and T. Yokomori, Eds.), Lecture Notes in Artificial Intelligence, Vol. 1720, pp. 93–105, Springer-Verlag, Berlin, 1999.
  15. L. Pitt, Inductive Inference, DFAs and Computational Complexity, in “Proc. 2nd Int. Workshop on Analogical and Inductive Inference” (K.P. Jantke, Ed.), Lecture Notes in Artificial Intelligence, Vol. 397, pp. 18–44, Springer-Verlag, Berlin, 1989.
  16. R. Reischuk and T. Zeugmann, Learning One-Variable Pattern Languages in Linear Average Time, in “Proc. 11th Annual Conference on Computational Learning Theory (COLT’98),” Madison, July 24–26, pp. 198–208, ACM Press, 1998.
  17. R. Reischuk and T. Zeugmann, A Complete and Tight Average-Case Analysis of Learning Monomials, in “Proc. 16th International Symposium on Theoretical Aspects of Computer Science” (C. Meinel and S. Tison, Eds.), Lecture Notes in Computer Science, Vol. 1563, pp. 414–423, Springer-Verlag, Berlin, 1999.
  18. R. Reischuk and T. Zeugmann, An Average-Case Optimal One-Variable Pattern Language Learner, Journal of Computer and System Sciences 60, No. 2 (2000), 302–335.
  19. P. Rossmanith and T. Zeugmann, Stochastic Finite Learning of the Pattern Languages, Machine Learning 44, No. 1–2 (2001), 67–91.
  20. L.G. Valiant, A Theory of the Learnable, Communications of the ACM 27 (1984), 1134–1142.
  21. R. Wiehagen and T. Zeugmann, Ignoring Data may be the only Way to Learn Efficiently, Journal of Experimental and Theoretical Artificial Intelligence 6 (1994), 131–144.
  22. T. Zeugmann, Lange and Wiehagen’s Pattern Language Learning Algorithm: An Average-case Analysis with respect to its Total Learning Time, Annals of Mathematics and Artificial Intelligence 23, No. 1–2 (1998), 117–145.

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Thomas Zeugmann
  1. Institut für Theoretische Informatik, Med. Universität zu Lübeck, Lübeck, Germany