
Probabilistic language learning under monotonicity constraints

  • Léa Meyer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 997)

Abstract

The present paper deals with probabilistic identification of indexed families of uniformly recursive languages from positive data under various monotonicity constraints. In particular, we consider strong-monotonic, monotonic, and weak-monotonic probabilistic learning of indexed families with respect to class comprising, class preserving, and exact hypothesis spaces, and we investigate the probabilistic hierarchies of these learning models. Earlier results in probabilistic identification established that, for function identification, every collection of recursive functions identifiable with probability p > 1/2 is deterministically identifiable (cf. [16]), and that, for language learning from text, every collection of recursive languages identifiable with probability p > 2/3 is deterministically identifiable (cf. [14]). For the learning models considered here, in contrast, we obtain highly structured probabilistic hierarchies without such a “gap” between the probabilistic and deterministic learning classes. In the case of exact probabilistic learning, we show the probabilistic hierarchy to be dense for every monotonicity condition mentioned above. For class preserving weak-monotonic and monotonic probabilistic learning, we show the respective probabilistic hierarchies to be strictly decreasing as p → 1, p < 1. These results considerably extend previous work (cf. [16], [17]). For class comprising weak-monotonic learning, as well as for learning without additional constraints, we prove that probabilistic identification and team identification are equivalent, which yields discrete probabilistic hierarchies in these cases.
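
As background, the following is a minimal sketch of the notions used above, following the standard definitions from the cited literature (cf. [2], [6], [7], [16]) rather than this paper's own formalization. A probabilistic inductive inference machine M identifies a language L from text (i.e., from positive data) with probability p if, for every text t for L,

\[
  \Pr\bigl[\, M \text{ on } t \text{ converges to some hypothesis } j \text{ with } L_j = L \,\bigr] \;\ge\; p,
\]

where the probability is taken over M's internal coin tosses and L_j denotes the language described by hypothesis j. The monotonicity constraints restrict how successive hypotheses may evolve: if j_x and j_y are the hypotheses output after reading x and y data items, x < y, then

\[
\begin{array}{ll}
  \text{strong-monotonic:} & L_{j_x} \subseteq L_{j_y},\\[2pt]
  \text{monotonic:}        & L_{j_x} \cap L \;\subseteq\; L_{j_y} \cap L,\\[2pt]
  \text{weak-monotonic:}   & L_{j_x} \subseteq L_{j_y} \text{ whenever the data read up to step } y \text{ are contained in } L_{j_x}.
\end{array}
\]

A hypothesis space is exact if it is the given indexed family itself, class preserving if it describes exactly the languages of the family (possibly in a different enumeration), and class comprising if it may additionally describe languages outside the family. Finally, a team of n deterministic machines identifies L if at least k of its members do; the equivalence with team identification stated above ties the admissible probabilities p to such ratios k/n, which is why the resulting hierarchies are discrete rather than dense.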

Keywords

Probabilistic Identification, Team Identification, Recursive Function, Inductive Inference, Input String


References

  1. Angluin, D. (1980): Inductive inference of formal languages from positive data, Information and Control 45, 117–135.
  2. Gold, E.M. (1967): Language identification in the limit, Information and Control 10, 447–474.
  3. Hopcroft, J., Ullman, J. (1979): Introduction to Automata Theory, Languages and Computation, Addison-Wesley.
  4. Jain, S., Sharma, A. (1993): Probability is more powerful than team for language identification, in: Proc. 6th ACM Conf. on Comp. Learning Theory, Santa Cruz, July 1993, 192–198, ACM Press.
  5. Jain, S., Sharma, A. (1995): Personal communication.
  6. Jantke, K.P. (1991): Monotonic and non-monotonic inductive inference, New Generation Computing 8, 349–360.
  7. Lange, S., Zeugmann, T. (1992): Types of monotonic language learning and their characterization, in: Proc. 5th ACM Conf. on Comp. Learning Theory, Pittsburgh, July 1992, 377–390, ACM Press.
  8. Lange, S., Zeugmann, T. (1993): Monotonic versus non-monotonic language learning, in: Proc. 2nd Int. Workshop on Nonmonotonic and Inductive Logic, Reinhardsbrunn, Dec. 1991 (G. Brewka, K.P. Jantke, P.H. Schmitt, Eds.), Lecture Notes in Artificial Intelligence 659, 254–269, Springer-Verlag, Berlin.
  9. Lange, S., Zeugmann, T. (1993): Language learning in dependence on the space of hypotheses, in: Proc. 6th ACM Conf. on Comp. Learning Theory, Santa Cruz, July 1993, 127–136, ACM Press.
  10. Lange, S., Zeugmann, T. (1993): The learnability of recursive languages in dependence on the hypothesis space, GOSLER-Report 20/93, FB Mathematik, Informatik und Naturwissenschaften, HTWK Leipzig.
  11. Lange, S., Zeugmann, T., Kapur, S. (1995): Monotonic and dual monotonic language learning, Theoretical Computer Science, to appear.
  12. Meyer, L. (1995): Probabilistic learning of indexed families, Institutsbericht, Institut für Informatik und Gesellschaft, Freiburg, to appear.
  13. Machtey, M., Young, P. (1978): An Introduction to the General Theory of Algorithms, North-Holland, New York.
  14. Pitt, L. (1985): Probabilistic Inductive Inference, PhD thesis, Yale University, Computer Science Dept., TR-400.
  15. Wiehagen, R. (1991): A thesis in inductive inference, in: Proc. 1st Int. Workshop on Nonmonotonic and Inductive Logic, Karlsruhe, Dec. 1990 (J. Dix, K.P. Jantke, P.H. Schmitt, Eds.), Lecture Notes in Artificial Intelligence 534, 184–207, Springer-Verlag, Berlin.
  16. Wiehagen, R., Freivalds, R., Kinber, E.B. (1984): On the power of probabilistic strategies in inductive inference, Theoretical Computer Science 28, 111–133.
  17. Wiehagen, R., Freivalds, R., Kinber, E.B. (1988): Probabilistic versus deterministic inductive inference in nonstandard numberings, Zeitschr. f. math. Logik und Grundlagen d. Math. 34, 531–539.
  18. Zeugmann, T., Lange, S. (1995): A guided tour across the boundaries of learning recursive languages, in: Algorithmic Learning for Knowledge-Based Systems (K.P. Jantke, S. Lange, Eds.), Lecture Notes in Artificial Intelligence 961, 193–262, Springer-Verlag, Berlin.

Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Léa Meyer
  1. Institut für Informatik und Gesellschaft, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
