Probabilistic language learning under monotonicity constraints

  • Conference paper
  • Conference: Algorithmic Learning Theory (ALT 1995)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 997)

Abstract

The present paper deals with probabilistic identification of indexed families of uniformly recursive languages from positive data under various monotonicity constraints. We consider strong-monotonic, monotonic and weak-monotonic probabilistic learning of indexed families with respect to class comprising, class preserving and exact hypothesis spaces, and investigate the probabilistic hierarchies of these learning models. Earlier results in the field of probabilistic identification established that, for function identification, each collection of recursive functions identifiable with probability p > 1/2 is deterministically identifiable (cf. [16]). In the case of language learning from text, each collection of recursive languages identifiable from text with probability p > 2/3 is deterministically identifiable (cf. [14]). For the learning models mentioned above, however, we obtain highly structured probabilistic hierarchies without such a "gap" between the probabilistic and deterministic learning classes. In the case of exact probabilistic learning, we show the probabilistic hierarchy to be dense for every monotonicity condition mentioned. For class preserving weak-monotonic and monotonic probabilistic learning, we show the respective probabilistic hierarchies to be strictly decreasing for p → 1, p < 1. These results considerably extend previous work (cf. [16], [17]). For class comprising weak-monotonic learning, as well as for learning without additional constraints, we prove that probabilistic identification and team identification are equivalent; this yields discrete probabilistic hierarchies in these cases.
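For orientation, the three constraints named above have standard formalizations in the monotonic-learning literature (cf. [6], [7]). The following sketch of these definitions is supplied here for illustration; the notation (hypotheses h_i, target language L, text segments t_j) is ours, not the paper's:

\begin{align*}
\text{strong-monotonic:}\quad & L(h_i) \subseteq L(h_j) \quad \text{for all } i \le j;\\
\text{monotonic:}\quad & L(h_i) \cap L \subseteq L(h_j) \cap L \quad \text{for all } i \le j;\\
\text{weak-monotonic:}\quad & \operatorname{content}(t_j) \subseteq L(h_i) \;\Longrightarrow\; L(h_i) \subseteq L(h_j) \quad \text{for all } i \le j,
\end{align*}

where h_k is the hypothesis the learner outputs after seeing the initial text segment t_k, L(h) is the language generated by hypothesis h, and content(t_k) is the set of words appearing in t_k.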

In the present paper, we have given a complete picture of exact probabilistic learning under monotonicity constraints, together with some results on class preserving and class comprising probabilistic learning. Further work will establish the probabilistic hierarchies for the cases not considered here.
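To illustrate the connection between team and probabilistic identification stated in the abstract: a probabilistic learner can simulate an n-member team by choosing one member uniformly at random and copying its conjectures, so it succeeds with probability k/n whenever exactly k members identify the target. The following Python sketch simulates only this counting argument; the names (team_to_probabilistic_success, member_succeeds) are ours, and reducing a learner to a success flag is an assumption made for illustration, not the paper's actual construction:

import random

# Toy model, assumed for illustration: each team member is reduced to a flag
# recording whether it would identify the target language in the limit.
# Picking one member uniformly at random yields success probability k/n,
# where k of the n members succeed.
def team_to_probabilistic_success(member_succeeds, trials=100_000):
    """Estimate the success probability of the random-member strategy."""
    wins = sum(random.choice(member_succeeds) for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    team = [True, True, False]                  # 2 of 3 members succeed
    print(team_to_probabilistic_success(team))  # empirically close to 2/3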

The author wishes to thank Thomas Zeugmann for suggesting the topic and for helpful discussions, and Britta Schinzel and Arun Sharma for helpful comments.


References

  1. Angluin, D. (1980): Inductive inference of formal languages from positive data, Information and Control 45, 117–135.

  2. Gold, E.M. (1967): Language identification in the limit, Information and Control 10, 447–474.

  3. Hopcroft, J., Ullman, J. (1979): Introduction to Automata Theory, Languages and Computation, Addison-Wesley.

  4. Jain, S., Sharma, A. (1993): Probability is more powerful than team for language identification, in: Proc. of the 6th ACM Conf. on Comp. Learning Theory, Santa Cruz, July 1993, 192–198, ACM Press.

  5. Jain, S., Sharma, A. (1995): Personal communication.

  6. Jantke, K.P. (1991): Monotonic and non-monotonic inductive inference, New Generation Computing 8, 349–360.

  7. Lange, S., Zeugmann, T. (1992): Types of monotonic language learning and their characterization, in: Proc. of the 5th ACM Conf. on Comp. Learning Theory, Pittsburgh, July 1992, 377–390, ACM Press.

  8. Lange, S., Zeugmann, T. (1993): Monotonic versus non-monotonic language learning, in: Proc. 2nd Int. Workshop on Nonmonotonic and Inductive Logic, Reinhardsbrunn, Dec. 1991 (G. Brewka, K.P. Jantke, P.H. Schmitt, Eds.), Lecture Notes in Artificial Intelligence Vol. 659, 254–269, Springer-Verlag, Berlin.

  9. Lange, S., Zeugmann, T. (1993): Language learning in dependence on the space of hypotheses, in: Proc. of the 6th ACM Conf. on Comp. Learning Theory, Santa Cruz, July 1993, 127–136, ACM Press.

  10. Lange, S., Zeugmann, T. (1993): The learnability of recursive languages in dependence on the hypothesis space, GOSLER-Report 20/93, FB Mathematik, Informatik und Naturwissenschaften, HTWK Leipzig.

  11. Lange, S., Zeugmann, T., Kapur, S. (1995): Monotonic and Dual Monotonic Language Learning, Theoretical Computer Science, to appear.

  12. Meyer, L. (1995): Probabilistic learning of indexed families, institute report, Institut für Informatik und Gesellschaft, Freiburg, to appear.

  13. Machtey, M., Young, P. (1978): An Introduction to the General Theory of Algorithms, North-Holland, New York.

  14. Pitt, L. (1985): Probabilistic Inductive Inference, PhD thesis, Yale University, Computer Science Dept., TR-400.

  15. Wiehagen, R.: A Thesis in Inductive Inference, in: Proc. 1st Int. Workshop on Nonmonotonic and Inductive Logic, Karlsruhe, Dec. 1990 (J. Dix, K.P. Jantke, P.H. Schmitt, Eds.), Lecture Notes in Artificial Intelligence Vol. 534, 184–207, Springer-Verlag, Berlin.

  16. Wiehagen, R., Freivalds, R., Kinber, E.B. (1984): On the Power of Probabilistic Strategies in Inductive Inference, Theoretical Computer Science 28, 111–133.

  17. Wiehagen, R., Freivalds, R., Kinber, E.B. (1988): Probabilistic versus Deterministic Inductive Inference in Nonstandard Numberings, Zeitschr. f. math. Logik und Grundlagen d. Math. 34, 531–539.

  18. Zeugmann, T., Lange, S. (1995): A Guided Tour Across the Boundaries of Learning Recursive Languages, in: Algorithmic Learning for Knowledge-Based Systems (K.P. Jantke and S. Lange, Eds.), Lecture Notes in Artificial Intelligence Vol. 961, 193–262, Springer-Verlag, Berlin.

Editor information

Klaus P. Jantke, Takeshi Shinohara, Thomas Zeugmann

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Meyer, L. (1995). Probabilistic language learning under monotonicity constraints. In: Jantke, K.P., Shinohara, T., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 1995. Lecture Notes in Computer Science, vol 997. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60454-5_37

  • DOI: https://doi.org/10.1007/3-540-60454-5_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60454-9

  • Online ISBN: 978-3-540-47470-8

  • eBook Packages: Springer Book Archive
