Abstract
The present paper deals with probabilistic identification of indexed families of uniformly recursive languages from positive data under various monotonicity constraints. We consider strong-monotonic, monotonic and weak-monotonic probabilistic learning of indexed families with respect to class comprising, class preserving and exact hypothesis spaces, and investigate the probabilistic hierarchies of these learning models. Earlier results in the field of probabilistic identification established that, for function identification, each collection of recursive functions identifiable with probability p > 1/2 is deterministically identifiable (cf. [16]). In the case of language learning from text, each collection of recursive languages identifiable from text with probability p > 2/3 is deterministically identifiable (cf. [14]). For the learning models mentioned above, however, we obtain highly structured probabilistic hierarchies without a “gap” between the probabilistic and deterministic learning classes. In the case of exact probabilistic learning, we are able to show the probabilistic hierarchy to be dense for every monotonicity condition mentioned. For class preserving weak-monotonic and monotonic probabilistic learning, we show the respective probabilistic hierarchies to be strictly decreasing for probability p → 1, p < 1. These results considerably extend previous work (cf. [16], [17]). For class comprising weak-monotonic learning, as well as for learning without additional constraints, we can prove that probabilistic identification and team identification are equivalent. This yields discrete probabilistic hierarchies in these cases.
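To fix intuitions about the underlying deterministic model, the following is a minimal toy sketch of Gold-style identification in the limit from positive data (text), for a hypothetical indexed family L_i = {0, 1, ..., i}. The family, the learner, and all names below are illustrative assumptions, not constructions from the paper; the learner simply conjectures the least index whose language covers all examples seen so far.

```python
# Toy sketch (assumed example, not from the paper): identification in the
# limit from positive data for the hypothetical indexed family
# L_i = {0, 1, ..., i}.  The learner conjectures the least consistent
# index, which here is just the maximum element observed so far.

def learner(examples_so_far):
    """Return a hypothesis index after seeing a finite initial segment."""
    return max(examples_so_far)

def run_on_text(text):
    """Feed a text (an enumeration of the target language, possibly with
    repetitions) to the learner and record the sequence of hypotheses."""
    seen = []
    hypotheses = []
    for x in text:
        seen.append(x)
        hypotheses.append(learner(seen))
    return hypotheses

# A text for L_3 = {0, 1, 2, 3}: the hypothesis sequence converges to 3.
print(run_on_text([0, 2, 1, 3, 0, 3, 2]))  # -> [0, 2, 2, 3, 3, 3, 3]
```

Note that on this particular family the conjecture sequence satisfies the strong-monotonicity constraint discussed in the paper: each new hypothesis language contains the previous one. The probabilistic models studied here replace such a deterministic learner with a randomized one that must converge to a correct hypothesis with at least probability p.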
In previous work, we gave a complete picture of exact probabilistic learning under monotonicity constraints, together with some results on class preserving and class comprising probabilistic learning. Further work will establish the probabilistic hierarchies for the cases not considered in this paper.
The author wishes to thank Thomas Zeugmann for suggesting the topic and for helpful discussions. Furthermore, the author wishes to thank Britta Schinzel and Arun Sharma for helpful comments.
References
Angluin, D. (1980): Inductive Inference of formal languages from positive data, Information and Control 45, 117–135.
Gold, E.M. (1967): Language identification in the limit, Information and Control 10, 447–474.
Hopcroft, J., Ullman, J. (1979): Introduction to Automata Theory, Languages and Computation, Addison-Wesley Publ. Company.
Jain, S., Sharma, A. (1993): Probability is more powerful than team for language identification, in: Proc. of the 6th ACM Conf. on Comp. Learning Theory, Santa Cruz, July 1993, 192–198, ACM Press.
Jain, S., Sharma, A. (1995): personal communication.
Jantke, K.P. (1991): Monotonic and non-monotonic inductive inference, New Generation Computing 8, 349–360.
Lange, S., Zeugmann, T. (1992): Types of monotonic language learning and their characterization, in: Proc. of the 5th ACM Conf. on Comp. Learning Theory, Pittsburgh, July 1992, 377–390, ACM Press.
Lange, S., Zeugmann, T. (1993): Monotonic versus non-monotonic language learning, in: Proc. 2nd Int. Workshop on Nonmonotonic and Inductive Logic, Dec. 1991, Reinhardsbrunn, (G. Brewka, K.P. Jantke, P.H. Schmitt, Eds.), Lecture Notes in AI 659, 254–269, Springer-Verlag, Berlin.
Lange, S., Zeugmann, T. (1993): Language learning in dependence on the space of hypotheses, in: Proc. of the 6th ACM Conf. on Comp. Learning Theory, Santa Cruz, July 1993, 127–136, ACM Press.
Lange, S., Zeugmann, T. (1993): The learnability of recursive languages in dependence on the hypothesis space, GOSLER-Report 20/93, FB Mathematik, Informatik und Naturwissenschaften, HTWK, Leipzig.
Lange, S., Zeugmann, T., Kapur, S. (1995): Monotonic and Dual Monotonic Language Learning, Theoretical Computer Science, to appear.
Meyer, L. (1995): Probabilistic learning of indexed families, Institutsbericht, Institut für Informatik und Gesellschaft, Freiburg, to appear.
Machtey, M., Young, P. (1978): An Introduction to the General Theory of Algorithms, North Holland, New York.
Pitt, L. (1985): Probabilistic Inductive Inference, PhD thesis, Yale University, 1985, Computer Science Dept. TR-400.
Wiehagen, R.: A Thesis in Inductive Inference, in: Proceedings First International Workshop on Nonmonotonic and Inductive Logic, December 1990, Karlsruhe, (J. Dix, K. P. Jantke, P. H. Schmitt, Eds.), Lecture Notes in Artificial Intelligence, Vol. 534, 184–207, Springer-Verlag, Berlin.
Wiehagen, R., Freivalds, R., Kinber, E.B. (1984): On the Power of Probabilistic Strategies in Inductive Inference, Theoretical Computer Science 28, 111–133.
Wiehagen, R., Freivalds, R., Kinber, E.B. (1988): Probabilistic versus Deterministic Inductive Inference in Nonstandard Numberings, Zeitschr. f. math. Logik und Grundlagen d. Math. 34, 531–539.
Zeugmann, T., Lange, S. (1995): A Guided Tour Across the Boundaries of Learning Recursive Languages, in: Algorithmic Learning for Knowledge-Based Systems (K.P. Jantke and S. Lange, Eds.), Lecture Notes in Artificial Intelligence Vol. 961, 193–262, Springer-Verlag, Berlin.
© 1995 Springer-Verlag Berlin Heidelberg
Meyer, L. (1995). Probabilistic language learning under monotonicity constraints. In: Jantke, K.P., Shinohara, T., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 1995. Lecture Notes in Computer Science, vol 997. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60454-5_37
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-60454-9
Online ISBN: 978-3-540-47470-8