
Uniform characterizations of various kinds of language learning

Conference paper in Algorithmic Learning Theory (ALT 1993)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 744)

Abstract

Learnability of families of recursive languages from positive data is studied in the Gold paradigm of inductive inference. Much work has focused on understanding how the language learning ability of an inductive inference machine is affected when it is constrained. For example, derived from work in inductive logic, notions of monotonicity have been studied which variously reflect the requirement that the learner's guesses must monotonically ‘improve’ with regard to the target language. A unique characterization theorem is obtained which uniformly characterizes all classes learnable under a number of different constraints specified via a parametric description. It is also shown how many known characterizations can be obtained by straightforward applications of this theorem. It is argued that the new parameterization scheme for specifying constraints works for a wide variety of constraints.
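The Gold-paradigm setting described in the abstract can be illustrated with a toy sketch (not from the paper; the language family L_n = {0, 1, ..., n} and the least-index learner are illustrative assumptions): after each positive example, the learner conjectures the smallest language in the family that covers the data seen so far. Its guesses ‘improve’ monotonically in the sense alluded to above, since each conjectured language contains the previous one.

```python
# Toy sketch (illustrative, not from the paper): identification in the
# limit from positive data, for the family L_n = {0, 1, ..., n}.
# The learner outputs the least index n such that all data seen so far
# lie in L_n; its conjectured languages only ever grow (monotonicity).

def learner(text):
    """Yield a hypothesis index after each positive example in the text."""
    seen = set()
    for x in text:
        seen.add(x)
        yield max(seen)  # least n with seen ⊆ L_n

# A (finite prefix of a) text, i.e. positive presentation, for L_3 = {0, 1, 2, 3}:
text = [1, 0, 3, 2, 3, 1]
guesses = list(learner(text))
print(guesses)  # → [1, 1, 3, 3, 3, 3]: converges to 3 once 3 has appeared
```

On any text for L_n in which every element eventually appears, this learner converges to the correct index, which is the behavior the Gold paradigm requires.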

The author would like to thank Gianfranco Bilardi for a useful suggestion and Steffen Lange and Thomas Zeugmann for interesting discussions. The author was supported in part by ARO grant DAAL 03-89-C-0031, DARPA grant N00014-90-J-1863, NSF grant IRI 90-16592 and Ben Franklin grant 91S.3078C-1.




Editor information

Klaus P. Jantke, Shigenobu Kobayashi, Etsuji Tomita, Takashi Yokomori


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kapur, S. (1993). Uniform characterizations of various kinds of language learning. In: Jantke, K.P., Kobayashi, S., Tomita, E., Yokomori, T. (eds) Algorithmic Learning Theory. ALT 1993. Lecture Notes in Computer Science, vol 744. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57370-4_48

  • DOI: https://doi.org/10.1007/3-540-57370-4_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57370-8

  • Online ISBN: 978-3-540-48096-9

  • eBook Packages: Springer Book Archive
